WorldWideScience

Sample records for voice auditory feedback

  1. Analysis of the Auditory Feedback and Phonation in Normal Voices.

    Science.gov (United States)

    Arbeiter, Mareike; Petermann, Simon; Hoppe, Ulrich; Bohr, Christopher; Doellinger, Michael; Ziethe, Anke

    2018-02-01

    The aim of this study was to investigate auditory feedback mechanisms and voice quality during phonation in response to a spontaneous pitch change in the auditory feedback. Does the pitch shift reflex (PSR) change voice pitch and voice quality? Quantitative and qualitative voice characteristics were analyzed during the PSR. Twenty-eight healthy subjects underwent transnasal high-speed videoendoscopy (HSV) at 8000 fps during sustained phonation of [a]. While phonating, the subjects heard their own voice pitched up by 700 cents (the interval of a fifth) for 300 milliseconds in their auditory feedback. Electroencephalography (EEG), the acoustic voice signal, electroglottography (EGG), and HSV were analyzed to statistically compare feedback mechanisms between the pitched and unpitched conditions of the phonation paradigm. Furthermore, quantitative and qualitative voice characteristics were analyzed. The PSR was successfully detected in all signals of the experimental tools (EEG, EGG, acoustic voice signal, HSV). A significant increase in the perturbation measures and in the values of the acoustic parameters was observed during the PSR, especially in the audio signal. The auditory feedback mechanism thus seems to control not only voice pitch but also aspects of voice quality.
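
    The 700-cent shift used in this paradigm maps onto frequency via the standard cents formula (1200 cents per octave). A minimal sketch of that conversion (function names and the 120 Hz example are illustrative, not from the study's software):

```python
def cents_to_ratio(cents: float) -> float:
    """Convert a pitch interval in cents to a frequency ratio (1200 cents = 1 octave)."""
    return 2.0 ** (cents / 1200.0)

def shift_f0(f0_hz: float, cents: float) -> float:
    """Fundamental frequency after an upward pitch shift of the given size in cents."""
    return f0_hz * cents_to_ratio(cents)

# A 700-cent shift (a fifth) applied to a 120 Hz voice raises it to ~179.8 Hz:
print(round(shift_f0(120.0, 700.0), 1))  # prints 179.8
```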

  2. Effects of voice harmonic complexity on ERP responses to pitch-shifted auditory feedback.

    Science.gov (United States)

    Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R

    2011-12-01

    The present study investigated the neural mechanisms of voice pitch control for different levels of harmonic complexity in the auditory feedback. Event-related potentials (ERPs) were recorded in response to +200 cents pitch perturbations in the auditory feedback of self-produced natural human vocalizations, complex tone, and pure tone stimuli during active vocalization and passive listening conditions. During active vocal production, ERP amplitudes were largest in response to pitch shifts in the natural voice, moderately large for non-voice complex stimuli, and smallest for the pure tones. During passive listening, however, neural responses were equally large for pitch shifts in voice and non-voice complex stimuli, though still larger than those for pure tones. These findings suggest that pitch change detection is facilitated for spectrally rich sounds such as natural human voice and non-voice complex stimuli compared with pure tones. The vocalization-induced increase in neural responses for voice feedback suggests that sensory processing of naturally produced complex sounds such as the human voice is enhanced by means of motor-driven mechanisms (e.g. efference copies) during vocal production. This enhancement may enable the audio-vocal system to more effectively detect and correct for vocal errors in the feedback of natural human vocalizations to maintain an intended vocal output for speaking. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  3. Sensory Processing: Advances in Understanding Structure and Function of Pitch-Shifted Auditory Feedback in Voice Control

    OpenAIRE

    Charles R Larson; Donald A Robin

    2016-01-01

    The pitch-shift paradigm has become a widely used method for studying the role of voice pitch auditory feedback in voice control. This paradigm introduces small, brief pitch shifts in voice auditory feedback to vocalizing subjects. The perturbations trigger a reflexive mechanism that counteracts the change in pitch. The underlying mechanisms of the vocal responses are thought to reflect a negative feedback control system that is similar to constructs developed to explain other forms of motor ...

  4. Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback

    Directory of Open Access Journals (Sweden)

    Larson Charles R

    2011-06-01

    Background: Motor-driven predictions about expected sensory feedback (efference copies) have been proposed to play an important role in recognizing the sensory consequences of self-produced motor actions. In the auditory system, this effect was suggested to result in suppression of sensory neural responses to self-produced voices that are predicted by the efference copies during vocal production, in comparison with passive listening to playback of the identical self-vocalizations. In the present study, event-related potentials (ERPs) were recorded in response to upward pitch shift stimuli (PSS) with five different magnitudes (0, +50, +100, +200 and +400 cents) at voice onset during active vocal production and passive listening to the playback. Results: Suppression of the N1 component during vocal production was largest for unaltered voice feedback (PSS: 0 cents), became smaller as the magnitude of PSS increased to 200 cents, and was almost completely eliminated in response to 400 cents stimuli. Conclusions: The findings suggest that the brain utilizes the motor predictions (efference copies) to determine the source of incoming stimuli and maximally suppresses the auditory responses to unaltered feedback of self-vocalizations. The reduction of suppression for 50, 100 and 200 cents and its elimination for 400 cents pitch-shifted voice auditory feedback support the idea that motor-driven suppression of voice feedback leads to distinctly different sensory neural processing of self vs. non-self vocalizations. This characteristic may enable the audio-vocal system to more effectively detect and correct for unexpected errors in the feedback of self-produced voice pitch compared with externally generated sounds.

  5. Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback.

    Science.gov (United States)

    Behroozmand, Roozbeh; Larson, Charles R

    2011-06-06

    The motor-driven predictions about expected sensory feedback (efference copies) have been proposed to play an important role in recognition of sensory consequences of self-produced motor actions. In the auditory system, this effect was suggested to result in suppression of sensory neural responses to self-produced voices that are predicted by the efference copies during vocal production in comparison with passive listening to the playback of the identical self-vocalizations. In the present study, event-related potentials (ERPs) were recorded in response to upward pitch shift stimuli (PSS) with five different magnitudes (0, +50, +100, +200 and +400 cents) at voice onset during active vocal production and passive listening to the playback. Results indicated that the suppression of the N1 component during vocal production was largest for unaltered voice feedback (PSS: 0 cents), became smaller as the magnitude of PSS increased to 200 cents, and was almost completely eliminated in response to 400 cents stimuli. Findings of the present study suggest that the brain utilizes the motor predictions (efference copies) to determine the source of incoming stimuli and maximally suppresses the auditory responses to unaltered feedback of self-vocalizations. The reduction of suppression for 50, 100 and 200 cents and its elimination for 400 cents pitch-shifted voice auditory feedback support the idea that motor-driven suppression of voice feedback leads to distinctly different sensory neural processing of self vs. non-self vocalizations. This characteristic may enable the audio-vocal system to more effectively detect and correct for unexpected errors in the feedback of self-produced voice pitch compared with externally-generated sounds.

  6. Sensory Processing: Advances in Understanding Structure and Function of Pitch-Shifted Auditory Feedback in Voice Control

    Directory of Open Access Journals (Sweden)

    Charles R Larson

    2016-02-01

    The pitch-shift paradigm has become a widely used method for studying the role of voice pitch auditory feedback in voice control. This paradigm introduces small, brief pitch shifts into the voice auditory feedback of vocalizing subjects. The perturbations trigger a reflexive mechanism that counteracts the change in pitch. The underlying mechanisms of the vocal responses are thought to reflect a negative feedback control system similar to constructs developed to explain other forms of motor control. Another use of this technique requires subjects to voluntarily change the pitch of their voice when they hear a pitch shift stimulus. Under these conditions, short latency responses are produced that change voice pitch to match that of the stimulus. The pitch-shift technique has been used with magnetoencephalography (MEG) and electroencephalography (EEG) recordings, and has shown that at vocal onset there is normally a suppression of neural activity related to vocalization. However, if a pitch shift is also presented at voice onset, this suppression is cancelled, which has been interpreted to mean that one way a person distinguishes self-vocalization from the vocalization of others is by comparing the intended voice with the actual voice. Studies of the pitch shift reflex in the fMRI environment show that the superior temporal gyrus (STG) plays an important role in controlling voice F0 based on auditory feedback. Additional fMRI studies using effective connectivity modeling show that the left and right STG play critical roles in correcting for an error in voice production. While both are involved in this process, during perturbations a feedback loop develops between the left and right STG: the left-to-right connection becomes stronger, and a new negative right-to-left connection emerges, along with other feedback loops within the cortical network tested.

  7. Auditory feedback of one’s own voice is used for high-level semantic monitoring: the self-comprehension hypothesis

    Directory of Open Access Journals (Sweden)

    Andreas Lind

    2014-03-01

    What would it be like if we said one thing and heard ourselves saying something else? Would we notice something was wrong? Or would we believe we said the thing we heard? Is feedback of our own speech used only to detect errors, or does it also help to specify the meaning of what we say? Comparator models of self-monitoring favor the first alternative and hold that our sense of agency is given by the comparison between intentions and outcomes, while inferential models argue that agency is a more fluent construct, dependent on contextual inferences about the most likely cause of an action. In this paper, we present a theory about the use of feedback during speech. Specifically, we discuss inferential models of speech production that question the standard comparator assumption that the meaning of our utterances is fully specified before articulation. We then argue that auditory feedback provides speakers with a channel for high-level, semantic self-comprehension. In support of this we discuss results obtained with a method we recently developed called Real-time Speech Exchange (RSE). In our first study using RSE (Lind et al., submitted), participants were fitted with headsets and performed a computerized Stroop task. We surreptitiously recorded words they said and, later in the test, played them back at the exact moment the participants uttered something else, while blocking the actual feedback of their voice. Thus, participants said one thing but heard themselves saying something else. The results showed that when timing conditions were ideal, more than two thirds of the manipulations went undetected. Crucially, in a large proportion of the non-detected manipulated trials, the inserted words were experienced as self-produced by the participants. This indicates that our sense of agency for speech has a strong inferential component, and that auditory feedback of our own voice acts as a pathway for semantic monitoring.

  8. Multivariate sensitivity to voice during auditory categorization.

    Science.gov (United States)

    Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-09-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. Copyright © 2015 the American Physiological Society.

  9. Delayed Auditory Feedback and Movement

    Science.gov (United States)

    Pfordresher, Peter Q.; Dalla Bella, Simone

    2011-01-01

    It is well known that timing of rhythm production is disrupted by delayed auditory feedback (DAF), and that disruption varies with delay length. We tested the hypothesis that disruption depends on the state of the movement trajectory at the onset of DAF. Participants tapped isochronous rhythms at a rate specified by a metronome while hearing DAF…

  10. Tactile feedback improves auditory spatial localization

    Directory of Open Access Journals (Sweden)

    Monica Gori

    2014-10-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training with either tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three-sound sequence was spatially closer to the first or the third sound. The tactile-feedback group underwent two audio-tactile feedback sessions of 100 trials, in which each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal-feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly through a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially coherent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space through training with the tactile modality.

  11. Reliance on auditory feedback in children with childhood apraxia of speech.

    Science.gov (United States)

    Iuzzini-Seigel, Jenya; Hogan, Tiffany P; Guarino, Anthony J; Green, Jordan R

    2015-01-01

    Children with childhood apraxia of speech (CAS) have been hypothesized to continuously monitor their speech through auditory feedback to minimize speech errors. We used an auditory masking paradigm to determine the effect of attenuating auditory feedback on speech in 30 children: 9 with CAS, 10 with speech delay, and 11 with typical development. The masking only affected the speech of children with CAS as measured by voice onset time and vowel space area. These findings provide preliminary support for greater reliance on auditory feedback among children with CAS. Readers of this article should be able to (i) describe the motivation for investigating the role of auditory feedback in children with CAS; (ii) report the effects of feedback attenuation on speech production in children with CAS, speech delay, and typical development, and (iii) understand how the current findings may support a feedforward program deficit in children with CAS. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Rhythmic walking interaction with auditory feedback

    DEFF Research Database (Denmark)

    Maculewicz, Justyna; Jylhä, Antti; Serafin, Stefania

    2015-01-01

    We present an interactive auditory display for walking with sinusoidal tones or ecological, physically-based synthetic walking sounds. The feedback is either step-based or rhythmic, with constant or adaptive tempo. In a tempo-following experiment, we investigate different interaction modes...

  13. The impact of auditory feedback on neuronavigation

    NARCIS (Netherlands)

    Willems, PWA; Noordmans, HJ; van Overbeeke, JJ; Viergever, MA; Tulleken, CAF; van der Sprenkel, JWB

    Object. We aimed to develop an auditory feedback system to be used in addition to regular neuronavigation, in an attempt to improve the usefulness of the information offered by neuronavigation systems. Instrumentation. Using a serial connection, instrument co-ordinates determined by a commercially

  14. Auditory hallucinations: A review of the ERC "VOICE" project.

    Science.gov (United States)

    Hugdahl, Kenneth

    2015-06-22

    In this invited review I provide a selective overview of recent research on brain mechanisms and cognitive processes involved in auditory hallucinations. The review is focused on research carried out in the "VOICE" ERC Advanced Grant Project, funded by the European Research Council, but I also review and discuss the literature in general. Auditory hallucinations are suggested to be perceptual phenomena, with a neuronal origin in the speech perception areas in the temporal lobe. The phenomenology of auditory hallucinations is conceptualized along three domains, or dimensions: a perceptual dimension, experienced as someone speaking to the patient; a cognitive dimension, experienced as an inability to inhibit, or ignore, the voices; and an emotional dimension, experienced as the "voices" having a primarily negative, or sinister, emotional tone. I review cognitive, imaging, and neurochemistry data related to these dimensions, primarily the first two. The reviewed data are summarized in a model that sees auditory hallucinations as initiated from temporal lobe neuronal hyper-activation that draws attentional focus inward, and which is not inhibited due to frontal lobe hypo-activation. It is further suggested that this is maintained through abnormal glutamate and possibly gamma-amino-butyric-acid transmitter mediation, which could point towards new pathways for pharmacological treatment. A final section discusses new methods of acquiring quantitative data on the phenomenology and subjective experience of auditory hallucinations that go beyond standard interview questionnaires, by suggesting an iPhone/iPod app.

  15. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    Science.gov (United States)

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback (DAF) on the speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: the Stuttering Group with Auditory Processing Disorders (SGAPD), 10 individuals with central auditory processing disorders, and the Stuttering Group (SG), 10 individuals without central auditory processing disorders. The procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to introduce a 100-millisecond delay in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. DAF caused a statistically significant reduction in SG: in the frequency score of stuttering-like disfluencies in the Stuttering Severity Instrument analysis, in the number of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. DAF did not cause statistically significant effects on the fluency of SGAPD, the individuals who stutter with auditory processing disorders. The effect of DAF on the speech fluency of individuals who stutter thus differed between the two groups: fluency improved only in individuals without auditory processing disorders.

  16. Adaptation to Delayed Speech Feedback Induces Temporal Recalibration between Vocal Sensory and Auditory Modalities

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    2011-10-01

    We ordinarily perceive our voice as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily disrupted by delayed auditory feedback (DAF). DAF causes normal speakers to have difficulty speaking fluently but helps people who stutter to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensation and voice sound under DAF with an adaptation technique. Participants read sentences with specific DAF delay times (0, 30, 75, 120 ms) for three minutes to induce 'Lag Adaptation'. After the adaptation, they judged the simultaneity between the motor sensation and the vocal sound fed back while producing a simple voice sound, not speech. We found that speech production under lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be recalibrated after prolonged exposure to delayed vocal sounds. These findings suggest vocalization is finely tuned by a temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  17. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    BACKGROUND: We ordinarily perceive our voice as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily disrupted by delayed auditory feedback (DAF). DAF causes normal speakers to have difficulty speaking fluently but helps people who stutter to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensation and voice sound under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific DAF delay times (0, 66, 133 ms) for three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between the motor sensation and the vocal sound fed back. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that temporal recalibration in vocalization can be affected by the average delay time in the adaptation phase. CONCLUSIONS: These findings suggest vocalization is finely tuned by a temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  18. [Design of standard voice sample text for subjective auditory perceptual evaluation of voice disorders].

    Science.gov (United States)

    Li, Jin-rang; Sun, Yan-yan; Xu, Wen

    2010-09-01

    To design a speech voice sample text containing all the phonemes in Mandarin for subjective auditory perceptual evaluation of voice disorders. The design principles were: the short text should include the 21 initials and 39 finals, so as to cover all the phonemes in Mandarin, and it should also be meaningful. A short text was produced. It had 155 Chinese words and included 21 initials and 38 finals (the final ê was not included because it is rarely used in Mandarin). The text also covered 17 light tones and one "Erhua". The constituent ratios of the initials and finals presented in this short text were statistically similar to those in Mandarin according to the method of similarity of the sample and population (r = 0.742, P < 0.05), whereas the constituent ratios of the tones in this short text were statistically not similar to those in Mandarin (r = 0.731, P > 0.05). A speech voice sample text with all the phonemes in Mandarin was thus produced, with constituent ratios of initials and finals similar to those in Mandarin. Its value for subjective auditory perceptual evaluation of voice disorders needs further study.

  19. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre

  20. Different auditory feedback control for echolocation and communication in horseshoe bats.

    Directory of Open Access Journals (Sweden)

    Ying Liu

    Auditory feedback from the animal's own voice is essential during bat echolocation: to optimize signal detection, bats continuously adjust various call parameters in response to changing echo signals. Auditory feedback also seems necessary for controlling many bat communication calls, although it remains unclear how auditory feedback control differs between echolocation and communication. We tackled this question by analyzing echolocation and communication in greater horseshoe bats, whose echolocation pulses are dominated by a constant frequency component that matches the frequency range they hear best. To keep echoes within this "auditory fovea", horseshoe bats constantly adjust their echolocation call frequency depending on the frequency of the returning echo signal. This Doppler-shift compensation (DSC) behavior represents one of the most precise forms of sensory-motor feedback known. We examined the variability of echolocation pulses emitted at rest (resting frequencies, RFs) and of one type of communication signal that resembles an echolocation pulse but is much shorter (short constant frequency communication calls, SCFs) and is produced only during social interactions. We found that while RFs varied from day to day, corroborating earlier studies in other constant frequency bats, SCF frequencies remained unchanged. In addition, RFs overlapped for some bats, whereas SCF frequencies were always distinctly different. This indicates that auditory feedback during echolocation changed with varying RFs but remained constant, or may have been absent, during emission of SCF calls for communication. This fundamentally different feedback mechanism for echolocation and communication may have enabled these bats to use SCF calls for individual recognition, whereas they adjust RF calls to accommodate the daily shifts of their auditory fovea.
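
    Doppler-shift compensation can be sketched as a simple control rule: the bat lowers its call frequency so that the upward Doppler-shifted echo returns at its auditory fovea. A minimal sketch using the two-way Doppler relation for a bat flying toward a reflector (the 83 kHz fovea value is typical for greater horseshoe bats; the 5 m/s flight speed and function names are illustrative assumptions):

```python
SPEED_OF_SOUND = 343.0  # m/s in air

def echo_frequency(call_hz, bat_speed_mps, c=SPEED_OF_SOUND):
    """Frequency of the returning echo for a bat flying toward a reflector
    (two-way Doppler shift: the bat is both moving source and moving receiver)."""
    return call_hz * (c + bat_speed_mps) / (c - bat_speed_mps)

def compensated_call(fovea_hz, bat_speed_mps, c=SPEED_OF_SOUND):
    """Call frequency the bat should emit so that the echo lands on its fovea."""
    return fovea_hz * (c - bat_speed_mps) / (c + bat_speed_mps)

# Flying at 5 m/s with an 83 kHz fovea, the bat lowers its call frequency:
f_call = compensated_call(83000.0, 5.0)
print(round(f_call))  # prints 80615
```

By construction, feeding the compensated call back through `echo_frequency` returns exactly the fovea frequency, which is the closed-loop behavior DSC describes.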

  1. Auditory feedback perturbation in children with developmental speech disorders

    NARCIS (Netherlands)

    Terband, H.R.; van Brenk, F.J.; van Doornik-van der Zee, J.C.

    2014-01-01

    Background/purpose: Several studies indicate a close relation between auditory and speech motor functions in children with speech sound disorders (SSD). The aim of this study was to investigate the ability to compensate and adapt for perturbed auditory feedback in children with SSD compared to

  2. Age Differences in Voice Evaluation: From Auditory-Perceptual Evaluation to Social Interactions

    Science.gov (United States)

    Lortie, Catherine L.; Deschamps, Isabelle; Guitton, Matthieu J.; Tremblay, Pascale

    2018-01-01

    Purpose: The factors that influence the evaluation of voice in adulthood, as well as the consequences of such evaluation on social interactions, are not well understood. Here, we examined the effect of listeners' age and the effect of talker age, sex, and smoking status on the auditory-perceptual evaluation of voice, voice-related psychosocial…

  3. Feedback Valence Affects Auditory Perceptual Learning Independently of Feedback Probability

    Science.gov (United States)

    Amitay, Sygal; Moore, David R.; Molloy, Katharine; Halliday, Lorna F.

    2015-01-01

    Previous studies have suggested that negative feedback is more effective in driving learning than positive feedback. We investigated the effect on learning of providing varying amounts of negative and positive feedback while listeners attempted to discriminate between three identical tones; an impossible task that nevertheless produces robust learning. Four feedback conditions were compared during training: 90% positive feedback or 10% negative feedback informed the participants that they were doing equally well, while 10% positive or 90% negative feedback informed them they were doing equally badly. In all conditions the feedback was random in relation to the listeners’ responses (because the task was to discriminate three identical tones), yet both the valence (negative vs. positive) and the probability of feedback (10% vs. 90%) affected learning. Feedback that informed listeners they were doing badly resulted in better post-training performance than feedback that informed them they were doing well, independent of valence. In addition, positive feedback during training resulted in better post-training performance than negative feedback, but only positive feedback indicating listeners were doing badly on the task resulted in learning. As we have previously speculated, feedback that better reflected the difficulty of the task was more effective in driving learning than feedback that suggested performance was better than it should have been given perceived task difficulty. But contrary to expectations, positive feedback was more effective than negative feedback in driving learning. Feedback thus had two separable effects on learning: feedback valence affected motivation on a subjectively difficult task, and learning occurred only when feedback probability reflected the subjective difficulty. To optimize learning, training programs need to take into consideration both feedback valence and probability. PMID:25946173
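
The training design above delivers feedback at a fixed probability, independent of the listener's response. A hypothetical simulation of that schedule (trial count, seed, and condition labels are invented for illustration):

```python
import random

def feedback_schedule(n_trials, p_feedback, valence, seed=0):
    """Generate response-independent feedback as in the study design:
    on each trial, feedback is shown with probability p_feedback and
    always carries the same valence ('positive' or 'negative')."""
    rng = random.Random(seed)
    return [valence if rng.random() < p_feedback else None
            for _ in range(n_trials)]

# e.g. the "90% positive" condition:
schedule = feedback_schedule(1000, 0.9, "positive")
shown = sum(msg is not None for msg in schedule)  # roughly 900 of 1000
```

Because the schedule never consults the listener's response, any learning effect must come from the valence and probability of the messages themselves, which is exactly the manipulation the study exploits.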

  4. The written voice: implicit memory effects of voice characteristics following silent reading and auditory presentation.

    Science.gov (United States)

    Abramson, Marianne

    2007-12-01

    After being familiarized with two voices, either implicit (auditory lexical decision) or explicit memory (auditory recognition) for words from silently read sentences was assessed among 32 men and 32 women volunteers. In the silently read sentences, the sex of speaker was implied in the initial words, e.g., "He said, ..." or "She said...". Tone in question versus statement was also manipulated by appropriate punctuation. Auditory lexical decision priming was found for sex- and tone-consistent items following silent reading, but only up to 5 min. after silent reading. In a second study, similar lexical decision priming was found following listening to the sentences, although these effects remained reliable after a 2-day delay. The effect sizes for lexical decision priming showed that tone-consistency and sex-consistency were strong following both silent reading and listening 5 min. after studying. These results suggest that readers create episodic traces of text from auditory images of silently read sentences as they do during listening.

  5. Auditory comprehension: from the voice up to the single word level

    OpenAIRE

    Jones, Anna Barbara

    2016-01-01

    Auditory comprehension, the ability to understand spoken language, consists of a number of different auditory processing skills. In the five studies presented in this thesis I investigated both intact and impaired auditory comprehension at different levels: voice versus phoneme perception, as well as single word auditory comprehension in terms of phonemic and semantic content. In the first study, using sounds from different continua of ‘male’-/pæ/ to ‘female’-/tæ/ and ‘male’...

  6. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. Participants continually read texts with specific DAF times in order to adapt to the delay, a well-established method of inducing temporal recalibration. They then judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment; that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.
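
Delayed auditory feedback itself is simple to implement: the microphone signal is played back shifted by a fixed number of samples. A minimal sketch (sampling rate and buffer contents are illustrative; the study's most effective delay was 200 ms):

```python
def delayed_feedback(signal, delay_ms, fs):
    """Return the signal delayed by delay_ms milliseconds: silence is
    prepended and the tail trimmed so output length matches the input."""
    delay_samples = int(round(delay_ms / 1000.0 * fs))
    padded = [0.0] * delay_samples + list(signal)
    return padded[:len(signal)]

fs = 44_100                    # Hz, assumed sampling rate
voice = [0.1, 0.2, 0.3, 0.4]   # stand-in for a recorded audio buffer
out = delayed_feedback(voice, 200, fs)  # 200 ms -> 8820 samples of silence
```

At 44.1 kHz a 200 ms delay corresponds to 8820 samples, so this short toy buffer is entirely replaced by silence; in a real-time system the delay line would of course be much shorter than the running stream.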

  7. Formant compensation for auditory feedback with English vowels

    DEFF Research Database (Denmark)

    Mitsuya, Takashi; MacDonald, Ewen N; Munhall, Kevin G

    2015-01-01

    Past studies have shown that speakers spontaneously adjust their speech acoustics in response to their auditory feedback perturbed in real time. In the case of formant perturbation, the majority of studies have examined speaker's compensatory production using the English vowel /ɛ/ as in the word...... "head." Consistent behavioral observations have been reported, and there is lively discussion as to how the production system integrates auditory versus somatosensory feedback to control vowel production. However, different vowels have different oral sensation and proprioceptive information due...... to differences in the degree of lingual contact or jaw openness. This may in turn influence the ways in which speakers compensate for auditory feedback. The aim of the current study was to examine speakers' compensatory behavior with six English monophthongs. Specifically, the current study tested to see...

  8. Effect- and Performance-Based Auditory Feedback on Interpersonal Coordination

    Directory of Open Access Journals (Sweden)

    Tong-Hun Hwang

    2018-03-01

When two individuals interact in a collaborative task, such as carrying a sofa or a table, spatiotemporal coordination of individual motor behavior will usually emerge. In many cases, interpersonal coordination can arise independently of verbal communication, based on the observation of the partner's movements and/or the object's movements. In this study, we investigate how social coupling between two individuals can emerge in a collaborative task under different modes of perceptual information. A visual reference condition was compared with three different conditions with new types of additional auditory feedback provided in real time: effect-based auditory feedback, performance-based auditory feedback, and combined effect/performance-based auditory feedback. We have developed a new paradigm in which the actions of both participants continuously result in a seamlessly merged effect on an object simulated by a tablet computer application. Here, participants had to temporally synchronize their movements with a 90° phase difference and precisely adjust the finger dynamics in order to keep the object (a ball) accurately rotating on a given circular trajectory on the tablet. Results demonstrate that interpersonal coordination in a joint task can be altered by different kinds of additional auditory information in various ways.
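
The 90° target phase difference between the two players can be estimated, for example, from the lag that maximizes the cross-correlation between their movement signals. A stdlib-only sketch with synthetic sinusoids standing in for recorded finger trajectories (all signal parameters are invented; this is not the study's analysis pipeline):

```python
import math

def relative_phase_deg(s1, s2, period_samples):
    """Estimate the phase (degrees) by which s2 lags s1, taken from the
    cross-correlation lag searched over one movement period."""
    n = len(s1) - period_samples
    best_lag = max(
        range(period_samples),
        key=lambda lag: sum(s1[i] * s2[i + lag] for i in range(n)),
    )
    return 360.0 * best_lag / period_samples

fs, f = 100, 1.0   # samples per second, movement frequency (assumed)
t = [i / fs for i in range(400)]
s1 = [math.sin(2 * math.pi * f * ti) for ti in t]
s2 = [math.sin(2 * math.pi * f * ti - math.pi / 2) for ti in t]  # 90° lag

phase = relative_phase_deg(s1, s2, period_samples=fs)
```

For these noiseless test signals the estimator recovers the quarter-period lag exactly; with real movement data one would typically average over windows or use an analytic-signal (Hilbert) phase instead.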

  9. Task-irrelevant auditory feedback facilitates motor performance in musicians

    Directory of Open Access Journals (Sweden)

    Virginia eConde

    2012-05-01

An efficient and fast auditory–motor network is a basic resource for trained musicians due to the importance of motor anticipation of sound production in musical performance. When playing an instrument, motor performance always goes along with the production of sounds, and the integration between both modalities plays an essential role in the course of musical training. The aim of the present study was to investigate the role of task-irrelevant auditory feedback during motor performance in musicians using a serial reaction time task (SRTT). Our hypothesis was that musicians, owing to their extensive auditory–motor practice routine during musical training, show superior performance and learning when receiving auditory feedback during the SRTT relative to musicians performing the SRTT without any auditory feedback. Here we provide novel evidence that task-irrelevant auditory feedback is capable of reinforcing SRTT performance but not learning, a finding that might provide further insight into auditory–motor integration in musicians at the behavioral level.

  10. Psychological Therapies for Auditory Hallucinations (Voices): Current Status and Key Directions for Future Research

    NARCIS (Netherlands)

    Thomas, N.; Hayward, M.; Peters, E; van der Gaag, M.; Bentall, R.P.; Jenner, J.; Strauss, C.; Sommer, I.E.; Johns, L.C.; Varese, F.; Gracia-Montes, J.M.; Waters, F.; Dodgson, G.; McCarthy-Jones, S.

    2014-01-01

    This report from the International Consortium on Hallucinations Research considers the current status and future directions in research on psychological therapies targeting auditory hallucinations (hearing voices). Therapy approaches have evolved from behavioral and coping-focused interventions,

  11. [Distinguishing the voice of self from others: the self-monitoring hypothesis of auditory hallucination].

    Science.gov (United States)

    Asai, Tomohisa; Tanno, Yoshihiko

    2010-08-01

Auditory hallucinations (AH), a psychopathological phenomenon where a person hears non-existent voices, commonly occur in schizophrenia. Recent cognitive and neuroscience studies suggest that AH may be the misattribution of one's own inner speech. Self-monitoring through neural feedback mechanisms allows individuals to distinguish between their own and others' actions, including speech. AH may be the result of an individual's inability to discriminate between their own speech and that of others. The present paper tries to integrate the three approaches (behavioral, brain, and model-based) proposed to explain the self-monitoring hypothesis of AH. In addition, we investigate the lateralization of self-other representation in the brain, as suggested by recent studies, and discuss future research directions.

  12. Auditory Masking Effects on Speech Fluency in Apraxia of Speech and Aphasia: Comparison to Altered Auditory Feedback

    Science.gov (United States)

    Jacks, Adam; Haley, Katarina L.

    2015-01-01

    Purpose: To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not…

  13. Stuttering Inhibition via Altered Auditory Feedback during Scripted Telephone Conversations

    Science.gov (United States)

    Hudock, Daniel; Kalinowski, Joseph

    2014-01-01

    Background: Overt stuttering is inhibited by approximately 80% when people who stutter read aloud as they hear an altered form of their speech feedback to them. However, levels of stuttering inhibition vary from 60% to 100% depending on speaking situation and signal presentation. For example, binaural presentations of delayed auditory feedback…

  14. Altered Sensory Feedbacks in Pianist's Dystonia: the altered auditory feedback paradigm and the glove effect

    Directory of Open Access Journals (Sweden)

    Felicia Pei-Hsin Cheng

    2013-12-01

Background: This study investigates the effect of altered auditory feedback (AAF) in musician's dystonia (MD) and discusses whether AAF can be considered a sensory trick in MD. Furthermore, the effect of AAF is compared with altered tactile feedback, which can serve as a sensory trick in several other forms of focal dystonia. Methods: The method is based on scale analysis (Jabusch et al., 2004). Experiment 1 employed a synchronization paradigm: 12 MD patients and 25 healthy pianists had to repeatedly play C-major scales in synchrony with a metronome on a MIDI piano under three auditory feedback conditions: (1) normal feedback; (2) no feedback; (3) constantly delayed feedback. Experiment 2 employed a synchronization-continuation paradigm: 12 MD patients and 12 healthy pianists had to repeatedly play C-major scales in two phases: first in synchrony with a metronome, then continuing the established tempo without the metronome. There were four experimental conditions: three with the same altered auditory feedback as in Experiment 1, and one involving altered tactile sensory input. The coefficient of variation of the inter-onset intervals of the key depressions was calculated to evaluate fine motor control. Results: In both experiments, the healthy controls and the patients behaved very similarly: there was no difference in the regularity of playing between the two groups under any condition, and neither AAF nor altered tactile feedback had a beneficial effect on the patients' fine motor control. Conclusions: The results of the two experiments suggest that, in the context of our experimental designs, AAF and altered tactile feedback play a minor role in motor coordination in patients with musician's dystonia. We propose that altered auditory and tactile feedback do not serve as effective sensory tricks and may not temporarily reduce the symptoms of patients suffering from MD in this experimental context.
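
The study's dependent measure, the coefficient of variation of inter-onset intervals, can be computed directly from key-depression timestamps. A short sketch (the onset times below are invented, not data from the study):

```python
import statistics

def iois(onsets):
    """Inter-onset intervals from a sorted list of key-press times."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

def cv(values):
    """Coefficient of variation: sample SD divided by the mean."""
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical onset times (seconds) for a scale played to a metronome:
onsets = [0.000, 0.251, 0.498, 0.752, 1.001, 1.249]
timing_cv = cv(iois(onsets))   # low values indicate regular playing
```

Because the CV normalizes the SD by the mean interval, it lets playing regularity be compared across tempi, which matters in a synchronization-continuation design where the tempo may drift once the metronome stops.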

  15. The self or the voice? Relative contributions of self-esteem and voice appraisal in persistent auditory hallucinations.

    Science.gov (United States)

    Fannon, Dominic; Hayward, Peter; Thompson, Neil; Green, Nicola; Surguladze, Simon; Wykes, Til

    2009-07-01

    Persistent auditory hallucinations are common, disabling and difficult to treat. Cognitive behavioural therapy is recommended in their treatment though there is limited empirical evidence of the role of cognitive factors in the formation and persistence of voices. Low self-esteem is thought to play a causal and maintaining role in a range of clinical disorders, particularly depression, which is prevalent and disabling in schizophrenia. It was hypothesized that low self-esteem is prominent in, and contributes to, depression in voice hearers. Beliefs about persistent auditory hallucinations were investigated in 82 patients using the Beliefs About Voices Questionnaire--revised in a cross-sectional design. Self-esteem and depression were assessed using standardized measures. Depression and low self-esteem were prominent as were beliefs about the omnipotence and malevolence of auditory hallucinations. Beliefs about the uncontrollability and dominance of auditory hallucinations and low self-esteem were significantly correlated with depression. Low self-esteem did not mediate the effect of beliefs about auditory hallucinations--both acted independently to contribute to depression in this sample of patients with schizophrenia and persistent auditory hallucinations. Low self-esteem is of fundamental importance to the understanding of affective disturbance in voice hearers. Therapeutic interventions need to address both the appraisal of self and hallucinations in schizophrenia. Measures which ameliorate low self-esteem can be expected to improve depressed mood in this patient group. Further elucidation of the mechanisms involved can strengthen existing models of positive psychotic symptoms and provide targets for more effective treatments.

  16. Hear today, not gone tomorrow? An exploratory longitudinal study of auditory verbal hallucinations (hearing voices).

    Science.gov (United States)

    Hartigan, Nicky; McCarthy-Jones, Simon; Hayward, Mark

    2014-01-01

    Despite an increasing volume of cross-sectional work on auditory verbal hallucinations (hearing voices), there remains a paucity of work on how the experience may change over time. The first aim of this study was to attempt replication of a previous finding that beliefs about voices are enduring and stable, irrespective of changes in the severity of voices, and do not change without a specific intervention. The second aim was to examine whether voice-hearers' interrelations with their voices change over time, without a specific intervention. A 12-month longitudinal examination of these aspects of voices was undertaken with hearers in routine clinical treatment (N = 18). We found beliefs about voices' omnipotence and malevolence were stable over a 12-month period, as were styles of interrelating between voice and hearer, despite trends towards reductions in voice-related distress and disruption. However, there was a trend for beliefs about the benevolence of voices to decrease over time. Styles of interrelating between voice and hearer appear relatively stable and enduring, as are beliefs about the voices' malevolent intent and power. Although there was some evidence that beliefs about benevolence may reduce over time, the reasons for this were not clear. Our exploratory study was limited by only being powered to detect large effect sizes. Implications for clinical practice and future research are discussed.

  17. Perceptual-Auditory and Acoustical Analysis of the Voices of Transgender Women.

    Science.gov (United States)

    Schwarz, Karine; Fontanari, Anna Martha Vaitses; Costa, Angelo Brandelli; Soll, Bianca Machado Borba; da Silva, Dhiordan Cardoso; de Sá Villas-Bôas, Anna Paula; Cielo, Carla Aparecida; Bastilha, Gabriele Rodrigues; Ribeiro, Vanessa Veis; Dorfman, Maria Elza Kazumi Yamaguti; Lobato, Maria Inês Rodrigues

    2017-09-28

Voice is an important gender marker in the transition process as a transgender individual accepts a new gender identity. The objectives of this study were to describe and relate aspects of a perceptual-auditory analysis and the fundamental frequency (F0) of male-to-female (MtF) transsexual individuals. A case-control study was carried out with individuals aged 19-52 years who attended the Gender Identity Program of the Hospital de Clínicas of Porto Alegre. Vocal recordings from the MtF transgender and cisgender individuals (vowel /a:/ and six phrases of the Consensus Auditory-Perceptual Evaluation of Voice [CAPE-V]) were edited and randomly coded before storage in a Dropbox folder. The voices (vowel /a:/) were analyzed by consensus on the same day by two speech therapist judges, each with more than 10 years of experience in the voice area, using the GRBASI perceptual-auditory vocal evaluation scale. Acoustic analysis of the voices was performed using the advanced Multi-Dimensional Voice Program software. The resonance focus and the degrees of masculinity and femininity for each voice recording were determined by the same judges by listening to the CAPE-V phrases. There were significant differences between the groups: a greater frequency of subjects with F0 between 80 and 150 Hz (P = 0.003) and a greater frequency of hypernasal resonant focus (P < 0.001) in the MtF cases, and a greater frequency of subjects with absence of roughness (P = 0.031) in the control group. The MtF group of individuals showed altered vertical resonant focus, more masculine voices, and lower fundamental frequencies. The control group showed a significant absence of roughness. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
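
The fundamental-frequency measure that separates the groups (80-150 Hz vs. higher) can be estimated with a simple autocorrelation pitch tracker. A stdlib-only sketch on a synthetic 120 Hz tone; the sampling rate and search range are assumptions for illustration, and this is not the algorithm used by the Multi-Dimensional Voice Program:

```python
import math

def estimate_f0(signal, fs, fmin=60.0, fmax=300.0):
    """Autocorrelation pitch estimate: the lag in [fs/fmax, fs/fmin]
    with the largest autocorrelation is taken as the period."""
    lo, hi = int(fs / fmax), int(fs / fmin)
    n = len(signal) - hi
    best_lag = max(
        range(lo, hi),
        key=lambda lag: sum(signal[i] * signal[i + lag] for i in range(n)),
    )
    return fs / best_lag

fs = 8000
tone = [math.sin(2 * math.pi * 120.0 * i / fs) for i in range(2048)]
f0 = estimate_f0(tone, fs)                 # close to 120 Hz
in_masculine_range = 80.0 <= f0 <= 150.0   # the study's 80-150 Hz band
```

Real voice analysis adds windowing, normalization, and voicing detection on top of this core idea, but the lag-of-maximum-autocorrelation step is the heart of most time-domain F0 estimators.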

  18. Auditory feedback blocks memory benefits of cueing during sleep.

    Science.gov (United States)

    Schreiner, Thomas; Lehmann, Mick; Rasch, Björn

    2015-10-28

    It is now widely accepted that re-exposure to memory cues during sleep reactivates memories and can improve later recall. However, the underlying mechanisms are still unknown. As reactivation during wakefulness renders memories sensitive to updating, it remains an intriguing question whether reactivated memories during sleep also become susceptible to incorporating further information after the cue. Here we show that the memory benefits of cueing Dutch vocabulary during sleep are in fact completely blocked when memory cues are directly followed by either correct or conflicting auditory feedback, or a pure tone. In addition, immediate (but not delayed) auditory stimulation abolishes the characteristic increases in oscillatory theta and spindle activity typically associated with successful reactivation during sleep as revealed by high-density electroencephalography. We conclude that plastic processes associated with theta and spindle oscillations occurring during a sensitive period immediately after the cue are necessary for stabilizing reactivated memory traces during sleep.

  19. Auditory and visual modulation of temporal lobe neurons in voice-sensitive and association cortices.

    Science.gov (United States)

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K; Petkov, Christopher I

    2014-02-12

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies.

  1. The auditory dorsal stream plays a crucial role in projecting hallucinated voices into external space

    NARCIS (Netherlands)

    Looijestijn, Jasper; Diederen, Kelly M. J.; Goekoop, Rutger; Sommer, Iris E. C.; Daalman, Kirstin; Kahn, Rene S.; Hoek, Hans W.; Blom, Jan Dirk

    Introduction: Verbal auditory hallucinations (VAHs) are experienced as spoken voices which seem to originate in the extracorporeal environment or inside the head. Animal and human research has identified a 'where' pathway for sound processing comprising the planum temporale, the middle frontal gyrus

  2. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p < 0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
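
The group difference reported here is a difference in compensation magnitude: how far produced F1 moves against the feedback perturbation, relative to the perturbation size. A toy calculation with invented frequencies (not data from the study) shows the convention:

```python
def compensation_percent(baseline_f1, produced_f1, perturbation_hz):
    """Percentage of an F1 feedback perturbation that the speaker
    compensates for; positive means production moved opposite to
    the direction of the perturbation."""
    return -(produced_f1 - baseline_f1) / perturbation_hz * 100.0

# Hypothetical numbers: feedback F1 raised by 100 Hz, so speakers
# lower their produced F1 to oppose it.
control = compensation_percent(580.0, 560.0, 100.0)  # 20% compensation
pws = compensation_percent(580.0, 569.4, 100.0)      # ~47% smaller response
```

Note that even typical speakers compensate only partially (well below 100% of the perturbation); the finding in PWS is a further reduction of that already-partial response, not its absence.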

  3. Psychological Therapies for Auditory Hallucinations (Voices): Current Status and Key Directions for Future Research

    Science.gov (United States)

    Thomas, Neil; Hayward, Mark; Peters, Emmanuelle; van der Gaag, Mark; Bentall, Richard P.; Jenner, Jack; Strauss, Clara; Sommer, Iris E.; Johns, Louise C.; Varese, Filippo; García-Montes, José Manuel; Waters, Flavie; Dodgson, Guy; McCarthy-Jones, Simon

    2014-01-01

    This report from the International Consortium on Hallucinations Research considers the current status and future directions in research on psychological therapies targeting auditory hallucinations (hearing voices). Therapy approaches have evolved from behavioral and coping-focused interventions, through formulation-driven interventions using methods from cognitive therapy, to a number of contemporary developments. Recent developments include the application of acceptance- and mindfulness-based approaches, and consolidation of methods for working with connections between voices and views of self, others, relationships and personal history. In this article, we discuss the development of therapies for voices and review the empirical findings. This review shows that psychological therapies are broadly effective for people with positive symptoms, but that more research is required to understand the specific application of therapies to voices. Six key research directions are identified: (1) moving beyond the focus on overall efficacy to understand specific therapeutic processes targeting voices, (2) better targeting psychological processes associated with voices such as trauma, cognitive mechanisms, and personal recovery, (3) more focused measurement of the intended outcomes of therapy, (4) understanding individual differences among voice hearers, (5) extending beyond a focus on voices and schizophrenia into other populations and sensory modalities, and (6) shaping interventions for service implementation. PMID:24936081

  4. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    Science.gov (United States)

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of

  5. Effects of Delayed Auditory Feedback in Stuttering Patterns

    Directory of Open Access Journals (Sweden)

    Janeth Hernández Jaramillo

    2014-05-01

    Full Text Available The present study, a single-subject design, analyzes the patterns of stuttering in a speech corpus across various oral language tasks, with and without Delayed Auditory Feedback (DAF), in order to establish the effect of DAF on the frequency of occurrence and type of dysrhythmia. The study concludes that DAF has a positive effect, reducing fluency errors by 25 %, with variation depending on the type of oral production task. This in turn suggests that the remaining 75 % of disfluencies are linked to higher-level encoding failures and are not resolved or compensated by DAF. The authors discuss the implications of these findings for therapeutic intervention in stuttering.

  6. Effect of auditory feedback differs according to side of hemiparesis: a comparative pilot study

    OpenAIRE

    Robertson, Johanna VG; Hoellinger, Thomas; Lindberg, Påvel; Bensmail, Djamel; Hanneton, Sylvain; Roby-Brami, Agnès

    2009-01-01

    Abstract Background Following stroke, patients frequently demonstrate loss of motor control and function and altered kinematic parameters of reaching movements. Feedback is an essential component of rehabilitation and auditory feedback of kinematic parameters may be a useful tool for rehabilitation of reaching movements at the impairment level. The aim of this study was to investigate the effect of 2 types of auditory feedback on the kinematics of reaching movements in hemiparetic stroke patients...

  7. Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models.

    Science.gov (United States)

    Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M

    2014-02-01

    Natural auditory environments contain multiple simultaneously-sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias in processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. Results of new modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve firing patterns might account for the higher-voice superiority effect. Simulations show that both place and temporal AN coding schemes well-predict a high-voice superiority across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the higher-voice superiority observed in human

  8. Auditory reafferences: The influence of real-time feedback on movement control

    Directory of Open Access Journals (Sweden)

    Christian Kennel

    2015-01-01

    Full Text Available Auditory reafferences are real-time auditory products created by a person’s own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with nonartificial auditory cues. Our results support the existing theoretical understanding of action–perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.

  9. Auditory reafferences: the influence of real-time feedback on movement control.

    Science.gov (United States)

    Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action-perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.
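The delayed-auditory-feedback manipulation used in studies like this one can be sketched as a simple ring-buffer delay line. This is an illustrative sketch only; the 200 ms delay and 1 kHz sample rate below are arbitrary assumptions, not parameters from the study.

```python
from collections import deque

def make_delay_line(delay_ms, sample_rate=44100):
    """Return a function that delays a mono sample stream by delay_ms.

    Samples are held in a FIFO ring buffer and released after the
    chosen delay -- the basic mechanism behind delayed auditory
    feedback (DAF). Parameters here are illustrative assumptions.
    """
    n = int(sample_rate * delay_ms / 1000)
    buf = deque([0.0] * n, maxlen=n)

    def process(sample):
        delayed = buf[0]    # oldest sample leaves the buffer
        buf.append(sample)  # newest sample enters, evicting the oldest
        return delayed

    return process

delay = make_delay_line(200, sample_rate=1000)  # 200-sample delay at 1 kHz
out = [delay(s) for s in range(400)]
# the first 200 outputs are the silent initial buffer contents
```

In a real-time system the same buffer would sit between microphone input and headphone output; here a ramp of integers stands in for the audio stream.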

  10. Auditory feedback and memory for music performance: sound evidence for an encoding effect.

    Science.gov (United States)

    Finney, Steven A; Palmer, Caroline

    2003-01-01

    Research on the effects of context and task on learning and memory has included approaches that emphasize processes during learning (e.g., Craik & Tulving, 1975) and approaches that emphasize a match of conditions during learning with conditions during a later test of memory (e.g., Morris, Bransford, & Franks, 1977; Proteau, 1992; Tulving & Thomson, 1973). We investigated the effects of auditory context on learning and retrieval in three experiments on memorized music performance (a form of serial recall). Auditory feedback (presence or absence) was manipulated while pianists learned musical pieces from notation and when they later played the pieces from memory. Auditory feedback during learning significantly improved later recall. However, auditory feedback at test did not significantly affect recall, nor was there an interaction between conditions at learning and test. Auditory feedback in music performance appears to be a contextual factor that affects learning but is relatively independent of retrieval conditions.

  11. Effects of consensus training on the reliability of auditory perceptual ratings of voice quality.

    Science.gov (United States)

    Iwarsson, Jenny; Reinholt Petersen, Niels

    2012-05-01

    This study investigates the effect of consensus training of listeners on intrarater and interrater reliability and agreement of perceptual voice analysis. The use of such training, including a reference voice sample, could be assumed to make the internal standards held in memory common and more robust, which is of great importance to reduce the variability of auditory perceptual ratings. A prospective design with testing before and after training. Thirteen students of audiologopedics served as listening subjects. The ratings were made using a multidimensional protocol with four-point equal-appearing interval scales. The stimuli consisted of text reading by authentic dysphonic patients. The consensus training for each perceptual voice parameter included (1) definition, (2) underlying physiology, (3) presentation of carefully selected sound examples representing the parameter in three different grades followed by group discussions of perceived characteristics, and (4) practical exercises including imitation to make use of the listeners' proprioception. Intrarater reliability and agreement showed a marked improvement for intermittent aphonia but not for vocal fry. Interrater reliability was high for most parameters before training with a slight increase after training. Interrater agreement showed marked increases for most voice quality parameters as a result of the training. The results support the recommendation of specific consensus training, including use of a reference voice sample material, to calibrate, equalize, and stabilize the internal standards held in memory by the listeners. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  12. Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction.

    Science.gov (United States)

    Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard

    2018-01-01

    The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.
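Parameter-mapping sonification of the kind mentioned in this record reduces to a mapping from a normalized control value to a tone frequency. A minimal sketch, assuming an illustrative 220-880 Hz range (not the range used in the study):

```python
def param_to_freq(value, lo=220.0, hi=880.0):
    """Map a normalized parameter (0..1) to a frequency in Hz.

    The mapping is exponential so that equal parameter steps are
    heard as equal pitch steps. The 220-880 Hz range is an
    illustrative assumption.
    """
    value = min(max(value, 0.0), 1.0)  # clamp out-of-range input
    return lo * (hi / lo) ** value

param_to_freq(0.0)  # 220.0
param_to_freq(1.0)  # 880.0
param_to_freq(0.5)  # 440.0 (geometric midpoint, one octave above lo)
```

An earcon, by contrast, would be a short fixed motif triggered on discrete events (selection confirmed, action rejected) rather than a continuous mapping like this one.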

  13. Effect of task-related continuous auditory feedback during learning of tracking motion exercises

    Directory of Open Access Journals (Sweden)

    Rosati Giulio

    2012-10-01

    Full Text Available Abstract Background This paper presents the results of a set of experiments in which we used continuous auditory feedback to augment motor training exercises. This feedback modality is mostly underexploited in current robotic rehabilitation systems, which usually implement only very basic auditory interfaces. Our hypothesis is that properly designed continuous auditory feedback could be used to represent temporal and spatial information that could in turn, improve performance and motor learning. Methods We implemented three different experiments on healthy subjects, who were asked to track a target on a screen by moving an input device (controller with their hand. Different visual and auditory feedback modalities were envisaged. The first experiment investigated whether continuous task-related auditory feedback can help improve performance to a greater extent than error-related audio feedback, or visual feedback alone. In the second experiment we used sensory substitution to compare different types of auditory feedback with equivalent visual feedback, in order to find out whether mapping the same information on a different sensory channel (the visual channel yielded comparable effects with those gained in the first experiment. The final experiment applied a continuously changing visuomotor transformation between the controller and the screen and mapped kinematic information, computed in either coordinate system (controller or video, to the audio channel, in order to investigate which information was more relevant to the user. Results Task-related audio feedback significantly improved performance with respect to visual feedback alone, whilst error-related feedback did not. Secondly, performance in audio tasks was significantly better with respect to the equivalent sensory-substituted visual tasks. Finally, with respect to visual feedback alone, video-task-related sound feedback decreased the tracking error during the learning of a novel

  14. Multivoxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    DEFF Research Database (Denmark)

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.

    2013-01-01

    The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection, and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations...... within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while...... human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during...

  15. Bottom-up influences of voice continuity in focusing selective auditory attention.

    Science.gov (United States)

    Bressler, Scott; Masud, Salwa; Bharadwaj, Hari; Shinn-Cunningham, Barbara

    2014-01-01

    Selective auditory attention causes a relative enhancement of the neural representation of important information and suppression of the neural representation of distracting sound, which enables a listener to analyze and interpret information of interest. Some studies suggest that in both vision and in audition, the "unit" on which attention operates is an object: an estimate of the information coming from a particular external source out in the world. In this view, which object ends up in the attentional foreground depends on the interplay of top-down, volitional attention and stimulus-driven, involuntary attention. Here, we test the idea that auditory attention is object based by exploring whether continuity of a non-spatial feature (talker identity, a feature that helps acoustic elements bind into one perceptual object) also influences selective attention performance. In Experiment 1, we show that perceptual continuity of target talker voice helps listeners report a sequence of spoken target digits embedded in competing reversed digits spoken by different talkers. In Experiment 2, we provide evidence that this benefit of voice continuity is obligatory and automatic, as if voice continuity biases listeners by making it easier to focus on a subsequent target digit when it is perceptually linked to what was already in the attentional foreground. Our results support the idea that feature continuity enhances streaming automatically, thereby influencing the dynamic processes that allow listeners to successfully attend to objects through time in the cacophony that assails our ears in many everyday settings.

  16. Silent reading of direct versus indirect speech activates voice-selective areas in the auditory cortex.

    Science.gov (United States)

    Yao, Bo; Belin, Pascal; Scheepers, Christoph

    2011-10-01

    In human communication, direct speech (e.g., Mary said: "I'm hungry") is perceived to be more vivid than indirect speech (e.g., Mary said [that] she was hungry). However, for silent reading, the representational consequences of this distinction are still unclear. Although many of us share the intuition of an "inner voice," particularly during silent reading of direct speech statements in text, there has been little direct empirical confirmation of this experience so far. Combining fMRI with eye tracking in human volunteers, we show that silent reading of direct versus indirect speech engenders differential brain activation in voice-selective areas of the auditory cortex. This suggests that readers are indeed more likely to engage in perceptual simulations (or spontaneous imagery) of the reported speaker's voice when reading direct speech as opposed to meaning-equivalent indirect speech statements as part of a more vivid representation of the former. Our results may be interpreted in line with embodied cognition and form a starting point for more sophisticated interdisciplinary research on the nature of auditory mental simulation during reading.

  17. Ring a bell? Adaptive Auditory Game Feedback to Sustain Performance in Stroke Rehabilitation

    DEFF Research Database (Denmark)

    Hald, Kasper; Knoche, Hendrik

    2016-01-01

    This paper investigates the effect of adaptive auditory feedback on continued player performance for stroke patients in a Whack-a-Mole style tablet game. The feedback consisted of accumulatively increasing the pitch of positive feedback sounds on tasks with fast reaction times and resetting...... it after slow reaction times. The analysis was based on data obtained in a field trial with lesion patients during their regular rehabilitation. The auditory feedback events were categorized by feedback type (positive/negative) and the associated pitch change of either high or low magnitude. Both...... feedback type and magnitude had a significant effect on players' performance. Negative feedback improved reaction time on the subsequent hit by 0.42 seconds and positive feedback impaired performance by 0.15 seconds....
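The accumulative pitch scheme described in this record can be sketched as a small state update per hit. The reaction-time threshold, 440 Hz base frequency, one-semitone step, and 12-step cap below are illustrative assumptions, not the game's actual parameters.

```python
def update_feedback_pitch(pitch_step, reaction_time, threshold=1.0,
                          base=440.0, semitones_per_hit=1, max_steps=12):
    """Update the positive-feedback pitch after each hit.

    A fast hit raises the pitch one step; a slow hit resets it to the
    base. Returns the new step count and the feedback frequency in Hz.
    All numeric parameters are assumptions for illustration.
    """
    if reaction_time <= threshold:
        pitch_step = min(pitch_step + semitones_per_hit, max_steps)
    else:
        pitch_step = 0
    freq = base * 2 ** (pitch_step / 12)  # equal-tempered semitones
    return pitch_step, freq

step = 0
step, f = update_feedback_pitch(step, 0.6)  # fast hit: one semitone up
step, f = update_feedback_pitch(step, 0.7)  # fast hit: two semitones up
step, f = update_feedback_pitch(step, 1.8)  # slow hit: reset to base
```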

  18. Hearing the unheard: An interdisciplinary, mixed methodology study of women’s experiences of hearing voices (auditory verbal hallucinations)

    Directory of Open Access Journals (Sweden)

    Simon McCarthy-Jones

    2015-12-01

    Full Text Available This paper explores the experiences of women who ‘hear voices’ (auditory verbal hallucinations). We begin by examining historical understandings of women hearing voices, showing these have been driven by androcentric theories of how women’s bodies functioned, leading to women being viewed as requiring their voices be interpreted by men. We show the twentieth century was associated with recognition that the mental violation of women’s minds (represented by some voice-hearing) was often a consequence of the physical violation of women’s bodies. We next report the results of a qualitative study into voice-hearing women’s experiences (N=8). This found similarities between women’s relationships with their voices and their relationships with others and the wider social context. Finally, we present results from a quantitative study comparing voice-hearing in women (n=65) and men (n=132) in a psychiatric setting. Women were more likely than men to have certain forms of voice-hearing (voices conversing) and to have antecedent events of trauma, physical illness, and relationship problems. Voices identified as female may have more positive affect than male voices. We conclude that women voice-hearers have faced and continue to face specific challenges necessitating research and activism, and hope this paper will act as a stimulus to such work.

  19. Effect of auditory feedback differs according to side of hemiparesis: a comparative pilot study

    Directory of Open Access Journals (Sweden)

    Bensmail Djamel

    2009-12-01

    Full Text Available Abstract Background Following stroke, patients frequently demonstrate loss of motor control and function and altered kinematic parameters of reaching movements. Feedback is an essential component of rehabilitation and auditory feedback of kinematic parameters may be a useful tool for rehabilitation of reaching movements at the impairment level. The aim of this study was to investigate the effect of 2 types of auditory feedback on the kinematics of reaching movements in hemiparetic stroke patients and to compare differences between patients with right (RHD) and left hemisphere damage (LHD). Methods 10 healthy controls, 8 stroke patients with LHD and 8 with RHD were included. Patient groups had similar levels of upper limb function. Two types of auditory feedback (spatial and simple) were developed and provided online during reaching movements to 9 targets in the workspace. Kinematics of the upper limb were recorded with an electromagnetic system. Kinematics were compared between groups (Mann-Whitney test) and the effect of auditory feedback on kinematics was tested within each patient group (Friedman test). Results In the patient groups, peak hand velocity was lower, the number of velocity peaks was higher and movements were more curved than in the healthy group. Despite having a similar clinical level, kinematics differed between the LHD and RHD groups. Peak velocity was similar but LHD patients had fewer velocity peaks and less curved movements than RHD patients. The addition of auditory feedback improved the curvature index in patients with RHD and deteriorated peak velocity, the number of velocity peaks and the curvature index in LHD patients. No difference between types of feedback was found in either patient group. Conclusion In stroke patients, side of lesion should be considered when examining arm reaching kinematics. Further studies are necessary to evaluate differences in responses to auditory feedback between patients with lesions in opposite

  20. Exploring the use of tactile feedback in an ERP-based auditory BCI.

    Science.gov (United States)

    Schreuder, Martijn; Thurlings, Marieke E; Brouwer, Anne-Marie; Van Erp, Jan B F; Tangermann, Michael

    2012-01-01

    Giving direct, continuous feedback on a brain state is common practice in motor imagery based brain-computer interfaces (BCI), but has not been reported for BCIs based on event-related potentials (ERP), where feedback is only given once after a sequence of stimuli. Potentially, direct feedback could allow the user to adjust his strategy during a running trial to obtain the required response. In order to test the usefulness of such feedback, directionally congruent vibrotactile feedback was given during an online auditory BCI experiment. Users received either no feedback, short feedback pulses or continuous feedback. The feedback conditions showed reduced performance both on a behavioral task and in terms of classification accuracy. Several explanations are discussed that give interesting starting points for further research on this topic.

  1. Comparisons of Stuttering Frequency during and after Speech Initiation in Unaltered Feedback, Altered Auditory Feedback and Choral Speech Conditions

    Science.gov (United States)

    Saltuklaroglu, Tim; Kalinowski, Joseph; Robbins, Mary; Crawcour, Stephen; Bowers, Andrew

    2009-01-01

    Background: Stuttering is prone to strike during speech initiation more so than at any other point in an utterance. The use of altered auditory feedback (AAF) has been found to produce robust decreases in stuttering frequency by creating an electronic rendition of choral speech (i.e., speaking in unison). However, AAF requires users to self-initiate…

  2. Tap Arduino: An Arduino microcontroller for low-latency auditory feedback in sensorimotor synchronization experiments.

    Science.gov (United States)

    Schultz, Benjamin G; van Vugt, Floris T

    2016-12-01

    Timing abilities are often measured by having participants tap their finger along with a metronome and presenting tap-triggered auditory feedback. These experiments predominantly use electronic percussion pads combined with software (e.g., FTAP or Max/MSP) that records responses and delivers auditory feedback. However, these setups involve unknown latencies between tap onset and auditory feedback and can sometimes miss responses or record multiple, superfluous responses for a single tap. These issues may distort measurements of tapping performance or affect the performance of the individual. We present an alternative setup using an Arduino microcontroller that addresses these issues and delivers low-latency auditory feedback. We validated our setup by having participants (N = 6) tap on a force-sensitive resistor pad connected to the Arduino and on an electronic percussion pad with various levels of force and tempi. The Arduino delivered auditory feedback through a pulse-width modulation (PWM) pin connected to a headphone jack or a wave shield component. The Arduino's PWM (M = 0.6 ms, SD = 0.3) and wave shield (M = 2.6 ms, SD = 0.3) demonstrated significantly lower auditory feedback latencies than the percussion pad (M = 9.1 ms, SD = 2.0), FTAP (M = 14.6 ms, SD = 2.8), and Max/MSP (M = 15.8 ms, SD = 3.4). The PWM and wave shield latencies were also significantly less variable than those from FTAP and Max/MSP. The Arduino missed significantly fewer taps, and recorded fewer superfluous responses, than the percussion pad. The Arduino captured all responses, whereas at lower tapping forces, the percussion pad missed more taps. Regardless of tapping force, the Arduino outperformed the percussion pad. Overall, the Arduino is a high-precision, low-latency, portable, and affordable tool for auditory experiments.
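Latency comparisons of this kind come down to the mean and standard deviation of tap-to-feedback intervals per setup. A minimal sketch with invented sample values (not measurements from the study):

```python
import statistics

# Illustrative latency samples in ms; the values are made up for
# this sketch, not taken from the study.
latencies = {
    "arduino_pwm": [0.4, 0.6, 0.5, 0.9, 0.6],
    "percussion_pad": [7.2, 9.5, 8.8, 11.0, 9.0],
}

def summarize(samples):
    """Mean and sample standard deviation, the M and SD reported above."""
    return statistics.mean(samples), statistics.stdev(samples)

for name, xs in latencies.items():
    m, sd = summarize(xs)
    print(f"{name}: M = {m:.1f} ms, SD = {sd:.1f}")
```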

  3. Temporal control and compensation for perturbed voicing feedback

    DEFF Research Database (Denmark)

    Mitsuya, Takashi; MacDonald, Ewen; Munhall, Kevin G.

    2014-01-01

    Previous research employing a real-time auditory perturbation paradigm has shown that talkers monitor their own speech attributes such as fundamental frequency, vowel intensity, vowel formants, and fricative noise as part of speech motor control. In the case of vowel formants or fricative noise...

  4. Auditory vocal analysis and factors associated with voice disorders among teachers.

    Science.gov (United States)

    de Ceballos, Albanita Gomes da Costa; Carvalho, Fernando Martins; de Araújo, Tânia Maria; Dos Reis, Eduardo José Farias Borges

    2011-06-01

    Teachers are professionals who demand much of their voices and, consequently, present a high risk of developing vocal disorders during the course of employment. The aim was to identify factors associated with vocal disorders among teachers. An exploratory cross-sectional study investigated 476 teachers in primary and secondary schools in the city of Salvador, Bahia. Teachers answered a questionnaire and underwent auditory vocal analysis; the GRBAS scale was used for the diagnosis of vocal disorders. The study population comprised 82.8% women, with an average age of 40.7 years, higher education (88.4%), an average workload of 38 hours per week, an average of 11.5 years of professional practice and an average monthly income of R$1,817.18. The prevalence of voice disorders was 53.6% (255 teachers). The bivariate analysis showed statistically significant associations between vocal disorders and age above 40 years (PR = 1.83; 95% CI: 1.27-2.64), family history of dysphonia (PR = 1.72; 95% CI: 1.06-2.80), over 20 hours of weekly working time (PR = 1.66; 95% CI: 1.09-2.52) and presence of chalk dust in the classroom (PR = 1.70; 95% CI: 1.14-2.53). The study concluded that teachers aged 40 years and over, with a family history of dysphonia, working over 20 hours weekly, and teaching in classrooms with chalk dust are more likely to develop voice disorders than others.
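The prevalence ratios (PR) with 95% confidence intervals reported in cross-sectional studies like this one are computed from 2×2 counts; a sketch using a Wald-type CI on the log scale (the counts below are invented for illustration, not the study's data):

```python
import math

def prevalence_ratio(a, n1, b, n0, z=1.96):
    """Prevalence ratio between exposed and unexposed groups.

    a/n1 = cases among exposed, b/n0 = cases among unexposed.
    Returns (PR, lower, upper) using a Wald 95% CI on log(PR).
    """
    pr = (a / n1) / (b / n0)
    # SE of log(PR) for binomial proportions
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
    lo = math.exp(math.log(pr) - z * se)
    hi = math.exp(math.log(pr) + z * se)
    return pr, lo, hi

# Hypothetical counts: 120/180 exposed cases vs 135/296 unexposed cases
pr, lo, hi = prevalence_ratio(120, 180, 135, 296)
```

A PR whose interval excludes 1.0, as for the age and chalk-dust associations above, indicates a statistically significant association at the 5% level.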

  5. Correlation of the Dysphonia Severity Index (DSI), Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V), and Gender in Brazilians With and Without Voice Disorders.

    Science.gov (United States)

    Nemr, Katia; Simões-Zenari, Marcia; de Souza, Glaucia S; Hachiya, Adriana; Tsuji, Domingos H

    2016-11-01

    This study aims to analyze the Dysphonia Severity Index (DSI) in Brazilians with or without voice disorders and investigate DSI's correlation with gender and auditory-perceptual evaluation data obtained via the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) protocol. A total of 66 Brazilian adults from both genders participated in the study, including 24 patients with dysphonia confirmed on laryngeal examination (dysphonic group [DG]) and 42 volunteers without voice or hearing complaints and without auditory-perceptual voice disorders (nondysphonic group [NDG]). The vocal tasks included in CAPE-V and DSI were performed and recorded. Data were analyzed by means of the independent t test, the Mann-Whitney U test, and Pearson correlation at the 5% significance level. Differences were found in the mean DSI values between the DG and the NDG. Differences were also found in all DSI items between the groups, except for the highest frequency parameter. In the DG, a moderate negative correlation was detected between overall dysphonia severity (CAPE-V) and DSI value, and between breathiness and DSI value, and a weak negative correlation was detected between DSI value and roughness. In the NDG, the maximum phonation time was higher among males. In both groups, the highest frequency parameter was higher among females. The DSI discriminated among Brazilians with or without voice disorders. A correlation was found between some aspects of the DSI and the CAPE-V but not between DSI and gender. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
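For reference, the DSI is commonly given (following Wuyts et al., 2000) as a weighted combination of four measures; the sketch below assumes that standard formulation, which this record does not itself spell out:

```python
def dsi(mpt_s, f0_high_hz, i_low_db, jitter_pct):
    """Dysphonia Severity Index in its commonly cited form:

        DSI = 0.13*MPT + 0.0053*F0-High - 0.26*I-Low - 1.18*Jitter(%) + 12.4

    with maximum phonation time in s, highest frequency in Hz, lowest
    intensity in dB, and jitter in %. More negative values indicate
    more severe dysphonia; healthy voices score around +5.
    """
    return (0.13 * mpt_s + 0.0053 * f0_high_hz
            - 0.26 * i_low_db - 1.18 * jitter_pct + 12.4)

dsi(25, 600, 50, 0.2)  # a healthy-sounding profile, ~5.6
```

The "highest frequency parameter" discussed in the abstract is the F0-High term of this combination.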

  6. Selective and divided attention modulates auditory-vocal integration in the processing of pitch feedback errors.

    Science.gov (United States)

    Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun

    2015-08-01

    Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
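The "two semitones" perturbation used in this paradigm corresponds to a fixed frequency ratio; the cents-to-ratio conversion is a one-liner:

```python
def cents_to_ratio(cents):
    """Frequency ratio for a pitch shift given in cents
    (100 cents = 1 semitone, 1200 cents = 1 octave)."""
    return 2 ** (cents / 1200)

cents_to_ratio(200)   # two semitones up, ratio ~1.122
cents_to_ratio(1200)  # one octave, ratio 2.0
```

Multiplying every frequency component of the feedback signal by this ratio produces the upward shift the participants heard.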

  7. Combined mirror visual and auditory feedback therapy for upper limb phantom pain: a case report

    Directory of Open Access Journals (Sweden)

    Yan Kun

    2011-01-01

Full Text Available Abstract Introduction Phantom limb sensation and phantom limb pain are very common issues after amputation. In recent years, accumulating data have implicated 'mirror visual feedback' or 'mirror therapy' as helpful in the treatment of phantom limb sensation and phantom limb pain. Case presentation We present the case of a 24-year-old Caucasian man, a left upper limb amputee, who was treated with mirror visual feedback combined with auditory feedback and experienced improved pain relief. Conclusion This case suggests that auditory feedback might enhance the effectiveness of mirror visual feedback and serve as a valuable addition to the complex multi-sensory processing of body perception in patients who are amputees.

  8. Speakers' acceptance of real-time speech exchange indicates that we use auditory feedback to specify the meaning of what we say.

    Science.gov (United States)

    Lind, Andreas; Hall, Lars; Breidegard, Björn; Balkenius, Christian; Johansson, Petter

    2014-06-01

    Speech is usually assumed to start with a clearly defined preverbal message, which provides a benchmark for self-monitoring and a robust sense of agency for one's utterances. However, an alternative hypothesis states that speakers often have no detailed preview of what they are about to say, and that they instead use auditory feedback to infer the meaning of their words. In the experiment reported here, participants performed a Stroop color-naming task while we covertly manipulated their auditory feedback in real time so that they said one thing but heard themselves saying something else. Under ideal timing conditions, two thirds of these semantic exchanges went undetected by the participants, and in 85% of all nondetected exchanges, the inserted words were experienced as self-produced. These findings indicate that the sense of agency for speech has a strong inferential component, and that auditory feedback of one's own voice acts as a pathway for semantic monitoring, potentially overriding other feedback loops. © The Author(s) 2014.

  9. Logarithmic temporal axis manipulation and its application for measuring auditory contributions in F0 control using a transformed auditory feedback procedure

    Science.gov (United States)

    Yanaga, Ryuichiro; Kawahara, Hideki

    2003-10-01

A new parameter extraction procedure based on logarithmic transformation of the temporal axis was applied to investigate auditory effects on voice F0 control, in order to overcome artifacts due to natural fluctuations and nonlinearities in speech production mechanisms. The proposed method may add complementary information, in terms of the dynamic aspects of F0 control, to recent findings obtained with the frequency shift feedback method [Burnett and Larson, J. Acoust. Soc. Am. 112 (2002)]. In a series of experiments, dependencies of F0-control system parameters on subject, F0, and style (musical expression versus speaking) were tested using six participants: three male and three female students specializing in music education. They were asked to sustain the Japanese vowel /a/ for about 10 s repeatedly, up to 2 min in total, while hearing F0-modulated feedback speech, with the modulation driven by an M-sequence. The results qualitatively replicated a previous finding [Kawahara and Williams, Vocal Fold Physiology (1995)] and provided more accurate estimates. Relations to the design of an artificial singer will also be discussed. [Work partly supported by Grant-in-Aid for Scientific Research (B) 14380165 and Wakayama University.]

  10. Investigating the Role of Auditory Feedback in a Multimodal Biking Experience

    DEFF Research Database (Denmark)

    Bruun-Pedersen, Jon Ram; Grani, Francesco; Serafin, Stefania

    2017-01-01

    In this paper, we investigate the role of auditory feedback in affecting perception of effort while biking in a virtual environment. Subjects were biking on a stationary chair bike, while exposed to 3D renditions of a recumbent bike inside a virtual environment (VE). The VE simulated a park...... and was created in the Unity5 engine. While biking, subjects were exposed to 9 kinds of auditory feedback (3 amplitude levels with three different filters) which were continuously triggered corresponding to pedal speed, representing the sound of the wheels and bike/chain mechanics. Subjects were asked to rate...... the perception of exertion using the Borg RPE scale. Results of the experiment showed that most subjects perceived a difference in mechanical resistance from the bike between conditions, but did not consciously notice the variations of the auditory feedback, although these were significantly varied. This points...

  11. Auditory feedback affects perception of effort when exercising with a Pulley machine

    DEFF Research Database (Denmark)

    Bordegoni, Monica; Ferrise, Francesco; Grani, Francesco

    2013-01-01

    In this paper we describe an experiment that investigates the role of auditory feedback in affecting the perception of effort when using a physical pulley machine. Specifically, we investigated whether variations in the amplitude and frequency content of the pulley sound affect perception of effo...

  12. Shop 'til you hear it drop - Influence of Interactive Auditory Feedback in a Virtual Reality Supermarket

    DEFF Research Database (Denmark)

    Sikström, Erik; Høeg, Emil Rosenlund; Mangano, Luca

    2016-01-01

    In this paper we describe an experiment aiming to investigate the impact of auditory feedback in a virtual reality supermarket scenario. The participants were asked to read a shopping list and collect items one by one and place them into a shopping cart. Three conditions were presented randomly...

  13. Continuous Auditory Feedback of Eye Movements: An Exploratory Study toward Improving Oculomotor Control

    Directory of Open Access Journals (Sweden)

    Eric O. Boyer

    2017-04-01

Full Text Available As eye movements are mostly automatic and overtly generated to attain visual goals, individuals have poor metacognitive knowledge of their own eye movements. We present an exploratory study on the effects of real-time continuous auditory feedback generated by eye movements. We considered both a tracking task and a production task in which smooth pursuit eye movements (SPEM) can be endogenously generated. In particular, we used a visual paradigm which makes it possible to generate and control SPEM in the absence of a moving visual target. We investigated whether real-time auditory feedback of eye movement dynamics might improve learning in both tasks, through a training protocol over 8 days. The results indicate that real-time sonification of eye movements can actually modify oculomotor behavior and reinforce intrinsic oculomotor perception. Nevertheless, large inter-individual differences were observed, preventing us from reaching a strong conclusion on sensorimotor learning improvements.

  14. Object discrimination using optimized multi-frequency auditory cross-modal haptic feedback.

    Science.gov (United States)

    Gibson, Alison; Artemiadis, Panagiotis

    2014-01-01

As the field of brain-machine interfaces and neuro-prosthetics continues to grow, there is a high need for sensor and actuation mechanisms that can provide haptic feedback to the user. Current technologies employ expensive, invasive, and often inefficient force feedback methods, resulting in an unrealistic solution for individuals who rely on these devices. This paper responds through the development, integration, and analysis of a novel feedback architecture in which haptic information during the neural control of a prosthetic hand is perceived through multi-frequency auditory signals. By representing force magnitude with volume and force location with frequency, the feedback architecture can translate the haptic experiences of a robotic end effector into the alternative sensory modality of sound. Previous research with the proposed cross-modal feedback method confirmed its learnability, so the current work aimed to investigate which frequency map (i.e., assignment of specific frequencies to locations on the hand) is optimal in helping users distinguish between hand-held objects and tasks associated with them. After short use of the cross-modal feedback during the electromyographic (EMG) control of a prosthetic hand, testing results show that users are able to use auditory feedback alone to discriminate between everyday objects. While users showed adaptation to three different frequency maps, the simplest map, containing only two frequencies, was found to be the most useful in discriminating between objects. This outcome provides support for the feasibility and practicality of the cross-modal feedback method during the neural control of prosthetics.
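The volume-for-magnitude, frequency-for-location encoding described in this abstract can be sketched as a small mapping function. This is an illustrative reconstruction only; the function name, sensor locations, and scaling constants are assumptions, not details from the study.

```python
def encode_force_to_audio(forces, freq_map, max_force=10.0):
    """Map per-location contact forces to (frequency, volume) tone pairs.

    forces: dict of location -> force reading (arbitrary units, assumed).
    freq_map: dict of location -> tone frequency in Hz; this plays the
        role of the study's "frequency map" (the simplest map, with only
        two frequencies, was found most useful).
    Returns a list of (frequency_hz, volume_0_to_1) pairs to synthesize.
    """
    tones = []
    for location, force in forces.items():
        if force <= 0:
            continue  # no contact -> no tone for this location
        volume = min(force / max_force, 1.0)        # volume encodes magnitude
        tones.append((freq_map[location], volume))  # frequency encodes location
    return tones

# Hypothetical two-frequency map: one tone for the thumb, one for the fingers.
two_tone_map = {"thumb": 440.0, "fingers": 880.0}
print(encode_force_to_audio({"thumb": 5.0, "fingers": 0.0}, two_tone_map))  # → [(440.0, 0.5)]
```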

  15. Auditory feedback improves heart rate moderation during moderate-intensity exercise.

    Science.gov (United States)

    Shaykevich, Alex; Grove, J Robert; Jackson, Ben; Landers, Grant J; Dimmock, James

    2015-05-01

The objective of this study was to determine whether exposure to automated HR feedback can improve the ability to regulate HR during moderate-intensity exercise, and to evaluate the persistence of these improvements after feedback is removed. Twenty healthy adults performed 10 indoor exercise sessions on cycle ergometers over 5 wk on a twice-weekly schedule. During these sessions (FB), participants received auditory feedback designed to maintain HR within a personalized, moderate-intensity training zone between 70% and 80% of estimated maximum HR. All feedback was delivered via a custom mobile software application. Participants underwent an initial assessment (PREFB) to measure their ability to maintain exercise intensity within the training zone without the use of feedback. After completing the feedback training, participants performed three additional assessments identical to PREFB at 1 wk (POST1), 2 wk (POST2), and 4 wk (POST3) after their last feedback session. Time in zone (TIZ), defined as the time spent within the training zone divided by the overall exercise time, rating of perceived exertion, instrumental attitudes, and affective attitudes were then evaluated using two-way, mixed-model ANOVA with session and gender as factors. Training with feedback significantly improved TIZ (P moderate-intensity exercise in healthy adults.
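The time-in-zone (TIZ) outcome measure defined above reduces to a simple ratio. A minimal sketch, assuming uniformly sampled HR readings (the function name and sampling scheme are ours, not the study's software):

```python
def time_in_zone(hr_samples, hr_max, lo=0.70, hi=0.80):
    """Fraction of exercise time spent inside the moderate-intensity zone.

    hr_samples: list of heart-rate readings in bpm, assumed to be taken
        at a uniform sampling interval, so the count ratio equals the
        time ratio.
    hr_max: estimated maximum heart rate in bpm.
    The zone is 70-80% of hr_max, as in the study.
    """
    if not hr_samples:
        return 0.0
    in_zone = sum(1 for hr in hr_samples if lo * hr_max <= hr <= hi * hr_max)
    return in_zone / len(hr_samples)

# Example: hr_max = 190 bpm gives a zone of 133-152 bpm.
print(time_in_zone([120, 135, 140, 150, 160], hr_max=190))  # → 0.6
```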

  16. Gender by assertiveness interaction in delayed auditory feedback.

    Science.gov (United States)

    Elias, J W; Rosenzweig, C M; Dippel, R L

    1981-04-01

The College Self-Expression and Marlowe-Crowne Social Desirability Scales were given to 144 undergraduates. High- (10 M, 10 F) and Low- (10 M, 10 F) Assertiveness Ss were given a DAF test with a 'Phonic Mirror' and the Stroop test (naming the color of a word printed in a different color). DAF performance did not differ among the four subgroups (M and F, High and Low Assertiveness), except that Low-Assertiveness women showed significantly greater DAF interference than the other subgroups. There was no significant correlation between the continuous interference of the DAF test and the discontinuous interference of the Stroop test. The difference may reside in the time available before the next stimulus in the Stroop test, and the consequent reduction in anxiety. These data show that, under certain circumstances, personality factors such as assertiveness can interact with gender to affect speech fluency and production. The ability to overcome feedback-related disfluencies in speech may be partially aided by improvement in self-concept or by specific training in behaviors such as assertiveness, and this may be more important for females than for males.

  17. Using voice input and audio feedback to enhance the reality of a virtual experience

    Energy Technology Data Exchange (ETDEWEB)

    Miner, N.E.

    1994-04-01

Virtual Reality (VR) is a rapidly emerging technology which allows participants to experience a virtual environment through stimulation of the participant's senses. Intuitive and natural interactions with the virtual world help to create a realistic experience. Typically, a participant is immersed in a virtual environment through the use of a 3-D viewer. Realistic, computer-generated environment models and accurate tracking of a participant's view are important factors for adding realism to a virtual experience. Stimulating a participant's sense of sound and providing a natural form of communication for interacting with the virtual world are equally important. This paper discusses the advantages and importance of incorporating voice recognition and audio feedback capabilities into a virtual world experience. Various approaches and levels of complexity are discussed. Examples of the use of voice and sound are presented through the description of a research application developed in the VR laboratory at Sandia National Laboratories.

  18. Effects of Consensus Training on the Reliability of Auditory Perceptual Ratings of Voice Quality

    DEFF Research Database (Denmark)

    Iwarsson, Jenny; Petersen, Niels Reinholt

    2012-01-01

    Objectives/Hypothesis: This study investigates the effect of consensus training of listeners on intrarater and interrater reliability and agreement of perceptual voice analysis. The use of such training, including a reference voice sample, could be assumed to make the internal standards held in m...

  19. Behavioural evidence of a dissociation between voice gender categorization and phoneme categorization using auditory morphed stimuli

    Directory of Open Access Journals (Sweden)

    Cyril R Pernet

    2014-01-01

Full Text Available Both voice gender and speech perception rely on neuronal populations located in the peri-sylvian areas. However, whilst functional imaging studies suggest a left versus right hemisphere and anterior versus posterior dissociation between voice and speech categorization, psycholinguistic studies on talker variability suggest that these two processes (voice and speech categorization) share common mechanisms. In this study, we investigated the categorical perception of voice gender (male vs. female) and phonemes (/pa/ vs. /ta/) using the same stimulus continua generated by morphing. This allowed the investigation of behavioural differences while controlling acoustic characteristics, since the same stimuli were used in both tasks. Despite a higher acoustic dissimilarity between items during the phoneme categorization task (a male and a female voice producing the same phonemes) than during the gender task (the same person producing two phonemes), the results showed that speech information is processed much faster than voice information. In addition, f0 or timbre equalization did not affect RT, which disagrees with classical psycholinguistic models in which voice information is stripped away or normalized to access phonetic content. Also, despite similar response (percentage) and perceptual (d') curves, a reverse correlation analysis on acoustic features revealed, as expected, that the formant frequencies of the consonant distinguished stimuli in the phoneme task, whereas only the vowel formant frequencies distinguished stimuli in the gender task. This second set of results thus also disagrees with models postulating that the same acoustic information is used for voice and speech. Altogether, these results suggest that voice gender categorization and phoneme categorization are dissociated at an early stage on the basis of different enhanced acoustic features that are diagnostic to the task at hand.

  20. The Effect of Delayed Auditory Feedback on Activity in the Temporal Lobe while Speaking: A Positron Emission Tomography Study

    Science.gov (United States)

    Takaso, Hideki; Eisner, Frank; Wise, Richard J. S.; Scott, Sophie K.

    2010-01-01

    Purpose: Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many nonstuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Method: Positron emission…

  1. The Effects of Computerized Auditory Feedback on Electronic Article Surveillance Tag Placement in an Auto-Parts Distribution Center

    Science.gov (United States)

    Goomas, David T.

    2008-01-01

    In this report from the field, computerized auditory feedback was used to inform order selectors and order selector auditors in a distribution center to add an electronic article surveillance (EAS) adhesive tag. This was done by programming handheld computers to emit a loud beep for high-priced items upon scanning the item's bar-coded Universal…

  2. The role of auditory feedback in music-supported stroke rehabilitation: A single-blinded randomised controlled intervention.

    Science.gov (United States)

    van Vugt, F T; Kafczyk, T; Kuhn, W; Rollnik, J D; Tillmann, B; Altenmüller, E

    2016-01-01

Learning to play a musical instrument such as the piano was previously shown to benefit post-stroke motor rehabilitation. Previous work hypothesised that the mechanism of this rehabilitation is that patients use auditory feedback to correct their movements and therefore show motor learning. We tested this hypothesis by manipulating the auditory feedback timing in a way that should disrupt such error-based learning. We contrasted a patient group undergoing music-supported therapy on a piano that emits sounds immediately (as in previous studies) with a group whose sounds were presented after a jittered delay. The delay was not noticeable to patients. Thirty-four patients in early stroke rehabilitation with moderate motor impairment and no previous musical background learned to play the piano using simple finger exercises and familiar children's songs. Rehabilitation outcome was not impaired in the jitter group relative to the normal group. On the contrary, some clinical tests suggest that the jitter group outperformed the normal group. Auditory feedback-based motor learning is therefore not the beneficial mechanism of music-supported therapy, and immediate auditory feedback therapy may be suboptimal. A jittered delay may increase the efficacy of the proposed therapy and allow patients to fully benefit from the motivational factors of music training. Our study shows a novel way to test hypotheses concerning music training in a single-blinded way, which is an important improvement over existing unblinded tests of music interventions.

  3. Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback.

    Science.gov (United States)

    Cler, Gabriel J; Lee, Jackson C; Mittelman, Talia; Stepp, Cara E; Bohland, Jason W

    2017-06-22

Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Eight typical speakers produced nonsense syllable sequences under normal auditory feedback and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. https://doi.org/10.23641/asha.5103067.

  4. Psycho-physiological assessment of a prosthetic hand sensory feedback system based on an auditory display: a preliminary study.

    Science.gov (United States)

    Gonzalez, Jose; Soma, Hirokazu; Sekine, Masashi; Yu, Wenwei

    2012-06-09

Prosthetic hand users have to rely extensively on visual feedback in order to manipulate their prosthetic devices, which seems to impose a high conscious burden on the users. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods were explored mainly by looking at performance results, without taking into account measurements of the user's mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without a sensory substitution system based on auditory feedback, and how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. Ten male subjects (26+/- years old) participated in this study and were asked to come for 2 consecutive days. On the first day, the experiment objective, tasks, and experimental setting were explained, and the subjects completed a 30-minute guided training session. On the second day, each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject had to answer the NASA TLX questionnaire. Also, during the test the subject's EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal performance and better grasping performance were obtained in the audiovisual modality. The performance improvements when using auditory cues, along with vision

  5. Psycho-physiological assessment of a prosthetic hand sensory feedback system based on an auditory display: a preliminary study

    Directory of Open Access Journals (Sweden)

    Gonzalez Jose

    2012-06-01

Full Text Available Abstract Background Prosthetic hand users have to rely extensively on visual feedback in order to manipulate their prosthetic devices, which seems to impose a high conscious burden on the users. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods were explored mainly by looking at performance results, without taking into account measurements of the user's mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without a sensory substitution system based on auditory feedback, and how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. Methods Ten male subjects (26+/- years old) participated in this study and were asked to come for 2 consecutive days. On the first day, the experiment objective, tasks, and experimental setting were explained, and the subjects completed a 30-minute guided training session. On the second day, each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject had to answer the NASA TLX questionnaire. Also, during the test the subject's EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. Results The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal performance and better grasping performance were obtained in the audiovisual modality. Conclusions The performance

  6. A software module for implementing auditory and visual feedback on a video-based eye tracking system

    Science.gov (United States)

    Rosanlall, Bharat; Gertner, Izidor; Geri, George A.; Arrington, Karl F.

    2016-05-01

    We describe here the design and implementation of a software module that provides both auditory and visual feedback of the eye position measured by a commercially available eye tracking system. The present audio-visual feedback module (AVFM) serves as an extension to the Arrington Research ViewPoint EyeTracker, but it can be easily modified for use with other similar systems. Two modes of audio feedback and one mode of visual feedback are provided in reference to a circular area-of-interest (AOI). Auditory feedback can be either a click tone emitted when the user's gaze point enters or leaves the AOI, or a sinusoidal waveform with frequency inversely proportional to the distance from the gaze point to the center of the AOI. Visual feedback is in the form of a small circular light patch that is presented whenever the gaze-point is within the AOI. The AVFM processes data that are sent to a dynamic-link library by the EyeTracker. The AVFM's multithreaded implementation also allows real-time data collection (1 kHz sampling rate) and graphics processing that allow display of the current/past gaze-points as well as the AOI. The feedback provided by the AVFM described here has applications in military target acquisition and personnel training, as well as in visual experimentation, clinical research, marketing research, and sports training.
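The AVFM's sinusoidal mode, with frequency inversely proportional to the gaze point's distance from the AOI center, could be sketched as follows. The frequency bounds, the clamping behavior, and the scaling (AOI boundary maps to the minimum frequency) are illustrative assumptions, since the abstract does not specify them.

```python
import math

def gaze_tone_frequency(gaze, aoi_center, aoi_radius,
                        f_max=1000.0, f_min=100.0):
    """Tone frequency inversely proportional to the distance between the
    gaze point and the AOI center, clamped to [f_min, f_max].

    gaze, aoi_center: (x, y) positions in screen coordinates (assumed).
    Scaled so that a gaze point on the AOI boundary maps to f_min.
    """
    d = math.hypot(gaze[0] - aoi_center[0], gaze[1] - aoi_center[1])
    if d == 0:
        return f_max  # at the exact center, cap at the maximum frequency
    f = f_min * aoi_radius / d       # inverse-proportional mapping
    return max(f_min, min(f, f_max)) # clamp to the audible working range

print(gaze_tone_frequency((30, 40), (0, 0), aoi_radius=50))  # → 100.0
```

Halving the distance doubles the frequency, so the pitch rises smoothly as the gaze approaches the AOI center; the click-tone mode described in the abstract would instead fire a single event when `d` crosses `aoi_radius`.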

  7. Perceiving a stranger's voice as being one's own: a 'rubber voice' illusion?

    Directory of Open Access Journals (Sweden)

    Zane Z Zheng

    2011-04-01

Full Text Available We describe an illusion in which a stranger's voice, when presented as the auditory concomitant of a participant's own speech, is perceived as a modified version of their own voice. When the congruence between utterance and feedback breaks down, the illusion is also broken. Compared to a baseline condition in which participants heard their own voice as feedback, hearing a stranger's voice induced robust changes in the fundamental frequency (F0) of their production. Moreover, the shift in F0 appears to be feedback dependent, since shift patterns depended reliably on the relationship between the participant's own F0 and the stranger-voice F0. The shift in F0 was evident both when the illusion was present and after it was broken, suggesting that auditory feedback from production may be used separately for self-recognition and for vocal motor control. Our findings indicate that self-recognition of voices, like other body attributes, is malleable and context dependent.

  8. Bottom-up influences of voice continuity in focusing selective auditory attention

    OpenAIRE

    Bressler, Scott; Masud, Salwa; Bharadwaj, Hari; Shinn-Cunningham, Barbara

    2014-01-01

    Selective auditory attention causes a relative enhancement of the neural representation of important information and suppression of the neural representation of distracting sound, which enables a listener to analyze and interpret information of interest. Some studies suggest that in both vision and in audition, the “unit” on which attention operates is an object: an estimate of the information coming from a particular external source out in the world. In this view, which object ends up in the...

  9. Effects of first formant onset frequency on [-voice] judgments result from auditory processes not specific to humans.

    Science.gov (United States)

    Kluender, K R; Lotto, A J

    1994-02-01

When F1-onset frequency is lower, a longer F1 cut-back (VOT) is required for human listeners to perceive synthesized stop consonants as voiceless. K. R. Kluender [J. Acoust. Soc. Am. 90, 83-96 (1991)] found comparable effects of F1-onset frequency on the "labeling" of stop consonants by Japanese quail (Coturnix coturnix japonica) trained to distinguish stop consonants varying in F1 cut-back. In that study, CVs were synthesized with natural-like rising F1 transitions, and endpoint training stimuli differed in the onset frequency of F1 because a longer cut-back resulted in a higher F1 onset. In order to assess whether the earlier results were due to auditory predispositions or to the animals having learned the natural covariance between F1 cut-back and F1-onset frequency, the present experiment was conducted with synthetic continua having either a relatively low (375 Hz) or high (750 Hz) constant-frequency F1. Six birds were trained to respond differentially to endpoint stimuli from three series of synthesized /CV/s varying in duration of F1 cut-back. Second and third formant transitions were appropriate for labial, alveolar, or velar stops. Despite the fact that there was no opportunity for the animal subjects to use experienced covariation of F1-onset frequency and F1 cut-back, quail typically exhibited shorter labeling boundaries (more voiceless stops) for intermediate stimuli of the continua when F1 frequency was higher. Responses by human subjects listening to the same stimuli were also collected. The results lend support to the earlier conclusion that part or all of the effect of F1-onset frequency on the perception of voicing may be adequately explained by general auditory processes. (ABSTRACT TRUNCATED AT 250 WORDS)

  10. Fast negative feedback enables mammalian auditory nerve fibers to encode a wide dynamic range of sound intensities.

    Directory of Open Access Journals (Sweden)

    Mark Ospeck

Full Text Available Mammalian auditory nerve fibers (ANF) are remarkable for being able to encode a 40 dB, or hundred-fold, range of sound pressure levels into their firing rate. Most of the fibers are very sensitive and raise their quiescent spike rate by a small amount for a faint sound at auditory threshold. Then, as the sound intensity is increased, they slowly increase their spike rate, with some fibers going up as high as ∼300 Hz. In this way mammals are able to combine sensitivity with wide dynamic range. They are also able to discern sounds embedded within background noise. ANF receive efferent feedback, which suggests that the fibers are readjusted according to the background noise in order to maximize the information content of their auditory spike trains. Inner hair cells activate currents in the unmyelinated distal dendrites of ANF, where sound intensity is rate-coded into action potentials. We model this spike-generator compartment as an attenuator that employs fast negative feedback: input current induces rapid and proportional leak currents. In this way ANF are able to have a linear frequency-to-input-current (f-I) curve with a wide dynamic range. The ANF spike generator remains very sensitive to threshold currents, but efferent feedback is able to lower its gain in response to noise.
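The attenuator idea can be illustrated with a toy rate model: because the input current induces a proportional leak, the effective input is divided by (1 + gain), which compresses the f-I curve while keeping it linear. The gain, rate scaling, and saturation values below are placeholder assumptions, not the paper's fitted parameters.

```python
def anf_firing_rate(i_input, feedback_gain=9.0,
                    rate_per_unit=30.0, spontaneous_rate=5.0, rate_max=300.0):
    """Toy ANF spike-generator model with fast negative feedback.

    The proportional leak attenuates the input by 1/(1 + feedback_gain),
    so the rate grows linearly but ten times more slowly than without
    feedback, widening the usable dynamic range before saturation.
    """
    i_eff = i_input / (1.0 + feedback_gain)   # fast proportional attenuation
    rate = spontaneous_rate + rate_per_unit * i_eff
    return min(rate, rate_max)                # saturation near ~300 Hz

# Without feedback, an input of 10 units would drive the rate to 305 Hz
# (saturated); with it, the same input yields a modest 35 Hz.
print(anf_firing_rate(10.0))  # → 35.0
```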

  11. Audio Feedback to Physiotherapy Students for Viva Voce: How Effective Is "The Living Voice"?

    Science.gov (United States)

    Munro, Wendy; Hollingworth, Linda

    2014-01-01

    Assessment and feedback remain among the categories with which students are least satisfied in the United Kingdom National Student Survey. The Student Charter promotes the use of various formats of feedback to enhance student learning. This study evaluates the use of audio MP3 as an alternative feedback mechanism to written feedback for…

  12. Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke.

    Science.gov (United States)

    Secoli, Riccardo; Milot, Marie-Helene; Rosati, Giulio; Reinkensmeyer, David J

    2011-04-23

    Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis, and fourteen non-impaired healthy control participants tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Visual distraction decreased participants' effort during a standard robot-assisted movement training task.
This effect was greater for the hemiparetic arm, suggesting that the increased demands associated
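The sound feedback described in this record, a beep whose repetition rate grows with tracking error, is a simple error sonification. A hedged sketch of one way such a mapping could be implemented; the error bounds and interval values are illustrative assumptions, not the study's parameters:

```python
def beep_interval(tracking_error, e_min=0.5, e_max=5.0,
                  interval_slow=1.0, interval_fast=0.1):
    """Map a tracking error (arbitrary units) to the pause between beeps (s).
    Larger error -> shorter interval -> faster beeping; error is clamped."""
    e = min(max(tracking_error, e_min), e_max)
    frac = (e - e_min) / (e_max - e_min)          # 0 at low error, 1 at high
    return interval_slow + frac * (interval_fast - interval_slow)

print(beep_interval(0.2))  # at/below e_min: slowest beeping (1.0 s pause)
print(beep_interval(5.0))  # at e_max: fastest beeping (0.1 s pause)
```

A real implementation would call this each control cycle and schedule the next beep accordingly; the linear mapping is one simple choice among many.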

  13. Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke

    Directory of Open Access Journals (Sweden)

    Reinkensmeyer David J

    2011-04-01

    Full Text Available Abstract Background Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Methods Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis, and fourteen non-impaired healthy control participants tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Results Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Conclusions Visual distraction decreased participants' effort during a standard robot-assisted movement training task.
This effect was greater for

  14. Utility estimation of the application of auditory-visual-tactile sense feedback in respiratory gated radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Jung Hun; Kim, Byeong Jin; Roh, Shi Won; Lee, Hyeon Chan; Jang, Hyeong Jun; Kim, Hoi Nam [Dept. of Radiation Oncology, Biomedical Engineering, Seoul St. Mary's Hospital, Seoul (Korea, Republic of); Song, Jae Hoon [Dept. of Biomedical Engineering, Seoul St. Mary's Hospital, Seoul (Korea, Republic of); Kim, Young Jae [Dept. of Radiological Technology, Gwang Yang Health College, Gwangyang (Korea, Republic of)

    2013-03-15

    The purpose of this study was to evaluate the possibility of optimizing gated treatment delivery time and of maintaining stable respiration by guiding breathing with auditory-visual-tactile feedback. The experimenters' respiration was measured with the ANZAI 4D system. We obtained a natural breathing signal, a monitor-guided breathing signal, a monitor- and ventilator-guided breathing signal, and a breath-hold signal, each recorded with a real-time monitor during 10 minutes of beam-on time. To check stability, the respiratory signals in each group were compared by mean, standard deviation, variation value, and beam time. The stability of each respiratory signal was assessed from the change in deviation over the course of each respiration period. Analysis of the respiratory signals showed that, for all experimenters, the breathing signal guided by both the real-time monitor and the ventilator was the most stable and gave the shortest time. In this study, respiratory gated radiation therapy was evaluated with and without auditory-visual-tactile feedback. The results showed that gated treatment delivery time can be significantly improved by video feedback combined with audio-tactile assistance. This delivery technique proved feasible for limiting tumor motion during treatment delivery to a defined value for all patients while maintaining accuracy, and demonstrated its applicability in a conventional clinical schedule.

  15. Utility estimation of the application of auditory-visual-tactile sense feedback in respiratory gated radiation therapy

    International Nuclear Information System (INIS)

    Jo, Jung Hun; Kim, Byeong Jin; Roh, Shi Won; Lee, Hyeon Chan; Jang, Hyeong Jun; Kim, Hoi Nam; Song, Jae Hoon; Kim, Young Jae

    2013-01-01

    The purpose of this study was to evaluate the possibility of optimizing gated treatment delivery time and of maintaining stable respiration by guiding breathing with auditory-visual-tactile feedback. The experimenters' respiration was measured with the ANZAI 4D system. We obtained a natural breathing signal, a monitor-guided breathing signal, a monitor- and ventilator-guided breathing signal, and a breath-hold signal, each recorded with a real-time monitor during 10 minutes of beam-on time. To check stability, the respiratory signals in each group were compared by mean, standard deviation, variation value, and beam time. The stability of each respiratory signal was assessed from the change in deviation over the course of each respiration period. Analysis of the respiratory signals showed that, for all experimenters, the breathing signal guided by both the real-time monitor and the ventilator was the most stable and gave the shortest time. In this study, respiratory gated radiation therapy was evaluated with and without auditory-visual-tactile feedback. The results showed that gated treatment delivery time can be significantly improved by video feedback combined with audio-tactile assistance. This delivery technique proved feasible for limiting tumor motion during treatment delivery to a defined value for all patients while maintaining accuracy, and demonstrated its applicability in a conventional clinical schedule.

  16. Feedforward and feedback projections of caudal belt and parabelt areas of auditory cortex: refining the hierarchical model

    Directory of Open Access Journals (Sweden)

    Troy A Hackett

    2014-04-01

    Full Text Available Our working model of the primate auditory cortex recognizes three major regions (core, belt, parabelt), subdivided into thirteen areas. The connections between areas are topographically ordered in a manner consistent with information flow along two major anatomical axes: core-belt-parabelt and caudal-rostral. Remarkably, most of the connections supporting this model were revealed using retrograde tracing techniques. Little is known about laminar circuitry, as anterograde tracing of axon terminations has rarely been used. The purpose of the present study was to examine the laminar projections of three areas of auditory cortex, pursuant to analysis of all areas. The selected areas were: middle lateral belt (ML); caudomedial belt (CM); and caudal parabelt (CPB). Injections of anterograde tracers yielded data consistent with major features of our model, as well as new findings that compel modifications. Results supporting the model were: (1) feedforward projections from ML and CM terminated in CPB; (2) feedforward projections from ML and CPB terminated in rostral areas of the belt and parabelt; and (3) feedback projections typified inputs to the core region from belt and parabelt. At odds with the model was the convergence of feedforward inputs into the rostral medial belt from ML and CPB. This was unexpected, since CPB is at a higher stage of the processing hierarchy, with mainly feedback projections to all other belt areas. Lastly, extending the model, feedforward projections from CM, ML, and CPB overlapped in the temporal parietal occipital area (TPO) in the superior temporal sulcus, indicating significant auditory influence on sensory processing in this region. The combined results refine our working model and highlight the need to complete studies of the laminar inputs to all areas of auditory cortex. Their documentation is essential for developing informed hypotheses about the neurophysiological influences of inputs to each layer and area.

  17. The Effect of Learning Modality and Auditory Feedback on Word Memory: Cochlear-Implanted versus Normal-Hearing Adults.

    Science.gov (United States)

    Taitelbaum-Swead, Riki; Icht, Michal; Mama, Yaniv

    2017-03-01

    In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers in cognitive tasks are modality specific, and occur only in auditory tasks. The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 yr, and in their NH peers. A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying aloud) or by no-production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice: once with the implant ON and once with it OFF. All conditions were followed by free recall tests. Twelve young adults, all long-term CI users implanted between the ages of 1.7 and 4.5 yr who scored ≥50% on a monosyllabic consonant-vowel-consonant open-set test with their implants, were enrolled. A group of 14 age-matched NH young adults served as the comparison group. For each condition, we calculated the proportion of study words recalled. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable and learning condition (aloud or silent reading) as a within-subject variable. Following this, paired-sample t tests were used to evaluate the PE size (differences between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each of the learning conditions. With visual word presentation, young adults with CIs (regardless of implant status, CI-ON or CI-OFF) showed comparable memory performance (and a similar PE) to NH peers. However, with auditory presentation, young adults with CIs showed poorer memory for nonproduced words (hence a larger PE) relative to their NH peers. The

  18. Effect of an auditory feedback substitution, tactilo-kinesthetic, or visual feedback on kinematics of pouring water from kettle into cup.

    Science.gov (United States)

    Portnoy, Sigal; Halaby, Orli; Dekel-Chen, Dotan; Dierick, Frédéric

    2015-11-01

    Pouring hot water from a kettle into a cup can be a hazardous task, especially for the elderly or the visually impaired. Individuals with deteriorating eyesight may endanger their hands by performing this task with both hands, relying on tactilo-kinesthetic feedback (TKF). Auditory feedback (AF) may allow them to perform the task single-handedly, thereby reducing the risk of injury. However, since relying on AF is not intuitive and requires practice, we aimed to determine whether AF supplied during the task of pouring water can, with practice, be used as naturally as visual feedback (VF). For this purpose, we quantified, in young healthy sighted subjects (n = 20), the performance and kinematics of pouring water in the presence of three isolated feedbacks: visual, tactilo-kinesthetic, or auditory. There were no significant differences between the weights of spilled water in the AF condition and the TKF condition in the first, fifth, or thirteenth trials. The subjectively reported difficulty of using the TKF and the AF decreased significantly between the first and thirteenth trials for both TKF (p = 0.01) and AF (p = 0.001). Trunk rotation during the first trial using the TKF was significantly lower than the trunk rotation while using VF. Also, shoulder adduction during the first trial using the TKF was significantly higher than the shoulder adduction while using the VF. During the AF trials, the median travel distance of the tip of the kettle was significantly reduced over the first trials, so that by the thirtieth trial it did not differ significantly from the median travel distance during the thirtieth trial using TKF or VF. The maximal velocity of the tip of the kettle was constant within each feedback condition, but was higher by 10 cm s(-1) using VF than TKF, which in turn was higher by 10 cm s(-1) than using AF. The smoothness of movement in the TKF and AF conditions, expressed by the normalized jerk score (NJSM), was one and two orders
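The normalized jerk score mentioned above is a dimensionless smoothness measure; one common formulation integrates squared jerk over the movement and normalizes by movement duration and path length. A sketch of that common formulation (the study's exact NJSM variant may differ), compared on a smooth versus a jittery synthetic trajectory:

```python
import numpy as np

def normalized_jerk(positions, dt):
    """Dimensionless jerk-based smoothness of a 1-D trajectory:
    sqrt( duration^5 / (2 * length^2) * integral(jerk^2) dt ).
    Lower values indicate smoother movement."""
    jerk = np.diff(positions, n=3) / dt**3           # third derivative
    duration = dt * (len(positions) - 1)
    length = np.sum(np.abs(np.diff(positions)))      # path length
    return np.sqrt(duration**5 / (2 * length**2) * np.sum(jerk**2) * dt)

# A minimum-jerk-like profile versus the same profile with added jitter:
t = np.linspace(0.0, 1.0, 201)
smooth = 10 * t**3 - 15 * t**4 + 6 * t**5            # classic minimum-jerk shape
rng = np.random.default_rng(0)
jittery = smooth + 0.005 * rng.standard_normal(t.size)

dt = t[1] - t[0]
print(normalized_jerk(smooth, dt) < normalized_jerk(jittery, dt))  # -> True
```

Because jerk is a third derivative, even small positional noise inflates the score sharply, which is what makes it a sensitive smoothness index.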

  19. Finding your mate at a cocktail party: frequency separation promotes auditory stream segregation of concurrent voices in multi-species frog choruses.

    Directory of Open Access Journals (Sweden)

    Vivek Nityananda

    Full Text Available Vocal communication in crowded social environments is a difficult problem for both humans and nonhuman animals. Yet many important social behaviors require listeners to detect, recognize, and discriminate among signals in a complex acoustic milieu comprising the overlapping signals of multiple individuals, often of multiple species. Humans exploit a relatively small number of acoustic cues to segregate overlapping voices (as well as other mixtures of concurrent sounds, like polyphonic music). By comparison, we know little about how nonhuman animals are adapted to solve similar communication problems. One important cue enabling source segregation in human speech communication is that of frequency separation between concurrent voices: differences in frequency promote perceptual segregation of overlapping voices into separate "auditory streams" that can be followed through time. In this study, we show that frequency separation (ΔF) also enables frogs to segregate concurrent vocalizations, such as those routinely encountered in mixed-species breeding choruses. We presented female gray treefrogs (Hyla chrysoscelis) with a pulsed target signal (simulating an attractive conspecific call) in the presence of a continuous stream of distractor pulses (simulating an overlapping, unattractive heterospecific call). When the ΔF between target and distractor was small (e.g., ≤3 semitones), females exhibited low levels of responsiveness, indicating a failure to recognize the target as an attractive signal when the distractor had a similar frequency. Subjects became increasingly more responsive to the target, as indicated by shorter latencies for phonotaxis, as the ΔF between target and distractor increased (e.g., ΔF = 6-12 semitones). These results support the conclusion that gray treefrogs, like humans, can exploit frequency separation as a perceptual cue to segregate concurrent voices in noisy social environments. The ability of these frogs to segregate
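Frequency separations expressed in semitones map onto frequency ratios logarithmically: a separation of n semitones corresponds to a ratio of 2**(n/12). The ΔF conditions above can be converted as follows; the 1000 Hz base frequency is an arbitrary illustrative carrier, not a value from the study:

```python
def semitones_to_ratio(n):
    """A separation of n semitones corresponds to a frequency ratio of 2**(n/12)."""
    return 2.0 ** (n / 12.0)

base_hz = 1000.0                        # illustrative carrier frequency
for df in (3, 6, 12):                   # the small-to-large separations reported
    print(df, round(base_hz * semitones_to_ratio(df), 1))
# 12 semitones is exactly one octave (ratio 2.0); 3 semitones is a ratio of ~1.189.
```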

  20. Distúrbio de voz em professores: autorreferência, avaliação perceptiva da voz e das pregas vocais Voice disorders in teachers: self-report, auditory-perceptive assessment of voice and vocal fold assessment

    Directory of Open Access Journals (Sweden)

    Maria Fabiana Bonfim de Lima-Silva

    2012-12-01

    Full Text Available PURPOSE: To analyze the presence of voice disorders in teachers and the agreement between self-report, auditory-perceptive assessment of the voice, and vocal fold assessment. METHODS: Sixty teachers from two public elementary, middle and high schools participated in this cross-sectional study. After answering a self-perception questionnaire (Voice Production Conditions of the Teacher - CPV-P), used to characterize the sample and to collect self-reported data on voice disorders, they underwent speech-sample collection and nasofibrolaryngoscopic examination. Three speech-language pathologist judges used the GRBASI scale to classify the voices, and an otorhinolaryngologist described the alterations found in the vocal folds (VF). Data were analyzed descriptively and then submitted to association tests. RESULTS: In the questionnaire, 63.3% of participants reported having, or having had, a voice disorder. In total, 43.3% were diagnosed with a voice alteration and 46.7% with a vocal fold alteration. There was no association between self-report and voice assessment, nor between self-report and VF assessment, with low agreement among the three evaluations. However, there was an association between the voice and VF assessments, with intermediate agreement between them. CONCLUSION: Voice disorders are self-reported more often than is confirmed by auditory-perceptive assessment of the voice and vocal fold assessment. The intermediate agreement between the two assessments indicates that at least one of them should be performed when screening teachers.
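Agreement between paired binary assessments of this kind (self-report vs. perceptual voice rating vs. vocal fold findings) is commonly quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch on a hypothetical 2x2 table; the counts are illustrative, not the study's data:

```python
def cohens_kappa(both_yes, a_only, b_only, both_no):
    """Cohen's kappa for two binary raters, from a 2x2 contingency table."""
    n = both_yes + a_only + b_only + both_no
    p_obs = (both_yes + both_no) / n                 # observed agreement
    p_a_yes = (both_yes + a_only) / n
    p_b_yes = (both_yes + b_only) / n
    p_chance = p_a_yes * p_b_yes + (1 - p_a_yes) * (1 - p_b_yes)
    return (p_obs - p_chance) / (1 - p_chance)

print(cohens_kappa(30, 0, 0, 30))               # perfect agreement -> 1.0
print(round(cohens_kappa(20, 12, 14, 14), 2))   # hypothetical partial agreement
```

Values near 0 indicate chance-level agreement, which is the pattern described above between self-report and the instrumental assessments.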

  1. The predictability of frequency-altered auditory feedback changes the weighting of feedback and feedforward input for speech motor control.

    Science.gov (United States)

    Scheerer, Nichole E; Jones, Jeffery A

    2014-12-01

    Speech production requires the combined effort of a feedback control system driven by sensory feedback, and a feedforward control system driven by internal models. However, the factors that dictate the relative weighting of these feedback and feedforward control systems are unclear. In this event-related potential (ERP) study, participants produced vocalisations while being exposed to blocks of frequency-altered feedback (FAF) perturbations that were either predictable in magnitude (consistently either 50 or 100 cents) or unpredictable in magnitude (50- and 100-cent perturbations varying randomly within each vocalisation). Vocal and P1-N1-P2 ERP responses revealed decreases in the magnitude and trial-to-trial variability of vocal responses, smaller N1 amplitudes, and shorter vocal, P1 and N1 response latencies following predictable FAF perturbation magnitudes. In addition, vocal response magnitudes correlated with N1 amplitudes, vocal response latencies, and P2 latencies. This pattern of results suggests that after repeated exposure to predictable FAF perturbations, the contribution of the feedforward control system increases. Examination of the presentation order of the FAF perturbations revealed smaller compensatory responses, smaller P1 and P2 amplitudes, and shorter N1 latencies when the block of predictable 100-cent perturbations occurred prior to the block of predictable 50-cent perturbations. These results suggest that exposure to large perturbations modulates responses to subsequent perturbations of equal or smaller size. Similarly, exposure to a 100-cent perturbation prior to a 50-cent perturbation within a vocalisation decreased the magnitude of vocal and N1 responses, but increased P1 and P2 latencies. Thus, exposure to a single perturbation can affect responses to subsequent perturbations. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
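The perturbation sizes in this record are given in cents, hundredths of an equal-tempered semitone: a shift of c cents multiplies frequency by 2**(c/1200). Applied to an illustrative 200 Hz fundamental (an assumed value, not one from the study):

```python
def cents_to_ratio(cents):
    """A pitch shift of `cents` corresponds to a frequency ratio of 2**(cents/1200)."""
    return 2.0 ** (cents / 1200.0)

f0 = 200.0  # illustrative speaking fundamental frequency, Hz
for shift in (50, 100):                  # the two FAF perturbation magnitudes
    print(shift, round(f0 * cents_to_ratio(shift), 2))
# 100 cents is one semitone (ratio ~1.0595), so a +100-cent perturbation
# raises a 200 Hz voice to ~211.89 Hz in the auditory feedback.
```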

  2. Students' Perceived Preference for Visual and Auditory Assessment with E-Handwritten Feedback

    Science.gov (United States)

    Crews, Tena B.; Wilkinson, Kelly

    2010-01-01

    Undergraduate business communication students were surveyed to determine their perceived most effective method of assessment on writing assignments. The results indicated students' preference for a process that incorporates visual, auditory, and e-handwritten presentation via a tablet PC. Students also identified this assessment process would…

  3. Quantifying stimulus-response rehabilitation protocols by auditory feedback in Parkinson's disease gait pattern

    Science.gov (United States)

    Pineda, Gustavo; Atehortúa, Angélica; Iregui, Marcela; García-Arteaga, Juan D.; Romero, Eduardo

    2017-11-01

    External auditory cues stimulate motor-related areas of the brain, activating motor pathways parallel to the basal ganglia circuits and providing a temporary pattern for gait. In effect, patients may re-learn motor skills mediated by compensatory neuroplasticity mechanisms. However, long-term functional gains depend on the nature of the pathology, follow-up is usually limited, and reinforcement by healthcare professionals is crucial. Aiming to cope with these challenges, several research groups and device implementations provide auditory or visual stimulation to improve the Parkinsonian gait pattern, inside and outside clinical scenarios. The current work presents a semiautomated strategy for spatio-temporal feature extraction to study the relations between auditory temporal stimulation and the spatiotemporal gait response. A protocol for auditory stimulation was built to evaluate how well the strategy integrates into clinical practice. The method was evaluated in a cross-sectional measurement with an exploratory group of people with Parkinson's (n = 12, in stages 1, 2 and 3) and control subjects (n = 6). The results showed a strong linear relation between auditory stimulation and cadence response in control subjects (R = 0.98 ± 0.008) and in PD subjects in stage 2 (R = 0.95 ± 0.03) and stage 3 (R = 0.89 ± 0.05). Normalized step length showed a variable response between low and high gait velocity (R ranging from 0.2 to 0.97). The correlation between normalized mean velocity and the stimulus was strong in all PD stage 2 (R > 0.96), PD stage 3 (R > 0.84), and control (R > 0.91) subjects for all experimental conditions. Among participants, the largest variation from baseline was found in PD subjects in stage 3 (53.61 ± 39.2 steps/min, 0.12 ± 0.06 in step length, and 0.33 ± 0.16 in mean velocity); in this group these values were higher than their own baseline. These variations are related to the direct effect of metronome frequency on cadence and velocity. 
The variation of step length involves different regulation strategies and

  4. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation.

    Directory of Open Access Journals (Sweden)

    Maria eHerrojo Ruiz

    2014-09-01

    Full Text Available Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether the propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC, and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations are the first to demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN
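Long-range temporal correlations of the kind described here are typically quantified with detrended fluctuation analysis (DFA): the scaling exponent α is about 0.5 for uncorrelated timing deviations and rises toward (or above) 1 when fluctuations are long-range correlated. A compact sketch of standard first-order DFA on synthetic series, not the authors' exact pipeline:

```python
import numpy as np

def dfa_alpha(x, scales=(16, 32, 64, 128, 256)):
    """Detrended fluctuation analysis exponent of a 1-D series.
    alpha ~ 0.5: uncorrelated; alpha -> 1 and above: long-range correlated."""
    profile = np.cumsum(x - np.mean(x))              # integrated signal
    flucts = []
    for s in scales:
        n_win = len(profile) // s
        segs = profile[: n_win * s].reshape(n_win, s)
        t = np.arange(s)
        ms = []
        for seg in segs:                             # linear detrend per window
            coef = np.polyfit(t, seg, 1)
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(ms)))
    # the slope of log F(s) versus log s is the DFA exponent alpha
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(1)
white = rng.standard_normal(4096)    # uncorrelated timing "errors"
walk = np.cumsum(white)              # strongly persistent series
print(dfa_alpha(white), dfa_alpha(walk))  # roughly 0.5 versus well above 1
```

Applied to inter-onset-interval deviations of a performance, the same estimate distinguishes random timing (as with DBS OFF above) from long-range-dependent timing.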

  5. Hearing voices: does it give your patient a headache? A case of auditory hallucinations as acoustic aura in migraine

    Directory of Open Access Journals (Sweden)

    Van der Feltz-Cornelis CM

    2012-03-01

    Full Text Available Christina M van der Feltz-Cornelis,1–3 Henk Biemans,1 Jan Timmer1 (1Clinical Centre for Body, Mind and Health, GGz Breburg, Tilburg, The Netherlands; 2Faculty of Social and Behavioral Sciences, Tilburg University, Tilburg, The Netherlands; 3Trimbos Instituut, Utrecht, The Netherlands). Objective: Auditory hallucinations are generally considered a psychotic symptom. However, they occur without other psychotic symptoms in a substantial number of cases in the general population and can cause considerable individual distress because of the supposed association with schizophrenia. We describe a case of nonpsychotic auditory hallucinations occurring in the context of migraine. Method: Case report and literature review. Results: A 40-year-old man presented with imperative auditory hallucinations that caused depressive and anxiety symptoms. He also reported migraine with visual aura, which had started at the same time as the auditory hallucinations. The auditory hallucinations occurred in the context of nocturnal migraine attacks, preceding them as aura. No psychotic disorder was present. After treatment of the migraine with propranolol 40 mg twice daily, explanation of the etiology of the hallucinations, and mirtazapine 45 mg daily, the migraine subsided and no further hallucinations occurred. The patient recovered. Discussion: Visual auras in migraine are well described and occur quite often. Auditory hallucinations as migraine aura have been described in children without psychosis, but this is the first case describing auditory hallucinations without psychosis as migraine aura in an adult. DSM-IV lacks an appropriate category for this kind of hallucination. Conclusion: Psychiatrists should consider migraine with acoustic aura as a possible etiological factor in patients presenting with auditory hallucinations without further psychotic symptoms, and they should ask about headache symptoms when taking the history. Prognosis may be

  6. Self-Generated Auditory Feedback as a Cue to Support Rhythmic Motor Stability

    Directory of Open Access Journals (Sweden)

    Gopher Daniel

    2011-12-01

    Full Text Available A goal of the SKILLS project is to develop Virtual Reality (VR)-based training simulators for different application domains, one of which is juggling. Within this context the value of multimodal VR environments for skill acquisition is investigated. In this study, we investigated whether it was necessary to render the sounds of virtual balls hitting virtual hands within the juggling training simulator. First, we recorded sounds at the jugglers' ears and found the sound of the balls hitting the hands to be audible. Second, we asked 24 jugglers to juggle under normal conditions (Audible) or while listening to pink noise intended to mask the juggling sounds (Inaudible). We found that although the jugglers themselves reported no difference in their juggling across these two conditions, external juggling experts rated rhythmic stability worse in the Inaudible condition than in the Audible condition. This result suggests that auditory information should be rendered in the VR juggling training simulator.

  7. Show and Tell: Video Modeling and Instruction Without Feedback Improves Performance but Is Not Sufficient for Retention of a Complex Voice Motor Skill.

    Science.gov (United States)

    Look, Clarisse; McCabe, Patricia; Heard, Robert; Madill, Catherine J

    2018-02-02

    Modeling and instruction are frequent components of both traditional and technology-assisted voice therapy. This study investigated the value of video modeling and instruction in the early acquisition and short-term retention of a complex voice task without external feedback. Thirty participants were randomized to two conditions and trained to produce a vocal siren over 40 trials. One group received a model and verbal instructions; the other group received a model only. Sirens were analyzed for phonation time, vocal intensity, cepstral peak prominence, peak-to-peak time, and root-mean-square error at five time points. The model-and-instruction group showed significant improvement on more outcome measures than the model-only group. There was an interaction effect for vocal intensity, which showed that instructions facilitated greater improvement when they were first introduced. However, neither group reproduced the model's siren performance across all parameters nor retained the skill 1 day later. Providing verbal instruction with a model appears more beneficial than providing a model only in the prepractice phase of acquiring a complex voice skill. Improved performance was observed; however, the higher level of performance was not retained after 40 trials in either condition. Other prepractice variables may need to be considered. The findings have implications for traditional and technology-assisted voice therapy. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  8. The addition of voice prompts to audiovisual feedback and debriefing does not modify CPR quality or outcomes in out of hospital cardiac arrest--a prospective, randomized trial.

    Science.gov (United States)

    Bohn, Andreas; Weber, Thomas P; Wecker, Sascha; Harding, Ulf; Osada, Nani; Van Aken, Hugo; Lukas, Roman P

    2011-03-01

    Chest compression quality is a determinant of survival from out-of-hospital cardiac arrest (OHCA). ERC 2005 guidelines recommend the use of technical devices to support rescuers giving compressions. This prospective randomized study reviewed the influence of different feedback configurations on survival and compression quality. 312 patients suffering an OHCA were randomly allocated to two different feedback configurations. In the limited feedback group a metronome and visual feedback were used; in the extended feedback group voice prompts were added. A training program was completed prior to implementation, and performance debriefing was conducted throughout the study. Survival did not differ between the extended and limited feedback groups (47.8% vs 43.9%, p = 0.49). Average compression depth (mean ± SD: 4.74 ± 0.86 cm vs 4.84 ± 0.93 cm, p = 0.31) was similar in both groups. There were no differences in compression rate (103 ± 7 vs 102 ± 5 min⁻¹, p = 0.74) or hands-off fraction (16.16 ± 0.07% vs 17.04 ± 0.07%, p = 0.38). Bystander CPR, public arrest location, presenting rhythm and chest compression depth were predictors of short-term survival (ROSC to ED). Even limited CPR feedback combined with training and ongoing debriefing leads to high chest compression quality. Bystander CPR, location, rhythm and chest compression depth are determinants of survival from out-of-hospital cardiac arrest. The addition of voice prompts modifies neither CPR quality nor outcome in OHCA. Chest compression depth significantly influences survival, and therefore more focus should be placed on correct delivery. Further studies are needed to examine the best configuration of feedback to improve CPR quality and survival. ClinicalTrials.gov (NCT00449969), http://www.clinicalTrials.gov. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
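    As a quick sanity check on the reported survival comparison (47.8% vs 43.9%, p = 0.49), a standard two-proportion z-test can be sketched. The per-arm sample sizes below are an assumption (the 312 patients are treated as two equal arms of 156; the abstract reports only the total enrolment), so this illustrates the test itself, not a reanalysis of the trial.

```python
import math

def two_proportion_z_test(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two independent
    proportions, using the pooled-proportion standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value via the standard normal survival function.
    return z, math.erfc(abs(z) / math.sqrt(2))

# Assumed arm sizes: two equal arms of 156 (312 total).
z, p = two_proportion_z_test(0.478, 156, 0.439, 156)
```

    Under these assumed arm sizes the two-sided p-value lands close to the reported 0.49, consistent with the trial's null result.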

  9. Listening instead of reading : The influence of voice intonation in auditory health persuasion aimed at increasing fruit and vegetable intake

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    2013-01-01

    Purpose. In auditory health persuasion, the speaker’s speech becomes salient, as there is no visual information available. Intonation of speech is one important aspect that may influence persuasion. We experimentally tested the extent to which different levels of intonation are related to persuasion.

  10. Open Touch/Sound Maps: A system to convey street data through haptic and auditory feedback

    Science.gov (United States)

    Kaklanis, Nikolaos; Votis, Konstantinos; Tzovaras, Dimitrios

    2013-08-01

    The use of spatial (geographic) information is becoming ever more central and pervasive in today's internet society, but most of it is currently inaccessible to visually impaired users. Access to visual maps is severely restricted for blind and visually impaired people, due to their inability to interpret graphical information. Thus, alternative ways of presenting a map have to be explored in order to improve the accessibility of maps. Multiple types of sensory perception, such as touch and hearing, may work as a substitute for vision in the exploration of maps. The use of multimodal virtual environments seems to be a promising alternative for people with visual impairments. The present paper introduces a tool for automatic multimodal map generation with haptic and audio feedback using OpenStreetMap data. For a desired map area, an elevation map is automatically generated and can be explored by touch using a haptic device. A sonification and a text-to-speech (TTS) mechanism also provide audio navigation information during the haptic exploration of the map.
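    The kind of elevation sonification described (map data rendered as audio alongside haptic exploration) can be sketched in a few lines of standard-library Python. The linear elevation-to-pitch mapping, frequency range, and function name here are illustrative assumptions, not the tool's actual implementation.

```python
import math
import struct
import wave

def sonify_elevations(elevations, path="elevation.wav",
                      rate=22050, seg_dur=0.2,
                      f_lo=220.0, f_hi=880.0):
    """Map each elevation sample to a tone (low ground -> low pitch)
    and write the sequence of tones as a mono 16-bit WAV file."""
    e_min, e_max = min(elevations), max(elevations)
    span = (e_max - e_min) or 1.0  # avoid division by zero on flat terrain
    frames = bytearray()
    for e in elevations:
        # Linear interpolation of elevation onto the frequency range.
        freq = f_lo + (e - e_min) / span * (f_hi - f_lo)
        for i in range(int(rate * seg_dur)):
            sample = int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / rate))
            frames += struct.pack("<h", sample)  # little-endian 16-bit PCM
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))
    return path
```

    Each elevation sample becomes a short tone whose pitch rises with height, one simple design for conveying a terrain profile non-visually.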

  11. Top-Down Modulation of Auditory-Motor Integration during Speech Production: The Role of Working Memory.

    Science.gov (United States)

    Guo, Zhiqiang; Wu, Xiuqin; Li, Weifeng; Jones, Jeffery A; Yan, Nan; Sheft, Stanley; Liu, Peng; Liu, Hanjun

    2017-10-25

    Although working memory (WM) is considered an emergent property of the speech perception and production systems, the role of WM in sensorimotor integration during speech processing is largely unknown. We conducted two event-related potential experiments with female and male young adults to investigate the contribution of WM to the neurobehavioural processing of altered auditory feedback during vocal production. A delayed match-to-sample task, which required participants to indicate whether the pitch feedback perturbations they heard during vocalizations in test and sample sequences matched, elicited significantly larger vocal compensations, larger N1 responses in the left middle and superior temporal gyrus, and smaller P2 responses in the left middle and superior temporal gyrus, inferior parietal lobule, somatosensory cortex, right inferior frontal gyrus, and insula compared with a control task that did not require memory retention of the sequence of pitch perturbations. On the other hand, participants who underwent extensive auditory WM training produced suppressed vocal compensations that were correlated with improved auditory WM capacity, and enhanced P2 responses in the left middle frontal gyrus, inferior parietal lobule, right inferior frontal gyrus, and insula that were predicted by pretraining auditory WM capacity. These findings indicate that WM can enhance the perception of voice auditory feedback errors while inhibiting compensatory vocal behavior to prevent voice control from being excessively influenced by auditory feedback. This study provides the first evidence that auditory-motor integration for voice control can be modulated by top-down influences arising from WM, rather than modulated exclusively by bottom-up and automatic processes. SIGNIFICANCE STATEMENT One outstanding question that remains unsolved in speech motor control is how the mismatch between predicted and actual voice auditory feedback is detected and corrected. The present study

  12. Face the voice

    DEFF Research Database (Denmark)

    Lønstrup, Ansa

    2014-01-01

    will be based on a reception aesthetic and phenomenological approach, the latter as presented by Don Ihde in his book Listening and Voice: Phenomenologies of Sound, and my analytical sketches will be related to theoretical statements concerning the understanding of voice and media (Cavarero, Dolar, LaBelle, Neumark). Finally, the article will discuss the specific artistic combination and our auditory experience of mediated human voices and sculpturally projected faces in an art museum context under the general conditions of the societal panophonia of disembodied and mediated voices, as promoted by Steven...

  13. Rehabilitation of the Upper Extremity after Stroke: A Case Series Evaluating REO Therapy and an Auditory Sensor Feedback for Trunk Control

    Directory of Open Access Journals (Sweden)

    G. Thielman

    2012-01-01

    Full Text Available Background and Purpose. Training in a virtual environment is being established as a new approach in post-stroke rehabilitation, specifically ReoTherapy (REO), a robot-assisted virtual training device. Trunk stabilization strapping has been part of the concept with this device, and literature is lacking to support its long-term functional benefit for individuals after stroke. The purpose of this case series was to assess the feasibility of auditory trunk sensor feedback during REO therapy in moderately to severely impaired individuals after stroke. Case Description. Using an open-label crossover comparison design, 3 chronic stroke subjects were trained for 12 sessions over six weeks on either the REO or the control condition of task-related training (TRT); after a washout period of 4 weeks, the alternative therapy was given. Outcomes. With both interventions, clinically relevant improvements were found for measures of body function and structure, as well as for activity, for two participants. Providing auditory feedback for trunk control during REO training was found to be feasible. Discussion. The degree of change varied per protocol and may be due to the appropriateness of the technique chosen, as well as to each patient's impaired arm motor control.

  14. Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates.

    Science.gov (United States)

    Liu, Ying; Fan, Hao; Li, Jingting; Jones, Jeffery A; Liu, Peng; Zhang, Baofeng; Liu, Hanjun

    2018-01-01

    When people hear unexpected perturbations in auditory feedback, they produce rapid compensatory adjustments of their vocal behavior. Recent evidence has shown enhanced vocal compensations and cortical event-related potentials (ERPs) in response to attended pitch feedback perturbations, suggesting that this reflex-like behavior is influenced by selective attention. Less is known, however, about auditory-motor integration for voice control during divided attention. The present cross-modal study investigated the behavioral and ERP correlates of auditory feedback control of vocal pitch production during divided attention. During the production of sustained vowels, 32 young adults were instructed to simultaneously attend to both pitch feedback perturbations they heard and flashing red lights they saw. The presentation rate of the visual stimuli was varied to produce a low, intermediate, and high attentional load. The behavioral results showed that the low-load condition elicited significantly smaller vocal compensations for pitch perturbations than the intermediate-load and high-load conditions. As well, the cortical processing of vocal pitch feedback was also modulated as a function of divided attention. When compared to the low-load and intermediate-load conditions, the high-load condition elicited significantly larger N1 responses and smaller P2 responses to pitch perturbations. These findings provide the first neurobehavioral evidence that divided attention can modulate auditory feedback control of vocal pitch production.

  15. Sentence Comprehension in Adolescents with down Syndrome and Typically Developing Children: Role of Sentence Voice, Visual Context, and Auditory-Verbal Short-Term Memory.

    Science.gov (United States)

    Miolo, Giuliana; Chapman, Robins S.; Sindberg, Heidi A.

    2005-01-01

    The authors evaluated the roles of auditory-verbal short-term memory, visual short-term memory, and group membership in predicting language comprehension, as measured by an experimental sentence comprehension task (SCT) and the Test for Auditory Comprehension of Language--Third Edition (TACL-3; E. Carrow-Woolfolk, 1999) in 38 participants: 19 with…

  16. The effect of background music in auditory health persuasion

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    2013-01-01

    In auditory health persuasion, threatening information regarding health is communicated by voice only. One relevant context of auditory persuasion is the addition of background music. There are different mechanisms through which background music might influence persuasion, for example through mood

  17. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  18. Speaker's voice as a memory cue.

    Science.gov (United States)

    Campeanu, Sandra; Craik, Fergus I M; Alain, Claude

    2015-02-01

    Speaker's voice occupies a central role as the cornerstone of auditory social interaction. Here, we review the evidence suggesting that speaker's voice constitutes an integral context cue in auditory memory. Investigation into the nature of voice representation as a memory cue is essential to understanding auditory memory and the neural correlates which underlie it. Evidence from behavioral and electrophysiological studies suggests that while specific voice reinstatement (i.e., same speaker) often appears to facilitate word memory even without attention to voice at study, the presence of a partial benefit of similar voices between study and test is less clear. In terms of explicit memory experiments utilizing unfamiliar voices, encoding methods appear to play a pivotal role. Voice congruency effects have been found when voice is specifically attended at study (i.e., when relatively shallow, perceptual encoding takes place). These behavioral findings coincide with neural indices of memory performance such as the parietal old/new recollection effect and the late right frontal effect. The former distinguishes between correctly identified old words and correctly identified new words, and reflects voice congruency only when voice is attended at study. Characterization of the latter likely depends upon voice memory, rather than word memory. There is also evidence to suggest that voice effects can be found in implicit memory paradigms. However, the presence of voice effects appears to depend greatly on the task employed. Using a word identification task, perceptual similarity between study and test conditions is, as for explicit memory tests, crucial. In addition, the type of noise employed appears to have a differential effect. While voice effects have been observed when white noise is used at both study and test, using multi-talker babble does not yield the same results. In terms of neuroimaging research modulations, characterization of an implicit memory effect

  19. Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions.

    Science.gov (United States)

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2016-12-01

    Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Auditory interfaces in automated driving: an international survey

    Directory of Open Access Journals (Sweden)

    Pavlo Bazilinskyy

    2015-08-01

    Full Text Available This study investigated people's opinions on auditory interfaces in contemporary cars and their willingness to be exposed to auditory feedback in automated driving. We used an Internet-based survey to collect 1,205 responses from 91 countries. The respondents stated their attitudes towards two existing auditory driver assistance systems, a parking assistant (PA) and a forward collision warning system (FCWS), as well as towards a futuristic augmented sound system (FS) proposed for fully automated driving. The respondents were positive towards the PA and FCWS, and rated the willingness to have automated versions of these systems as 3.87 and 3.77, respectively (on a scale from 1 = disagree strongly to 5 = agree strongly). The respondents tolerated the FS (the mean willingness to use it was 3.00 on the same scale). The results showed that among the available response options, the female voice was the most preferred feedback type for takeover requests in highly automated driving, regardless of whether the respondents' country was English speaking or not. The present results could be useful for designers of automated vehicles and other stakeholders.

  1. Performance of Phonatory Deviation Diagrams in Synthesized Voice Analysis.

    Science.gov (United States)

    Lopes, Leonardo Wanderley; da Silva, Karoline Evangelista; da Silva Evangelista, Deyverson; Almeida, Anna Alice; Silva, Priscila Oliveira Costa; Lucero, Jorge; Behlau, Mara

    2018-05-02

    To analyze the performance of a phonatory deviation diagram (PDD) in discriminating the presence and severity of voice deviation and the predominant voice quality of synthesized voices. A speech-language pathologist performed the auditory-perceptual analysis of the synthesized voices (n = 871). The PDD distribution of voice signals was analyzed according to area, quadrant, shape, and density. Differences in signal distribution regarding the PDD area and quadrant were detected when differentiating the signals with and without voice deviation and with different predominant voice quality. Differences in signal distribution were found in all PDD parameters as a function of the severity of voice disorder. The PDD area and quadrant can differentiate normal voices from deviant synthesized voices. There are differences in signal distribution in PDD area and quadrant as a function of the severity of voice disorder and the predominant voice quality. However, the PDD area and quadrant do not differentiate the signals as a function of severity of voice disorder, and differentiated only the breathy and rough voices from the normal and strained voices. PDD density is able to differentiate only signals with moderate and severe deviation. PDD shape shows differences between signals with different severities of voice deviation. © 2018 S. Karger AG, Basel.

  2. Avaliação perceptivo-auditiva e fatores associados à alteração vocal em professores Auditory vocal analysis and factors associated with voice disorders among teachers

    Directory of Open Access Journals (Sweden)

    Albanita Gomes da Costa de Ceballos

    2011-06-01

    the city of Salvador, Bahia. Teachers answered a questionnaire and were submitted to auditory vocal analysis; the GRBAS scale was used for the diagnosis of vocal disorders. RESULTS: The study population comprised 82.8% women, with an average age of 40.7 years, mostly with higher education (88.4%), an average workload of 38 hours per week, an average of 11.5 years of professional practice and an average monthly income of R$ 1,817.18. The prevalence of voice disorders was 53.6% (255 teachers). The bivariate analysis showed statistically significant associations between vocal disorders and age above 40 years (PR = 1.83; 95% CI: 1.27-2.64), family history of dysphonia (PR = 1.72; 95% CI: 1.06-2.80), a weekly workload of over 20 hours (PR = 1.66; 95% CI: 1.09-2.52) and the presence of chalk dust in the classroom (PR = 1.70; 95% CI: 1.14-2.53). CONCLUSION: Teachers aged 40 years and over, with a family history of dysphonia, working over 20 hours weekly, and teaching in classrooms with chalk dust are more likely to develop voice disorders than others.
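    The prevalence ratios (PR) and 95% confidence intervals quoted above come from 2×2 exposure-by-outcome tables. As a sketch, the standard Wald interval computed on the log scale looks like this; the counts in the example are hypothetical, chosen only to illustrate the computation, not the study's data.

```python
import math

def prevalence_ratio(a, n1, c, n0, z=1.96):
    """Prevalence ratio with a Wald 95% CI computed on the log scale.

    a: cases among the exposed,   n1: total exposed
    c: cases among the unexposed, n0: total unexposed
    """
    pr = (a / n1) / (c / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)  # SE of ln(PR)
    low = pr * math.exp(-z * se)
    high = pr * math.exp(z * se)
    return pr, low, high

# Hypothetical counts: 60/100 exposed vs 40/100 unexposed teachers.
pr, low, high = prevalence_ratio(60, 100, 40, 100)
```

    A PR above 1 with a CI excluding 1, as in the associations reported above, indicates that the disorder is more prevalent in the exposed group.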

  3. Glottal inverse filtering analysis of human voice production — A ...

    Indian Academy of Sciences (India)

    A (grossly) simplified manner to study the functioning of the human speech production … selective auditory impairment in autism: can perceive but do not attend, Proc. Natl. Acad. … Fritzell B 1996 Voice disorders and occupations, Logoped.

  4. An Analysis of Students' Perceptions of the Value and Efficacy of Instructors' Auditory and Text-Based Feedback Modalities across Multiple Conceptual Levels

    Science.gov (United States)

    Ice, Phil; Swan, Karen; Diaz, Sebastian; Kupczynski, Lori; Swan-Dagen, Allison

    2010-01-01

    This article used work from the writing assessment literature to develop a framework for assessing the impact and perceived value of written, audio, and combined written and audio feedback strategies across four global and 22 discrete dimensions of feedback. Using a quasi-experimental research design, students at three U.S. universities were…

  5. Foetal response to music and voice.

    Science.gov (United States)

    Al-Qahtani, Noura H

    2005-10-01

    To examine whether prenatal exposure to music and voice alters foetal behaviour and whether the foetal response to music differs from that to the human voice. A prospective observational study was conducted in 20 normal term pregnant mothers. Ten foetuses were exposed to music and voice for 15 s at different sound pressure levels to determine the optimal setting for auditory stimulation. Music, voice and sham were then played to another 10 foetuses via a headphone on the maternal abdomen. The sound pressure level was 105 dB for music and 94 dB for voice. Computerised assessments of foetal heart rate and activity were recorded, and 90 actocardiograms were obtained for the whole group. One-way ANOVA followed by post hoc analysis (Student-Newman-Keuls method) was used to test whether foetal responses to music and voice differed significantly from sham. Foetuses responded with heart rate acceleration and a motor response to both music and voice; this was statistically significant compared to sham. There was no significant difference between the foetal heart rate accelerations to music and voice. Prenatal exposure to music and voice alters foetal behaviour. No difference was detected between the foetal responses to music and voice.

  6. Investigations of Hemispheric Specialization of Self-Voice Recognition

    Science.gov (United States)

    Rosa, Christine; Lassonde, Maryse; Pinard, Claudine; Keenan, Julian Paul; Belin, Pascal

    2008-01-01

    Three experiments investigated functional asymmetries related to self-recognition in the domain of voices. In Experiment 1, participants were asked to identify one of three presented voices (self, familiar or unknown) by responding with either the right or the left-hand. In Experiment 2, participants were presented with auditory morphs between the…

  7. Developmental programming of auditory learning

    Directory of Open Access Journals (Sweden)

    Melania Puddu

    2012-10-01

    Full Text Available The basic structures involved in the development of auditory function, and consequently in language acquisition, are directed by the genetic code, but the expression of individual genes may be altered by exposure to environmental factors: if favorable, these orient development in the proper direction, towards normality; if unfavorable, they deviate it from its physiological course. Early sensory experience during the foetal period (i.e. the intrauterine noise floor, sounds coming from the outside attenuated by the uterine filter, particularly the mother's voice, and the modifications induced by it at the cochlear level) represents the first example of programming in one of the earliest critical periods in the development of the auditory system. This review examines the factors that influence the developmental programming of auditory learning from the womb to infancy. In particular, it focuses on the following points: (a) the prenatal auditory experience and the plastic phenomena presumably induced by it in the auditory system, from the basilar membrane to the cortex; (b) the involvement of these phenomena in language acquisition and in the perception of the communicative intention of language after birth; (c) the consequences of auditory deprivation in critical periods of auditory development (i.e. premature interruption of foetal life).

  8. Auditory prediction during speaking and listening.

    Science.gov (United States)

    Sato, Marc; Shiller, Douglas M

    2018-02-02

    In the present EEG study, the role of auditory prediction in speech was explored through the comparison of auditory cortical responses during active speaking and passive listening to the same acoustic speech signals. Two manipulations of sensory prediction accuracy were used during the speaking task: (1) a real-time change in vowel F1 feedback (reducing prediction accuracy relative to unaltered feedback) and (2) presenting a stable auditory target rather than a visual cue to speak (enhancing auditory prediction accuracy during baseline productions, and potentially enhancing the perturbing effect of altered feedback). While subjects compensated for the F1 manipulation, no difference between the auditory-cue and visual-cue conditions was found. Under visually-cued conditions, reduced N1/P2 amplitude was observed during speaking vs. listening, reflecting a motor-to-sensory prediction. In addition, a significant correlation was observed between the magnitude of the behavioral compensatory F1 response and the magnitude of this speaking-induced suppression (SIS) for P2 during the altered auditory feedback phase, where a stronger compensatory decrease in F1 was associated with a stronger SIS effect. Finally, under the auditory-cued condition, an auditory repetition-suppression effect was observed in N1/P2 amplitude during the listening task but not during active speaking, suggesting that auditory predictive processes during speaking and passive listening are functionally distinct. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Audiovisual speech facilitates voice learning.

    Science.gov (United States)

    Sheffert, Sonya M; Olson, Elizabeth

    2004-02-01

    In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

  10. A pilot study of the relations within which hearing voices participates: Towards a functional distinction between voice hearers and controls

    NARCIS (Netherlands)

    McEnteggart, C.; Barnes-Holmes, Y.; Egger, J.I.M.; Barnes-Holmes, D.

    2016-01-01

    The current research used the Implicit Relational Assessment Procedure (IRAP) as a preliminary step toward bringing a broad, functional approach to understanding psychosis, by focusing on the specific phenomenon of auditory hallucinations of voices and sounds (often referred to as hearing voices).

  11. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  12. Voiced Excitations

    National Research Council Canada - National Science Library

    Holzricher, John

    2004-01-01

    To more easily obtain a voiced excitation function for speech characterization, measurements of skin, tracheal tube, and vocal fold motions were made and compared to EM sensor-glottal derived...

  13. Prevalence and correlates of auditory vocal hallucinations in middle childhood

    NARCIS (Netherlands)

    Bartels-Velthuis, A.A.; Jenner, J.A.; van de Willige, G.; van Os, J.; Wiersma, D.

    Background Hearing voices occurs in middle childhood, but little is known about prevalence, aetiology and immediate consequences. Aims To investigate prevalence, developmental risk factors and behavioural correlates of auditory vocal hallucinations in 7- and 8-year-olds. Method Auditory vocal

  14. Auditory Peripheral Processing of Degraded Speech

    National Research Council Canada - National Science Library

    Ghitza, Oded

    2003-01-01

    ... The underlying thesis is that the auditory periphery contributes to the robust performance of humans in speech reception in noise through a concerted contribution of the efferent feedback system...

  15. Using Facebook to Reach People Who Experience Auditory Hallucinations

    OpenAIRE

    Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging...

  16. Speaker-Sex Discrimination for Voiced and Whispered Vowels at Short Durations

    OpenAIRE

    Smith, David R. R.

    2016-01-01

    Whispered vowels, produced with no vocal fold vibration, lack the periodic temporal fine structure which in voiced vowels underlies the perceptual attribute of pitch (a salient auditory cue to speaker sex). Voiced vowels possess no temporal fine structure at very short durations (below two glottal cycles). The prediction was that speaker-sex discrimination performance for whispered and voiced vowels would be similar for very short durations but, as stimulus duration increases, voiced vowel pe...

  17. Recovering from Hallucinations: A Qualitative Study of Coping with Voices Hearing of People with Schizophrenia in Hong Kong

    Directory of Open Access Journals (Sweden)

    Petrus Ng

    2012-01-01

    Full Text Available Auditory hallucination is a positive symptom of schizophrenia and has significant impacts on the lives of individuals. People with auditory hallucination require considerable assistance from mental health professionals. Apart from medications, they may apply different lay methods to cope with their voice hearing. Results from qualitative interviews showed that people with schizophrenia in the Chinese sociocultural context of Hong Kong were coping with auditory hallucination in different ways, including (a) changing social contacts, (b) manipulating the voices, and (c) changing perception and meaning towards the voices. Implications for recovery from psychiatric illness of individuals with auditory hallucinations are discussed.

  18. Auditory-Perceptual Evaluation of Dysphonia: A Comparison Between Narrow and Broad Terminology Systems

    DEFF Research Database (Denmark)

    Iwarsson, Jenny

    2017-01-01

    of the terminology used in the multiparameter Danish Dysphonia Assessment (DDA) approach into the five-parameter GRBAS system. Methods. Voice samples illustrating type and grade of the voice qualities included in DDA were rated by five speech language pathologists using the GRBAS system with the aim of estimating...... terms and antagonists, reflecting muscular hypo- and hyperfunction. Key Words: Auditory-perceptual voice analysis–Dysphonia–GRBAS–Listening test–Voice ratings....

  19. Auditory Neuropathy

    Science.gov (United States)

    ... children and adults with auditory neuropathy. Cochlear implants (electronic devices that compensate for damaged or nonworking parts ...

  20. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.

  1. Emotional feedback for mobile devices

    CERN Document Server

    Seebode, Julia

    2015-01-01

    This book investigates the functional adequacy as well as the affective impression made by feedback messages on mobile devices. It presents an easily adoptable experimental setup to examine context effects on various feedback messages, and applies it to auditory, tactile and auditory-tactile feedback messages. This approach provides insights into the relationship between the affective impression and functional applicability of these messages as well as an understanding of the influence of unimodal components on the perception of multimodal feedback messages. The developed paradigm can also be extended to investigate other aspects of context and used to investigate feedback messages in modalities other than those presented. The book uses questionnaires implemented on a Smartphone, which can easily be adopted for field studies to broaden the scope even wider. Finally, the book offers guidelines for the design of system feedback.

  2. Tips for Healthy Voices

    Science.gov (United States)

    ... prevent voice problems and maintain a healthy voice: Drink water (stay well hydrated): Keeping your body well hydrated by drinking plenty of water each day (6-8 glasses) is essential to maintaining a healthy voice. The ...

  3. [Hearing voices does not always constitute a psychosis].

    Science.gov (United States)

    Sommer, I E C; van der Spek, D W

    2016-01-01

    Hearing voices (i.e. auditory verbal hallucinations) is mainly known as part of schizophrenia and other psychotic disorders. However, hearing voices is a symptom that can occur in many psychiatric, neurological and general medical conditions. We present three cases of non-psychotic patients with auditory verbal hallucinations caused by different disorders. The first patient is a 74-year-old male with voices due to hearing loss; the second is a 20-year-old woman with voices due to traumatisation; the third is a 27-year-old woman with voices caused by temporal lobe epilepsy. Hearing voices is a phenomenon that occurs in a variety of disorders; identification of the underlying disorder is therefore essential for guiding treatment. Improving the patient's ability to cope with the voices can reduce their impact. Antipsychotic drugs are especially effective when hearing voices is accompanied by delusions or disorganization; when this is not the case, their efficacy will probably not outweigh the side-effects.

  4. Auditory and motor imagery modulate learning in music performance.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of
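    The two outcome measures defined in this abstract are straightforward to compute. The following is a minimal Python sketch of both; the MIDI-number melody encoding, the position-by-position comparison, and all names and example values are illustrative assumptions, not the authors' analysis code:

    ```python
    import statistics

    def pitch_accuracy(produced, target):
        """Percentage of correct pitches produced, comparing the performed
        melody to the target melody position by position."""
        correct = sum(p == t for p, t in zip(produced, target))
        return 100.0 * correct / len(target)

    def ioi_variability(onset_times):
        """Temporal regularity as the coefficient of variation of
        interonset intervals (IOIs): lower means more regular timing."""
        iois = [b - a for a, b in zip(onset_times, onset_times[1:])]
        return statistics.stdev(iois) / statistics.mean(iois)

    # Illustrative data: MIDI note numbers and onset times in seconds.
    target = [60, 62, 64, 65, 67]
    produced = [60, 62, 63, 65, 67]        # one wrong pitch
    onsets = [0.0, 0.5, 1.02, 1.49, 2.0]   # quarter-note onsets, ~0.5 s apart

    print(pitch_accuracy(produced, target))   # 80.0
    print(round(ioi_variability(onsets), 3))  # 0.043
    ```

    A real analysis would first align produced and target tones (handling insertions and omissions), which this sketch deliberately omits.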

  6. The role of emotions in the development of voice

    Directory of Open Access Journals (Sweden)

    Anna Maria Disanto

    2014-06-01

    Full Text Available In this paper the authors consider the voice as an expressive sphere of communication between two people. The voice carries a symbolic meaning whose function is to represent our feelings, and thus our emotional life. The emission of sounds weaves an unconscious communication of affection and expresses the archaic nature of the links between body and language, marked by a strong auditory, olfactory, tactile and visual sensoriality.

  7. Noise perception in the workplace and auditory and extra-auditory symptoms referred by university professors.

    Science.gov (United States)

    Servilha, Emilse Aparecida Merlin; Delatti, Marina de Almeida

    2012-01-01

    To investigate the correlation between noise in the work environment and auditory and extra-auditory symptoms referred by university professors. Eighty-five professors answered a questionnaire about identification, functional status, and health. The relationship between occupational noise and auditory and extra-auditory symptoms was investigated. Statistical analysis considered a significance level of 5%. None of the professors indicated absence of noise. Responses were grouped into Always (A) (n=21) and Not Always (NA) (n=63). Significant sources of noise were the yard and another class, both classified as high intensity, as well as poor acoustics and echo. There was no association between referred noise and health complaints, such as digestive, hormonal, osteoarticular, dental, circulatory, respiratory and emotional complaints. There was also no association between referred noise and hearing complaints, although group A showed a higher occurrence of responses regarding noise nuisance, hearing difficulty, dizziness/vertigo, tinnitus, and earache. There was an association between referred noise and voice alterations, and group NA presented a higher percentage of cases with voice alterations than group A. The university environment was considered noisy; however, there was no association with auditory and extra-auditory symptoms. Hearing complaints were more evident among professors in group A. Professors' health is a multi-dimensional product and, therefore, noise cannot be considered the only aggravating factor.

  8. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies) and retrieval (during Recall) of those melodies. Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the

  9. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
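    The sketch procedure described in this record (keep only sparse energy peaks of a time-frequency representation) can be illustrated with a minimal numpy example. The Fourier magnitude spectrogram, frame parameters, and global top-N peak selection below are simplifying assumptions, not the authors' exact peak-picking algorithm:

    ```python
    import numpy as np

    def acoustic_sketch(signal, sr, frame_len=1024, hop=512, peaks_per_sec=10):
        """Keep only the strongest time-frequency peaks of a Fourier
        magnitude spectrogram; all other cells are zeroed out."""
        n_frames = 1 + (len(signal) - frame_len) // hop
        window = np.hanning(frame_len)
        spec = np.abs(np.stack([
            np.fft.rfft(window * signal[i * hop:i * hop + frame_len])
            for i in range(n_frames)
        ]))                                      # shape: (frames, freq bins)
        n_keep = max(1, int(peaks_per_sec * len(signal) / sr))
        thresh = np.sort(spec.ravel())[-n_keep]  # value of the n-th largest peak
        return np.where(spec >= thresh, spec, 0.0)

    # Toy usage: sketch a 1-second 440 Hz tone at 10 "features" per second.
    sr = 8000
    t = np.arange(sr) / sr
    sparse = acoustic_sketch(np.sin(2 * np.pi * 440 * t), sr)
    print(sparse.shape, np.count_nonzero(sparse))
    ```

    Resynthesizing audio from such a sparse representation (as needed for a listening test) requires an additional inversion step, e.g. summing sinusoids at the surviving peaks.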

  10. Auditory N1 reveals planning and monitoring processes during music performance.

    Science.gov (United States)

    Mathias, Brian; Gehring, William J; Palmer, Caroline

    2017-02-01

    The current study investigated the relationship between planning processes and feedback monitoring during music performance, a complex task in which performers prepare upcoming events while monitoring their sensory outcomes. Theories of action planning in auditory-motor production tasks propose that the planning of future events co-occurs with the perception of auditory feedback. This study investigated the neural correlates of planning and feedback monitoring by manipulating the contents of auditory feedback during music performance. Pianists memorized and performed melodies at a cued tempo in a synchronization-continuation task while the EEG was recorded. During performance, auditory feedback associated with single melody tones was occasionally substituted with tones corresponding to future (next), present (current), or past (previous) melody tones. Only future-oriented altered feedback disrupted behavior: Future-oriented feedback caused pianists to slow down on the subsequent tone more than past-oriented feedback, and amplitudes of the auditory N1 potential elicited by the tone immediately following the altered feedback were larger for future-oriented than for past-oriented or noncontextual (unrelated) altered feedback; larger N1 amplitudes were associated with greater slowing following altered feedback in the future condition only. Feedback-related negativities were elicited in all altered feedback conditions. In sum, behavioral and neural evidence suggests that future-oriented feedback disrupts performance more than past-oriented feedback, consistent with planning theories that posit similarity-based interference between feedback and planning contents. Neural sensory processing of auditory feedback, reflected in the N1 ERP, may serve as a marker for temporal disruption caused by altered auditory feedback in auditory-motor production tasks. © 2016 Society for Psychophysiological Research.

  11. Auditory perceptual, acoustic, computerized and laryngological analysis of young smokers' and nonsmokers' voice

    Directory of Open Access Journals (Sweden)

    Daniele C. de Figueiredo

    2003-12-01

    Full Text Available AIM: To perform laryngological evaluation and auditory-perceptual and computerized acoustic analyses of the voices of young adult smokers and non-smokers without vocal complaints, to compare them, and to verify the incidence of laryngeal alterations. STUDY DESIGN: Clinical comparative (case-control). MATERIAL AND METHOD: The voices of 80 individuals aged 20 to 40 years were analyzed, divided into four groups: 20 male smokers, 20 male non-smokers, 20 female smokers and 20 female non-smokers. The assessment involved laryngoscopy, performed and interpreted by an otolaryngologist, and cassette tape recordings of the sustained vowels /a/, /m/, /i/ and /u/, counting from 1 to 20, naming the days of the week and the months of the year, and singing "Parabéns a você". The recordings were edited for subsequent spectrographic analysis and auditory-perceptual evaluation by four raters with experience in voice. RESULTS: The analysis showed a slight decrease in the fundamental frequency of the voice in smokers of both sexes, as well as a higher incidence of hoarseness and laryngeal alterations among smokers.

  12. Voice loops as coordination aids in space shuttle mission control.

    Science.gov (United States)

    Patterson, E S; Watts-Perotti, J; Woods, D D

    1999-01-01

    Voice loops, an auditory groupware technology, are essential coordination support tools for experienced practitioners in domains such as air traffic management, aircraft carrier operations and space shuttle mission control. They support synchronous communication on multiple channels among groups of people who are spatially distributed. In this paper, we suggest reasons why the voice loop system is a successful medium for supporting coordination in space shuttle mission control, based on over 130 hours of direct observation. Voice loops allow practitioners to listen in on relevant communications without disrupting their own activities or the activities of others. In addition, the voice loop system is structured around the mission control organization, and therefore directly supports the demands of the domain. By understanding how voice loops meet the particular demands of the mission control environment, insight can be gained for the design of groupware tools to support cooperative activity in other event-driven domains.

  13. Cognitive biases and auditory verbal hallucinations in healthy and clinical individuals

    NARCIS (Netherlands)

    Daalman, K.; Sommer, I. E. C.; Derks, E. M.; Peters, E. R.

    2013-01-01

    Background. Several cognitive biases are related to psychotic symptoms, including auditory verbal hallucinations (AVH). It remains unclear whether these biases differ in voice-hearers with and without a 'need-for-care'. Method. A total of 72 healthy controls, 72 healthy voice-hearers and 72 clinical

  14. Hearing an Illusory Vowel in Noise : Suppression of Auditory Cortical Activity

    NARCIS (Netherlands)

    Riecke, Lars; Vanbussel, Mieke; Hausfeld, Lars; Baskent, Deniz; Formisano, Elia; Esposito, Fabrizio

    2012-01-01

    Human hearing is constructive. For example, when a voice is partially replaced by an extraneous sound (e.g., on the telephone due to a transmission problem), the auditory system may restore the missing portion so that the voice can be perceived as continuous (Miller and Licklider, 1950; for review,

  15. Voice disorders in mucosal leishmaniasis.

    Directory of Open Access Journals (Sweden)

    Ana Cristina Nunes Ruas

    Full Text Available INTRODUCTION: Leishmaniasis is considered one of the six most important infectious diseases because of its high detection coefficient and ability to produce deformities. In most cases, mucosal leishmaniasis (ML) occurs as a consequence of cutaneous leishmaniasis. If left untreated, mucosal lesions can leave sequelae, interfering with the swallowing, breathing, voice and speech processes and requiring rehabilitation. OBJECTIVE: To describe the anatomical characteristics and voice quality of ML patients. MATERIALS AND METHODS: A descriptive cross-sectional study was conducted in a cohort of ML patients treated at the Laboratory for Leishmaniasis Surveillance of the Evandro Chagas National Institute of Infectious Diseases-Fiocruz between 2010 and 2013. The patients underwent otorhinolaryngologic clinical examination by endoscopy of the upper airways and digestive tract, and speech-language assessment through directed anamnesis, auditory perception, phonation times and vocal acoustic analysis. The variables of interest were epidemiologic (sex and age) and clinical (lesion location, associated symptoms and voice quality). RESULTS: 26 patients under ML treatment and monitored by speech therapists were studied; 21 (81%) were male and five (19%) female, with ages ranging from 15 to 78 years (54.5 ± 15.0 years). The lesions were distributed in the following structures: 88.5% nasal, 38.5% oral, 34.6% pharyngeal and 19.2% laryngeal, with some patients presenting lesions in more than one anatomic site. The main complaint was nasal obstruction (73.1%), followed by dysphonia (38.5%), odynophagia (30.8%) and dysphagia (26.9%). 23 patients (84.6%) presented voice quality perturbations. Dysphonia was significantly associated with lesions in the larynx, pharynx and oral cavity. CONCLUSION: We observed that vocal quality perturbations are frequent in patients with mucosal leishmaniasis, even without laryngeal lesions; they are probably associated with disorders of some

  16. Auditory interfaces in automated driving: an international survey

    NARCIS (Netherlands)

    Bazilinskyy, P.; de Winter, J.C.F.

    2015-01-01

    This study investigated people's opinions on auditory interfaces in contemporary cars and their willingness to be exposed to auditory feedback in automated driving. We used an Internet-based survey to collect 1,205 responses from 91 countries. The respondents stated their attitudes towards two

  17. Rhythmic walking interactions with auditory feedback

    DEFF Research Database (Denmark)

    Jylhä, Antti; Serafin, Stefania; Erkut, Cumhur

    2012-01-01

    Walking is a natural rhythmic activity that has become of interest as a means of interacting with software systems such as computer games. Therefore, designing multimodal walking interactions calls for further examination. This exploratory study presents a system capable of different kinds of interactions based on varying the temporal characteristics of the output, using the sound of human walking as the input. The system either provides a direct synthesis of a walking sound based on the detected amplitude envelope of the user's footstep sounds, or provides a continuous synthetic walking sound as a stimulus for the walking human, either with a fixed tempo or a tempo adapting to the human gait. In a pilot experiment, the different interaction modes are studied with respect to their effect on the walking tempo and the experience of the subjects. The results tentatively outline different user profiles...
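    The gait-adaptive mode described in this record can be sketched as a simple feedback loop: after each detected footstep, nudge the synthesis tempo toward the tempo implied by the latest interonset interval. The exponential-smoothing update, the smoothing factor `alpha`, and the example values below are illustrative assumptions, not the system's published design:

    ```python
    def adapt_tempo(current_bpm, footstep_ioi_s, alpha=0.2):
        """One update of a tempo-adaptive walking sound: move the synthesis
        tempo a fraction alpha toward the tempo implied by the latest
        footstep interval (in seconds)."""
        detected_bpm = 60.0 / footstep_ioi_s
        return (1 - alpha) * current_bpm + alpha * detected_bpm

    # The synthetic walking sound starts at 120 BPM; the user walks slightly
    # slower, so the adaptive mode drifts toward the user's gait.
    bpm = 120.0
    for ioi in [0.55, 0.54, 0.52, 0.50]:   # seconds between detected footsteps
        bpm = adapt_tempo(bpm, ioi)
    print(round(bpm, 1))   # 117.0
    ```

    Smaller `alpha` values make the stimulus steadier (closer to the fixed-tempo mode); larger values make it follow the walker more closely.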

  18. Audio Feedback -- Better Feedback?

    Science.gov (United States)

    Voelkel, Susanne; Mello, Luciane V.

    2014-01-01

    National Student Survey (NSS) results show that many students are dissatisfied with the amount and quality of feedback they get for their work. This study reports on two case studies in which we tried to address these issues by introducing audio feedback to one undergraduate (UG) and one postgraduate (PG) class, respectively. In case study one…

  19. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  20. Implicit multisensory associations influence voice recognition.

    Directory of Open Access Journals (Sweden)

    Katharina von Kriegstein

    2006-10-01

    Full Text Available Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, while, and after participants learned to associate either sensory redundant stimuli, i.e. voices and faces, or arbitrary multimodal combinations, i.e. voices and written names, ring tones, and cell phones or brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in an increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations become thereafter available for unimodal perception and facilitate object recognition. These data suggest that for natural objects effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules.

  1. Speaker-Sex Discrimination for Voiced and Whispered Vowels at Short Durations.

    Science.gov (United States)

    Smith, David R R

    2016-01-01

    Whispered vowels, produced with no vocal fold vibration, lack the periodic temporal fine structure that in voiced vowels underlies the perceptual attribute of pitch (a salient auditory cue to speaker sex). Voiced vowels possess no temporal fine structure at very short durations (below two glottal cycles). The prediction was that speaker-sex discrimination performance for whispered and voiced vowels would be similar at very short durations but that, as stimulus duration increases, voiced-vowel performance would improve relative to whispered-vowel performance as pitch information becomes available. This pattern of results was shown for women's but not for men's voices. A whispered vowel needs to have a duration three times longer than a voiced vowel before listeners can reliably tell whether it was spoken by a man or a woman (∼30 ms vs. ∼10 ms). Listeners were half as sensitive to information about speaker sex when it was carried by whispered rather than voiced vowels.

  2. How Do Batters Use Visual, Auditory, and Tactile Information about the Success of a Baseball Swing?

    Science.gov (United States)

    Gray, Rob

    2009-01-01

    Bat/ball contact produces visual (the ball leaving the bat), auditory (the "crack" of the bat), and tactile (bat vibration) feedback about the success of the swing. We used a batting simulation to investigate how college baseball players use visual, tactile, and auditory feedback. In Experiment 1, swing accuracy (i.e., the lateral separation…

  3. Dimensionality in voice quality.

    Science.gov (United States)

    Bele, Irene Velsvik

    2007-05-01

    This study concerns speaking voice quality in a group of male teachers (n = 35) and male actors (n = 36); the purpose was to investigate normal and supranormal voices. The goal was the development of a method of valid perceptual evaluation for normal to supranormal and resonant voices. The voices (text reading at two loudness levels) had been evaluated by 10 listeners on 15 vocal characteristics using visual analogue (VA) scales. In this investigation, the results of an exploratory factor analysis of the vocal characteristics used in this method are presented, reflecting four dimensions of major importance for normal and supranormal voices. Special emphasis is placed on the effects on voice quality of a change in the loudness variable, as two loudness levels are studied. Furthermore, the vocal characteristics Sonority and Ringing voice quality receive special attention, as the essence of the term "resonant voice" was a basic issue throughout the doctoral dissertation of which this study formed a part.

  4. Writing with Voice

    Science.gov (United States)

    Kesler, Ted

    2012-01-01

    In this Teaching Tips article, the author argues for a dialogic conception of voice, based in the work of Mikhail Bakhtin. He demonstrates a dialogic view of voice in action, using two writing examples about the same topic from his daughter, a fifth-grade student. He then provides five practical tips for teaching a dialogic conception of voice in…

  5. Marshall’s Voice

    Directory of Open Access Journals (Sweden)

    Halper Thomas

    2017-12-01

    Full Text Available Most judicial opinions, for a variety of reasons, do not speak with the voice of identifiable judges, but an analysis of several of John Marshall’s best known opinions reveals a distinctive voice, with its characteristic language and style of argumentation. The power of this voice helps to account for the influence of his views.

  6. Bringing voice in policy building.

    Science.gov (United States)

    Lotrecchiano, Gaetano R; Kane, Mary; Zocchi, Mark S; Gosa, Jessica; Lazar, Danielle; Pines, Jesse M

    2017-07-03

    Purpose The purpose of this paper is to describe the use of group concept mapping (GCM) as a tool for developing a conceptual model of an episode of acute, unscheduled care from illness or injury to outcomes such as recovery, death, and chronic illness. Design/methodology/approach After generating a literature review and drafting an initial conceptual model, GCM software (CS Global MAX™) is used to organize and identify strengths and directionality between concepts generated through feedback about the model from several stakeholder groups: acute care and non-acute care providers, patients, payers, and policymakers. Through online and in-person population-specific focus groups, the GCM approach elicits feedback, assigned relationships, and articulated priorities from participants to produce an output map that describes overarching concepts and relationships within and across subsamples. Findings A clustered concept map made up of relational data points produced a taxonomy of feedback that was used to update the model; the updated model was used to solicit additional feedback from two technical expert panels (TEPs), and finally a public comment exercise was performed. The results were a stakeholder-informed, improved model of an acute care episode; identified factors that influence process and outcomes; and policy recommendations, which were delivered to the Department of Health and Human Services' (DHHS) Assistant Secretary for Preparedness and Response. Practical implications This study provides an example of the value of cross-population, multi-stakeholder input for increasing voice among health stakeholder groups facing a shared problem. Originality/value This paper provides GCM results and a visual analysis of the relational characteristics both within and across the sub-populations involved in the study. It also provides an assessment of key observational factors supporting how different stakeholder voices can be integrated to inform model development and policy recommendations.

  7. Syllogisms delivered in an angry voice lead to improved performance and engagement of a different neural system compared to neutral voice

    OpenAIRE

    Kathleen Walton Smith; Laura-Lee eBalkwill; Oshin eVartanian; Vinod eGoel; Vinod eGoel

    2015-01-01

    Despite the fact that most real-world reasoning occurs in some emotional context, very little is known about the underlying behavioral and neural implications of such context. To further understand the role of emotional context in logical reasoning we scanned 15 participants with fMRI while they engaged in logical reasoning about neutral syllogisms presented through the auditory channel in a sad, angry, or neutral tone of voice. Exposure to angry voice led to improved reasoning performance co...

  8. Speech-Language Pathology production regarding voice in popular singing.

    Science.gov (United States)

    Drumond, Lorena Badaró; Vieira, Naymme Barbosa; Oliveira, Domingos Sávio Ferreira de

    2011-12-01

    To present a literature review of the Brazilian scientific production in Speech-Language Pathology and Audiology regarding voice in popular singing over the last decade, with respect to the number of publications, the musical styles studied, the focus of the research, and the instruments used for data collection. Cross-sectional descriptive study carried out in two stages: a search of databases and publications covering the last decade of research in this area in Brazil, and a reading of the material obtained for subsequent categorization. The databases LILACS and SciELO, the database of dissertations and theses organized by CAPES, the online version of Acta ORL, and the online version of OPUS were searched using the following terms: voice, professional voice, singing voice, dysphonia, voice disorders, voice training, music, dysodia. Articles published between the years 2000 and 2010 were selected. The studies found were classified and categorized after reading their abstracts and, when necessary, the whole study. Twenty studies within the proposed theme were selected, all of which were descriptive and involved several musical styles. Twelve studies focused on the evaluation of the popular singer's voice, and the most frequently used data collection instrument was auditory-perceptual evaluation. The results of the publications found corroborate the objectives proposed by the authors and the different methodologies. The number of published studies is still small compared with the diversity of musical genres and the uniqueness of the popular singer.

  9. Auditory midbrain processing is differentially modulated by auditory and visual cortices: An auditory fMRI study.

    Science.gov (United States)

    Gao, Patrick P; Zhang, Jevin W; Fan, Shu-Juan; Sanes, Dan H; Wu, Ed X

    2015-12-01

    The cortex contains extensive descending projections, yet the impact of cortical input on brainstem processing remains poorly understood. In the central auditory system, the auditory cortex contains direct and indirect pathways (via brainstem cholinergic cells) to nuclei of the auditory midbrain, called the inferior colliculus (IC). While these projections modulate auditory processing throughout the IC, single-neuron recordings have sampled only a small fraction of cells during stimulation of the corticofugal pathway. Furthermore, assessments of cortical feedback have not been extended to sensory modalities other than audition. To address these issues, we devised blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) paradigms to measure sound-evoked responses throughout the rat IC and investigated the effects of bilateral ablation of either the auditory or the visual cortex. Auditory cortex ablation increased the gain of IC responses to noise stimuli (primarily in the central nucleus of the IC) and decreased response selectivity to forward species-specific vocalizations (versus temporally reversed ones, most prominently in the external cortex of the IC). In contrast, visual cortex ablation decreased the gain and induced a much smaller effect on response selectivity. The results suggest that auditory cortical projections normally exert a large-scale and net suppressive influence on specific IC subnuclei, while visual cortical projections provide a facilitatory influence. Meanwhile, auditory cortical projections enhance the midbrain response selectivity to species-specific vocalizations. We also probed the role of the indirect cholinergic projections in the auditory system in the descending modulation process by pharmacologically blocking muscarinic cholinergic receptors. This manipulation did not affect the gain of IC responses but significantly reduced the response selectivity to vocalizations. The results imply that auditory cortical

  10. Singing voice outcomes following singing voice therapy.

    Science.gov (United States)

    Dastolfo-Hromack, Christina; Thomas, Tracey L; Rosen, Clark A; Gartner-Schmidt, Jackie

    2016-11-01

    The objectives of this study were to describe singing voice therapy (SVT), describe referred patient characteristics, and document the outcomes of SVT. Retrospective. Records of patients receiving SVT between June 2008 and June 2013 were reviewed (n = 51). All diagnoses were included. Demographic information, number of SVT sessions, and symptom severity were retrieved from the medical record. Symptom severity was measured via the 10-item Singing Voice Handicap Index (SVHI-10). Treatment outcome was analyzed by diagnosis, history of previous training, and SVHI-10. SVHI-10 scores decreased following SVT (mean change = 11, 40% decrease) (P singing lessons (n = 10) also completed an average of three SVT sessions. Primary muscle tension dysphonia (MTD1) and benign vocal fold lesion (lesion) were the most common diagnoses. Most patients (60%) had previous vocal training. SVHI-10 decrease was not significantly different between MTD and lesion. This is the first outcome-based study of SVT in a disordered population. Diagnosis of MTD or lesion did not influence treatment outcomes. Duration of SVT was short (approximately three sessions). Voice care providers are encouraged to partner with a singing voice therapist to provide optimal care for the singing voice. This study supports the use of SVT as a tool for the treatment of singing voice disorders. 4 Laryngoscope, 126:2546-2551, 2016. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  11. Visual attention modulates brain activation to angry voices.

    Science.gov (United States)

    Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas

    2011-06-29

    In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.

  12. Multidimensional assessment of strongly irregular voices such as in substitution voicing and spasmodic dysphonia: a compilation of own research.

    Science.gov (United States)

    Moerman, Mieke; Martens, Jean-Pierre; Dejonckere, Philippe

    2015-04-01

    This article is a compilation of the authors' own research performed during the European COoperation in Science and Technology (COST) action 2103, 'Advanced Voice Function Assessment', an initiative of voice and speech processing teams consisting of physicists, engineers, and clinicians. This manuscript concerns the analysis of strongly irregular voicing types, namely substitution voicing (SV) and adductor spasmodic dysphonia (AdSD). A specific perceptual rating scale (IINFVo) was developed, and the Auditory Model Based Pitch Extractor (AMPEX), a piece of software that automatically analyses running speech and generates pitch values in background noise, was applied. The IINFVo perceptual rating scale has been shown to be useful in evaluating SV. The analysis of strongly irregular voices stimulated a modification of the European Laryngological Society's assessment protocol, which was originally designed for the common types of (less severe) dysphonia. Acoustic analysis with AMPEX demonstrates that the most informative features are, for SV, the voicing-related acoustic features and, for AdSD, the perturbation measures. Poor correlations between self-assessment and the acoustic and perceptual dimensions in the assessment of highly irregular voices argue for a multidimensional approach.
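    The "perturbation measures" that the abstract finds most informative for AdSD are conventionally cycle-to-cycle statistics such as jitter and shimmer. The following is a minimal sketch of the standard local jitter and shimmer formulas, not AMPEX itself; the glottal period and amplitude values are invented for illustration.

    ```python
    # Classical local perturbation measures: mean absolute difference between
    # consecutive glottal cycles, normalized by the overall mean.

    def jitter_local(periods):
        """Cycle-to-cycle period perturbation, relative to the mean period."""
        diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
        return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

    def shimmer_local(amplitudes):
        """Cycle-to-cycle peak-amplitude perturbation, relative to the mean."""
        diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
        return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

    # Made-up period (seconds per glottal cycle) and amplitude sequences
    periods = [0.0100, 0.0102, 0.0099, 0.0101, 0.0100]
    amps = [1.00, 0.97, 1.02, 0.99, 1.01]
    print(f"jitter  = {jitter_local(periods):.2%}")
    print(f"shimmer = {shimmer_local(amps):.2%}")
    ```

    Strongly irregular voices of the kind discussed here drive these ratios up sharply, which is why perturbation statistics separate AdSD from less severe dysphonia.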

  13. Auditory Perspective Taking

    National Research Council Canada - National Science Library

    Martinson, Eric; Brock, Derek

    2006-01-01

    .... From this knowledge of another's auditory perspective, a conversational partner can then adapt his or her auditory output to overcome a variety of environmental challenges and insure that what is said is intelligible...

  14. Comparing the experience of voices in borderline personality disorder with the experience of voices in a psychotic disorder: A systematic review.

    Science.gov (United States)

    Merrett, Zalie; Rossell, Susan L; Castle, David J

    2016-07-01

    In clinical settings, there is substantial evidence, both clinical and empirical, to suggest that approximately 50% of individuals with borderline personality disorder experience auditory verbal hallucinations. However, there is limited research investigating the phenomenology of these voices. The aim of this study was to review and compare our current understanding of auditory verbal hallucinations in borderline personality disorder with auditory verbal hallucinations in patients with a psychotic disorder, to critically analyse existing studies investigating auditory verbal hallucinations in borderline personality disorder, and to identify gaps in current knowledge that will help direct future research. The literature was searched using the electronic databases Scopus, PubMed and MEDLINE. Relevant studies were included if they were written in English, were empirical studies specifically addressing auditory verbal hallucinations and borderline personality disorder, were peer reviewed, used only adult human participants in a sample with borderline personality disorder as the primary diagnosis, and included a comparison group with a primary psychotic disorder such as schizophrenia. Our search strategy revealed a total of 16 articles investigating the phenomenology of auditory verbal hallucinations in borderline personality disorder. Some studies provided evidence to suggest that the voice experiences in borderline personality disorder are similar to those experienced by people with schizophrenia; for example, they occur inside the head and often involve persecutory voices. Other studies revealed some differences between schizophrenia and borderline personality disorder voice experiences, with the borderline personality disorder voices sounding more derogatory and self-critical in nature, and the voice-hearers' response to the voices being more emotionally resistive. Furthermore, in one study, the schizophrenia group's voices resulted in more disruption in daily functioning

  15. Effects of Written Peer-Feedback Content and Sender's Competence on Perceptions, Performance, and Mindful Cognitive Processing

    Science.gov (United States)

    Berndt, Markus; Strijbos, Jan-Willem; Fischer, Frank

    2018-01-01

    Peer-feedback efficiency might be influenced by the oftentimes voiced concern of students that they perceive their peers' competence to provide feedback as inadequate. Feedback literature also identifies mindful processing of (peer)feedback and (peer)feedback content as important for its efficiency, but lacks systematic investigation. In a 2 × 2…

  16. The Speaker Behind The Voice: Therapeutic Practice from the Perspective of Pragmatic Theory

    Directory of Open Access Journals (Sweden)

    Felicity eDeamer

    2015-06-01

    Full Text Available Many attempts at understanding auditory verbal hallucinations (AVHs) have tried to explain why there is an auditory experience in the absence of an appropriate stimulus. We suggest that many instances of voice-hearing should be approached differently. More specifically, they could be viewed primarily as hallucinated acts of communication, rather than hallucinated sounds. We suggest that this change of perspective is reflected in, and helps to explain, the successes of two recent therapeutic techniques: Relating Therapy for Voices and Avatar Therapy.

  17. [Psychological effects of preventive voice care training in student teachers].

    Science.gov (United States)

    Nusseck, M; Richter, B; Echternach, M; Spahn, C

    2017-07-01

    Studies on the effectiveness of preventive voice care programs have focused mainly on voice parameters; psychological parameters, however, have not yet been investigated in detail. The effect of a voice training program for German student teachers on psychological health parameters was investigated in a longitudinal study. The sample of 204 student teachers was divided into an intervention group (n = 123), who participated in the voice training program, and a control group (n = 81), who received no voice training. Voice training comprised ten 90-min group courses and an individual visit by the voice trainer in a teaching situation, with feedback afterwards. Participants were asked to fill out questionnaires (self-efficacy, Short-Form Health Survey, self-consciousness, voice self-concept, work-related behaviour and experience patterns) at the beginning and the end of their student teacher training period. The training program showed significant positive influences on psychological health, voice self-concept (i.e., more positive perception and increased awareness of one's own voice), and work-related coping behaviour in the intervention group. On average, the mental health status of all participants declined over time, but it declined significantly less in the trained group than in the control group. Furthermore, the trained student teachers became better able to cope with work-related stress than those without training. The training program clearly had a positive impact on mental health. The results underline the importance of such a training program not only for voice health, but also for wide-ranging aspects of general health.

  18. Superior voice recognition in a patient with acquired prosopagnosia and object agnosia.

    Science.gov (United States)

    Hoover, Adria E N; Démonet, Jean-François; Steeves, Jennifer K E

    2010-11-01

    Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person identity cues such as hair, gait, or the voice. Are they therefore superior at using non-face cues, specifically voices, for person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is perfectly able to recognize people's voices, car horns, and the bimodal stimuli. These data show a reverse shift in the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared with controls, revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia, it is advantageous to develop a superior use of voices for person identity recognition in everyday life. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. Voice Response Systems Technology.

    Science.gov (United States)

    Gerald, Jeanette

    1984-01-01

    Examines two methods of generating synthetic speech in voice response systems, which allow computers to communicate in human terms (speech), using human interface devices (ears): phoneme and reconstructed voice systems. Considerations prior to implementation, current and potential applications, glossary, directory, and introduction to Input Output…

  20. Clinical Voices - an update

    DEFF Research Database (Denmark)

    Fusaroli, Riccardo; Weed, Ethan

    Anomalous aspects of speech and voice, including pitch, fluency, and voice quality, are reported to characterise many mental disorders. However, it has proven difficult to quantify and explain this oddness of speech by employing traditional statistical methods. In this talk we will show how...

  1. Stigma and need for care in individuals who hear voices.

    Science.gov (United States)

    Vilhauer, Ruvanee P

    2017-02-01

    Voice hearing experiences, or auditory verbal hallucinations, occur in healthy individuals as well as in individuals who need clinical care, but news media depict voice hearing primarily as a symptom of mental illness, particularly schizophrenia. This article explores whether, and how, public perception of an exaggerated association between voice hearing and mental illness might influence individuals' need for clinical care. A narrative literature review was conducted, using relevant peer-reviewed research published in the English language. Stigma may prevent disclosure of voice hearing experiences. Non-disclosure can prevent access to sources of normalizing information and lead to isolation, loss of social support and distress. Internalization of stigma and concomitantly decreased self-esteem could potentially affect features of voices such as perceived voice power, controllability, negativity and frequency, as well as distress. Increased distress may result in a decrease in functioning and increased need for clinical care. The literature reviewed suggests that stigma has the potential to increase need for care through many interrelated pathways. However, the ability to draw definitive conclusions was constrained by the designs of the studies reviewed. Further research is needed to confirm the findings of this review.

  2. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement-sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  3. Voice following radiotherapy

    International Nuclear Information System (INIS)

    Stoicheff, M.L.

    1975-01-01

    This study was undertaken to provide information on the voice of patients following radiotherapy for glottic cancer. Part I presents findings from questionnaires returned by 227 of 235 patients successfully irradiated for glottic cancer from 1960 through 1971. Part II presents preliminary findings on the speaking fundamental frequencies of 22 irradiated patients. Normal to near-normal voice was reported by 83 percent of the 227 patients; however, 80 percent did indicate persisting vocal difficulties such as fatiguing of the voice with heavy usage, inability to sing, reduced loudness, hoarse voice quality, and inability to shout. The amount of talking during treatment appeared to affect the length of time for the voice to recover following treatment in those cases where recovery took from nine to 26 weeks; also, with increasing years since treatment, patients rated their voices more favorably. Smoking habits following treatment improved significantly, with only 27 percent smoking heavily as compared with 65 percent prior to radiation therapy. No correlation was found between smoking (during or after treatment) and vocal ratings, or between smoking and the length of time for the voice to recover. There was also no relationship between reported vocal ratings and stage of the disease.

  4. Voice Savers for Music Teachers

    Science.gov (United States)

    Cookman, Starr

    2012-01-01

    Music teachers are in a class all their own when it comes to voice use. These elite vocal athletes require stamina, strength, and flexibility from their voices day in, day out for hours at a time. Voice rehabilitation clinics and research show that music education ranks high among the professionals most commonly affected by voice problems.…

  5. The Effect of Anchors and Training on the Reliability of Voice Quality Ratings for Different Types of Speech Stimuli.

    Science.gov (United States)

    Brinca, Lilia; Batista, Ana Paula; Tavares, Ana Inês; Pinto, Patrícia N; Araújo, Lara

    2015-11-01

    The main objective of the present study was to investigate whether the type of voice stimulus (sustained vowel, oral reading, or connected speech) results in good intrarater and interrater agreement/reliability. A short-term panel study was performed. Voice samples from 30 native European Portuguese speakers were used. The speech materials were (1) the sustained vowel /a/, (2) oral reading of the European Portuguese version of "The Story of Arthur the Rat," and (3) connected speech. After extensive training with textual and auditory anchors, the judges were asked to rate the severity of the dysphonic voice stimuli using the phonation dimensions G, R, and B from the GRBAS scale. The voice samples were judged 6 months and 1 year after the training. Intrarater agreement and reliability were generally very good for all phonation dimensions and voice stimuli. The highest interrater reliability was obtained with the oral reading stimulus, particularly for the phonation dimensions grade (G) and breathiness (B). Roughness (R) was the most difficult voice quality to evaluate, leading to interrater unreliability across all voice quality ratings. Extensive training using textual and auditory anchors, and the use of anchors during the voice evaluations, appear to be good methods for the auditory-perceptual evaluation of dysphonic voices. The best interrater reliability was obtained with the oral reading stimulus. Breathiness appears to be an easier voice quality to evaluate than roughness. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
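    Interrater agreement of the kind reported above is commonly quantified with a chance-corrected statistic. The following is a minimal sketch of Cohen's kappa for two raters; the GRBAS-style 0-3 ratings below are invented for illustration, not data from this study.

    ```python
    from collections import Counter

    def cohens_kappa(r1, r2):
        """Cohen's kappa: observed agreement corrected for chance agreement."""
        assert len(r1) == len(r2)
        n = len(r1)
        po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
        c1, c2 = Counter(r1), Counter(r2)
        pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)  # chance agreement
        return (po - pe) / (1 - pe)

    # Invented "grade" (G) ratings on the 0-3 GRBAS scale, two raters, ten voices
    rater_a = [0, 1, 2, 3, 1, 2, 0, 3, 2, 1]
    rater_b = [0, 1, 2, 2, 1, 2, 0, 3, 3, 1]
    print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")
    ```

    For more than two raters, or to credit near-misses on an ordinal scale like GRBAS, Fleiss' kappa or weighted kappa would be the usual substitutes.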

  6. Movement goals and feedback and feedforward control mechanisms in speech production.

    Science.gov (United States)

    Perkell, Joseph S

    2012-09-01

    Studies of speech motor control are described that support a theoretical framework in which fundamental control variables for phonemic movements are multi-dimensional regions in auditory and somatosensory spaces. Auditory feedback is used to acquire and maintain auditory goals and in the development and function of feedback and feedforward control mechanisms. Several lines of evidence support the idea that speakers with more acute sensory discrimination acquire more distinct goal regions and therefore produce speech sounds with greater contrast. Feedback modification findings indicate that fluently produced sound sequences are encoded as feedforward commands, and feedback control serves to correct mismatches between expected and produced sensory consequences.
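    The mismatch-correction idea in the abstract can be caricatured in a few lines: a feedforward command aims at an auditory goal, and feedback control repeatedly corrects the error between the produced and expected sensory consequences. This is a toy simulation under assumed numbers (an 8 Hz feedforward bias and a 0.5 feedback gain), not Perkell's model.

    ```python
    # Toy feedforward + feedback loop: a biased feedforward command is
    # corrected step by step using the error against the auditory goal.

    def produce(target_hz, ff_bias_hz=8.0, fb_gain=0.5, steps=6):
        """Return the produced-output trajectory as feedback corrects the bias."""
        command = target_hz + ff_bias_hz      # imperfect feedforward command
        trajectory = []
        for _ in range(steps):
            produced = command                # produced sensory consequence
            error = produced - target_hz      # mismatch vs. the auditory goal
            command -= fb_gain * error        # feedback-based correction
            trajectory.append(produced)
        return trajectory

    traj = produce(200.0)
    print([round(f, 2) for f in traj])        # error shrinks toward the 200 Hz goal
    ```

    In the framework described above, repeated corrections of this kind are what gradually retune the feedforward commands themselves, so that fluent sequences no longer depend on moment-to-moment feedback.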

  7. Acute effects of radioiodine therapy on the voice and larynx of basedow-Graves patients

    International Nuclear Information System (INIS)

    Isolan-Cury, Roberta Werlang; Cury, Adriano Namo; Monte, Osmar; Silva, Marta Assumpcao de Andrada e; Duprat, Andre; Marone, Marilia; Almeida, Renata de; Iglesias, Alexandre

    2008-01-01

    Graves' disease is the most common cause of hyperthyroidism. There are three current therapeutic options: anti-thyroid medication, surgery, and radioactive iodine (131I). There are few data in the literature regarding the effects of radioiodine therapy on the larynx and voice. The aim of this study was to assess the effect of radioiodine therapy on the voice of Basedow-Graves patients. Material and method: A prospective study was done. Following the diagnosis of Graves' disease, patients underwent investigation of their voice, measurement of maximum phonatory time (/a/) and the s/z ratio, fundamental frequency analysis (Praat software), laryngoscopy, and auditory-perceptual analysis under three different conditions: pre-treatment, 4 days post-, and 20 days post-radioiodine therapy. These time points were based on the inflammatory pattern of thyroid tissue (Jones et al., 1999). Results: No statistically significant differences were found in voice characteristics across the three conditions. Conclusion: Radioiodine therapy does not affect voice quality. (author)
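    Two of the bedside measures named in the abstract, maximum phonatory time (MPT) and the s/z ratio, are simple quotients of sustained-phonation durations. A minimal sketch with invented timings; the ~1.4 cut-off mentioned in the comment is a common clinical rule of thumb, not a value from this study.

    ```python
    # MPT is simply the longest sustained /a/; the s/z ratio compares the
    # longest sustained voiceless /s/ with the longest voiced /z/.

    def sz_ratio(s_seconds, z_seconds):
        """Longest sustained /s/ divided by longest sustained /z/.
        Ratios well above 1.0 (often > ~1.4) are conventionally read as a
        flag for inefficient glottal closure: air escapes faster on /z/."""
        return s_seconds / z_seconds

    mpt_a = 18.2          # longest sustained /a/ in seconds (made-up)
    s, z = 16.0, 11.5     # made-up /s/ and /z/ durations in seconds
    print(f"MPT = {mpt_a:.1f} s, s/z = {sz_ratio(s, z):.2f}")
    ```

    Because both measures are single durations timed with a stopwatch, they are easy to repeat at each of the three assessment points used in the study design above.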

  8. Auditory Hallucinations as Translational Psychiatry: Evidence from Magnetic Resonance Imaging.

    Science.gov (United States)

    Hugdahl, Kenneth

    2017-12-01

    In this invited review article, I present a translational perspective and overview of our research on auditory hallucinations in schizophrenia at the University of Bergen, Norway, with a focus on the neuronal mechanisms underlying the phenomenology of experiencing "hearing voices". An auditory verbal hallucination (i.e. hearing a voice) is defined as a sensory experience in the absence of a corresponding external sensory source that could explain the phenomenological experience. I suggest a general frame or scheme for the study of auditory verbal hallucinations, called Levels of Explanation. Using a Levels of Explanation approach, mental phenomena can be described and explained at different levels (cultural, clinical, cognitive, brain-imaging, cellular and molecular). Another way of saying this is that, to advance knowledge in a research field, it is not only necessary to replicate findings, but also to show how evidence obtained with one method, and at one level of explanation, converges with evidence obtained with another method at another level. To achieve breakthroughs in our understanding of auditory verbal hallucinations, we have to advance vertically through the various levels, rather than the more common approach of staying at our favourite level and advancing horizontally (e.g., more advanced techniques and data acquisition analyses). The horizontal expansion will, however, not advance a deeper understanding of how an auditory verbal hallucination spontaneously starts and stops. Finally, I present data from the clinical, cognitive, brain-imaging, and cellular levels, where data from one level validate and support data at another level, called converging of evidence. Using a translational approach, the current status of auditory verbal hallucinations is that they implicate speech perception areas in the left temporal lobe, impairing perception of and attention to external sounds. Preliminary results also show that amygdala is implicated in the emotional

  9. Auditory Hallucinations as Translational Psychiatry: Evidence from Magnetic Resonance Imaging

    Directory of Open Access Journals (Sweden)

    Kenneth Hugdahl

    2017-12-01

    Full Text Available In this invited review article, I present a translational perspective and overview of our research on auditory hallucinations in schizophrenia at the University of Bergen, Norway, with a focus on the neuronal mechanisms underlying the phenomenology of experiencing "hearing voices". An auditory verbal hallucination (i.e., hearing a voice) is defined as a sensory experience in the absence of a corresponding external sensory source that could explain the phenomenological experience. I suggest a general frame or scheme for the study of auditory verbal hallucinations, called Levels of Explanation. Using a Levels of Explanation approach, mental phenomena can be described and explained at different levels (cultural, clinical, cognitive, brain-imaging, cellular and molecular). Another way of saying this is that, to advance knowledge in a research field, it is not only necessary to replicate findings, but also to show how evidence obtained with one method, and at one level of explanation, converges with evidence obtained with another method at another level. To achieve breakthroughs in our understanding of auditory verbal hallucinations, we have to advance vertically through the various levels, rather than the more common approach of staying at our favourite level and advancing horizontally (e.g., more advanced techniques and data acquisition analyses). The horizontal expansion will, however, not advance a deeper understanding of how an auditory verbal hallucination spontaneously starts and stops. Finally, I present data from the clinical, cognitive, brain-imaging, and cellular levels, where data from one level validate and support data at another level, called converging of evidence. Using a translational approach, the current status of auditory verbal hallucinations is that they implicate speech perception areas in the left temporal lobe, impairing perception of and attention to external sounds. 
Preliminary results also show that amygdala is implicated in

  10. Attending to auditory memory.

    Science.gov (United States)

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory-guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention-to-memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Computer-aided voice training in higher education: participants ...

    African Journals Online (AJOL)

    The training of performance singing in a multilingual, multicultural educational context presents unique problems and requires inventive teaching strategies. Computer-aided training offers objective visual feedback of voice production that can be implemented as a teaching aid in higher education. This article reports on ...

  12. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems.

    Science.gov (United States)

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-03-01

    One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers' visual and manual distractions with 'infotainment' technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual-manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox 'one-shot' voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory-vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers' interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation.

  13. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... role. Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  14. Effects of tailoring ingredients in auditory persuasive health messages on fruit and vegetable intake

    NARCIS (Netherlands)

    Elbert, Sarah P.; Dijkstra, Arie; Rozema, Andrea

    2017-01-01

    Objective: Health messages can be tailored by applying different tailoring ingredients, including personalisation, feedback, and adaptation. This experiment investigated the separate effects of these tailoring ingredients on behaviour in auditory health persuasion. Furthermore, the moderating

  15. Longitudinal variations of laryngeal overpressure and voice-related quality of life in spasmodic dysphonia.

    Science.gov (United States)

    Yeung, Jeffrey C; Fung, Kevin; Davis, Eric; Rai, Sunita K; Day, Adam M B; Dzioba, Agnieszka; Bornbaum, Catherine; Doyle, Philip C

    2015-03-01

    Adductor spasmodic dysphonia (AdSD) is a voice disorder characterized by variable symptom severity and voice disability. Those with the disorder experience a wide spectrum of symptom severity over time, resulting in varied degrees of perceived voice disability. This study investigated the longitudinal variability of AdSD, with a focus on auditory-perceptual judgments of a dimension termed laryngeal overpressure (LO) and patient self-assessments of voice-related quality of life (V-RQOL). Longitudinal, correlational study. Ten adults with AdSD were followed over three time periods. At each time period, both voice samples and self-ratings of V-RQOL were gathered prior to their scheduled Botox injection. Voice recordings subsequently were perceptually evaluated by eight listeners for LO using a visual analog scale. LO ratings for all-voiced (AV) and Rainbow Passage sentence stimuli were found to be highly correlated. However, only the LO ratings obtained from judgments of AV stimuli were found to correlate moderately with self-ratings of voice disability for both the physical functioning and social-emotional subscores, as well as the total V-RQOL score. Based on perceptual judgments, LO appears to provide a reliable means of quantifying the severity of voice abnormalities in AdSD. Variability in self-ratings of the V-RQOL suggests that perceived disability related to AdSD should be actively monitored. Further, auditory-perceptual judgments may provide an accurate index of the potential impact of the disorder on the speaker. Similarly, LO was supported as a simple clinical measure that serves as a reliable index of voice change over time. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  16. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision, and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 ‘training’ steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration, corresponding to training) with all feedback removed. Accuracy was similar for visual cues (mean percentage error = 11.5%; SD ±7.0%) and auditory cues (mean percentage error = 12.9%; SD ±11.8%). Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues present for subjects, the mean accuracy of subjects approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.
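    The accuracy metric reported in this record can be sketched in a few lines; a hypothetical illustration with invented step lengths, not the study's analysis code:

```python
# Hypothetical sketch of the reported accuracy metric: mean percentage error
# between each reproduced step and its target. All values below are invented.
def mean_percentage_error(produced, target):
    """Mean absolute deviation of reproduced steps, as a percentage of the target."""
    errors = [abs(p - target) / target * 100.0 for p in produced]
    return sum(errors) / len(errors)

target_step_m = 0.60                      # visual spatial target: step length (m)
reproduced = [0.55, 0.63, 0.66, 0.58]     # un-cued follow-up steps (invented)
mpe = mean_percentage_error(reproduced, target_step_m)
```

    The same formula applies to the auditory condition by substituting step durations for step lengths.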

  17. Auditory short-term memory activation during score reading.

    Science.gov (United States)

    Simoens, Veerle L; Tervaniemi, Mari

    2013-01-01

    Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as it is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion according to which during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.

  18. Voice - How humans communicate?

    Science.gov (United States)

    Tiwari, Manjul; Tiwari, Maneesha

    2012-01-01

    Voices are important things for humans. They are the medium through which we do a lot of communicating with the outside world: our ideas, of course, and also our emotions and our personality. The voice is the very emblem of the speaker, indelibly woven into the fabric of speech. In this sense, each of our utterances of spoken language carries not only its own message but also, through accent, tone of voice, and habitual voice quality, an audible declaration of our membership of particular social and regional groups, of our individual physical and psychological identity, and of our momentary mood. Voices are also one of the media through which we (successfully, most of the time) recognize other humans who are important to us-members of our family, media personalities, our friends, and enemies. Although evidence from DNA analysis is potentially vastly more eloquent in its power than evidence from voices, DNA cannot talk. It cannot be recorded planning, carrying out or confessing to a crime. It cannot be so apparently directly incriminating. As will quickly become evident, voices are extremely complex things, and some of the inherent limitations of the forensic-phonetic method are in part a consequence of the interaction between their complexity and the real world in which they are used. It is one of the aims of this article to explain how this comes about. This subject has unsolved questions, but there is no direct way to present the information that is necessary to understand how voices can be related, or not, to their owners.

  19. Making social robots more attractive: the effects of voice pitch, humor and empathy

    NARCIS (Netherlands)

    Niculescu, A.I.; Ge, S.S.; van Dijk, Elisabeth M.A.G.; Nijholt, Antinus; Li, Haizhou; See, Swan Lan

    2013-01-01

    In this paper we explore how simple auditory/verbal features of the spoken language, such as voice characteristics (pitch) and language cues (empathy/humor expression) influence the quality of interaction with a social robot receptionist. For our experiment two robot characters were created: Olivia,

  20. The role of auditory temporal cues in the fluency of stuttering adults

    OpenAIRE

    Furini, Juliana; Picoloto, Luana Altran; Marconato, Eduarda; Bohnen, Anelise Junqueira; Cardoso, Ana Claudia Vieira; Oliveira, Cristiane Moço Canhetti de

    2017-01-01

    ABSTRACT Purpose: to compare the frequency of disfluencies and speech rate in spontaneous speech and reading in adults with and without stuttering in non-altered and delayed auditory feedback (NAF, DAF). Methods: participants were 30 adults: 15 with Stuttering (Research Group - RG), and 15 without stuttering (Control Group - CG). The procedures were: audiological assessment and speech fluency evaluation in two listening conditions, normal and delayed auditory feedback (100 milliseconds dela...
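    The delayed auditory feedback (DAF) condition described in this record amounts to a fixed-length delay line on the speech signal; the sketch below is illustrative (not the study's software), using a 100 ms delay like the one named above:

```python
from collections import deque

# Illustrative delayed auditory feedback (DAF) sketch: each input sample is
# returned after a fixed delay (here 100 ms). A zero-length delay corresponds
# to the non-altered feedback (NAF) condition.
class DelayLine:
    def __init__(self, delay_ms, sample_rate):
        n = int(sample_rate * delay_ms / 1000)  # delay expressed in samples
        self.n = n
        self.buf = deque([0.0] * n, maxlen=n or 1)

    def process(self, sample):
        if self.n == 0:
            return sample               # NAF: pass-through, no delay
        out = self.buf[0]               # oldest sample leaves the line
        self.buf.append(sample)         # maxlen deque drops the oldest
        return out

daf = DelayLine(delay_ms=100, sample_rate=1000)   # toy rate: 100-sample delay
outputs = [daf.process(float(i)) for i in range(150)]
```

    With a real audio stream the sample rate would be e.g. 44100 Hz, making the 100 ms delay 4410 samples; the buffering logic is unchanged.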

  1. Acoustic cues for the recognition of self-voice and other-voice

    Directory of Open Access Journals (Sweden)

    Mingdi Xu

    2013-10-01

    Full Text Available Self-recognition, being indispensable for successful social communication, has become a major focus in current social neuroscience. The physical aspects of the self are most typically manifested in the face and voice. Compared with the wealth of studies on self-face recognition, self-voice recognition (SVR) has not gained much attention. Converging evidence has suggested that the fundamental frequency (F0) and formant structures serve as the key acoustic cues for other-voice recognition (OVR). However, little is known about which, and how, acoustic cues are utilized for SVR as opposed to OVR. To address this question, we independently manipulated the F0 and formant information of recorded voices and investigated their contributions to SVR and OVR. Japanese participants were presented with recorded vocal stimuli and were asked to identify the speaker—either themselves or one of their peers. Six groups of 5 peers of the same sex participated in the study. Under conditions where the formant information was fully preserved and where only the frequencies lower than the third formant (F3) were retained, accuracies of SVR deteriorated significantly with the modulation of the F0, and the results were comparable for OVR. By contrast, under a condition where only the frequencies higher than F3 were retained, the accuracy of SVR was significantly higher than that of OVR throughout the range of F0 modulations, and the F0 scarcely affected the accuracies of SVR and OVR. Our results indicate that while both F0 and formant information are involved in SVR, as well as in OVR, the advantage of SVR is manifested only when major formant information for speech intelligibility is absent. These findings imply the robustness of self-voice representation, possibly by virtue of auditory familiarity and other factors such as its association with motor/articulatory representation.
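    Experiments that manipulate F0, like the one described in this record, start from an F0 estimate of each voiced frame. A common approach is autocorrelation; the sketch below is a hypothetical, minimal illustration on a synthetic tone, not the study's analysis pipeline:

```python
import math

# Hypothetical sketch: estimating the fundamental frequency (F0) of a voiced
# frame by picking the autocorrelation peak within a plausible pitch range.
def estimate_f0(frame, fs, f0_min=80.0, f0_max=400.0):
    lo = int(fs / f0_max)            # shortest candidate period (samples)
    hi = int(fs / f0_min)            # longest candidate period (samples)
    best_lag, best_r = lo, float("-inf")
    for lag in range(lo, hi + 1):
        # unnormalized autocorrelation at this lag
        r = sum(frame[i] * frame[i - lag] for i in range(lag, len(frame)))
        if r > best_r:
            best_r, best_lag = r, lag
    return fs / best_lag             # period in samples -> frequency in Hz

fs = 8000                            # toy sample rate
frame = [math.sin(2 * math.pi * 220 * t / fs) for t in range(800)]
f0 = estimate_f0(frame, fs)          # close to 220 Hz for a pure 220 Hz tone
```

    Real pitch trackers (e.g. the one in Praat) add windowing, normalization, and octave-error correction on top of this basic idea.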

  2. Connections between voice ergonomic risk factors and voice symptoms, voice handicap, and respiratory tract diseases.

    Science.gov (United States)

    Rantala, Leena M; Hakala, Suvi J; Holmqvist, Sofia; Sala, Eeva

    2012-11-01

    The aim of the study was to investigate the connections between voice ergonomic risk factors found in classrooms and voice-related problems in teachers. Voice ergonomic assessment was performed in 39 classrooms in 14 elementary schools by means of a Voice Ergonomic Assessment in Work Environment--Handbook and Checklist. The voice ergonomic risk factors assessed included working culture, noise, indoor air quality, working posture, stress, and access to a sound amplifier. Teachers from the above-mentioned classrooms reported their voice symptoms and respiratory tract diseases, and completed a Voice Handicap Index (VHI). The more voice ergonomic risk factors found in the classroom, the higher were the teachers' total scores on voice symptoms and VHI. Stress was the factor that correlated most strongly with voice symptoms. Poor indoor air quality increased the occurrence of laryngitis. Voice ergonomics were poor in the classrooms studied and voice ergonomic risk factors affected the voice. It is important to convey information on voice ergonomics to education administrators and those responsible for school planning and taking care of school buildings. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
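    The risk-factor-to-symptom relationship reported here is a correlational one; a Pearson correlation over per-classroom counts is the kind of computation involved. The sketch below is hypothetical, with invented numbers, not the study's data or code:

```python
import math

# Hypothetical sketch: Pearson correlation between a classroom's count of
# voice ergonomic risk factors and its teacher's total voice-symptom score.
# All numbers below are invented for illustration only.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

risk_factors = [2, 3, 5, 6, 8, 9]          # per classroom (invented)
symptom_scores = [4, 5, 9, 10, 13, 15]     # per teacher (invented)
r = pearson_r(risk_factors, symptom_scores)  # positive r: more risks, more symptoms
```

    An r near +1 indicates the pattern the study describes: classrooms with more ergonomic risk factors pairing with higher symptom and VHI scores.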

  3. PoLAR Voices: Informing Adult Learners about the Science and Story of Climate Change in the Polar Regions Through Audio Podcast

    Science.gov (United States)

    Quinney, A.; Murray, M. S.; Gobroski, K. A.; Topp, R. M.; Pfirman, S. L.

    2015-12-01

    The resurgence of audio programming with the advent of podcasting in the early 2000s spawned a new medium for communicating advances in science, research, and technology. To capitalize on this informal educational outlet, the Arctic Institute of North America partnered with the International Arctic Research Center, the University of Alaska Fairbanks, and the UA Museum of the North to develop a podcast series called PoLAR Voices for the Polar Learning and Responding (PoLAR) Climate Change Education Partnership. PoLAR Voices is a public education initiative that uses creative storytelling and novel narrative structures to immerse the listener in an auditory depiction of climate change. The programs will feature the science and story of climate change, approaching topics from both the points of view of researchers and Arctic indigenous peoples. This approach will engage the listener in the holistic story of climate change, addressing both scientific and personal perspectives, resulting in a program that is at once educational, entertaining and accessible. Feedback is being collected at each stage of development to ensure the content and format of the program satisfies listener interests and preferences. Once complete, the series will be released on thepolarhub.org and on iTunes. Additionally, blanket distribution of the programs will be accomplished via radio broadcast in urban, rural and remote areas, and in multiple languages to increase distribution and enhance accessibility.

  4. Whose voice matters? LEARNERS

    African Journals Online (AJOL)

    Erna Kinsey

    the education quality and more specifically learners' mathematical skills are .... worth). Students with a high self-esteem displayed acceptance of feedback .... Thus feedback is portrayed as means of communication of the teacher's view.

  5. Formativ Feedback

    DEFF Research Database (Denmark)

    Hyldahl, Kirsten Kofod

    This book examines how teachers can use feedback to improve teaching in the classroom. In this context, John Hattie, professor at the University of Melbourne, has developed a model of feedback based on syntheses of meta-analyses. In 2009 he published the book "Visible...

  6. An EMG Study of the Lip Muscles during Covert Auditory Verbal Hallucinations in Schizophrenia

    Science.gov (United States)

    Rapin, Lucile; Dohen, Marion; Polosan, Mircea; Perrier, Pascal; Loevenbruck, Hélène

    2013-01-01

    Purpose: "Auditory verbal hallucinations" (AVHs) are speech perceptions in the absence of external stimulation. According to an influential theoretical account of AVHs in schizophrenia, a deficit in inner-speech monitoring may cause the patients' verbal thoughts to be perceived as external voices. The account is based on a…

  7. Subjective Loudness and Reality of Auditory Verbal Hallucinations and Activation of the Inner Speech Processing Network

    NARCIS (Netherlands)

    Vercammen, Ans; Knegtering, Henderikus; Bruggeman, Richard; Aleman, Andre

    Background: One of the most influential cognitive models of auditory verbal hallucinations (AVH) suggests that a failure to adequately monitor the production of one's own inner speech leads to verbal thought being misidentified as an alien voice. However, it is unclear whether this theory can

  8. Presentation of dynamically overlapping auditory messages in user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Papp, III, Albert Louis [Univ. of California, Davis, CA (United States)

    1997-09-01

    This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on the research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made from all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between overlapping sounds. Low scores indicate fewer and less serious conflicts. 
A user study was conducted to validate that the perceptual difficulties predicted by
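    The penalty-scoring idea described in this record (evaluate candidate schedules, prefer the one with the least perceptual conflict) can be sketched with temporal overlap as a stand-in penalty. This is an illustrative toy, not the dissertation's actual heuristics:

```python
from itertools import permutations

# Illustrative sketch of penalty-scored scheduling: score each candidate
# schedule by total pairwise temporal overlap, then keep the lowest-scoring
# one. Real systems would add perceptual terms (pitch proximity, loudness).
def overlap(a_start, a_dur, b_start, b_dur):
    return max(0.0, min(a_start + a_dur, b_start + b_dur) - max(a_start, b_start))

def penalty(schedule):
    """schedule: list of (start, duration). Higher score = more conflict."""
    total = 0.0
    for i in range(len(schedule)):
        for j in range(i + 1, len(schedule)):
            total += overlap(*schedule[i], *schedule[j])
    return total

durations = [1.0, 2.0, 1.5]   # three pending auditory messages (invented)

def serial_schedule(order, gap=0.25):
    """Candidate generator: play messages back-to-back in the given order."""
    t, sched = 0.0, []
    for d in (durations[k] for k in order):
        sched.append((t, d))
        t += d + gap
    return sched

best = min((serial_schedule(o) for o in permutations(range(3))), key=penalty)
```

    Here every fully serialized candidate scores zero, so any of them is optimal; the interesting cases arise when latency constraints force some overlap and the scheduler must pick the least objectionable arrangement.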

  9. [Review of Talking voices: Repetition, dialogue, and imagery in conversational discourse. 2nd edition. By Deborah Tannen

    OpenAIRE

    Dingemanse, M.

    2010-01-01

    Reviews the book, Talking voices: Repetition, dialogue, and imagery in conversational discourse. 2nd edition by Deborah Tannen. This book is the same as the 1989 original except for an added introduction. This introduction situates Talking voices in the context of intertextuality and gives a survey of relevant research since the book first appeared. The strength of the book lies in its insightful analysis of the auditory side of conversation. Yet talking voices have always been embedded in richly context...

  10. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    Science.gov (United States)

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line

  11. Effect of classic uvulopalatopharyngoplasty and laser-assisted uvulopalatopharyngoplasty on voice acoustics and speech nasalance

    International Nuclear Information System (INIS)

    Mahmoud Y Abu El-ella

    2010-01-01

    Uvulopalatopharyngoplasty (UPPP) is a commonly used surgical technique for oropharyngeal reconstruction in patients with obstructive sleep apnea (OSA). This procedure can be done either through the classic or the laser-assisted uvulopalatopharyngoplasty (LAUP) technique. The purpose of this study was to evaluate the effect of classic UPPP and LAUP on acoustics of voice and speech nasalance, and to compare the effect of each operation on these two domains. Patients and methods: The study included 27 patients with a mean age of 46 years. All patients were diagnosed with OSA based on polysomnographic examination. Patients were divided into two groups according to the type of surgical procedure. Fifteen patients underwent classic UPPP, whereas 12 patients were subjected to LAUP. A full assessment was done for all patients preoperatively and postoperatively, including auditory perceptual assessment (APA) of voice and speech, objective assessment using acoustic voice analysis and nasometry. Auditory perceptual assessment of speech and voice, acoustic analysis of voice and nasometric analysis of speech did not show statistically significant differences between the preoperative and postoperative evaluations in either group (P > .05). The results of this study demonstrated that in patients with OSA, the surgical technique, whether classic UPPP or LAUP, does not have significant effects on the patients' voice quality or their speech outcomes (Author).

  12. Voice Therapy Practices and Techniques: A Survey of Voice Clinicians.

    Science.gov (United States)

    Mueller, Peter B.; Larson, George W.

    1992-01-01

    Eighty-three voice disorder therapists' ratings of statements regarding voice therapy practices indicated that vocal nodules are the most frequent disorder treated; vocal abuse and hard glottal attack elimination, counseling, and relaxation were preferred treatment approaches; and voice therapy is more effective with adults than with children.…

  13. Smartphone App for Voice Disorders

    Science.gov (United States)

Past Issues / Fall 2013 ... developed a mobile monitoring device that relies on smartphone technology to gather a week's worth of talking, ...

  14. Effects of Medications on Voice

    Science.gov (United States)

... replacement therapy post-menopause may have a variable effect. An inadequate level of thyroid replacement medication in ...

  15. Hearing Voices and Seeing Things

    Science.gov (United States)

Facts for Families No. 102; Updated October ... delusions (a fixed, false, and often bizarre belief). Hearing voices or seeing things that are not there ...

  16. Changes in brain activity following intensive voice treatment in children with cerebral palsy.

    Science.gov (United States)

    Bakhtiari, Reyhaneh; Cummine, Jacqueline; Reed, Alesha; Fox, Cynthia M; Chouinard, Brea; Cribben, Ivor; Boliek, Carol A

    2017-09-01

Eight children (3 females; 8-16 years) with motor speech disorders secondary to cerebral palsy underwent 4 weeks of an intensive neuroplasticity-principled voice treatment protocol, LSVT LOUD®, followed by a structured 12-week maintenance program. Children were asked to overtly produce phonation (ah) at conversational loudness, cued phonation at perceived twice-conversational loudness, a series of single words, and a prosodic imitation task while being scanned using fMRI, immediately pre- and post-treatment and 12 weeks following a maintenance program. Eight age- and sex-matched controls were scanned at each of the same three time points. Based on the speech and language literature, 16 bilateral regions of interest were selected a priori to detect potential neural changes following treatment. Reduced neural activity in the motor areas (decreased motor system effort) before and immediately after treatment, and increased activity in the anterior cingulate gyrus after treatment (increased contribution of decision-making processes), were observed in the group with cerebral palsy compared to the control group. Using graphical models, post-treatment changes in connectivity were observed between the left supramarginal gyrus and the right supramarginal gyrus and the left precentral gyrus for the children with cerebral palsy, suggesting that LSVT LOUD enhanced contributions of the feedback system in the speech production network rather than a high reliance on the feedforward control system and the somatosensory target map for regulating vocal effort. Network pruning indicates greater processing efficiency and the recruitment of the auditory and somatosensory feedback control systems following intensive treatment. Hum Brain Mapp 38:4413-4429, 2017. © 2017 Wiley Periodicals, Inc.

  17. A Randomized, Controlled Trial of Behavioral Voice Therapy for Dysphonia Related to Prematurity of Birth.

    Science.gov (United States)

    Reynolds, Victoria; Meldrum, Suzanne; Simmer, Karen; Vijayasekaran, Shyan; French, Noel

    2017-03-01

    Dysphonia is a potential complication of prematurity. Preterm children may sustain iatrogenic laryngeal damage from medical intervention in the neonatal period, and further, adopt compensatory, maladaptive voicing behaviors. This pilot study aimed to evaluate the effects of a voice therapy protocol on voice quality in school-aged, very preterm (VP) children. Twenty-seven VP children with dysphonia were randomized to an immediate intervention group (n = 7) or a delayed-intervention, waiting list control group (n = 14). Following analysis of these data, a secondary analysis was conducted on the pooled intervention data (n = 21). Six participants did not complete the trial. Change to voice quality was measured via pre- and posttreatment assessments using the Consensus Auditory Perceptual Evaluation of Voice. The intervention group did not demonstrate statistically significant improvements in voice quality, whereas this was observed in the control group (P = 0.026). However, when intervention data were pooled including both the immediate and delayed groups following intervention, dysphonia severity was significantly lower (P = 0.026) in the treatment group. Dysphonia in most VP children in this cohort was persistent. These pilot data indicate that some participants experienced acceptable voice outcomes on spontaneous recovery, whereas others demonstrated a response to behavioral intervention. Further research is needed to identify the facilitators of and barriers to intervention success, and to predict those who may experience spontaneous recovery. Copyright © 2017 The Voice Foundation. All rights reserved.

  18. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  19. Review: Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Ja'fari

    2003-01-01

Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  20. Translation and adaptation of functional auditory performance indicators (FAPI

    Directory of Open Access Journals (Sweden)

    Karina Ferreira

    2011-12-01

Full Text Available Work with deaf children has gained new attention since the expectation and goal of therapy have expanded to language development and subsequent language learning. Many clinical tests were developed for evaluation of speech sound perception in young children in response to the need for accurate assessment of hearing skills that developed from the use of individual hearing aids or cochlear implants. These tests also allow the evaluation of the rehabilitation program. However, few of these tests are available in Portuguese. Evaluation with the Functional Auditory Performance Indicators (FAPI) generates a child's functional auditory skills profile, which lists auditory skills in an integrated and hierarchical order. It has seven hierarchical categories, including sound awareness, meaningful sound, auditory feedback, sound source localization, auditory discrimination, short-term auditory memory, and linguistic auditory processing. FAPI evaluation allows the therapist to map the child's hearing performance profile, determine the target for increasing the hearing abilities, and develop an effective therapeutic plan. Objective: Since the FAPI is an American test, the inventory was adapted for application in the Brazilian population. Material and Methods: The translation was done following the steps of translation and back-translation, and reproducibility was evaluated. Four translated versions (two original and two back-translated) were compared, and revisions were done to ensure language adaptation and grammatical and idiomatic equivalence. Results: The inventory was duly translated and adapted. Conclusion: Further studies about the application of the translated FAPI are necessary to make the test practicable in Brazilian clinical use.

  1. Contextual modulation of primary visual cortex by auditory signals.

    Science.gov (United States)

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame.This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  2. Aerodynamic and sound intensity measurements in tracheoesophageal voice

    NARCIS (Netherlands)

    Grolman, Wilko; Eerenstein, Simone E. J.; Tan, Frédérique M. L.; Tange, Rinze A.; Schouwenburg, Paul F.

    2007-01-01

    BACKGROUND: In laryngectomized patients, tracheoesophageal voice generally provides a better voice quality than esophageal voice. Understanding the aerodynamics of voice production in patients with a voice prosthesis is important for optimizing prosthetic designs and successful voice rehabilitation.

  3. Neural basis of the time window for subjective motor-auditory integration

    Directory of Open Access Journals (Sweden)

    Koichi eToida

    2016-01-01

Full Text Available Temporal contiguity between an action and the corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor-auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant's action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback of a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and an N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence with a reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤ 200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally-deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor-auditory integration is likely about 200 ms, as indicated by these auditory ERP components.
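The oddball logic described above, where a delay-related ERP component is isolated by subtracting the average response on standard trials from the average on deviant (delayed-feedback) trials, can be sketched on simulated data. The sampling rate, trial counts, component latency, and amplitudes below are illustrative only, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated EEG epochs: trials x samples (500 Hz, 600 ms window).
fs = 500
t = np.arange(0, 0.6, 1 / fs)

def make_epochs(n_trials, component_amp):
    # Each epoch = noise + a negative deflection around 300 ms (a toy "N300").
    component = component_amp * -np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return component + rng.normal(0, 2.0, size=(n_trials, t.size))

standard = make_epochs(400, component_amp=0.5)  # undelayed-feedback trials
deviant = make_epochs(80, component_amp=3.0)    # delayed-feedback trials

# ERPs are across-trial averages, time-locked to feedback onset.
erp_standard = standard.mean(axis=0)
erp_deviant = deviant.mean(axis=0)

# The difference wave isolates the deviance-related component.
diff_wave = erp_deviant - erp_standard
peak_latency = t[np.argmin(diff_wave)]
print(f"difference-wave peak at {peak_latency * 1000:.0f} ms")
```

Averaging suppresses the noise by the square root of the trial count, which is why the embedded component survives in the difference wave.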

  4. [Voice disorders in female teachers assessed by Voice Handicap Index].

    Science.gov (United States)

    Niebudek-Bogusz, Ewa; Kuzańska, Anna; Woźnicka, Ewelina; Sliwińska-Kowalska, Mariola

    2007-01-01

The aim of this study was to assess the application of the Voice Handicap Index (VHI) in the diagnosis of occupational voice disorders in female teachers. The subjective assessment of voice by VHI was performed in fifty subjects with dysphonia diagnosed by laryngovideostroboscopic examination. The control group comprised 30 women whose jobs did not involve vocal effort. The results for the total VHI score and each of its subscales (functional, emotional, and physical) were significantly worse in the study group than in controls (p < 0.05). Teachers estimated their own voice problems as a moderate disability, while 12% of them reported severe voice disability. However, all non-teachers assessed their voice problems as slight; their results ranged at the lowest level of the VHI score. This study confirmed that the VHI, as a tool for self-assessment of voice, can be a significant contribution to the diagnosis of occupational dysphonia.
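The VHI referenced above is a 30-item self-report instrument with three 10-item subscales (functional, physical, emotional); each item is rated 0-4, so each subscale ranges 0-40 and the total 0-120. A sketch of the scoring; the severity cut-offs below are illustrative only, since published bands vary between studies:

```python
def vhi_scores(functional, physical, emotional):
    # Three 10-item subscales, each item rated 0 (never) to 4 (always).
    for subscale in (functional, physical, emotional):
        assert len(subscale) == 10 and all(0 <= item <= 4 for item in subscale)
    subtotals = (sum(functional), sum(physical), sum(emotional))
    return subtotals, sum(subtotals)

def severity(total):
    # Illustrative cut-offs only; studies use varying severity bands.
    if total <= 30:
        return "slight"
    if total <= 60:
        return "moderate"
    return "severe"

# Hypothetical respondent answering "sometimes" (2) to every item.
subtotals, total = vhi_scores([2] * 10, [2] * 10, [2] * 10)
print(subtotals, total, severity(total))  # (20, 20, 20) 60 moderate
```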

  5. Feedback Networks

    OpenAIRE

    Zamir, Amir R.; Wu, Te-Lin; Sun, Lin; Shen, William; Malik, Jitendra; Savarese, Silvio

    2016-01-01

    Currently, the most successful learning models in computer vision are based on learning successive representations followed by a decision layer. This is usually actualized through feedforward multilayer neural networks, e.g. ConvNets, where each layer forms one of such successive representations. However, an alternative that can achieve the same goal is a feedback based approach in which the representation is formed in an iterative manner based on a feedback received from previous iteration's...
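The feedback-based alternative the authors describe, a representation formed iteratively with an output available at every step, can be sketched as a tiny recurrent loop. The dimensions and random weights below are purely illustrative; a real model would learn them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy feedback model: h_t = tanh(W_x x + W_h h_{t-1}), so the hidden
# representation is refined over iterations using feedback from the
# previous iteration, and a readout is available at every step.
d_in, d_h, d_out = 8, 16, 3
W_x = rng.normal(0, 0.3, (d_h, d_in))
W_h = rng.normal(0, 0.3, (d_h, d_h))
W_out = rng.normal(0, 0.3, (d_out, d_h))

x = rng.normal(size=d_in)
h = np.zeros(d_h)
for step in range(4):
    h = np.tanh(W_x @ x + W_h @ h)  # feedback from previous iteration
    prediction = W_out @ h          # early output at each iteration
    print(step, prediction.round(2))
```

The ability to read out a prediction at every iteration is what gives such networks early (anytime) outputs, one of the properties the abstract contrasts with purely feedforward stacks.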

  6. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  7. Listen to a voice

    DEFF Research Database (Denmark)

    Hølge-Hazelton, Bibi

    2001-01-01

    Listen to the voice of a young girl Lonnie, who was diagnosed with Type 1 diabetes at 16. Imagine that she is deeply involved in the social security system. She lives with her mother and two siblings in a working class part of a small town. She is at a special school for problematic youth, and her...

  8. Sustainable Consumer Voices

    DEFF Research Database (Denmark)

    Klitmøller, Anders; Rask, Morten; Jensen, Nevena

    2011-01-01

    Aiming to explore how user driven innovation can inform high level design strategies, an in-depth empirical study was carried out, based on data from 50 observations of private vehicle users. This paper reports the resulting 5 consumer voices: Technology Enthusiast, Environmentalist, Design Lover...

  9. Voices of courage

    Directory of Open Access Journals (Sweden)

    Noraida Abdullah Karim

    2007-07-01

Full Text Available In May 2007 the Women's Commission for Refugee Women and Children presented its annual Voices of Courage awards to three displaced people who have dedicated their lives to promoting economic opportunities for refugee and displaced women and youth. These are their (edited) testimonies.

  10. What the voice reveals

    NARCIS (Netherlands)

    Ko, Sei Jin

    2007-01-01

    Given that the voice is our main form of communication, we know surprisingly little about how it impacts judgment and behavior. Furthermore, the modern advancement in telecommunication systems, such as cellular phones, has meant that a large proportion of our everyday interactions are conducted

  11. Bodies and Voices

    DEFF Research Database (Denmark)

    A wide-ranging collection of essays centred on readings of the body in contemporary literary and socio-anthropological discourse, from slavery and rape to female genital mutilation, from clothing, ocular pornography, voice, deformation and transmutation to the imprisoned, dismembered, remembered...

  12. Human voice perception.

    Science.gov (United States)

    Latinus, Marianne; Belin, Pascal

    2011-02-22

We are all voice experts. First and foremost, we can produce and understand speech, and this makes us a unique species. But in addition to speech perception, we routinely extract from voices a wealth of socially-relevant information in what constitutes a more primitive, and probably more universal, non-linguistic mode of communication. Consider the following example: you are sitting in a plane, and you can hear a conversation in a foreign language in the row behind you. You do not see the speakers' faces, and you cannot understand the speech content because you do not know the language. Yet, an amazing amount of information is available to you. You can evaluate the physical characteristics of the different protagonists, including their gender, approximate age and size, and associate an identity to the different voices. You can form a good idea of the different speakers' moods and affective states, as well as more subtle cues such as the perceived attractiveness or dominance of the protagonists. In brief, you can form a fairly detailed picture of the type of social interaction unfolding, which a brief glance backwards can on occasion help refine - sometimes surprisingly so. What are the acoustical cues that carry these different types of vocal information? How does our brain process and analyse this information? Here we briefly review an emerging field and the main tools used in voice perception research. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Voice application development for Android

    CERN Document Server

    McTear, Michael

    2013-01-01

    This book will give beginners an introduction to building voice-based applications on Android. It will begin by covering the basic concepts and will build up to creating a voice-based personal assistant. By the end of this book, you should be in a position to create your own voice-based applications on Android from scratch in next to no time.Voice Application Development for Android is for all those who are interested in speech technology and for those who, as owners of Android devices, are keen to experiment with developing voice apps for their devices. It will also be useful as a starting po

  14. Temporal Sequence of Visuo-Auditory Interaction in Multiple Areas of the Guinea Pig Visual Cortex

    Science.gov (United States)

    Nishimura, Masataka; Song, Wen-Jie

    2012-01-01

    Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences can be generated in V1, either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activities were recorded using optical imaging at a high spatiotemporal resolution from multiple areas of the guinea pig visual cortex, to visual and/or acoustic stimulations. Visuo-auditory interactions were evaluated according to differences between responses evoked by combined auditory and visual stimulation, and the sum of responses evoked by separate visual and auditory stimulations. Simultaneous presentation of visual and acoustic stimulations resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction. PMID:23029483
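The interaction measure used in the study above reduces to a simple subtraction: the response to combined audio-visual stimulation minus the sum of the two unimodal responses, with nonzero values indicating nonlinear integration. The response values below are made up for illustration:

```python
import numpy as np

# Illustrative per-block response amplitudes (arbitrary units).
resp_visual = np.array([1.0, 1.2, 0.9])
resp_auditory = np.array([0.3, 0.2, 0.4])
resp_combined = np.array([1.8, 1.9, 1.7])

# Interaction = combined response - (visual + auditory responses).
interaction = resp_combined - (resp_visual + resp_auditory)
print(interaction)         # positive values -> superadditive interaction
print(interaction.mean())
```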

  15. Voice similarity in identical twins.

    Science.gov (United States)

    Van Gysel, W D; Vercammen, J; Debruyne, F

    2001-01-01

If people are asked to visually discriminate between the two individuals of a monozygotic twin (MT) pair, they mostly get into trouble. Does this problem also exist when listening to twin voices? Twenty female and 10 male MT voices were randomly assembled with one "strange" voice to get voice trios. The listeners (10 female students in Speech and Language Pathology) were asked to label the twins (voices 1-2, 1-3 or 2-3) in two conditions: two standard sentences read aloud and a 2.5-second midsection of a sustained /a/. The proportion of correctly labelled twins was 82% and 63% for female voices and 74% and 52% for male voices, for the sentences and the sustained /a/ respectively, both being significantly greater than chance (33%). The acoustic analysis revealed a high intra-twin correlation for the speaking fundamental frequency (SFF) of the sentences and the fundamental frequency (F0) of the sustained /a/. So the voice pitch could have been a useful characteristic in the perceptual identification of the twins. We conclude that there is a greater perceptual resemblance between the voices of identical twins than between voices without a genetic relationship. The identification, however, is not perfect. The voice pitch possibly contributes to the correct twin identifications.
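The F0 of a sustained /a/, the acoustic cue highlighted above, is commonly estimated by autocorrelation: the first strong peak at a positive lag marks one glottal period. A sketch on a synthetic harmonic signal, where a 220 Hz tone complex stands in for a real recording:

```python
import numpy as np

fs = 16000
t = np.arange(0, 0.2, 1 / fs)
f0_true = 220.0
# Synthetic "sustained /a/": a decaying harmonic series.
signal = sum(np.sin(2 * np.pi * k * f0_true * t) / k for k in range(1, 6))

# Autocorrelation-based F0 estimate: the strongest peak at a positive
# lag within a plausible speech range gives the period.
ac = np.correlate(signal, signal, mode="full")[signal.size - 1:]
lo, hi = int(fs / 400), int(fs / 75)  # search 75-400 Hz
period = lo + np.argmax(ac[lo:hi])
f0_est = fs / period
print(f"estimated F0: {f0_est:.1f} Hz")
```

Restricting the lag search range keeps the estimator from locking onto the zero-lag peak or onto octave errors at multiples of the true period.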

  16. Duration reproduction with sensory feedback delay: Differential involvement of perception and action time

    Directory of Open Access Journals (Sweden)

    Stephanie eGanzenmüller

    2012-10-01

Full Text Available Previous research has shown that voluntary action can attract subsequent, delayed feedback events towards the action, and adaptation to the sensorimotor delay can even reverse motor-sensory temporal-order judgments. However, whether and how sensorimotor delay affects duration reproduction is still unclear. To investigate this, we injected an onset- or offset-delay into the sensory feedback signal from a duration reproduction task. We compared duration reproductions within modality (visual, auditory) and across audiovisual modalities under feedback-signal onset- and offset-delay manipulations. We found that the reproduced duration was lengthened in both visual and auditory feedback-signal onset-delay conditions. The lengthening effect was evident immediately, on the first trial with the onset delay. However, when the onset of the feedback signal was prior to the action, the lengthening effect was diminished. In contrast, a shortening effect was found with feedback-signal offset-delay, though the effect was weaker and manifested only in the auditory offset-delay condition. These findings indicate that participants tend to mix the onset of the action and the feedback signal more when the feedback is delayed, and they rely heavily on motor-stop signals for the duration reproduction. Furthermore, auditory duration was overestimated compared to visual duration in crossmodal feedback conditions, and the overestimation of auditory duration (or the underestimation of visual duration) was independent of the delay manipulation.

  17. Emotional expressions in voice and music: same code, same effect?

    Science.gov (United States)

    Escoffier, Nicolas; Zhong, Jidan; Schirmer, Annett; Qiu, Anqi

    2013-08-01

Scholars have documented similarities in the way voice and music convey emotions. By using functional magnetic resonance imaging (fMRI) we explored whether these similarities imply overlapping processing substrates. We asked participants to trace changes in either the emotion or pitch of vocalizations and music using a joystick. Compared to music, vocalizations more strongly activated superior and middle temporal cortex, cuneus, and precuneus. However, despite these differences, overlapping rather than differing regions emerged when comparing emotion with pitch tracing for music and vocalizations, respectively. Relative to pitch tracing, emotion tracing activated medial superior frontal and anterior cingulate cortex regardless of stimulus type. Additionally, we observed emotion-specific effects in primary and secondary auditory cortex as well as in medial frontal cortex that were comparable for voice and music. Together these results indicate that similar mechanisms support emotional inferences from vocalizations and music and that these mechanisms tap into a general system involved in social cognition. Copyright © 2011 Wiley Periodicals, Inc.

  18. Developmental Changes in Locating Voice and Sound in Space

    Science.gov (United States)

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (only present in the 7 month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  19. Comparison Between Vocal Function Exercises and Voice Amplification.

    Science.gov (United States)

    Teixeira, Letícia Caldas; Behlau, Mara

    2015-11-01

    To compare the effectiveness of vocal function exercises (VFEs) versus voice amplification (VA) after a 6-week therapy for teachers diagnosed with behavioral dysphonia. A total of 162 teachers with behavioral dysphonia were randomly allocated into two intervention groups and one control group (CG). Outcomes were assessed using auditory-perceptual evaluation of voice, laryngeal status assessment, self-ratings of the impact of dysphonia, and acoustic analysis. The VFE group showed effective changes across treatment outcome measures: overall severity of dysphonia relative to the CG, laryngeal evaluation, and self-perceived dysphonia. The VA group showed positive outcomes in some measures of self-rated dysphonia. The CG had poorer outcomes across self-assessment dimensions. The VFE method is effective in treating the behavioral dysphonia of teachers, can change the overall severity and the self-perception of the impact of dysphonia, and the laryngeal evaluation outcomes. The use of a voice amplifier is effective as a preventive measure because it results in an improved self-perception of dysphonia, especially in the work-related dimension. One case of dysphonia aggravation can be prevented in every three patients with behavioral dysphonia engaged in VFE, and one case in every five patients using VA. The lack of a therapeutic intervention worsens teachers' behavioral dysphonia in a period of 6 weeks. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
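The "one case prevented in every three (or five) patients" figures above are numbers needed to treat (NNT), the reciprocal of the absolute risk reduction. A sketch of that arithmetic; the event rates below are hypothetical, chosen only so the NNTs come out near the quoted 3 and 5, since the abstract does not report the underlying rates:

```python
def number_needed_to_treat(control_event_rate, treated_event_rate):
    # NNT = 1 / ARR, where ARR is the absolute risk reduction:
    # the event rate without treatment minus the rate with treatment.
    arr = control_event_rate - treated_event_rate
    return 1.0 / arr

# Hypothetical dysphonia-aggravation rates over the 6-week period.
print(round(number_needed_to_treat(0.53, 0.20)))  # ~3 (VFE)
print(round(number_needed_to_treat(0.40, 0.20)))  # 5 (amplification)
```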

  20. Listening to Schneiderian Voices: A Novel Phenomenological Analysis.

    Science.gov (United States)

    Rosen, Cherise; Chase, Kayla A; Jones, Nev; Grossman, Linda S; Gin, Hannah; Sharma, Rajiv P

    This paper reports on analyses designed to elucidate phenomenological characteristics, content and experience specifically targeting participants with Schneiderian voices conversing/commenting (VC) while exploring differences in clinical presentation and quality of life compared to those with voices not conversing (VNC). This mixed-method investigation of Schneiderian voices included standardized clinical metrics and exploratory phenomenological interviews designed to elicit in-depth information about the characteristics, content, meaning, and personification of auditory verbal hallucinations. The subjective experience shows a striking pattern of VC, as they are experienced as internal at initial onset and during the longer-term course of illness when compared to VNC. Participants in the VC group were more likely to attribute the origin of their voices to an external source such as God, telepathic communication, or mediumistic sources. VC and VNC were described as characterological entities that were distinct from self (I/we vs. you). We also found an association between VC and the positive, cognitive, and depression symptom profile. However, we did not find a significant group difference in overall quality of life. The clinical portrait of VC is complex, multisensory, and distinct, and suggests a need for further research into the biopsychosocial interface between subjective experience, socioenvironmental constraints, individual psychology, and the biological architecture of intersecting symptoms. © 2016 S. Karger AG, Basel.

  1. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual

  2. Guided self-help cognitive-behaviour Intervention for VoicEs (GiVE): Results from a pilot randomised controlled trial in a transdiagnostic sample.

    Science.gov (United States)

    Hazell, Cassie M; Hayward, Mark; Cavanagh, Kate; Jones, Anna-Marie; Strauss, Clara

    2017-10-12

Few patients have access to cognitive behaviour therapy for psychosis (CBTp) even though at least 16 sessions of CBTp are recommended in treatment guidelines. Briefer CBTp could improve access as the same number of therapists could see more patients. In addition, focusing on single psychotic symptoms, such as auditory hallucinations ('voices'), rather than on psychosis more broadly, may yield greater benefits. This pilot RCT recruited 28 participants (with a range of diagnoses) from NHS mental health services who were distressed by hearing voices. The study compared an 8-session guided self-help CBT intervention for distressing voices with a wait-list control. Data were collected at baseline and at 12 weeks, with post-therapy assessments conducted blind to allocation. Voice-impact was the pre-determined primary outcome. Secondary outcomes were depression, anxiety, wellbeing and recovery. Mechanism measures were self-esteem, beliefs about self, beliefs about voices and voice-relating. Recruitment and retention were feasible with low study (3.6%) and therapy (14.3%) dropout. There were large, statistically significant between-group effects on the primary outcome of voice-impact (d=1.78; 95% CI: 0.86-2.70), which exceeded the minimum clinically important difference. Large, statistically significant effects were found on a number of secondary and mechanism measures. Large effects on the pre-determined primary outcome of voice-impact are encouraging, and criteria for progressing to a definitive trial are met. Significant between-group effects on measures of self-esteem, negative beliefs about self and beliefs about voice omnipotence are consistent with these being mechanisms of change and this requires testing in a future trial. Copyright © 2017. Published by Elsevier B.V.
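The between-group effect reported above (d=1.78; 95% CI: 0.86-2.70) is a standardized mean difference. For readers unfamiliar with the metric, a generic Cohen's d with an approximate confidence interval can be sketched as follows; this is an illustration using a common normal-approximation standard error, not the trial's actual analysis code, and the example numbers are invented:

```python
import math

def cohens_d_with_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    """Cohen's d for two independent groups, with an approximate 95% CI
    based on the usual normal-approximation standard error."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)
```

With hypothetical group statistics (means 12 vs. 10, SDs of 2, 14 participants per arm) this yields d = 1.0 with a CI of roughly 0.21 to 1.79, illustrating how wide intervals are at pilot-trial sample sizes.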

  3. Collaboration and conquest: MTD as viewed by voice teacher (singing voice specialist) and speech-language pathologist.

    Science.gov (United States)

    Goffi-Fynn, Jeanne C; Carroll, Linda M

    2013-05-01

This study was designed as a qualitative case study to demonstrate the process of diagnosis and treatment by a voice team managing a singer diagnosed with muscular tension dysphonia (MTD). Traditionally, literature suggests that MTD is challenging to treat, and little in the literature directly addresses singers with MTD. Data collected included initial medical screening with a laryngologist, referral to a speech-language pathologist (SLP) specializing in voice disorders among singers, and adjunctive voice training with a voice teacher trained in vocology (singing voice specialist or SVS). Initial target goals with the SLP included reducing extrinsic laryngeal tension, using a relaxed laryngeal posture, and effective abdominal-diaphragmatic support for all phonation events. Balance of respiratory forces, laryngeal coordination, and use of optimum filtering of the source signal through resonance and articulatory awareness were emphasized. Further work with the SVS addressed three main goals: a lowered breathing pattern to aid in decreasing subglottic air pressure, allowing the vertical laryngeal position to lower for a relaxed laryngeal posture, and a top-down singing approach to encourage an easier, more balanced registration and better resonance. Initial results also emphasize the retraining of the subject toward a sensory rather than auditory mode of monitoring. Other areas of consideration include singers' training and vocal use, the psychological effects of MTD, the personalities potentially associated with it, and its relationship with stress. Finally, the results emphasize that a positive rapport with the subject and collaboration between all professionals involved in a singer's care are essential for recovery. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  4. Whose voice matters? Learners

    Directory of Open Access Journals (Sweden)

    Sarah Bansilal

    2010-01-01

    Full Text Available International and national mathematics studies have revealed the poor mathematics skills of South African learners. An essential tool that can be used to improve learners' mathematical skills is for educators to use effective feedback. Our purpose in this study was to elicit learners' understanding and expectations of teacher assessment feedback. The study was conducted with five Grade 9 mathematics learners. Data were generated from one group interview, seven journal entries by each learner, video-taped classroom observations and researcher field notes. The study revealed that the learners have insightful perceptions of the concept of educator feedback. While some learners viewed educator feedback as a tool to probe their understanding, others viewed it as a mechanism to get the educator's point of view. A significant finding of the study was that learners viewed educator assessment feedback as instrumental in building or breaking their self-confidence.

  5. Gay- and Lesbian-Sounding Auditory Cues Elicit Stereotyping and Discrimination.

    Science.gov (United States)

    Fasoli, Fabio; Maass, Anne; Paladino, Maria Paola; Sulpizio, Simone

    2017-07-01

The growing body of literature on the recognition of sexual orientation from voice ("auditory gaydar") is silent on the cognitive and social consequences of having a gay-/lesbian- versus heterosexual-sounding voice. We investigated this issue in four studies (overall N = 276), conducted in Italian, in which heterosexual listeners were exposed to single-sentence voice samples of gay/lesbian and heterosexual speakers. In all four studies, listeners were found to make gender-typical inferences about traits and preferences of heterosexual speakers, but gender-atypical inferences about those of gay or lesbian speakers. Behavioral intention measures showed that listeners considered lesbian and gay speakers less suitable for a leadership position, and male (but not female) listeners distanced themselves from gay speakers. Together, this research demonstrates that having a gay/lesbian- rather than heterosexual-sounding voice has tangible consequences for stereotyping and discrimination.

  6. Using Facebook to Reach People Who Experience Auditory Hallucinations.

    Science.gov (United States)

    Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror

    2016-06-14

Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Recruitment netted a total sample of N=264 over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience auditory hallucinations.
Women, people
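The sequential optimization this record describes (evaluate one advertisement or survey element at a time and carry the most cost-effective variant forward) reduces, at each step, to comparing cost per completed survey across variants. A minimal sketch with invented names and figures, purely for illustration:

```python
def cheapest_variant(variants):
    """variants: {name: (ad_spend_dollars, completed_surveys)}.
    Returns the variant with the lowest cost per completed survey,
    plus the full cost-per-complete table."""
    cpc = {name: spend / done
           for name, (spend, done) in variants.items() if done > 0}
    best = min(cpc, key=cpc.get)
    return best, cpc

# Hypothetical testing phase: two candidate ad images.
best, table = cheapest_variant({"image_A": (50.0, 10), "image_B": (30.0, 12)})
```

Here `image_B` wins at $2.50 per completed survey versus $5.00, and would be retained while the next element (eg, survey pagination) is tested.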

  8. Effect of singing training on total laryngectomees wearing a tracheoesophageal voice prosthesis.

    Science.gov (United States)

    Onofre, Fernanda; Ricz, Hilton Marcos Alves; Takeshita-Monaretti, Telma Kioko; Prado, Maria Yuka de Almeida; Aguiar-Ricz, Lílian Neto

    2013-02-01

To assess the effect of a program of singing training on the voice of total laryngectomees wearing a tracheoesophageal voice prosthesis, considering the quality of alaryngeal phonation, vocal range, and the musical elements of tuning and legato. Five laryngectomees wearing a tracheoesophageal voice prosthesis completed the singing training program over a period of three months, with exploration of the strengthening of the respiratory muscles and vocalization, and with auditory-perceptual evaluation of the speaking and singing voice performed before and after 12 sessions of singing therapy. After the program of singing voice training, the quality of the tracheoesophageal voice showed improvement in, or persistence of, the general degree of dysphonia for the emitted vowels and for the parameters of roughness and breathiness. For the vowel "a", the pitch shifted lower in two participants and higher in one, and remained adequate in the others. A similar situation was also observed for the vowel "i". After the singing program, all participants presented tuning and most of them showed a greater presence of legato. The vocal range improved in all participants. Singing training seems to have a favorable effect on the quality of tracheoesophageal phonation and on the singing voice.

  9. Spectral distribution of solo voice and accompaniment in pop music.

    Science.gov (United States)

    Borch, Daniel Zangger; Sundberg, Johan

    2002-01-01

Singers performing in popular styles of music mostly rely on feedback provided by monitor loudspeakers on the stage. The highest sound level that these loudspeakers can provide without causing acoustic feedback is often too low to be heard over the ambient sound level on the stage. Long-term-average spectra of some orchestral accompaniments typically used in pop music are compared with those of classical symphonic orchestras. In loud pop accompaniment the sound level difference between 0.5 and 2.5 kHz is similar to that of a Wagner orchestra. Long-term-average spectra of pop singers' voices showed no signs of a singer's formant but a peak near 3.5 kHz. It is suggested that pop singers' difficulty hearing their own voices may be reduced if the frequency range 3-4 kHz is boosted in the monitor sound.
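A long-term-average spectrum of the kind compared in this record is simply a magnitude spectrum averaged over many short frames of the signal. A minimal stdlib sketch using a naive DFT; a real analysis would use FFTs, windowing, and dB scaling:

```python
import math

def ltas(signal, frame_len=128, hop=64):
    """Long-term-average spectrum: average the frame-wise magnitude
    spectrum over overlapping frames (naive DFT, illustration only)."""
    n_bins = frame_len // 2 + 1
    acc = [0.0] * n_bins
    count = 0
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        for k in range(n_bins):
            re = sum(frame[n] * math.cos(2 * math.pi * k * n / frame_len)
                     for n in range(frame_len))
            im = -sum(frame[n] * math.sin(2 * math.pi * k * n / frame_len)
                      for n in range(frame_len))
            acc[k] += math.hypot(re, im)
        count += 1
    return [a / count for a in acc]
```

Averaging across frames smooths out phase and moment-to-moment variation, leaving the long-term spectral envelope in which features such as a peak near 3.5 kHz can be compared between a voice and its accompaniment.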

  10. Skill learning from kinesthetic feedback.

    Science.gov (United States)

    Pinzon, David; Vega, Roberto; Sanchez, Yerly Paola; Zheng, Bin

    2017-10-01

It is important for a surgeon to perform surgical tasks under appropriate guidance from visual and kinesthetic feedback. However, our knowledge of kinesthetic (muscle) memory and its role in learning motor skills remains elementary. The aim was to discover the effect of exclusive kinesthetic training on kinesthetic memory in both performance and learning. In Phase 1, a total of twenty participants duplicated five 2-dimensional movements of increasing complexity via passive kinesthetic guidance, without visual or auditory stimuli. Five participants were asked to repeat the task in Phase 2 over a period of three weeks, for a total of nine sessions. Subjects accurately recalled movement direction using kinesthetic memory, but recall of movement length was less precise. Over the nine training sessions, error occurrence dropped after the sixth session. Muscle memory constructs the foundation for kinesthetic training. The knowledge gained helps surgeons learn skills from kinesthetic information in conditions where visual feedback is limited. Copyright © 2016 Elsevier Inc. All rights reserved.
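The two recall measures in this record, movement direction and movement length, can be quantified for a reproduced 2D movement with simple vector geometry. A hypothetical helper, not the authors' analysis code:

```python
import math

def movement_errors(target, reproduced):
    """Compare a reproduced 2D movement vector against the target.
    Returns (absolute angular error in degrees, relative length error)."""
    tx, ty = target
    rx, ry = reproduced
    ang = math.degrees(math.atan2(ry, rx) - math.atan2(ty, tx))
    ang = (ang + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    t_len = math.hypot(tx, ty)
    r_len = math.hypot(rx, ry)
    return abs(ang), abs(r_len - t_len) / t_len
```

For example, reproducing a 10-unit rightward movement as a 9-unit rightward movement gives a 0-degree direction error but a 10% length error, the asymmetry pattern this study reports.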

  11. Auditory white noise reduces age-related fluctuations in balance.

    Science.gov (United States)

    Ross, J M; Will, O J; McGann, Z; Balasubramaniam, R

    2016-09-06

    Fall prevention technologies have the potential to improve the lives of older adults. Because of the multisensory nature of human balance control, sensory therapies, including some involving tactile and auditory noise, are being explored that might reduce increased balance variability due to typical age-related sensory declines. Auditory white noise has previously been shown to reduce postural sway variability in healthy young adults. In the present experiment, we examined this treatment in young adults and typically aging older adults. We measured postural sway of healthy young adults and adults over the age of 65 years during silence and auditory white noise, with and without vision. Our results show reduced postural sway variability in young and older adults with auditory noise, even in the absence of vision. We show that vision and noise can reduce sway variability for both feedback-based and exploratory balance processes. In addition, we show changes with auditory noise in nonlinear patterns of sway in older adults that reflect what is more typical of young adults, and these changes did not interfere with the typical random walk behavior of sway. Our results suggest that auditory noise might be valuable for therapeutic and rehabilitative purposes in older adults with typical age-related balance variability. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Risk factors for voice problems in teachers.

    NARCIS (Netherlands)

    Kooijman, P.G.C.; Jong, F.I.C.R.S. de; Thomas, G.; Huinck, W.J.; Donders, A.R.T.; Graamans, K.; Schutte, H.K.

    2006-01-01

    In order to identify factors that are associated with voice problems and voice-related absenteeism in teachers, 1,878 questionnaires were analysed. The questionnaires inquired about personal data, voice complaints, voice-related absenteeism from work and conditions that may lead to voice complaints

  14. You're a What? Voice Actor

    Science.gov (United States)

    Liming, Drew

    2009-01-01

    This article talks about voice actors and features Tony Oliver, a professional voice actor. Voice actors help to bring one's favorite cartoon and video game characters to life. They also do voice-overs for radio and television commercials and movie trailers. These actors use the sound of their voice to sell a character's emotions--or an advertised…

  15. The processing of auditory and visual recognition of self-stimuli.

    Science.gov (United States)

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  16. Voice search for development

    CSIR Research Space (South Africa)

    Barnard, E

    2010-09-01

Full Text Available of speech technology development, similar approaches are likely to be applicable in both circumstances. However, within these broad approaches there are details which are specific to certain languages (or language families) that may require solutions... to the modeling of pitch were therefore required. Similarly, it is possible that novel solutions will be required to deal with the click sounds that occur in some Southern Bantu languages, or the voicing...

  17. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  18. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems

    Science.gov (United States)

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-01-01

One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers’ visual and manual distractions with ‘infotainment’ technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual–manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox ‘one-shot’ voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory–vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers’ interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation. PMID:26269281

  19. Auditory white noise reduces postural fluctuations even in the absence of vision.

    Science.gov (United States)

    Ross, Jessica Marie; Balasubramaniam, Ramesh

    2015-08-01

The contributions of somatosensory, vestibular, and visual feedback to balance control are well documented, but the influence of auditory information, especially acoustic noise, on balance is less clear. Because somatosensory noise has been shown to reduce postural sway, we hypothesized that noise from the auditory modality might have a similar effect. Given that the nervous system uses noise to optimize signal transfer, adding mechanical or auditory noise should lead to increased feedback about sensory frames of reference used in balance control. In the present experiment, postural sway was analyzed in healthy young adults while they were presented with continuous white noise, in the presence and absence of visual information. Our results show reduced postural sway variability (as indexed by the body's center of pressure) in the presence of auditory noise, even when visual information was not present. Nonlinear time series analysis revealed that auditory noise has an additive effect, independent of vision, on postural stability. Further analysis revealed that auditory noise reduced postural sway variability in both low- and high-frequency regimes. Our results support the idea that auditory white noise reduces postural sway, suggesting that auditory noise might be used for therapeutic and rehabilitation purposes in older individuals and those with balance disorders.
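Postural sway variability indexed by the center of pressure, as measured in studies like this one, is typically summarized by metrics such as RMS displacement from the mean position and total path length. A minimal stdlib sketch of those two indices (the study's actual analysis also included nonlinear time series measures):

```python
import math

def sway_metrics(cop):
    """cop: list of (x, y) center-of-pressure samples from a force platform.
    Returns (rms_distance_from_mean, total_path_length), two common
    indices of postural sway variability."""
    n = len(cop)
    mx = sum(p[0] for p in cop) / n
    my = sum(p[1] for p in cop) / n
    rms = math.sqrt(sum((x - mx) ** 2 + (y - my) ** 2 for x, y in cop) / n)
    path = sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(cop, cop[1:]))
    return rms, path
```

Lower values of either index under a noise condition, relative to silence, would correspond to the reduced sway variability reported here.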

  20. Modularity in Sensory Auditory Memory

    OpenAIRE

    Clement, Sylvain; Moroni, Christine; Samson, Séverine

    2004-01-01

The goal of this paper was to review various experimental and neuropsychological studies that support the modular conception of auditory sensory memory or auditory short-term memory. Based on initial findings demonstrating that a verbal sensory memory system can be dissociated from a general auditory memory store at the functional and anatomical levels, we report a series of studies that provide evidence in favor of multiple auditory sensory stores specialized in retaining eit...

  1. Perceptual adaptation of voice gender discrimination with spectrally shifted vowels.

    Science.gov (United States)

    Li, Tianhao; Fu, Qian-Jie

    2011-08-01

To determine whether perceptual adaptation improves voice gender discrimination of spectrally shifted vowels and, if so, which acoustic cues contribute to the improvement. Voice gender discrimination was measured for 10 normal-hearing subjects during 5 days of adaptation to spectrally shifted vowels, produced by processing the speech of 5 male and 5 female talkers with 16-channel sine-wave vocoders. The subjects were randomly divided into 2 groups; one subjected to 50-Hz, and the other to 200-Hz, temporal envelope cutoff frequencies. No preview or feedback was provided. There was significant adaptation in voice gender discrimination with the 200-Hz cutoff frequency, but significant improvement was observed only for 3 female talkers with F(0) > 180 Hz and 3 male talkers with lower F(0). Voice gender discrimination can improve under spectral shift conditions with perceptual adaptation, but spectral shift may limit the exclusive use of spectral information and/or the use of formant structure on voice gender discrimination. The results have implications for cochlear implant users and for understanding voice gender discrimination.
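The 50-Hz versus 200-Hz temporal envelope cutoff manipulated in this study determines how much F(0) periodicity survives in each vocoder channel. The envelope stage of such a vocoder can be sketched as full-wave rectification followed by a one-pole lowpass; this is an illustrative simplification of a single channel of a 16-channel sine-wave vocoder, with an invented test signal:

```python
import math

def temporal_envelope(x, fs, cutoff_hz):
    """Full-wave rectify, then smooth with a one-pole lowpass at cutoff_hz.
    Higher cutoffs let periodicity (voice pitch) ripple through."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / fs)
    y, out = 0.0, []
    for s in x:
        y += a * (abs(s) - y)   # one-pole lowpass recursion
        out.append(y)
    return out

# A 1 kHz carrier amplitude-modulated at 150 Hz (a female-like F0 rate).
fs = 8000
x = [(1.0 + 0.9 * math.sin(2 * math.pi * 150 * n / fs))
     * math.sin(2 * math.pi * 1000 * n / fs) for n in range(4000)]

env50 = temporal_envelope(x, fs, 50)[2000:]    # steady-state portion
env200 = temporal_envelope(x, fs, 200)[2000:]
ripple_50 = max(env50) - min(env50)
ripple_200 = max(env200) - min(env200)
```

With these settings the 200-Hz envelope retains substantially more of the 150-Hz modulation depth than the 50-Hz envelope, which is the kind of periodicity cue the study's 200-Hz group could exploit for gender discrimination.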

  2. How far away is plug 'n' play? Assessing the near-term potential of sonification and auditory display

    Science.gov (United States)

    Bargar, Robin

    1995-01-01

    The commercial music industry offers a broad range of plug 'n' play hardware and software scaled to music professionals and scaled to a broad consumer market. The principles of sound synthesis utilized in these products are relevant to application in virtual environments (VE). However, the closed architectures used in commercial music synthesizers are prohibitive to low-level control during real-time rendering, and the algorithms and sounds themselves are not standardized from product to product. To bring sound into VE requires a new generation of open architectures designed for human-controlled performance from interfaces embedded in immersive environments. This presentation addresses the state of the sonic arts in scientific computing and VE, analyzes research challenges facing sound computation, and offers suggestions regarding tools we might expect to become available during the next few years. A list of classes of audio functionality in VE includes sonification -- the use of sound to represent data from numerical models; 3D auditory display (spatialization and localization, also called externalization); navigation cues for positional orientation and for finding items or regions inside large spaces; voice recognition for controlling the computer; external communications between users in different spaces; and feedback to the user concerning his own actions or the state of the application interface. To effectively convey this considerable variety of signals, we apply principles of acoustic design to ensure the messages are neither confusing nor competing. We approach the design of auditory experience through a comprehensive structure for messages, and message interplay we refer to as an Automated Sound Environment. Our research addresses real-time sound synthesis, real-time signal processing and localization, interactive control of high-dimensional systems, and synchronization of sound and graphics.

  3. Auditory Hallucinations in Polyglots*

    African Journals Online (AJOL)

    1971-12-18

    Dec 18, 1971 ... that they were false. Schizophrenics on ... memory. Verbal as well as non-verbal thinking is em- ployed by everyone, and probably is essential in the forma- ... qualities or emotions such as anger or joy or threats from the voice ...

  4. Voice and silence in organizations

    Directory of Open Access Journals (Sweden)

    Moaşa, H.

    2011-01-01

Full Text Available Unlike previous research on voice and silence, this article breaks the distance between the two and declines to treat them as opposites. Voice and silence are interrelated and intertwined strategic forms of communication which presuppose each other in such a way that the absence of one would minimize completely the other’s presence. Social actors are not voice, or silence. Social actors can have voice or silence, they can do both because they operate at multiple levels and deal with multiple issues at different moments in time.

  5. Voice Biometrics for Information Assurance Applications

    National Research Council Canada - National Science Library

    Kang, George

    2002-01-01

    .... The ultimate goal of voice biometrics is to enable the use of voice as a password. Voice biometrics are "man-in-the-loop" systems in which system performance is significantly dependent on human performance...

  6. Tuned with a tune: Talker normalization via general auditory processes

    Directory of Open Access Journals (Sweden)

    Erika J C Laing

    2012-06-01

    Full Text Available Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker’s speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS of a talker’s speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences’ LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by nonspeech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.

  7. Gender differences in identifying emotions from auditory and visual stimuli.

    Science.gov (United States)

    Waaramaa, Teija

    2017-12-01

The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to gain a better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or a shared native language of the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual than from auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.

  8. Auditory Memory for Timbre

    Science.gov (United States)

    McKeown, Denis; Wellsted, David

    2009-01-01

    Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…

  9. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new

  10. Is the auditory evoked P2 response a biomarker of learning?

    Directory of Open Access Journals (Sweden)

    Kelly Tremblay

    2014-02-01

    Full Text Available Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography and magnetoencephalography have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEPs), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences, including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared them to participants, matched in time and stimulus exposure, who did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What’s more, these effects were retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN) wave 600-900 ms post-stimulus onset, post-training, exclusively for the group that learned to identify the pre-voiced contrast.

  11. Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.

    Science.gov (United States)

    Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C

    2015-11-04

    Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. 
Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition.
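The FM tones described above strip speech down to its prosodic contour, characterized by base pitch (F0M) and pitch variability (F0SD). The following is a hypothetical sketch of such a stimulus, assuming a simple sinusoidal pitch contour in place of the study's emotion-derived contours:

```python
import numpy as np

def fm_tone(f0_mean, f0_sd, mod_rate, dur, sr=16000):
    """Tone whose instantaneous frequency follows a sinusoidal
    contour with mean f0_mean (F0M) and standard deviation f0_sd
    (F0SD); the contour is integrated into phase for synthesis."""
    t = np.arange(int(dur * sr)) / sr
    # A sinusoid with peak deviation f0_sd * sqrt(2) has SD f0_sd.
    inst_freq = f0_mean + f0_sd * np.sqrt(2.0) * np.sin(
        2.0 * np.pi * mod_rate * t)
    phase = 2.0 * np.pi * np.cumsum(inst_freq) / sr
    return inst_freq, np.sin(phase)

# A contour with a 200 Hz base pitch and 20 Hz of pitch variability.
inst_freq, tone = fm_tone(f0_mean=200.0, f0_sd=20.0, mod_rate=4.0, dur=1.0)
```

Varying `f0_mean` and `f0_sd` independently mirrors the F0M/F0SD manipulations the MMN paradigm probed; the modulation rate and sinusoidal shape here are illustrative choices only.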

  12. Objective voice parameters in Colombian school workers with healthy voices

    NARCIS (Netherlands)

    L.C. Cantor Cutiva (Lady Catherine); A. Burdorf (Alex)

    2015-01-01

    textabstractObjectives: To characterize the objective voice parameters among school workers, and to identify associated factors of three objective voice parameters, namely fundamental frequency, sound pressure level and maximum phonation time. Materials and methods: We conducted a cross-sectional

  13. Pedagogic Voice: Student Voice in Teaching and Engagement Pedagogies

    Science.gov (United States)

    Baroutsis, Aspa; McGregor, Glenda; Mills, Martin

    2016-01-01

    In this paper, we are concerned with the notion of "pedagogic voice" as it relates to the presence of student "voice" in teaching, learning and curriculum matters at an alternative, or second chance, school in Australia. This school draws upon many of the principles of democratic schooling via its utilisation of student voice…

  14. A voice service for user feedback on school meals

    CSIR Research Space (South Africa)

    Sharma Grover, AS

    2012-03-01

    Full Text Available Using focus group discussions and observations of learners’ interaction with multiple design prototype versions, the authors investigated several factors around input modality preference, language preference, performance and overall user experience. Whilst...

  15. Auditory Verbal Experience and Agency in Waking, Sleep Onset, REM, and Non-REM Sleep.

    Science.gov (United States)

    Speth, Jana; Harley, Trevor A; Speth, Clemens

    2017-04-01

    We present one of the first quantitative studies on auditory verbal experiences ("hearing voices") and auditory verbal agency (inner speech, and specifically "talking to (imaginary) voices or characters") in healthy participants across states of consciousness. Tools of quantitative linguistic analysis were used to measure participants' implicit knowledge of auditory verbal experiences (VE) and auditory verbal agencies (VA), displayed in mentation reports from four different states. Analysis was conducted on a total of 569 mentation reports from rapid eye movement (REM) sleep, non-REM sleep, sleep onset, and waking. Physiology was controlled with the nightcap sleep-wake mentation monitoring system. Sleep-onset hallucinations, traditionally at the focus of scientific attention on auditory verbal hallucinations, showed the lowest degree of VE and VA, whereas REM sleep showed the highest degrees. Degrees of different linguistic-pragmatic aspects of VE and VA likewise depend on the physiological states. The quantity and pragmatics of VE and VA are a function of the physiologically distinct state of consciousness in which they are conceived. Copyright © 2016 Cognitive Science Society, Inc.

  16. Auditory Selective Attention: an introduction and evidence for distinct facilitation and inhibition mechanisms

    OpenAIRE

    Mikyska, Constanze Elisabeth Anna

    2012-01-01

    Objective Auditory selective attention is a complex brain function that is still not completely understood. The classic example is the so-called “cocktail party effect” (Cherry, 1953), which describes the impressive ability to focus one’s attention on a single voice from a multitude of voices. This means that particular stimuli in the environment are enhanced in contrast to other ones of lower priority that are ignored. To be able to understand how attention can influence the perception and p...

  17. Facing Sound - Voicing Art

    DEFF Research Database (Denmark)

    Lønstrup, Ansa

    2013-01-01

    This article is based on examples of contemporary audiovisual art, with a special focus on the Tony Oursler exhibition Face to Face at Aarhus Art Museum ARoS in Denmark in March-July 2012. My investigation involves a combination of qualitative interviews with visitors, observations of the audience’s interactions with the exhibition and the artwork in the museum space, and short analyses of individual works of art based on reception aesthetics and phenomenology, inspired by newer writings on sound, voice and listening.

  18. Voice over IP Security

    CERN Document Server

    Keromytis, Angelos D

    2011-01-01

    Voice over IP (VoIP) and Internet Multimedia Subsystem technologies (IMS) are rapidly being adopted by consumers, enterprises, governments and militaries. These technologies offer higher flexibility and more features than traditional telephony (PSTN) infrastructures, as well as the potential for lower cost through equipment consolidation and, for the consumer market, new business models. However, VoIP systems also represent a higher complexity in terms of architecture, protocols and implementation, with a corresponding increase in the potential for misuse. In this book, the authors examine the

  19. Bodies, Spaces, Voices, Silences

    OpenAIRE

    Donatella Mazzoleni; Pietro Vitiello

    2013-01-01

    A good architecture should not only allow functional, formal and technical quality for urban spaces, but also let the voice of the city be perceived, listened, enjoyed. Every city has got its specific sound identity, or “ISO” (R. O. Benenzon), made up of a complex texture of background noises and fluctuation of sound figures emerging and disappearing in a game of continuous fadings. For instance, the ISO of Naples is characterized by a spread need of hearing the sound return of one’s/others v...

  20. Long-Term Follow-Up of Patients with Spasmodic Dysphonia and Improved Voice despite Discontinuation of Treatment.

    Science.gov (United States)

    Geneid, Ahmed; Lindestad, Per-Åke; Granqvist, Svante; Möller, Riitta; Södersten, Maria

    2016-01-01

    To evaluate voice function in patients with adductor spasmodic dysphonia (AdSD) who discontinued botulinum toxin (BTX) treatment because they felt that their voice had improved sufficiently. Twenty-eight patients quit treatment in 2004, of whom 20 fulfilled the inclusion criteria for the study, with 3 subsequently excluded because of return of symptoms, leaving 17 patients (11 males, 6 females) included in this follow-up study. A questionnaire concerning current voice function and the Voice Handicap Index were completed. Audio-perceptual voice assessments were done by 3 listeners. The inter- and intrarater reliabilities were r > 0.80. All patients had a subjectively good stable voice, but with differences in their audio-perceptual voice assessment scores. Based on the pre-/posttreatment auditory scores on the overall degree of AdSD, patients were divided into 2 subgroups showing more and less improvement, with 10 and 7 patients, respectively. The subgroup with more improvement had shorter duration from the onset of symptoms until the start of BTX treatment, and included 7 males compared to only 4 males in the subgroup with less improvement. It seems plausible that the symptoms of spasmodic dysphonia may decrease over time. Early intervention and male gender seem to be important factors for long-term reduction of the voice symptoms of AdSD. © 2016 S. Karger AG, Basel.

  1. Voice, Schooling, Inequality, and Scale

    Science.gov (United States)

    Collins, James

    2013-01-01

    The rich studies in this collection show that the investigation of voice requires analysis of "recognition" across layered spatial-temporal and sociolinguistic scales. I argue that the concepts of voice, recognition, and scale provide insight into contemporary educational inequality and that their study benefits, in turn, from paying attention to…

  2. The Voices of the Documentarist

    Science.gov (United States)

    Utterback, Ann S.

    1977-01-01

    Discusses T. S. Eliot's essay "The Three Voices of Poetry," which conceptualizes the position taken by the poet or creator. Suggests that an examination of documentary film, within the three-voices concept, expands the critical framework of the film genre. (MH)

  3. Bodies, Spaces, Voices, Silences

    Directory of Open Access Journals (Sweden)

    Donatella Mazzoleni

    2013-07-01

    Full Text Available A good architecture should not only provide functional, formal and technical quality for urban spaces, but also let the voice of the city be perceived, listened to, enjoyed. Every city has its own specific sound identity, or “ISO” (R. O. Benenzon), made up of a complex texture of background noises and fluctuating sound figures that emerge and disappear in a game of continuous fadings. For instance, the ISO of Naples is characterized by a widespread need to hear the sound return of one’s own and others’ voices, and by a hatred of silence. Cities may fall ill: illness from noise, within super-crowded neighbourhoods, or illness from silence, in the forced isolation of peripheries. The proposal of an urban music therapy marks out a novel, broadened interdisciplinary research path, where architecture, music, medicine, psychology and communication science may converge in order to rebalance the spaces and relational life of the urban collectivity, through care for the body and sound dimensions.

  4. Comparison of Perceptual Signs of Voice before and after Vocal Hygiene Program in Adults with Dysphonia

    Directory of Open Access Journals (Sweden)

    Seyyedeh Maryam khoddami

    2011-12-01

    Full Text Available Background and Aim: Vocal abuse and misuse are the most frequent causes of voice disorders. Consequently, therapy is needed to stop or modify such behaviors. This research studied the effectiveness of a vocal hygiene program on perceptual signs of voice in people with dysphonia. Methods: A vocal hygiene program was delivered to 8 adults with dysphonia over 6 weeks. First, the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) was used to assess perceptual signs. The program was then delivered, and individuals were followed up at visits in the second and fourth weeks. In the last session, the perceptual assessment was repeated and individuals’ opinions were collected. Perceptual findings were compared before and after the therapy. Results: After the program, the mean score of the perceptual assessment decreased. The mean score of every perceptual sign showed a significant difference before and after the therapy (p ≤ 0.0001). “Loudness” had the maximum score, and coordination between speech and respiration the minimum. All participants confirmed the efficacy of the therapy. Conclusion: The vocal hygiene program improved all perceptual signs of voice, although not equally. This conclusion is supported by both clinician-based and patient-based assessments. Thus, a vocal hygiene program is a necessary part of comprehensive voice therapy, but it is not sufficient on its own to resolve all voice problems.
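The pre/post comparison described above can be illustrated with a simple paired analysis. This is a hypothetical sketch: the ratings below are invented, and an exact sign test stands in for whatever statistic the study actually used:

```python
import numpy as np
from math import comb

# Hypothetical severity ratings (0-100, higher = more deviant voice)
# for 8 participants before and after a vocal hygiene program.
pre  = np.array([62., 55., 70., 48., 66., 59., 73., 51.])
post = np.array([41., 37., 53., 29., 50., 44., 59., 38.])

diff = pre - post                      # positive = improvement
n = len(diff)
n_improved = int((diff > 0).sum())

# Two-sided exact sign test: chance probability of at least this many
# improvements under a fair coin, doubled for two-sidedness.
tail = sum(comb(n, k) for k in range(n_improved, n + 1)) / 2 ** n
p_value = min(1.0, 2.0 * tail)

mean_change = diff.mean()              # average drop in severity
```

With all 8 participants improving, the exact two-sided p-value is 2/256 ≈ 0.008, illustrating how even a small paired sample can yield a significant pre/post difference.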

  5. Development of the auditory system

    Science.gov (United States)

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  6. Animal models for auditory streaming

    Science.gov (United States)

    Itatani, Naoya

    2017-01-01

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044022

  7. Success with voice recognition.

    Science.gov (United States)

    Sferrella, Sheila M

    2003-01-01

    You need a compelling reason to implement voice recognition technology. At my institution, the compelling reason was a turnaround time for Radiology results of more than two days. Only 41 percent of our reports were transcribed and signed within 24 hours. In November 1998, a team from Lehigh Valley Hospital went to RSNA and reviewed every voice system on the market. The evaluation was done with the radiologist workflow in mind, and we came back from the meeting with the vendor selection completed. The next steps included developing a business plan, approval of funds, reference calls to more than 15 sites and contract negotiation, all of which took about six months. The department of Radiology at Lehigh Valley Hospital and Health Network (LVHHN) is a multi-site center that performs over 360,000 procedures annually. The department handles all modalities of radiology: general diagnosis, neuroradiology, ultrasound, CT scan, MRI, interventional radiology, arthrography, myelography, bone densitometry, nuclear medicine, PET imaging, vascular lab and other advanced procedures. The department consists of 200 FTEs and a medical staff of more than 40 radiologists. The budget is in the $10.3 million range. There are three hospital sites and four outpatient imaging center sites where services are provided. At Lehigh Valley Hospital, radiologists are not dedicated to one subspecialty, so implementing a voice system by modality was not an option. Because transcription was so far behind, we needed to eliminate that part of the process. As a result, we decided to deploy the system all at once and with the radiologists as editors. The planning and testing phase took about four months, and the implementation took two weeks. We deployed over 40 workstations and trained close to 50 physicians. The radiologists brought in an extra radiologist from our group for the two weeks of training. That allowed us to train without taking a radiologist out of the department. 
We trained three to six

  8. Abnormalities in auditory efferent activities in children with selective mutism.

    Science.gov (United States)

    Muchnik, Chava; Ari-Even Roth, Daphne; Hildesheimer, Minka; Arie, Miri; Bar-Haim, Yair; Henkin, Yael

    2013-01-01

    Two efferent feedback pathways to the auditory periphery may play a role in monitoring self-vocalization: the middle-ear acoustic reflex (MEAR) and the medial olivocochlear bundle (MOCB) reflex. Since most studies regarding the role of auditory efferent activity during self-vocalization were conducted in animals, human data are scarce. The working premise of the current study was that selective mutism (SM), a rare psychiatric disorder characterized by consistent failure to speak in specific social situations despite the ability to speak normally in other situations, may serve as a human model for studying the potential involvement of auditory efferent activity during self-vocalization. For this purpose, auditory efferent function was assessed in a group of 31 children with SM and compared to that of a group of 31 normally developing control children (mean age 8.9 and 8.8 years, respectively). All children exhibited normal hearing thresholds and type A tympanograms. MEAR and MOCB functions were evaluated by means of acoustic reflex thresholds and decay functions and the suppression of transient-evoked otoacoustic emissions, respectively. Auditory afferent function was tested by means of auditory brainstem responses (ABR). Results indicated a significantly higher proportion of children with abnormal MEAR and MOCB function in the SM group (58.6 and 38%, respectively) compared to controls (9.7 and 8%, respectively). The prevalence of abnormal MEAR and/or MOCB function was significantly higher in the SM group (71%) compared to controls (16%). Intact afferent function manifested in normal absolute and interpeak latencies of ABR components in all children. The finding of aberrant efferent auditory function in a large proportion of children with SM provides further support for the notion that MEAR and MOCB may play a significant role in the process of self-vocalization. © 2013 S. Karger AG, Basel.

  9. Self-Voice, but Not Self-Face, Reduces the McGurk Effect

    Directory of Open Access Journals (Sweden)

    Christopher Aruffo

    2011-10-01

    Full Text Available The McGurk effect represents a perceptual illusion resulting from the integration of an auditory syllable dubbed onto an incongruous visual syllable. The involuntary and impenetrable nature of the illusion is frequently used to support the multisensory nature of audiovisual speech perception. Here we show that both self-speech and familiarized speech reduce the effect. When self-speech was separated into self-voice and self-face mismatched with different faces and voices, only self-voice weakened the illusion. Thus, a familiar vocal identity automatically confers a processing advantage to multisensory speech, while a familiar facial identity does not. When another group of participants were familiarized with the speakers, participants' ability to take advantage of that familiarization was inversely correlated with their overall susceptibility to the McGurk illusion.

  10. A Comprehensive Review of Auditory Verbal Hallucinations: Lifetime Prevalence, Correlates and Mechanisms in Healthy and Clinical Individuals

    Directory of Open Access Journals (Sweden)

    Saskia de Leede-Smith

    2013-07-01

    Full Text Available Over the years, the prevalence of auditory verbal hallucinations (AVH) has been documented across the lifespan in varied contexts, and with a range of potential long-term outcomes. Initially the emphasis focused on whether AVHs conferred risk for psychosis. However, recent research has identified significant differences in the presentation and outcomes of AVH in patients compared to those in non-clinical populations. For this reason, it has been suggested that auditory hallucinations are an entity by themselves and not necessarily indicative of transition along the psychosis continuum. This review will examine the presentation of auditory hallucinations across the life span. The stages described include childhood, adolescence, adult non-clinical populations, hypnagogic/hypnopompic experiences, high schizotypal traits, schizophrenia, substance-induced AVH, AVH in epilepsy and AVH in the elderly. In children, need for care depends upon whether the child associates the voice with negative beliefs, appraisals and other symptoms of psychosis. This theme appears to carry right through to healthy voice hearers in adulthood, in which a negative impact of the voice usually only exists if the individual has negative experiences as a result of their voice(s). This includes features of the voices such as the negative content, frequency and emotional valence, as well as anxiety and depression, independently or caused by the voices’ presence. It seems possible that the mechanisms which maintain AVH in non-clinical populations are different from those which are behind AVH presentations in psychotic illness. For example, the existence of maladaptive coping strategies in patient populations is one significant difference between clinical and non-clinical groups which is associated with a need for care. Whether or not these mechanisms start out the same and have differential trajectories is not yet evidenced. 
Future research needs to focus on the comparison of underlying

  11. A comprehensive review of auditory verbal hallucinations: lifetime prevalence, correlates and mechanisms in healthy and clinical individuals.

    Science.gov (United States)

    de Leede-Smith, Saskia; Barkus, Emma

    2013-01-01

    Over the years, the prevalence of auditory verbal hallucinations (AVHs) has been documented across the lifespan in varied contexts, and with a range of potential long-term outcomes. Initially the emphasis focused on whether AVHs conferred risk for psychosis. However, recent research has identified significant differences in the presentation and outcomes of AVH in patients compared to those in non-clinical populations. For this reason, it has been suggested that auditory hallucinations are an entity by themselves and not necessarily indicative of transition along the psychosis continuum. This review will examine the presentation of auditory hallucinations across the life span, as well as in various clinical groups. The stages described include childhood, adolescence, adult non-clinical populations, hypnagogic/hypnopompic experiences, high schizotypal traits, schizophrenia, substance-induced AVH, AVH in epilepsy, and AVH in the elderly. In children, need for care depends upon whether the child associates the voice with negative beliefs, appraisals and other symptoms of psychosis. This theme appears to carry right through to healthy voice hearers in adulthood, in which a negative impact of the voice usually only exists if the individual has negative experiences as a result of their voice(s). This includes features of the voices such as the negative content, frequency, and emotional valence, as well as anxiety and depression, independently or caused by the voices’ presence. It seems possible that the mechanisms which maintain AVH in non-clinical populations are different from those which are behind AVH presentations in psychotic illness. For example, the existence of maladaptive coping strategies in patient populations is one significant difference between clinical and non-clinical groups which is associated with a need for care. Whether or not these mechanisms start out the same and have differential trajectories is not yet evidenced. 
Future research needs to focus on the

  13. Crossing Cultures with Multi-Voiced Journals

    Science.gov (United States)

    Styslinger, Mary E.; Whisenant, Alison

    2004-01-01

    In this article, the authors discuss the benefits of using multi-voiced journals as a teaching strategy in reading instruction. Multi-voiced journals, an adaptation of dual-voiced journals, encourage responses to reading in varied, cultured voices of characters. It is similar to reading journals in that they prod students to connect to the lives…

  14. Auditory interfaces: The human perceiver

    Science.gov (United States)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  15. Voice synthesis application

    Science.gov (United States)

    Lightstone, P. C.; Davidson, W. M.

    1982-04-01

    The military detection assessment laboratory houses an experimental field system which assesses different alarm indicators such as fence disturbance sensors, MILES cables, and microwave Racons. A speech synthesis board was purchased that could be interfaced, by means of a computer, to an alarm logger, making verbal acknowledgement of alarms possible. Different products and different types of voice synthesis were analyzed before a linear predictive coding device produced by Telesensory Speech Systems of Palo Alto, California, was chosen. This device, called the Speech 1000 Board, has a dedicated 8085 processor. A multiplexer card was designed, and the Sp 1000 was interfaced through the card to a Texas Instruments TMS 990/100M microcomputer. It was also necessary to design software capable of recognizing and flagging an alarm on any one of 32 possible lines. The experimental field system was then packaged with a dc power supply, LED indicators, speakers, and switches, and deployed in the field, where it performed reliably.

  16. Feedforward and Feedback Control in Apraxia of Speech: Effects of Noise Masking on Vowel Production

    Science.gov (United States)

    Maas, Edwin; Mailend, Marja-Liisa; Guenther, Frank H.

    2015-01-01

    Purpose: This study was designed to test two hypotheses about apraxia of speech (AOS) derived from the Directions Into Velocities of Articulators (DIVA) model (Guenther et al., 2006): the feedforward system deficit hypothesis and the feedback system deficit hypothesis. Method: The authors used noise masking to minimize auditory feedback during…

  17. How to help teachers' voices.

    Science.gov (United States)

    Saatweber, Margarete

    2008-01-01

    It has been shown that teachers are at high risk of developing occupational dysphonia, and it has been widely accepted that the vocal characteristics of a speaker play an important role in determining the reactions of listeners. The functions of breathing, breathing movement, breathing tonus, voice vibrations and articulation tonus are transmitted to the listener. So we may conclude that listening to the teacher's voice at school influences children's behavior and the perception of spoken language. This paper presents the concept of Schlaffhorst-Andersen including exercises to help teachers improve their voice, breathing, movement and their posture. Copyright 2008 S. Karger AG, Basel.

  18. Voice stress analysis and evaluation

    Science.gov (United States)

    Haddad, Darren M.; Ratley, Roy J.

    2001-02-01

    Voice Stress Analysis (VSA) systems are marketed as computer-based systems capable of measuring stress in a person's voice as an indicator of deception. They are advertised as being less expensive, easier to use, less invasive in use, and less constrained in their operation than polygraph technology. The National Institute of Justice has asked the Air Force Research Laboratory for assistance in evaluating voice stress analysis technology. Law enforcement officials have also been asking questions about this technology. If VSA technology proves to be effective, its value for military and law enforcement applications is tremendous.

  19. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks in which musicians had proved superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in the native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task, the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant…

  20. Can you hear me now? Musical training shapes functional brain networks for selective auditory attention and hearing speech in noise

    Directory of Open Access Journals (Sweden)

    Dana L Strait

    2011-06-01

    Full Text Available Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker’s voice amidst others. Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and nonmusicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not nonmusicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work from our laboratory documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development of language-related skills, musical training may aid in the prevention, habilitation and remediation of children with a wide range of attention-based language and learning impairments.

  1. Voice Habits and Behaviors: Voice Care Among Flamenco Singers.

    Science.gov (United States)

    Garzón García, Marina; Muñoz López, Juana; Y Mendoza Lara, Elvira

    2017-03-01

    The purpose of this study is to analyze the vocal behavior of flamenco singers, as compared with classical music singers, to establish a differential vocal profile of voice habits and behaviors in flamenco music. A bibliographic review was conducted, and the Singer's Vocal Habits Questionnaire, an experimental tool designed by the authors to gather data regarding hygiene behavior, drinking and smoking habits, type of practice, voice care, and symptomatology perceived in both the singing and the speaking voice, was administered. We interviewed 94 singers, divided into two groups: the flamenco experimental group (FEG, n = 48) and the classical control group (CCG, n = 46). Frequency analysis, a Likert scale, and discriminant and exploratory factor analysis were used to obtain a differential profile for each group. The FEG scored higher than the CCG in speaking voice symptomatology. The FEG scored significantly higher than the CCG in use of "inadequate vocal technique" when singing. Regarding voice habits, the FEG scored higher in "lack of practice and warm-up" and "environmental habits." A total of 92.6% of the subjects were correctly classified into their respective groups. The Singer's Vocal Habits Questionnaire has proven effective in differentiating flamenco and classical singers. Flamenco singers are exposed to numerous vocal risk factors that make them more prone to vocal fatigue, mucosa dehydration, phonotrauma, and muscle stiffness than classical singers. Further research is needed in voice training in flamenco music, as a means to strengthen the voice and enable it to meet the requirements of this musical genre. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  2. Improving Higher Education Practice through Student Evaluation Systems: Is the Student Voice Being Heard?

    Science.gov (United States)

    Blair, Erik; Valdez Noel, Keisha

    2014-01-01

    Many higher education institutions use student evaluation systems as a way of highlighting course and lecturer strengths and areas for improvement. Globally, the student voice has been increasing in volume, and capitalising on student feedback has been proposed as a means to benefit teacher professional development. This paper examines the student…

  3. Literature review of voice recognition and generation technology for Army helicopter applications

    Science.gov (United States)

    Christ, K. A.

    1984-08-01

    This report is a literature review on the topics of voice recognition and generation. Areas covered are: manual versus vocal data input, vocabulary, stress and workload, noise, protective masks, feedback, and voice warning systems. Results of the studies presented in this report indicate that voice data entry has less of an impact on a pilot's flight performance, during low-level flying and other difficult missions, than manual data entry. However, the stress resulting from such missions may cause the pilot's voice to change, reducing the recognition accuracy of the system. The noise present in helicopter cockpits also causes the recognition accuracy to decrease. Noise-cancelling devices are being developed and improved upon to increase the recognition performance in noisy environments. Future research in the fields of voice recognition and generation should be conducted in the areas of stress and workload, vocabulary, and the types of voice generation best suited for the helicopter cockpit. Also, specific tasks should be studied to determine whether voice recognition and generation can be effectively applied.

  4. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    Science.gov (United States)

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with the auditory skill areas most commonly addressed in…

  5. Auditory hallucinations in adults with hearing impairment: a large prevalence study.

    Science.gov (United States)

    Linszen, M M J; van Zanten, G A; Teunisse, R J; Brouwer, R M; Scheltens, P; Sommer, I E

    2018-03-20

    Similar to visual hallucinations in visually impaired patients, auditory hallucinations are often suggested to occur in adults with hearing impairment. However, research on this association is limited. This observational, cross-sectional study tested whether auditory hallucinations are associated with hearing impairment, by assessing their prevalence in an adult population with various degrees of objectified hearing impairment. Hallucination presence was determined in 1007 subjects aged 18-92, who were referred for audiometric testing to the Department of ENT-Audiology, University Medical Center Utrecht, the Netherlands. The presence and severity of hearing impairment were calculated using mean air conduction thresholds from the most recent pure tone audiometry. Out of 829 participants with hearing impairment, 16.2% (n = 134) had experienced auditory hallucinations in the past 4 weeks, significantly more than the non-impaired group (5.8%; n = 10/173). Prevalence increased with the severity of impairment in the best ear, with rates up to 24% in the most profoundly impaired group. Auditory hallucinations mostly consisted of voices (51%), music (36%), and doorbells or telephones (24%). Our findings reveal that auditory hallucinations are common among patients with hearing impairment and increase with impairment severity. Although more research on potential confounding factors is necessary, clinicians should be aware of this phenomenon by inquiring after hallucinations in hearing-impaired patients and, conversely, assessing hearing impairment in patients with auditory hallucinations, since it may be a treatable factor.
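The severity grading above is derived from mean air-conduction thresholds, i.e. a pure tone average (PTA). A minimal sketch of that computation follows; the frequency set and the severity cutoffs are illustrative assumptions in the style of common audiometric grading schemes, not necessarily the criteria used in this study.

```python
# Sketch: grade hearing impairment from pure tone audiometry thresholds.
# Frequencies and cutoffs are illustrative assumptions, not the study's exact criteria.

def pure_tone_average(thresholds_db):
    """Mean air-conduction threshold (dB HL) over the tested frequencies."""
    return sum(thresholds_db) / len(thresholds_db)

def severity(pta_db):
    """Map a pure tone average to a coarse severity label."""
    if pta_db <= 20:
        return "no impairment"
    if pta_db <= 40:
        return "mild"
    if pta_db <= 60:
        return "moderate"
    if pta_db <= 80:
        return "severe"
    return "profound"

# Example: thresholds at 0.5, 1, 2 and 4 kHz for one ear
pta = pure_tone_average([35, 40, 45, 50])
print(pta, severity(pta))  # 42.5 moderate
```

In a best-ear analysis such as the one reported, the grade would be computed per ear and the lower (better) PTA used.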

  6. Voice and choice by delegation.

    Science.gov (United States)

    van de Bovenkamp, Hester; Vollaard, Hans; Trappenburg, Margo; Grit, Kor

    2013-02-01

    In many Western countries, options for citizens to influence public services are increased to improve the quality of services and democratize decision making. Possibilities to influence are often cast into Albert Hirschman's taxonomy of exit (choice), voice, and loyalty. In this article we identify delegation as an important addition to this framework. Delegation gives individuals the chance to practice exit/choice or voice without all the hard work that is usually involved in these options. Empirical research shows that not many people use their individual options of exit and voice, which could lead to inequality between users and nonusers. We identify delegation as a possible solution to this problem, using Dutch health care as a case study to explore this option. Notwithstanding various advantages, we show that voice and choice by delegation also entail problems of inequality and representativeness.

  7. Voice Force is coming / Tõnu Ojala

    Index Scriptorium Estoniae

    Ojala, Tõnu, 1969-

    2005-01-01

    On an event of the jubilee season of the Academic Male Choir of Tallinn University of Technology, celebrating its 60th anniversary: the a cappella pop-group festival Voice Force (concerts on 12 November at the club Parlament and on 3 December at the Russian Cultural Centre)

  8. Taking Care of Your Voice

    Science.gov (United States)

    ... negative effect on voice. Exercise regularly. Exercise increases stamina and muscle tone. This helps provide good posture ... testing man-made and biological materials and stem cell technologies that may eventually be used to engineer ...

  9. The Christian voice in philosophy

    Directory of Open Access Journals (Sweden)

    Stuart Fowler

    1982-03-01

    Full Text Available In this paper the Rev. Stuart Fowler outlines a Christian voice in Philosophy and urges the Christian philosopher to investigate his position and his stance with integrity and honesty.

  10. Auditory Reserve and the Legacy of Auditory Experience

    Directory of Open Access Journals (Sweden)

    Erika Skoe

    2014-11-01

    Full Text Available Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain differences in how individuals cope with auditory impoverishment or loss of sensorineural function.

  11. Continuous vs. intermittent neurofeedback to regulate auditory cortex activity of tinnitus patients using real-time fMRI - A pilot study

    Directory of Open Access Journals (Sweden)

    Kirsten Emmert

    2017-01-01

    Overall, these results show that continuous feedback is suitable for long-term neurofeedback experiments while intermittent feedback presentation promises good results for single session experiments when using the auditory cortex as a target region. In particular, the down-regulation effect is more pronounced in the secondary auditory cortex, which might be more susceptible to voluntary modulation in comparison to a primary sensory region.

  12. Auditory changes in acromegaly.

    Science.gov (United States)

    Tabur, S; Korkmaz, H; Baysal, E; Hatipoglu, E; Aytac, I; Akarsu, E

    2017-06-01

    The aim of this study is to determine the changes involving the auditory system in cases with acromegaly. Otological examinations of 41 cases with acromegaly (uncontrolled n = 22, controlled n = 19) were compared with those of 24 age- and gender-matched healthy subjects. Whereas the cases with acromegaly underwent examination with pure tone audiometry (PTA), speech audiometry for speech discrimination (SD), tympanometry, stapedius reflex evaluation and otoacoustic emission tests, the control group had only an otological examination and PTA. Additionally, previously performed paranasal sinus computed tomography scans of all cases with acromegaly and control subjects were obtained to measure the length of the internal acoustic canal (IAC). PTA values were higher in the acromegaly group, and the IAC in the acromegaly group was narrower compared with that in the control group (p = 0.03 for right ears and p = 0.02 for left ears). When only cases with acromegaly were taken into consideration, PTA values in left ears had a positive correlation with growth hormone and insulin-like growth factor-1 levels (r = 0.4, p = 0.02 and r = 0.3, p = 0.03). Of all cases with acromegaly, 13 (32%) had hearing loss in at least one ear; 7 (54%) had sensorineural and 6 (46%) had conductive hearing loss. Acromegaly may cause certain changes in the auditory system. These changes may be multifactorial, causing both conductive and sensorineural defects.

  13. The auditory scene: an fMRI study on melody and accompaniment in professional pianists.

    Science.gov (United States)

    Spada, Danilo; Verga, Laura; Iadanza, Antonella; Tettamanti, Marco; Perani, Daniela

    2014-11-15

    The auditory scene is a mental representation of individual sounds extracted from the summed sound waveform reaching the ears of the listeners. Musical contexts represent particularly complex cases of auditory scenes. In such a scenario, melody may be seen as the main object moving on a background represented by the accompaniment. Both melody and accompaniment vary in time according to harmonic rules, forming a typical texture with melody in the most prominent, salient voice. In the present sparse acquisition functional magnetic resonance imaging study, we investigated the interplay between melody and accompaniment in trained pianists, by observing the activation responses elicited by processing: (1) melody placed in the upper and lower texture voices, leading to, respectively, a higher and lower auditory salience; (2) harmonic violations occurring in either the melody, the accompaniment, or both. The results indicated that the neural activation elicited by the processing of polyphonic compositions in expert musicians depends upon the upper versus lower position of the melodic line in the texture, and showed an overall greater activation for the harmonic processing of melody over accompaniment. Both these two predominant effects were characterized by the involvement of the posterior cingulate cortex and precuneus, among other associative brain regions. We discuss the prominent role of the posterior medial cortex in the processing of melodic and harmonic information in the auditory stream, and propose to frame this processing in relation to the cognitive construction of complex multimodal sensory imagery scenes. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Investigation of the mechanism of soft tissue conduction explains several perplexing auditory phenomena.

    Science.gov (United States)

    Adelman, Cahtia; Chordekar, Shai; Perez, Ronen; Sohmer, Haim

    2014-09-01

    Soft tissue conduction (STC) is a recently expounded mode of auditory stimulation in which the clinical bone vibrator delivers auditory frequency vibratory stimuli to skin sites on the head, neck, and thorax. Investigation of the mechanism of STC stimulation has served as a platform for the elucidation of the mechanics of cochlear activation, in general, and to a better understanding of several perplexing auditory phenomena. This review demonstrates that it is likely that the cochlear hair cells can be directly activated at low sound intensities by the fluid pressures initiated in the cochlea; that the fetus in utero, completely enveloped in amniotic fluid, hears by STC; that a speaker hears his/her own voice by air conduction and by STC; and that pulsatile tinnitus is likely due to pulsatile turbulent blood flow producing fluid pressures that reach the cochlea through the soft tissues.

  15. The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention.

    Science.gov (United States)

    Forte, Antonio Elia; Etard, Octave; Reichenbach, Tobias

    2017-10-10

    Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation occurs already in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. Here we develop a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has a high ecological validity. We employ this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and show that the brainstem response is consistently modulated by attention.
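The method described relates a continuous, non-repeating stimulus to the continuous recording rather than averaging over repeated epochs. As a toy illustration of that general idea (not the authors' actual algorithm), the sketch below cross-correlates a synthetic stimulus feature with a synthetic "recording" over a range of lags and recovers the latency of strongest coupling; all names and signals here are hypothetical.

```python
# Toy sketch: find the lag at which a recorded signal is most strongly
# coupled to a continuous stimulus feature, via lagged Pearson correlation.
# The signals are synthetic; this is not the study's actual method.
import math

def xcorr_at_lag(stim, resp, lag):
    """Pearson correlation between stim[t] and resp[t + lag]."""
    pairs = [(stim[t], resp[t + lag]) for t in range(len(stim) - lag)]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x, _ in pairs)
    vy = sum((y - my) ** 2 for _, y in pairs)
    return cov / math.sqrt(vx * vy)

# Synthetic data: the "response" is the stimulus feature delayed by 9 samples.
stim = [math.sin(0.3 * t) for t in range(500)]
resp = [0.0] * 9 + stim[:-9]

best_lag = max(range(20), key=lambda lag: xcorr_at_lag(stim, resp, lag))
print(best_lag)  # 9
```

Real brainstem analyses additionally contend with noise, filtering and statistical testing, but the lag of peak stimulus-response coupling is the quantity of interest in this kind of approach.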

  16. Variations in voice level and fundamental frequency with changing background noise level and talker-to-listener distance while wearing hearing protectors: A pilot study.

    Science.gov (United States)

    Bouserhal, Rachel E; Macdonald, Ewen N; Falk, Tiago H; Voix, Jérémie

    2016-01-01

    Speech production in noise with varying talker-to-listener distance has been well studied for the open-ear condition. However, occluding the ear canal can affect the auditory feedback and cause deviations from the models presented for the open-ear condition. Communication is a major concern for people wearing hearing protection devices (HPDs). Although practical, radio communication is cumbersome, as it does not distinguish designated receivers. A smarter radio communication protocol must be developed to alleviate this problem. Thus, it is necessary to model speech production in noise while wearing HPDs. Such a model opens the door to radio communication systems that distinguish receivers and offer more efficient communication between persons wearing HPDs. This paper presents the results of a pilot study aimed at investigating the effects of occluding the ear on changes in voice level and fundamental frequency in noise and with varying talker-to-listener distance. Twelve participants, with a mean age of 28 years, took part in this study. Compared with existing data, results show a trend similar to the open-ear condition, with the exception of the occluded quiet condition. This implies that a model can be developed to better understand speech production for the occluded ear.

  17. Differential sensory cortical involvement in auditory and visual sensorimotor temporal recalibration: Evidence from transcranial direct current stimulation (tDCS).

    Science.gov (United States)

    Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk

    2017-02-01

    Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In 3 experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, produced decreased visual TRE. However, both auditory and visual cortex tDCS did not produce any measurable effects on auditory TRE. Our study revealed different nature of TRE in auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon, modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, by providing a frame of reference in the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  19. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  20. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture.

    Science.gov (United States)

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-12-13

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on acoustic frequency, the primary dimension of auditory representation, might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation both in myeloarchitectonically estimated auditory core and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when the best frequency is attended versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although…

  1. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim…

  2. Understanding the 'Anorexic Voice' in Anorexia Nervosa.

    Science.gov (United States)

    Pugh, Matthew; Waller, Glenn

    2017-05-01

    In common with individuals experiencing a number of disorders, people with anorexia nervosa report experiencing an internal 'voice'. The anorexic voice comments on the individual's eating, weight and shape and instructs the individual to restrict or compensate. However, the core characteristics of the anorexic voice are not known. This study aimed to develop a parsimonious model of the voice characteristics that are related to key features of eating disorder pathology and to determine whether patients with anorexia nervosa fall into groups with different voice experiences. The participants were 49 women with full diagnoses of anorexia nervosa. Each completed validated measures of the power and nature of their voice experience and of their responses to the voice. Different voice characteristics were associated with current body mass index, duration of disorder and eating cognitions. Two subgroups emerged, with 'weaker' and 'stronger' voice experiences. Those with stronger voices were characterized by having more negative eating attitudes, more severe compensatory behaviours, a longer duration of illness and a greater likelihood of having the binge-purge subtype of anorexia nervosa. The findings indicate that the anorexic voice is an important element of the psychopathology of anorexia nervosa. Addressing the anorexic voice might be helpful in enhancing outcomes of treatments for anorexia nervosa, but that conclusion might apply only to patients with more severe eating psychopathology. Copyright © 2016 John Wiley & Sons, Ltd. Experiences of an internal 'anorexic voice' are common in anorexia nervosa. Clinicians should consider the role of the voice when formulating eating pathology in anorexia nervosa, including how individuals perceive and relate to that voice. Addressing the voice may be beneficial, particularly in more severe and enduring forms of anorexia nervosa. When working with the voice, clinicians should aim to address both the content of the voice and how…

  3. External Validation of the Acoustic Voice Quality Index Version 03.01 With Extended Representativity.

    Science.gov (United States)

    Barsties, Ben; Maryn, Youri

    2016-07-01

The Acoustic Voice Quality Index (AVQI) is an objective method to quantify the severity of overall voice quality in concatenated continuous speech and sustained phonation segments. Recently, AVQI was successfully modified to be more representative and ecologically valid because the internal consistency of AVQI was balanced out through equal proportions of the 2 speech types. The present investigation aims to explore its external validation in a large data set. An expert panel of 12 speech-language therapists rated the voice quality of 1058 concatenated voice samples varying from normophonia to severe dysphonia. The Spearman rank-order correlation coefficients (r) were used to measure concurrent validity. The AVQI's diagnostic accuracy was evaluated with several estimates of its receiver operating characteristics (ROC). Eight of the 12 experts were retained based on reliability criteria. A strong correlation was identified between AVQI and auditory-perceptual rating (r = 0.815, P = .000), indicating that 66.4% of the auditory-perceptual rating's variation was explained by AVQI. Additionally, the ROC results again showed the best diagnostic outcome at a threshold of AVQI = 2.43. This study highlights the external validation and diagnostic precision of AVQI version 03.01 as a robust and ecologically valid measurement to objectify voice quality. © The Author(s) 2016.
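The headline statistic above can be reproduced in miniature. The sketch below is our own illustration (not the authors' code) of Spearman's rank-order correlation, the statistic used to relate AVQI scores to the auditory-perceptual ratings; squaring the reported r = 0.815 gives the 66.4% explained variance quoted in the abstract.

```python
def average_ranks(values):
    """Rank values from 1..n, giving tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j to the end of the current run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 0-based positions i..j, made 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Because rho is a correlation of ranks, squaring it gives the shared variance: 0.815 ** 2 ≈ 0.664, i.e. the 66.4% figure reported.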

  4. Audio-visual identification of place of articulation and voicing in white and babble noise.

    Science.gov (United States)

    Alm, Magnus; Behne, Dawn M; Wang, Yue; Eg, Ragnhild

    2009-07-01

    Research shows that noise and phonetic attributes influence the degree to which auditory and visual modalities are used in audio-visual speech perception (AVSP). Research has, however, mainly focused on white noise and single phonetic attributes, thus neglecting the more common babble noise and possible interactions between phonetic attributes. This study explores whether white and babble noise differentially influence AVSP and whether these differences depend on phonetic attributes. White and babble noise of 0 and -12 dB signal-to-noise ratio were added to congruent and incongruent audio-visual stop consonant-vowel stimuli. The audio (A) and video (V) of incongruent stimuli differed either in place of articulation (POA) or voicing. Responses from 15 young adults show that, compared to white noise, babble resulted in more audio responses for POA stimuli, and fewer for voicing stimuli. Voiced syllables received more audio responses than voiceless syllables. Results can be attributed to discrepancies in the acoustic spectra of both the noise and speech target. Voiced consonants may be more auditorily salient than voiceless consonants which are more spectrally similar to white noise. Visual cues contribute to identification of voicing, but only if the POA is visually salient and auditorily susceptible to the noise type.
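The −12 dB signal-to-noise ratio used in this design has a simple operational meaning: the noise is scaled until its average power is roughly 16 times that of the speech before the two are summed. A minimal sketch of that mixing step (our own, using a power-based SNR definition; the study's exact calibration procedure may differ):

```python
import math

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so that 10*log10(P_signal / P_noise) == snr_db,
    then add it to `signal` sample by sample."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    target_p_noise = p_sig / (10 ** (snr_db / 10.0))
    gain = math.sqrt(target_p_noise / p_noise)
    return [s + gain * n for s, n in zip(signal, noise)]
```

At 0 dB the speech and noise carry equal power; at −12 dB the noise dominates, which is why modality weighting in AVSP shifts under these conditions.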

  5. Acute effects of radioiodine therapy on the voice and larynx of Basedow-Graves patients

    Energy Technology Data Exchange (ETDEWEB)

    Isolan-Cury, Roberta Werlang; Cury, Adriano Namo [Sao Paulo Santa Casa de Misericordia, SP (Brazil). Medical Science School (FCMSCSP); Monte, Osmar [Sao Paulo Santa Casa de Misericordia, SP (Brazil). Physiology Department; Silva, Marta Assumpcao de Andrada e [Sao Paulo Santa Casa de Misericordia, SP (Brazil). Medical Science School (FCMSCSP). Speech Therapy School; Duprat, Andre [Sao Paulo Santa Casa de Misericordia, SP (Brazil). Medical Science School (FCMSCSP). Otorhinolaryngology Department; Marone, Marilia [Nuclimagem - Irmanity of the Sao Paulo Santa Casa de Misericordia, SP (Brazil). Nuclear Medicine Unit; Almeida, Renata de; Iglesias, Alexandre [Sao Paulo Santa Casa de Misericordia, SP (Brazil). Medical Science School (FCMSCSP). Otorhinolaryngology Department. Endocrinology and Metabology Unit

    2008-07-01

    Graves's disease is the most common cause of hyperthyroidism. There are three current therapeutic options: anti-thyroid medication, surgery, and radioactive iodine (I-131). There are few data in the literature regarding the effects of radioiodine therapy on the larynx and voice. The aim of this study was to assess the effect of radioiodine therapy on the voice of Basedow-Graves patients. Material and method: A prospective study was done. Following the diagnosis of Graves's disease, patients underwent investigation of their voice, measurement of maximum phonation time (/a/) and the s/z ratio, fundamental frequency analysis (Praat software), laryngoscopy, and perceptual-auditory analysis in three different conditions: pre-treatment, 4 days, and 20 days post-radioiodine therapy. The conditions are based on the inflammatory pattern of thyroid tissue (Jones et al. 1999). Results: No statistically significant differences were found in voice characteristics across these three conditions. Conclusion: Radioiodine therapy does not affect voice quality. (author)
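For readers unfamiliar with the bedside measures mentioned (maximum phonation time on /a/ and the s/z ratio), a toy function shows how they are typically summarized. This is our own illustration; the 1.4 ratio cutoff is a commonly cited screening heuristic for inefficient glottal closure, not a value taken from this study:

```python
def phonatory_screen(mpt_a_seconds, max_s_seconds, max_z_seconds, ratio_cutoff=1.4):
    """Summarize two bedside voice measures: maximum phonation time on /a/
    and the s/z ratio (longest sustained /s/ over longest sustained /z/).
    Ratios near 1.0 are typical; values above the cutoff are a screening
    flag, not a diagnosis."""
    ratio = max_s_seconds / max_z_seconds
    return {
        "mpt_a": mpt_a_seconds,
        "s_z_ratio": round(ratio, 2),
        "sz_flag": ratio > ratio_cutoff,
    }
```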

  6. Feedback and Incentives

    DEFF Research Database (Denmark)

    Eriksson, Tor Viking; Poulsen, Anders; Villeval, Marie Claire

    2009-01-01

    This paper experimentally investigates the impact of different pay schemes and relative performance feedback policies on employee effort. We explore three feedback rules: no feedback on relative performance, feedback given halfway through the production period, and continuously updated feedback. ...... behind, and front runners do not slack off. But in both pay schemes relative performance feedback reduces the quality of the low performers' work; we refer to this as a "negative quality peer effect"....

  7. Anti-voice adaptation suggests prototype-based coding of voice identity

    Directory of Open Access Journals (Sweden)

    Marianne eLatinus

    2011-07-01

    We used perceptual aftereffects induced by adaptation with anti-voice stimuli to investigate voice identity representations. Participants learned a set of voices and were then tested on a voice identification task with vowel stimuli morphed between identities, after different conditions of adaptation. In Experiment 1, participants chose the identity opposite to the adapting anti-voice significantly more often than the other two identities (e.g., after being adapted to anti-A, they identified the average voice as A). In Experiment 2, participants showed a bias for identities opposite to the adaptor specifically for anti-voice, but not for non-anti-voice adaptors. These results are strikingly similar to adaptation aftereffects observed for facial identity. They are compatible with a representation of individual voice identities in a multidimensional perceptual voice space referenced on a voice prototype.
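The geometry of an "anti-voice" can be stated concretely: in a multidimensional voice space, anti-A is voice A reflected through the prototype (average) voice. A tiny illustration of that reflection (ours; the coordinates are arbitrary placeholders, not measured voice features):

```python
def anti_voice(voice, prototype):
    """Reflect a voice's coordinates through the prototype: anti-A lies on
    the line through A and the prototype, at equal distance on the far side."""
    return [2 * p - v for v, p in zip(voice, prototype)]
```

By construction, the midpoint of A and anti-A is the prototype itself, which is why adapting to anti-A biases perception of the average voice toward identity A.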

  8. Optical voice encryption based on digital holography.

    Science.gov (United States)

    Rajput, Sudheesh K; Matoba, Osamu

    2017-11-15

    We propose an optical voice encryption scheme based on digital holography (DH). An off-axis DH is employed to acquire voice information by obtaining phase retardation occurring in the object wave due to sound wave propagation. The acquired hologram, including voice information, is encrypted using optical image encryption. The DH reconstruction and decryption with all the correct parameters can retrieve an original voice. The scheme has the capability to record the human voice in holograms and encrypt it directly. These aspects make the scheme suitable for other security applications and help to use the voice as a potential security tool. We present experimental and some part of simulation results.

  9. Multimodal information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays

    Science.gov (United States)

    Begault, Durand R.; Bittner, Rachel M.; Anderson, Mark R.

    2012-01-01

    Auditory communication displays within the NextGen data link system may use multiple synthetic speech messages replacing traditional ATC and company communications. The design of an interface for selecting amongst multiple incoming messages can impact both performance (time to select, audit and release a message) and preference. Two design factors were evaluated: physical pressure-sensitive switches versus flat panel "virtual switches", and the presence or absence of auditory feedback from switch contact. Performance with stimuli using physical switches was 1.2 s faster than virtual switches (2.0 s vs. 3.2 s); auditory feedback provided a 0.54 s performance advantage (2.33 s vs. 2.87 s). There was no interaction between these variables. Preference data were highly correlated with performance.

  10. Voice deviation, dysphonia risk screening and quality of life in individuals with various laryngeal diagnoses

    Science.gov (United States)

    Nemr, Katia; Cota, Ariane; Tsuji, Domingos; Simões-Zenari, Marcia

    2018-01-01

    OBJECTIVES: To characterize the voice quality of individuals with dysphonia and to investigate possible correlations between the degree of voice deviation (D) and scores on the Dysphonia Risk Screening Protocol-General (DRSP), the Voice-Related Quality of Life (V-RQOL) measure and the Voice Handicap Index, short version (VHI-10). METHODS: The sample included 200 individuals with dysphonia. Following laryngoscopy, the participants completed the DRSP, the V-RQOL measure, and the VHI-10; subsequently, voice samples were recorded for auditory-perceptual and acoustic analyses. The correlation between the score for each questionnaire and the overall degree of vocal deviation was analyzed, as was the correlation among the scores for the three questionnaires. RESULTS: Most of the participants (62%) were female, and the mean age of the sample was 49 years. The most common laryngeal diagnosis was organic dysphonia (79.5%). The mean D was 59.54, and the predominance of roughness had a mean of 54.74. All the participants exhibited at least one abnormal acoustic aspect. The mean questionnaire scores were DRSP, 44.7; V-RQOL, 57.1; and VHI-10, 16. An inverse correlation was found between the V-RQOL score and D; however, a positive correlation was found between both the VHI-10 and DRSP scores and D. CONCLUSION: A predominance of adult women, organic dysphonia, moderate voice deviation, high dysphonia risk, and low to moderate quality of life impact characterized our sample. There were correlations between the scores of each of the three questionnaires and the degree of voice deviation. It should be noted that the DRSP monitored the degree of dysphonia severity, which reinforces its applicability for patients with different laryngeal diagnoses. PMID:29538494

  11. Mechanics of human voice production and control.

    Science.gov (United States)

    Zhang, Zhaoyan

    2016-10-01

    As the primary means of communication, voice plays an important role in daily life. Voice also conveys personal information such as social status, personal traits, and the emotional state of the speaker. Mechanically, voice production involves complex fluid-structure interaction within the glottis and its control by laryngeal muscle activation. An important goal of voice research is to establish a causal theory linking voice physiology and biomechanics to how speakers use and control voice to communicate meaning and personal information. Establishing such a causal theory has important implications for clinical voice management, voice training, and many speech technology applications. This paper provides a review of voice physiology and biomechanics, the physics of vocal fold vibration and sound production, and laryngeal muscular control of the fundamental frequency of voice, vocal intensity, and voice quality. Current efforts to develop mechanical and computational models of voice production are also critically reviewed. Finally, issues and future challenges in developing a causal theory of voice production and perception are discussed.

  12. Performance of the phonatory deviation diagram in the evaluation of rough and breathy synthesized voices.

    Science.gov (United States)

    Lopes, Leonardo Wanderley; Freitas, Jonas Almeida de; Almeida, Anna Alice; Silva, Priscila Oliveira Costa; Alves, Giorvan Ânderson Dos Santos

    2017-07-05

    Voice disorders alter the sound signal in several ways, combining several types of vocal emission disturbances and noise. The Phonatory Deviation Diagram (PDD) is a two-dimensional chart that allows the evaluation of the vocal signal based on the combination of periodicity (jitter, shimmer, and correlation coefficient) and noise (Glottal to Noise Excitation - GNE) measurements. The use of synthesized signals, where one has a greater control and knowledge of the production conditions, may allow a better understanding of the physiological and acoustic mechanisms underlying the vocal emission and its main perceptual-auditory correlates regarding the intensity of the deviation and types of vocal quality. To analyze the performance of the PDD in the discrimination of the presence and degree of roughness and breathiness in synthesized voices. 871 synthesized vocal signals were used corresponding to the vowel /ɛ/. The perceptual-auditory analysis of the degree of roughness and breathiness of the synthesized signals was performed using Visual Analogue Scale (VAS). Subsequently, the signals were categorized regarding the presence/absence of these parameters based on the VAS cutoff values. Acoustic analysis was performed by assessing the distribution of vocal signals according to the PDD area, quadrant, shape, and density. The equality of proportions and the chi-square tests were performed to compare the variables. Rough and breathy vocal signals were located predominantly outside the normal range and in the lower right quadrant of the PDD. Voices with higher degrees of roughness and breathiness were located outside the area of normality in the lower right quadrant and had concentrated density. The normality area and the PDD quadrant can discriminate healthy voices from rough and breathy ones. Voices with higher degrees of roughness and breathiness are proportionally located outside the area of normality, in the lower right quadrant and with concentrated density. 
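The periodicity measures feeding the PDD's axes have simple definitions. A minimal sketch (our illustration of Praat-style local jitter and shimmer over per-cycle periods and peak amplitudes, not the authors' exact pipeline):

```python
def jitter_local(periods):
    """Local jitter (%): mean absolute difference between consecutive
    glottal cycle periods, divided by the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods[1:], periods[:-1])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer_local(amplitudes):
    """Local shimmer (%): the same measure applied to cycle peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amplitudes[1:], amplitudes[:-1])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))
```

A perfectly periodic signal scores 0% on both; rough and breathy voices raise these perturbation values and lower the GNE, which is what pushes a signal out of the PDD's normal area.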

  13. Syllogisms delivered in an angry voice lead to improved performance and engagement of a different neural system compared to neutral voice

    Directory of Open Access Journals (Sweden)

    Kathleen Walton Smith

    2015-05-01

    Despite the fact that most real-world reasoning occurs in some emotional context, very little is known about the underlying behavioral and neural implications of such context. To further understand the role of emotional context in logical reasoning, we scanned 15 participants with fMRI while they engaged in logical reasoning about neutral syllogisms presented through the auditory channel in a sad, angry, or neutral tone of voice. Exposure to angry voice led to improved reasoning performance compared to exposure to sad and neutral voice. A likely explanation for this effect is that exposure to expressions of anger increases selective attention toward the relevant features of target stimuli, in this case the reasoning task. Supporting this interpretation, reasoning in the context of angry voice was accompanied by activation in the superior frontal gyrus, a region known to be associated with selective attention. Our findings contribute to a greater understanding of the neural processes that underlie reasoning in an emotional context by demonstrating that two emotional contexts, despite being of the same (negative) valence, have different effects on reasoning.

  14. The Voice as Computer Interface: A Look at Tomorrow's Technologies.

    Science.gov (United States)

    Lange, Holley R.

    1991-01-01

    Discussion of voice as the communications device for computer-human interaction focuses on voice recognition systems for use within a library environment. Voice technologies are described, including voice response and voice recognition; examples of voice systems in use in libraries are examined; and further possibilities, including use with…

  15. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  16. Auditory short-term memory in the primate auditory cortex

    OpenAIRE

    Scott, Brian H.; Mishkin, Mortimer

    2015-01-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive sho...

  17. Quick Statistics about Voice, Speech, and Language

    Science.gov (United States)

    Summary statistics on voice, speech, and language disorders compiled by the National Center for Health Statistics (no. 205, Hyattsville, MD, 2015; Hoffman HJ, Li C-M, Losonczy K, et al.).

  18. English Voicing in Dimensional Theory*

    Science.gov (United States)

    Iverson, Gregory K.; Ahn, Sang-Cheol

    2007-01-01

    Assuming a framework of privative features, this paper interprets two apparently disparate phenomena in English phonology as structurally related: the lexically specific voicing of fricatives in plural nouns like wives or thieves and the prosodically governed “flapping” of medial /t/ (and /d/) in North American varieties, which we claim is itself not a rule per se, but rather a consequence of the laryngeal weakening of fortis /t/ in interaction with speech-rate determined segmental abbreviation. Taking as our point of departure the Dimensional Theory of laryngeal representation developed by Avery & Idsardi (2001), along with their assumption that English marks voiceless obstruents but not voiced ones (Iverson & Salmons 1995), we find that an unexpected connection between fricative voicing and coronal flapping emerges from the interplay of familiar phonemic and phonetic factors in the phonological system. PMID:18496590

  19. Voices Falling Through the Air

    Directory of Open Access Journals (Sweden)

    Paul Elliman

    2012-11-01

    Where am I? Or as the young boy in Jules Verne’s Journey to the Centre of the Earth calls back to his distant-voiced companions: ‘Lost… in the most intense darkness.’ ‘Then I understood it,’ says the boy, Axel, ‘To make them hear me, all I had to do was to speak with my mouth close to the wall, which would serve to conduct my voice, as the wire conducts the electric fluid’ (Verne 1864). By timing their calls, the group of explorers work out that Axel is separated from them by a distance of four miles, held in a cavernous vertical gallery of smooth rock. Feeling his way down towards the others, the boy ends up falling, along with his voice, through the space. Losing consciousness, he seems to give himself up to the space...

  20. Exploring the Impact of Role-Playing on Peer Feedback in an Online Case-Based Learning Activity

    Science.gov (United States)

    Ching, Yu-Hui

    2014-01-01

    This study explored the impact of role-playing on the quality of peer feedback and learners' perception of this strategy in a case-based learning activity with VoiceThread in an online course. The findings revealed potential positive impact of role-playing on learners' generation of constructive feedback as role-playing was associated with higher…

  1. Extracting the Neural Representation of Tone Onsets for Separate Voices of Ensemble Music Using Multivariate EEG Analysis

    DEFF Research Database (Denmark)

    Sturm, Irene; Treder, Matthias S.; Miklody, Daniel

    2015-01-01

    responses to tone onsets, such as N1/P2 ERP components. Music clips (resembling minimalistic electro-pop) were presented to 11 subjects, either in an ensemble version (drums, bass, keyboard) or in the corresponding three solo versions. For each instrument we train a spatio-temporal regression filter...... at the level of early auditory ERPs parallels the perceptual segregation of multi-voiced music....

  2. Skriftlig feedback i engelskundervisningen

    DEFF Research Database (Denmark)

    Kjærgaard, Hanne Wacher

    2017-01-01

    The article describes useful feedback strategies in language teaching and reports on the feedback practices of lower-secondary teachers in Denmark. It is aimed at language teachers in secondary schools.

  3. Student Engagement with Feedback

    Science.gov (United States)

    Scott, Jon; Shields, Cathy; Gardner, James; Hancock, Alysoun; Nutt, Alex

    2011-01-01

    This report considers Biological Sciences students' perceptions of feedback compared with those of the University as a whole, including what forms of feedback were considered most useful and how feedback was used. Compared with data from previous studies, Biological Sciences students gave much greater recognition to oral feedback, placing it on a…

  4. DolphinAttack: Inaudible Voice Commands

    OpenAIRE

    Zhang, Guoming; Yan, Chen; Ji, Xiaoyu; Zhang, Taimin; Zhang, Tianchen; Xu, Wenyuan

    2017-01-01

    Speech recognition (SR) systems such as Siri or Google Now have become an increasingly popular human-computer interaction method, and have turned various systems into voice controllable systems (VCS). Prior work on attacking VCS shows that hidden voice commands that are incomprehensible to people can control the systems. Hidden voice commands, though hidden, are nonetheless audible. In this work, we design a completely inaudible attack, DolphinAttack, that modulates voice commands on ultra...

  5. Permanent Quadriplegia Following Replacement of Voice Prosthesis.

    Science.gov (United States)

    Ozturk, Kayhan; Erdur, Omer; Kibar, Ertugrul

    2016-11-01

    The authors present the first reported case of permanent quadriplegia, caused by cervical spine abscess following voice prosthesis replacement, and emphasize that life-threatening complications may be faced during this procedure. Care should be taken during the replacement of a voice prosthesis, and if problems are encountered during the procedure, patients must be followed closely.

  6. Cognitive and behavioural therapy of voices for patients with intellectual disability: Two case reports

    Directory of Open Access Journals (Sweden)

    Pernier Sophie

    2007-08-01

    Abstract Background Two case studies are presented to examine how cognitive behavioural therapy (CBT) of auditory hallucinations can be fitted to mild and moderate intellectual disability. Methods A 38-year-old female patient with mild intellectual disability and a 44-year-old male patient with moderate intellectual disability, both suffering from persistent auditory hallucinations, were treated with CBT. Patients were assessed on beliefs about their voices and their inappropriate coping behaviour towards them. The traditional CBT techniques were modified to reduce the emphasis placed on cognitive abilities. Verbal strategies were replaced by more concrete tasks using role-playing, figurines and touch-and-feel experimentation. Results Both patients improved on selected variables. They both gradually managed to reduce the power they attributed to the voice after the introduction of the therapy, and maintained their progress at follow-up. Their inappropriate behaviour consecutive to the beliefs about voices diminished in both cases. Conclusion These two case studies illustrate the feasibility of CBT for psychotic symptoms in people with intellectual disability, but need to be confirmed by more stringent studies.

  7. I like my voice better: self-enhancement bias in perceptions of voice attractiveness.

    Science.gov (United States)

    Hughes, Susan M; Harrison, Marissa A

    2013-01-01

    Previous research shows that the human voice can communicate a wealth of nonsemantic information; preferences for voices can predict health, fertility, and genetic quality of the speaker, and people often use voice attractiveness, in particular, to make these assessments of others. But it is not known what we think of the attractiveness of our own voices as others hear them. In this study eighty men and women rated the attractiveness of an array of voice recordings of different individuals and were not told that their own recorded voices were included in the presentation. Results showed that participants rated their own voices as sounding more attractive than others had rated their voices, and participants also rated their own voices as sounding more attractive than they had rated the voices of others. These findings suggest that people may engage in vocal implicit egotism, a form of self-enhancement.

  8. Vocal Acoustic and Auditory-Perceptual Characteristics During Fluctuations in Estradiol Levels During the Menstrual Cycle: A Longitudinal Study.

    Science.gov (United States)

    Arruda, Polyanna; Diniz da Rosa, Marine Raquel; Almeida, Larissa Nadjara Alves; de Araujo Pernambuco, Leandro; Almeida, Anna Alice

    2018-03-07

    Estradiol production varies cyclically, and changes in levels are hypothesized to affect the voice. The main objective of this study was to investigate vocal acoustic and auditory-perceptual characteristics during fluctuations in the levels of the hormone estradiol during the menstrual cycle. A total of 44 volunteers aged between 18 and 45 were selected. Of these, 27 women with regular menstrual cycles comprised the test group (TG) and 17 combined oral contraceptive users comprised the control group (CG). The study was performed in two phases. In phase 1, anamnesis was performed. Subsequently, the TG underwent blood sample collection for measurement of estradiol levels and voice recording for later acoustic and auditory-perceptual analysis. The CG underwent only voice recording. Phase 2 involved the same measurements as phase 1 for each group. Variables were evaluated using descriptive and inferential analysis to compare groups and phases and to determine relationships between variables. Voice changes were found during the menstrual cycle, and such changes were determined to be related to variations in estradiol levels. Impaired voice quality was observed to be associated with decreased levels of estradiol. The CG did not demonstrate significant vocal changes during phases 1 and 2. The TG showed significant increases in vocal parameters of roughness, tension, and instability during phase 2 (the period of low estradiol levels) when compared with the CG. Low estradiol levels were also found to be negatively correlated with the parameters of tension, instability, and jitter and positively correlated with fundamental voice frequency. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  9. Identification of neural structures involved in stuttering using vibrotactile feedback.

    Science.gov (United States)

    Cheadle, Oliver; Sorger, Clarissa; Howell, Peter

    Feedback delivered over auditory and vibratory afferent pathways has different effects on the fluency of people who stutter (PWS). These features were exploited to investigate the neural structures involved in stuttering. The speech signal was used to vibrate locations on the body (vibrotactile feedback, VTF). Eleven PWS read passages under VTF and control (no-VTF) conditions. All combinations of vibration amplitude, synchronous or delayed VTF, and vibrator position (hand, sternum or forehead) were presented. Control conditions were performed at the beginning, middle and end of test sessions. Stuttering rate, but not speaking rate, differed between the control and VTF conditions. Notably, speaking rate did not change between when VTF was delayed versus when it was synchronous, in contrast with what happens with auditory feedback. This showed that cerebellar mechanisms, which are affected when auditory feedback is delayed, were not implicated in the fluency-enhancing effects of VTF, suggesting that there is a second fluency-enhancing mechanism. Copyright © 2018 Elsevier Inc. All rights reserved.
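The synchronous-versus-delayed manipulation is, at its core, a delay line: the feedback stream is the talker's own signal shifted by a fixed number of samples. A minimal sketch (ours; delays in delayed-feedback studies are commonly on the order of tens to hundreds of milliseconds, and this study's exact values are not stated in the abstract):

```python
def delay_feedback(samples, delay_ms, rate_hz):
    """Delay a signal by delay_ms, padding with silence while the buffer
    fills; the output length matches the input (the tail is dropped)."""
    n = int(round(rate_hz * delay_ms / 1000.0))
    if n == 0:
        return list(samples)  # synchronous feedback: pass-through
    return [0.0] * n + list(samples[:-n])
```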

  10. Maps of the Auditory Cortex.

    Science.gov (United States)

    Brewer, Alyssa A; Barton, Brian

    2016-07-08

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration.

  11. Demodulation Processes in Auditory Perception

    National Research Council Canada - National Science Library

    Feth, Lawrence

    1997-01-01

    The long range goal of this project was the understanding of human auditory processing of information conveyed by complex, time varying signals such as speech, music or important environmental sounds...

  12. Analyzing the mediated voice - a datasession

    DEFF Research Database (Denmark)

    Lawaetz, Anna

    Broadcast voices are technologically manipulated. In order to achieve a certain authenticity or sound of “reality”, the voices are, paradoxically, filtered and trained in order to reach the listeners. This “mise-en-scène” is important knowledge when it comes to the development of a consistent method of analysis of the mediated voice...

  13. Voices Not Heard: Voice-Use Profiles of Elementary Music Teachers, the Effects of Voice Amplification on Vocal Load, and Perceptions of Issues Surrounding Voice Use

    Science.gov (United States)

    Morrow, Sharon L.

    2009-01-01

    Teachers represent the largest group of occupational voice users and have voice-related problems at a rate of over twice that found in the general population. Among teachers, music teachers are roughly four times more likely than classroom teachers to develop voice-related problems. Although it has been established that music teachers use their…

  14. Intentional preparation of auditory attention-switches: Explicit cueing and sequential switch-predictability.

    Science.gov (United States)

    Seibold, Julia C; Nolden, Sophie; Oberem, Josefa; Fels, Janina; Koch, Iring

    2018-06-01

    In an auditory attention-switching paradigm, participants heard two simultaneously spoken number-words, each presented to one ear, and decided whether the target number was smaller or larger than 5 by pressing a left or right key. An instructional cue in each trial indicated which feature had to be used to identify the target number (e.g., female voice). Auditory attention-switch costs were found when this feature changed compared to when it repeated in two consecutive trials. Earlier studies employing this paradigm showed mixed results when they examined whether such cued auditory attention-switches can be prepared actively during the cue-stimulus interval. This study systematically assessed which preconditions are necessary for the advance preparation of auditory attention-switches. Three experiments were conducted that controlled for cue-repetition benefits, modality switches between cue and stimuli, as well as for predictability of the switch-sequence. Only in the third experiment, in which predictability for an attention-switch was maximal due to a pre-instructed switch-sequence and predictable stimulus onsets, active switch-specific preparation was found. These results suggest that the cognitive system can prepare auditory attention-switches, and this preparation seems to be triggered primarily by the memorised switching-sequence and valid expectations about the time of target onset.
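
    Switch costs in paradigms like the one above are conventionally computed as the difference between mean response times on switch trials (the cued feature changed) and repeat trials (it stayed the same). A minimal sketch of that computation, using hypothetical trial data rather than the study's:

    ```python
    def switch_cost(trials):
        """Mean RT on switch trials minus mean RT on repeat trials.

        trials -- list of (rt_ms, is_switch) tuples; illustrative data only.
        A positive value is the cost of switching auditory attention."""
        switch_rts = [rt for rt, is_switch in trials if is_switch]
        repeat_rts = [rt for rt, is_switch in trials if not is_switch]
        mean = lambda xs: sum(xs) / len(xs)
        return mean(switch_rts) - mean(repeat_rts)

    trials = [(820, True), (790, True), (700, False), (680, False)]
    print(switch_cost(trials))  # 115.0 ms switch cost
    ```

    Preparation effects are then read off as a reduction of this cost when the cue-stimulus interval is long enough for advance reconfiguration.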

  15. From sensory to long-term memory: evidence from auditory memory reactivation studies.

    Science.gov (United States)

    Winkler, István; Cowan, Nelson

    2005-01-01

    Everyday experience tells us that some types of auditory sensory information are retained for long periods of time. For example, we are able to recognize friends by their voice alone or identify the source of familiar noises even years after we last heard the sounds. It is thus somewhat surprising that the results of most studies of auditory sensory memory show that acoustic details, such as the pitch of a tone, fade from memory in ca. 10-15 s. One should, therefore, ask (1) what types of acoustic information can be retained for a longer term, (2) what circumstances allow or help the formation of durable memory records for acoustic details, and (3) how such memory records can be accessed. The present review discusses the results of experiments that used a model of auditory recognition, the auditory memory reactivation paradigm. Results obtained with this paradigm suggest that the brain stores features of individual sounds embedded within representations of acoustic regularities that have been detected for the sound patterns and sequences in which the sounds appeared. Thus, sounds closely linked with their auditory context are more likely to be remembered. The representations of acoustic regularities are automatically activated by matching sounds, enabling object recognition.

  16. Interventions for preventing voice disorders in adults.

    Science.gov (United States)

    Ruotsalainen, J H; Sellman, J; Lehto, L; Jauhiainen, M; Verbeek, J H

    2007-10-17

    Poor voice quality due to a voice disorder can lead to a reduced quality of life. In occupations where voice use is substantial it can lead to periods of absence from work. To evaluate the effectiveness of interventions to prevent voice disorders in adults. We searched MEDLINE (PubMed, 1950 to 2006), EMBASE (1974 to 2006), CENTRAL (The Cochrane Library, Issue 2, 2006), CINAHL (1983 to 2006), PsycINFO (1967 to 2006), Science Citation Index (1986 to 2006) and the Occupational Health databases OSH-ROM (to 2006). The date of the last search was 05/04/06. Randomised controlled clinical trials (RCTs) of interventions evaluating the effectiveness of treatments to prevent voice disorders in adults were eligible; for work-directed interventions, interrupted time series and prospective cohort studies were also eligible. Two authors independently extracted data and assessed trial quality. Meta-analysis was performed where appropriate. We identified two randomised controlled trials including a total of 53 participants in intervention groups and 43 controls. One study was conducted with teachers and the other with student teachers. Both trials were of poor quality. Interventions were grouped into 1) direct voice training, 2) indirect voice training and 3) direct and indirect voice training combined. 1) Direct voice training: one study did not find a significant decrease of the Voice Handicap Index for direct voice training compared to no intervention. 2) Indirect voice training: one study did not find a significant decrease of the Voice Handicap Index for indirect voice training when compared to no intervention. 3) Direct and indirect voice training combined: one study did not find a decrease of the Voice Handicap Index for direct and indirect voice training combined when compared to no intervention. The same study did, however, find an improvement in maximum phonation time (Mean Difference -3.18 sec; 95% CI -4.43 to -1.93) for direct and indirect voice training combined when compared to no

  17. The Voice of Anger: Oscillatory EEG Responses to Emotional Prosody.

    Directory of Open Access Journals (Sweden)

    Renata Del Giudice

    Emotionally relevant stimuli, and anger in particular, are, due to their evolutionary relevance, often processed automatically and able to modulate attention independent of conscious access. Here, we tested whether attention allocation is enhanced when auditory stimuli are uttered by an angry voice. We recorded EEG and presented healthy individuals with a passive condition in which unfamiliar names as well as the subject's own name were spoken with both an angry and a neutral prosody. The active condition, instead, required participants to actively count one of the presented (angry) names. Results revealed that in the passive condition the angry prosody elicited only slightly stronger delta synchronization as compared to a neutral voice. In the active condition the attended (angry) target was related to enhanced delta/theta synchronization as well as alpha desynchronization, suggesting enhanced allocation of attention and utilization of working memory resources. Altogether, the current results are in line with previous findings and highlight that attention orientation can be systematically related to specific oscillatory brain responses. Potential applications include assessment of non-communicative clinical groups such as post-comatose patients.

  18. Domestic dogs and puppies can use human voice direction referentially.

    Science.gov (United States)

    Rossano, Federico; Nitzschner, Marie; Tomasello, Michael

    2014-06-22

    Domestic dogs are particularly skilled at using human visual signals to locate hidden food. This is, to our knowledge, the first series of studies that investigates the ability of dogs to use only auditory communicative acts to locate hidden food. In a first study, from behind a barrier, a human expressed excitement towards a baited box on either the right or left side, while sitting closer to the unbaited box. Dogs were successful in following the human's voice direction and locating the food. In the two following control studies, we excluded the possibility that dogs could locate the box containing food just by relying on smell, and we showed that they would interpret a human's voice direction in a referential manner only when they could locate a possible referent (i.e. one of the boxes) in the environment. Finally, in a fourth study, we tested 8-14-week-old puppies in the main experimental test and found that those with a reasonable amount of human experience performed overall even better than the adult dogs. These results suggest that domestic dogs' skills in comprehending human communication are not based on visual cues alone, but are instead multi-modal and highly flexible. Moreover, the similarity between young and adult dogs' performances has important implications for the domestication hypothesis.

  19. Objective Voice Parameters in Colombian School Workers with Healthy Voices

    Directory of Open Access Journals (Sweden)

    Lady Catherine Cantor Cutiva

    2015-09-01

    Objectives: To characterize the objective voice parameters among school workers, and to identify associated factors of three objective voice parameters, namely fundamental frequency, sound pressure level and maximum phonation time. Materials and methods: We conducted a cross-sectional study among 116 Colombian teachers and 20 Colombian non-teachers. After signing the informed consent form, participants filled out a questionnaire. Then, a voice sample was recorded and evaluated perceptually by a speech therapist and by objective voice analysis with Praat software. Short-term environmental measurements of sound level, temperature, humidity, and reverberation time were conducted during visits at the workplaces, such as classrooms and offices. Linear regression analysis was used to determine associations between individual and work-related factors and objective voice parameters. Results: Compared with men, women had higher fundamental frequency (201 Hz for teachers and 209 Hz for non-teachers vs. 120 Hz for teachers and 127 Hz for non-teachers), higher sound pressure level (82 dB vs. 80 dB), and shorter maximum phonation time (around 14 seconds vs. around 16 seconds). Female teachers younger than 50 years of age evidenced a significant tendency to speak with lower fundamental frequency and shorter maximum phonation time compared with female teachers older than 50 years of age. Female teachers had significantly higher fundamental frequency (by 66 Hz), higher sound pressure level (by 2 dB) and shorter maximum phonation time (by 2 seconds) than male teachers. Conclusion: Female teachers younger than 50 years of age had significantly lower F0 and shorter maximum phonation time compared with those older than 50 years of age. The multivariate analysis showed that gender was a much more important determinant of variations in F0, SPL and MPT than age and teaching occupation. Objectively measured temperature also contributed to the changes in SPL among school workers.
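
    The fundamental frequencies reported above (roughly 200 Hz for female vs. 120 Hz for male speakers) can be estimated from a recording by autocorrelation, the principle behind Praat's pitch tracker. A minimal numpy sketch of that principle (simplified illustration only; real analyses add windowing, voicing decisions and octave-error handling):

    ```python
    import numpy as np

    def estimate_f0(signal, sample_rate, fmin=75.0, fmax=500.0):
        """Estimate fundamental frequency (Hz) by picking the lag of the
        strongest autocorrelation peak within the plausible pitch range."""
        signal = signal - np.mean(signal)
        corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
        lag_min = int(sample_rate / fmax)   # shortest period considered
        lag_max = int(sample_rate / fmin)   # longest period considered
        best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
        return sample_rate / best_lag

    # Synthetic 200 Hz tone as a stand-in for a sustained vowel:
    sr = 16000
    t = np.arange(0, 0.1, 1 / sr)
    tone = np.sin(2 * np.pi * 200 * t)
    print(round(estimate_f0(tone, sr)))  # 200
    ```

    Sound pressure level and maximum phonation time, the study's other two parameters, are measured directly (calibrated level meter and a stopwatch on a sustained vowel) rather than computed from the waveform.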

  20. Diminished auditory sensory gating during active auditory verbal hallucinations.

    Science.gov (United States)

    Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia

    2017-10-01

    Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the ratio of the peak amplitude of the event related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream, resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). The results of a mixed model ANOVA revealed an overall effect for AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically-defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
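
    The gating ratio defined above is simple arithmetic on ERP peak amplitudes. A minimal sketch with illustrative values (not the study's data):

    ```python
    def gating_ratio(s1_erp, s2_erp):
        """S2/S1 ratio of ERP peak amplitudes in a paired-click paradigm.

        A higher ratio means weaker suppression of the second click,
        i.e. poorer sensory gating."""
        s1_peak = max(abs(v) for v in s1_erp)
        s2_peak = max(abs(v) for v in s2_erp)
        return s2_peak / s1_peak

    # Illustrative P50 amplitudes (microvolts) for one participant:
    s1 = [0.2, 1.1, 4.0, 2.3, 0.5]   # response to first click
    s2 = [0.1, 0.6, 1.0, 0.8, 0.2]   # response to second click
    print(gating_ratio(s1, s2))  # 0.25 -> strong gating
    ```

    In the study this ratio was computed separately per component (P50, N100, P200) and per state (AVH-on, AVH-off), then compared across states.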

  1. Voice Disorders in Teachers: Clinical, Videolaryngoscopical, and Vocal Aspects.

    Science.gov (United States)

    Pereira, Eny Regina Bóia Neves; Tavares, Elaine Lara Mendes; Martins, Regina Helena Garcia

    2015-09-01

    Dysphonia is more prevalent in teachers than among the general population. The objective of this study was to analyze clinical, vocal, and videolaryngoscopical aspects in dysphonic teachers. Ninety dysphonic teachers were inquired about their voice, comorbidities, and work conditions. They underwent vocal auditory-perceptual evaluation (maximum phonation time and GRBASI scale), acoustic voice analysis, and videolaryngoscopy. The results were compared with a control group consisting of 90 dysphonic nonteachers, of similar gender and age, with professional activities excluding teaching and singing. In both groups, there were 85 women and five men (age range 31-50 years). Among the controls, the majority of subjects worked in domestic activities, whereas the majority of teachers worked in primary (42.8%) and secondary school (37.7%). Teachers and controls reported, respectively: vocal abuse (76.7%; 37.8%), weekly hours of work between 21 and 40 (72.2%; 80%), under 10 years of practice (36%; 23%), absenteeism (23%; 0%), sinonasal (66%; 20%) and gastroesophageal symptoms (44%; 22%), hoarseness (82%; 78%), throat clearing (70%; 62%), and phonatory effort (72%; 52%). In both groups, there were decreased values of maximum phonation time, impairment of the G parameter of the GRBASI scale (82%), a decrease of F0, and an increase of the remaining acoustic parameters. Nodules and laryngopharyngeal reflux were predominant in teachers; laryngopharyngeal reflux, polyps, and sulcus vocalis predominated in the controls. Vocal symptoms, comorbidities, and absenteeism were predominant among teachers. The vocal analyses were similar in both groups. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  2. Work-related voice disorder

    Directory of Open Access Journals (Sweden)

    Paulo Eduardo Przysiezny

    2015-04-01

    INTRODUCTION: Dysphonia is the main symptom of the disorders of oral communication. However, voice disorders also present with other symptoms such as difficulty in maintaining the voice (asthenia), vocal fatigue, variation in habitual vocal fundamental frequency, hoarseness, lack of vocal volume and projection, loss of vocal efficiency, and weakness when speaking. There are several proposals for the etiologic classification of dysphonia: functional, organofunctional, organic, and work-related voice disorder (WRVD). OBJECTIVE: To conduct a literature review on WRVD and on the current Brazilian labor legislation. METHODS: This was a review article with bibliographical research conducted on the PubMed and Bireme databases, using the terms "work-related voice disorder", "occupational dysphonia", "dysphonia and labor legislation", and a review of relevant labor and social security laws. CONCLUSION: WRVD is frequently listed as a reason for work absenteeism, functional rehabilitation, or prolonged absence from work. Currently, forensic physicians have no comparative parameters to help with the analysis of vocal disorders. In certain situations WRVD may cause work disability. This disorder may be labor-related, or be an adjuvant factor to work-related diseases.

  3. FILTWAM and Voice Emotion Recognition

    NARCIS (Netherlands)

    Bahreini, Kiavash; Nadolski, Rob; Westera, Wim

    2014-01-01

    This paper introduces the voice emotion recognition part of our framework for improving learning through webcams and microphones (FILTWAM). This framework enables multimodal emotion recognition of learners during game-based learning. The main goal of this study is to validate the use of microphone

  4. Playful Interaction with Voice Sensing Modular Robots

    DEFF Research Database (Denmark)

    Heesche, Bjarke; MacDonald, Ewen; Fogh, Rune

    2013-01-01

    This paper describes a voice sensor, suitable for modular robotic systems, which estimates the energy and fundamental frequency, F0, of the user’s voice. Through a number of example applications and tests with children, we observe how the voice sensor facilitates playful interaction between children and two different robot configurations. In future work, we will investigate if such a system can motivate children to improve voice control and explore how to extend the sensor to detect emotions in the user’s voice...

  5. Deep transcranial magnetic stimulation for the treatment of auditory hallucinations: a preliminary open-label study.

    Science.gov (United States)

    Rosenberg, Oded; Roth, Yiftach; Kotler, Moshe; Zangen, Abraham; Dannon, Pinhas

    2011-02-09

    Schizophrenia is a chronic and disabling disease that presents with delusions and hallucinations. Auditory hallucinations are usually expressed as voices speaking to or about the patient. Previous studies have examined the effect of repetitive transcranial magnetic stimulation (TMS) over the temporoparietal cortex on auditory hallucinations in schizophrenic patients. Our aim was to explore the potential effect of deep TMS, using the H coil over the same brain region, on auditory hallucinations. Eight schizophrenic patients with refractory auditory hallucinations were recruited, mainly from Beer Ya'akov Mental Health Institution (Tel Aviv University, Israel) ambulatory clinics, as well as from other hospitals' outpatient populations. Low-frequency deep TMS was applied for 10 min (600 pulses per session) to the left temporoparietal cortex for either 10 or 20 sessions. Deep TMS was applied using Brainsway's H1 coil apparatus. Patients were evaluated using the Auditory Hallucinations Rating Scale (AHRS) as well as the Scale for the Assessment of Positive Symptoms scores (SAPS), Clinical Global Impressions (CGI) scale, and the Scale for Assessment of Negative Symptoms (SANS). This preliminary study demonstrated a significant improvement in AHRS score (an average reduction of 31.7% ± 32.2%) and to a lesser extent improvement in SAPS results (an average reduction of 16.5% ± 20.3%). In this study, we have demonstrated the potential of deep TMS treatment over the temporoparietal cortex as an add-on treatment for chronic auditory hallucinations in schizophrenic patients. Larger samples in a double-blind sham-controlled design are now being performed to evaluate the effectiveness of deep TMS treatment for auditory hallucinations. This trial is registered with clinicaltrials.gov (identifier: NCT00564096).

  6. Deep transcranial magnetic stimulation for the treatment of auditory hallucinations: a preliminary open-label study

    Directory of Open Access Journals (Sweden)

    Zangen Abraham

    2011-02-01

    Background: Schizophrenia is a chronic and disabling disease that presents with delusions and hallucinations. Auditory hallucinations are usually expressed as voices speaking to or about the patient. Previous studies have examined the effect of repetitive transcranial magnetic stimulation (TMS) over the temporoparietal cortex on auditory hallucinations in schizophrenic patients. Our aim was to explore the potential effect of deep TMS, using the H coil over the same brain region, on auditory hallucinations. Patients and methods: Eight schizophrenic patients with refractory auditory hallucinations were recruited, mainly from Beer Ya'akov Mental Health Institution (Tel Aviv University, Israel) ambulatory clinics, as well as from other hospitals' outpatient populations. Low-frequency deep TMS was applied for 10 min (600 pulses per session) to the left temporoparietal cortex for either 10 or 20 sessions. Deep TMS was applied using Brainsway's H1 coil apparatus. Patients were evaluated using the Auditory Hallucinations Rating Scale (AHRS) as well as the Scale for the Assessment of Positive Symptoms scores (SAPS), Clinical Global Impressions (CGI) scale, and the Scale for Assessment of Negative Symptoms (SANS). Results: This preliminary study demonstrated a significant improvement in AHRS score (an average reduction of 31.7% ± 32.2%) and to a lesser extent improvement in SAPS results (an average reduction of 16.5% ± 20.3%). Conclusions: In this study, we have demonstrated the potential of deep TMS treatment over the temporoparietal cortex as an add-on treatment for chronic auditory hallucinations in schizophrenic patients. Larger samples in a double-blind sham-controlled design are now being performed to evaluate the effectiveness of deep TMS treatment for auditory hallucinations. Trial registration: This trial is registered with clinicaltrials.gov (identifier: NCT00564096).

  7. Auditory word recognition: extrinsic and intrinsic effects of word frequency.

    Science.gov (United States)

    Connine, C M; Titone, D; Wang, J

    1993-01-01

    Two experiments investigated the influence of word frequency in a phoneme identification task. Speech voicing continua were constructed so that one endpoint was a high-frequency word and the other endpoint was a low-frequency word (e.g., best-pest). Experiment 1 demonstrated that ambiguous tokens were labeled such that a high-frequency word was formed (intrinsic frequency effect). Experiment 2 manipulated the frequency composition of the list (extrinsic frequency effect). A high-frequency list bias produced an exaggerated influence of frequency; a low-frequency list bias showed a reverse frequency effect. Reaction time effects were discussed in terms of activation and postaccess decision models of frequency coding. The results support a late use of frequency in auditory word recognition.

  8. VOICE QUALITY BEFORE AND AFTER THYROIDECTOMY

    Directory of Open Access Journals (Sweden)

    Dora CVELBAR

    2016-04-01

    Introduction: Voice disorders are a well-known complication often associated with thyroid gland diseases, and because voice is still the basic means of communication, it is very important to maintain its quality. Objectives: The aim of this study was to determine whether there is a statistically significant difference between results of voice self-assessment, perceptual voice assessment and acoustic voice analysis before and after thyroidectomy, and whether there are statistically significant correlations between variables of voice self-assessment, perceptual assessment and acoustic analysis before and after thyroidectomy. Methods: This study included 12 participants aged between 41 and 76. Voice self-assessment was conducted with the Croatian version of the Voice Handicap Index (VHI). Recorded reading samples were used for perceptual assessment and were later evaluated by two clinical speech and language therapists. Recorded samples of phonation were used for acoustic analysis, which was conducted with the acoustic program Praat. All of the data were processed through descriptive statistics and nonparametric statistical methods. Results: Results showed that there are statistically significant differences between results of voice self-assessments and results of acoustic analysis before and after thyroidectomy. Statistically significant correlations were found between variables of perceptual assessment and acoustic analysis. Conclusion: The obtained results indicate the importance of multidimensional preoperative and postoperative assessment. This kind of assessment allows the clinician to describe all of the voice features and to provide the patient with an appropriate recommendation for further rehabilitation in order to optimize voice outcomes.

  9. Application of computer voice input/output

    International Nuclear Information System (INIS)

    Ford, W.; Shirk, D.G.

    1981-01-01

    The advent of microprocessors and other large-scale integration (LSI) circuits is making voice input and output for computers and instruments practical; specialized LSI chips for speech processing are appearing on the market. Voice can be used to input data or to issue instrument commands; this allows the operator to engage in other tasks, move about, and use standard data entry systems. Voice synthesizers can generate audible, easily understood instructions. Using voice characteristics, a control system can verify speaker identity for security purposes. Two simple voice-controlled systems have been designed at Los Alamos for nuclear safeguards applications. Each can easily be expanded as time allows. The first system is for instrument control; it accepts voice commands and issues audible operator prompts. The second system is for access control, in which the speaker's voice is used to verify his identity and to actuate external devices.

  10. Precise auditory-vocal mirroring in neurons for learned vocal communication.

    Science.gov (United States)

    Prather, J F; Peters, S; Nowicki, S; Mooney, R

    2008-01-17

    Brain mechanisms for communication must establish a correspondence between sensory and motor codes used to represent the signal. One idea is that this correspondence is established at the level of single neurons that are active when the individual performs a particular gesture or observes a similar gesture performed by another individual. Although neurons that display a precise auditory-vocal correspondence could facilitate vocal communication, they have yet to be identified. Here we report that a certain class of neurons in the swamp sparrow forebrain displays a precise auditory-vocal correspondence. We show that these neurons respond in a temporally precise fashion to auditory presentation of certain note sequences in this songbird's repertoire and to similar note sequences in other birds' songs. These neurons display nearly identical patterns of activity when the bird sings the same sequence, and disrupting auditory feedback does not alter this singing-related activity, indicating it is motor in nature. Furthermore, these neurons innervate striatal structures important for song learning, raising the possibility that singing-related activity in these cells is compared to auditory feedback to guide vocal learning.

  11. Internal versus External Auditory Hallucinations in Schizophrenia: Symptom and Course Correlates

    Science.gov (United States)

    Docherty, Nancy M.; Dinzeo, Thomas J.; McCleery, Amanda; Bell, Emily K.; Shakeel, Mohammed K.; Moe, Aubrey

    2015-01-01

    Introduction: The auditory hallucinations associated with schizophrenia are phenomenologically diverse. “External” hallucinations classically have been considered to reflect more severe psychopathology than “internal” hallucinations, but empirical support has been equivocal. Methods: We examined associations of “internal” vs. “external” hallucinations with (a) other characteristics of the hallucinations, (b) severity of other symptoms, and (c) course of illness variables, in a sample of 97 stable outpatients with schizophrenia or schizoaffective disorder who experienced auditory hallucinations. Results: Patients with internal hallucinations did not differ from those with external hallucinations on severity of other symptoms. However, they reported their hallucinations to be more emotionally negative, distressing, and long-lasting, less controllable, and less likely to remit over time. They also were more likely to experience voices commenting, conversing, or commanding. However, they also were more likely to have insight into the self-generated nature of their voices. Patients with internal hallucinations were not older, but had a later age of illness onset. Conclusions: Differences in characteristics of auditory hallucinations are associated with differences in other characteristics of the disorder, and hence may be relevant to identifying subgroups of patients that are more homogeneous with respect to their underlying disease processes. PMID:25530157

  12. Fault Tolerant Feedback Control

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, H.

    2001-01-01

    An architecture for fault tolerant feedback controllers based on the Youla parameterization is suggested. It is shown that the Youla parameterization will give a residual vector directly in connection with the fault diagnosis part of the fault tolerant feedback controller. It turns out that there is a separation between the feedback controller and the fault tolerant part. The closed loop feedback properties are handled by the nominal feedback controller and the fault tolerant part is handled by the design of the Youla parameter. The design of the fault tolerant part will not affect the design of the nominal feedback controller.
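
    The residual generation summarized above can be sketched in standard coprime-factorization notation (textbook convention, not taken from the paper itself):

    ```latex
    % Left-coprime factorization of the nominal plant over stable systems:
    %   P = \tilde{M}^{-1} \tilde{N}
    % Primary residual available inside the Youla-parameterized controller:
    r = \tilde{M}\, y - \tilde{N}\, u
    % Nominally y = P u, so \tilde{M} y = \tilde{N} u and r = 0; a fault
    % drives r away from zero, so r serves directly for fault diagnosis,
    % while the stable Youla parameter Q reshapes only the fault tolerant
    % part of the loop and leaves the nominal feedback design untouched.
    ```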

  13. The development of the Spanish verb ir into auxiliary of voice

    DEFF Research Database (Denmark)

    Vinther, Thora

    2005-01-01

    Spanish, syntax, grammaticalisation, past participle, passive voice, middle voice, language development

  14. Feedback on Feedback--Does It Work?

    Science.gov (United States)

    Speicher, Oranna; Stollhans, Sascha

    2015-01-01

    It is well documented that providing assessment feedback through the medium of screencasts is favourably received by students and encourages deeper engagement with the feedback given by the language teacher (inter alia Abdous & Yoshimura, 2010; Brick & Holmes, 2008; Cann, 2007; Stannard, 2007). In this short paper we will report the…

  15. Phonomicrosurgery in Vocal Fold Nodules: Quantification of Outcomes in Professional and Non-Professional Voice Users.

    Science.gov (United States)

    Caffier, Philipp P; Salmen, Tatjana; Ermakova, Tatiana; Forbes, Eleanor; Ko, Seo-Rin; Song, Wen; Gross, Manfred; Nawka, Tadeus

    2017-12-01

    There are few data demonstrating the specific extent to which surgical intervention for vocal fold nodules (VFN) improves vocal function in professional (PVU) and non-professional voice users (NVU). The objective of this study was to compare and quantify results after phonomicrosurgery for VFN in these patient groups. In a prospective clinical study, surgery was performed via microlaryngoscopy in 37 female patients with chronic VFN manifestations (38±12 yrs, mean±SD). Pre- and postoperative evaluations of treatment efficacy comprised videolaryngostroboscopy, auditory-perceptual voice assessment, voice range profile (VRP), acoustic-aerodynamic analysis, and voice handicap index (VHI-9i). The dysphonia severity index (DSI) was compared with the vocal extent measure (VEM). PVU (n=24) and NVU (n=13) showed comparable laryngeal findings and levels of suffering (VHI-9i 16±7 vs 17±8), but PVU had a better pretherapeutic vocal range (26.8±7.4 vs 17.7±5.1 semitones, p<0.001) and vocal capacity (VEM 106±18 vs 74±29, p<0.01). Three months postoperatively, all patients had straight vocal fold edges, complete glottal closure, and recovered mucosal wave propagation. The mean VHI-9i score decreased by 8±6 points. DSI increased from 4.0±2.4 to 5.5±2.4, and VEM from 95±27 to 108±23 (p<0.001). Both parameters correlated significantly (rs=0.82). The average vocal range increased by 4.1±5.3 semitones, and the mean speaking pitch lowered by 0.5±1.4 semitones. These results confirm that phonomicrosurgery for VFN is a safe therapy for voice improvement in both PVU and NVU who do not respond to voice therapy alone. Top-level artistic capabilities in PVU were restored, but numeric changes of most vocal parameters were considerably larger in NVU.
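
    For context, the DSI reported above is conventionally computed as a weighted combination of four measurements (Wuyts et al., 2000). The coefficients below are the commonly cited ones, reproduced from general knowledge rather than from this paper, so treat the sketch as illustrative:

    ```python
    def dysphonia_severity_index(mpt_s, f0_high_hz, i_low_db, jitter_pct):
        """Commonly cited DSI regression (coefficients per Wuyts et al., 2000).

        mpt_s      -- maximum phonation time in seconds
        f0_high_hz -- highest attainable fundamental frequency in Hz
        i_low_db   -- softest attainable intensity in dB
        jitter_pct -- jitter in percent

        Roughly +5 corresponds to a normal voice, -5 to severe dysphonia."""
        return (0.13 * mpt_s + 0.0053 * f0_high_hz
                - 0.26 * i_low_db - 1.18 * jitter_pct + 12.4)

    # Example values in the normal range:
    print(round(dysphonia_severity_index(15, 400, 55, 0.5), 2))  # 1.58
    ```

    The VEM used alongside it in this study is instead derived from the area and shape of the voice range profile, which is why the two indices can be correlated yet not interchangeable.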

  16. The role of the medial temporal limbic system in processing emotions in voice and music.

    Science.gov (United States)

    Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier

    2014-12-01

    Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Acoustic markers to differentiate gender in prepubescent children's speaking and singing voice.

    Science.gov (United States)

    Guzman, Marco; Muñoz, Daniel; Vivero, Martin; Marín, Natalia; Ramírez, Mirta; Rivera, María Trinidad; Vidal, Carla; Gerhard, Julia; González, Catalina

    2014-10-01

    This investigation sought to determine whether there is any acoustic variable to objectively differentiate gender in children with normal voices. A total of 30 children, 15 boys and 15 girls, with perceptually normal voices were examined. They were between 7 and 10 years old (mean: 8.1, SD: 0.7 years). Subjects were required to perform the following phonatory tasks: (1) to phonate sustained vowels [a:], [i:], [u:], (2) to read a phonetically balanced text, and (3) to sing a song. Acoustic analysis included long-term average spectrum (LTAS), fundamental frequency (F0), speaking fundamental frequency (SFF), equivalent continuous sound level (Leq), linear predictive coding (LPC) to obtain formant frequencies, perturbation measures, harmonic-to-noise ratio (HNR), and cepstral peak prominence (CPP). Auditory-perceptual analysis was performed by four blinded judges to determine gender. No significant gender-related differences were found for most acoustic variables. Perceptual assessment showed good intra- and inter-rater reliability for gender. Cepstrum for [a:], alpha ratio in text, shimmer for [i:], F3 in [a:], and F3 in [i:] were the parameters that composed the multivariate logistic regression model that best differentiated male and female children's voices. Since perceptual assessment reliably detected gender, it is likely that other acoustic markers (not evaluated in the present study) can differentiate gender more clearly. For example, gender-specific patterns of intonation may be a more accurate feature for differentiating gender in children's voices. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
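
    The multivariate logistic regression mentioned above (cepstrum, alpha ratio, shimmer, and F3 as predictors of gender) can be sketched in miniature. Everything below is illustrative: the two features and all values are synthetic, and the study would have used standard statistical software rather than this hand-rolled gradient descent.

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit P(y=1|x) = sigmoid(w.x + b) by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi                    # gradient of the log-loss w.r.t. z
            for j, xj in enumerate(xi):
                w[j] -= lr * err * xj
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return 1 if the fitted model puts P(y=1|x) at 0.5 or above."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Synthetic "acoustic features" (say, shimmer and alpha ratio) for six voices;
# label 1 = boy, 0 = girl. Purely invented, not the study's measurements.
X = [[0.20, 1.0], [0.30, 1.2], [0.25, 1.1],
     [0.80, 0.2], [0.90, 0.1], [0.85, 0.3]]
y = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(X, y)
```

    The fitted weights play the role of the study's regression coefficients: each one says how strongly an acoustic feature shifts the predicted odds of one gender over the other.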

  18. Haptic Feedback for Enhancing Realism of Walking Simulations

    DEFF Research Database (Denmark)

    Turchet, Luca; Burelli, Paolo; Serafin, Stefania

    2013-01-01

    system. While during the use of the interactive system subjects physically walked, during the use of the non-interactive system the locomotion was simulated while subjects were sitting on a chair. In both configurations subjects were exposed to auditory and audio-visual stimuli presented … with and without the haptic feedback. Results of the experiments provide a clear preference towards the simulations enhanced with haptic feedback, showing that the haptic channel can lead to more realistic experiences in both interactive and non-interactive configurations. The majority of subjects clearly … appreciated the added feedback. However, some subjects found the added feedback disturbing and annoying. This might be due on the one hand to the limits of the haptic simulation and on the other hand to the different individual desire to be involved in the simulations. Our findings can be applied to the context …

  19. Promoting smoke-free homes: a novel behavioral intervention using real-time audio-visual feedback on airborne particle levels.

    Directory of Open Access Journals (Sweden)

    Neil E Klepeis

    Full Text Available Interventions are needed to protect the health of children who live with smokers. We pilot-tested a real-time intervention for promoting behavior change in homes that reduces secondhand tobacco smoke (SHS) levels. The intervention uses a monitor and feedback system to provide immediate auditory and visual signals triggered at defined thresholds of fine particle concentration. Dynamic graphs of real-time particle levels are also shown on a computer screen. We experimentally evaluated the system, field-tested it in homes with smokers, and conducted focus groups to obtain general opinions. Laboratory tests of the monitor demonstrated SHS sensitivity, stability, precision equivalent to at least 1 µg/m³, and low noise. A linear relationship (R² = 0.98) was observed between the monitor and average SHS mass concentrations up to 150 µg/m³. Focus groups and interviews with intervention participants showed in-home use to be acceptable and feasible. The intervention was evaluated in 3 homes with combined baseline and intervention periods lasting 9 to 15 full days. Two families modified their behavior by opening windows or doors, smoking outdoors, or smoking less. We observed evidence of lower SHS levels in these homes. The remaining household voiced reluctance to changing their smoking activity and did not exhibit lower SHS levels in main smoking areas or clear behavior change; however, family members expressed receptivity to smoking outdoors. This study established the feasibility of the real-time intervention, laying the groundwork for controlled trials with larger sample sizes. Visual and auditory cues may prompt family members to take immediate action to reduce SHS levels. Dynamic graphs of SHS levels may help families make decisions about specific mitigation approaches.
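
    The feedback system described above triggers auditory and visual signals at defined particle thresholds. A minimal sketch of that threshold logic follows; the warning and alarm levels here are hypothetical placeholders, not the study's calibration.

```python
def feedback_signal(pm_ugm3, warn=25.0, alarm=50.0):
    """Map one fine-particle reading (µg/m³) to a feedback level.
    The 25 and 50 µg/m³ thresholds are illustrative placeholders."""
    if pm_ugm3 >= alarm:
        return "alarm"      # e.g. continuous tone plus red display
    if pm_ugm3 >= warn:
        return "warning"    # e.g. intermittent beep plus yellow display
    return "ok"             # silent, green display

def run_monitor(readings, warn=25.0, alarm=50.0):
    """Classify a stream of readings. A deployed system would also debounce,
    requiring several consecutive readings before switching levels."""
    return [feedback_signal(r, warn, alarm) for r in readings]
```

    In a live system this classification would drive the audio cue and on-screen graph in real time, which is what lets household members connect a specific smoking event to the spike it causes.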

  20. Group climate in the voice therapy of patients with Parkinson's Disease.

    Science.gov (United States)

    Diaféria, Giovana; Madazio, Glaucya; Pacheco, Claudia; Takaki, Patricia Barbarini; Behlau, Mara

    2017-09-04

    To verify the impact of group dynamics and coaching strategies on the voice, speech and communication of patients with Parkinson's disease (PD), as well as on the group climate. Sixteen individuals with mild to moderate dysarthria due to PD were divided into two groups: the CG (8 patients), submitted to traditional therapy with 12 regular therapy sessions plus 4 additional support sessions; and the EG (8 patients), submitted to traditional therapy with 12 regular therapy sessions plus 4 sessions with group dynamics and coaching strategies. The Living with Dysarthria questionnaire (LwD), the self-evaluation of voice, speech and communication, and the auditory-perceptual analysis of vocal quality were assessed at 3 time points: pre-traditional therapy (pre), post-traditional therapy (post 1), and post support sessions/coaching strategies (post 2); at post 1 and post 2, the Group Climate Questionnaire (GCQ) was also applied. CG and EG showed an improvement in the LwD from pre to post 1 and post 2. Voice self-evaluation was better for the EG - when pre was compared with post 2 and when post 1 was compared with post 2 - ranging from regular to very good; both groups presented improvement in the communication self-evaluation. The auditory-perceptual evaluation of vocal quality was better for the EG at post 1. No difference was found for the GCQ; however, the EG presented lower avoidance scores at post 2. All patients showed improvement in the voice, speech and communication self-evaluation; the EG showed lower avoidance scores, creating a more collaborative and propitious environment for speech therapy.

  1. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback into LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
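
    For context on what the record redesigns: a plain LT code draws a degree for each coded symbol from a degree distribution and XORs that many randomly chosen source packets. The sketch below uses the classical ideal soliton distribution; the paper's feedback-adapted distributions are not reproduced here, and packets are modeled as single integers for brevity.

```python
import random

def ideal_soliton(k):
    """Ideal soliton distribution: P(d=1) = 1/k, P(d) = 1/(d(d-1)) for d >= 2."""
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode_symbol(source, rng):
    """Produce one LT-coded symbol: the XOR of a random subset of source
    packets, with the subset size drawn from the soliton distribution."""
    k = len(source)
    degree = rng.choices(range(1, k + 1), weights=ideal_soliton(k))[0]
    idx = rng.sample(range(k), degree)
    sym = 0
    for i in idx:
        sym ^= source[i]  # real packets would be XORed bytewise
    return idx, sym
```

    A feedback opportunity lets the receiver tell the sender which packets are already decoded, so the sender can reshape the degree distribution over the remaining packets; that reshaping is the core of the paper's contribution.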

  2. Listen, you are writing! Speeding up online spelling with a dynamic auditory BCI

    Directory of Open Access Journals (Sweden)

    Martijn eSchreuder

    2011-10-01

    Full Text Available Representing an intuitive spelling interface for Brain-Computer Interfaces (BCI) in the auditory domain is not straightforward. In consequence, all existing approaches based on event-related potentials (ERP) rely at least partially on a visual representation of the interface. This online study introduces an auditory spelling interface that eliminates the necessity for such a visualization. In up to two sessions, a group of healthy subjects (N=21) was asked to use a text entry application, utilizing the spatial cues of the AMUSE paradigm (Auditory Multiclass Spatial ERP). The speller relies on the auditory sense both for stimulation and the core feedback. Without prior BCI experience, 76% of the participants were able to write a full sentence during the first session. By exploiting the advantages of a newly introduced dynamic stopping method, a maximum writing speed of 1.41 characters/minute (7.55 bits/minute) could be reached during the second session (average: 0.94 char/min, 5.26 bits/min). For the first time, the presented work shows that an auditory BCI can reach performances similar to state-of-the-art visual BCIs based on covert attention. These results represent an important step towards a purely auditory BCI.
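
    The bits/minute figures above are the standard way BCI spelling speed is reported. A common conversion from selection accuracy and pace to bits/minute is the Wolpaw information-transfer-rate formula; whether this exact formula produced the study's numbers is an assumption, since the record does not say.

```python
import math

def bits_per_selection(n_classes, accuracy):
    """Wolpaw information transfer rate per selection, in bits."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        return math.log2(n)  # perfect accuracy: full log2(N) bits
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def bits_per_minute(n_classes, accuracy, selections_per_minute):
    """Scale the per-selection rate by the selection speed."""
    return bits_per_selection(n_classes, accuracy) * selections_per_minute
```

    The formula rewards accuracy sharply: at chance level (accuracy 1/N) it yields zero bits, which is why a slow but accurate auditory speller can still post a respectable bit rate.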

  3. Active auditory experience in infancy promotes brain plasticity in Theta and Gamma oscillations

    Directory of Open Access Journals (Sweden)

    Gabriella Musacchia

    2017-08-01

    Full Text Available Language acquisition in infants is driven by on-going neural plasticity that is acutely sensitive to environmental acoustic cues. Recent studies showed that attention-based experience with non-linguistic, temporally-modulated auditory stimuli sharpens cortical responses. A previous ERP study from this laboratory showed that interactive auditory experience via behavior-based feedback (AEx), over a 6-week period from 4- to 7-months-of-age, confers a processing advantage, compared to passive auditory exposure (PEx) or maturation alone (Naïve Control, NC). Here, we provide a follow-up investigation of the underlying neural oscillatory patterns in these three groups. In AEx infants, Standard stimuli with invariant frequency (STD) elicited greater Theta-band (4–6 Hz) activity in Right Auditory Cortex (RAC), as compared to NC infants, and Deviant stimuli with rapid frequency change (DEV) elicited larger responses in Left Auditory Cortex (LAC). PEx and NC counterparts showed less-mature bilateral patterns. AEx infants also displayed stronger Gamma (33–37 Hz) activity in the LAC during DEV discrimination, compared to NCs, while NC and PEx groups demonstrated bilateral activity in this band, if at all. This suggests that interactive acoustic experience with non-linguistic stimuli can promote a distinct, robust and precise cortical pattern during rapid auditory processing, perhaps reflecting mechanisms that support fine-tuning of early acoustic mapping.

  4. Bihippocampal damage with emotional dysfunction: impaired auditory recognition of fear.

    Science.gov (United States)

    Ghika-Schmid, F; Ghika, J; Vuilleumier, P; Assal, G; Vuadens, P; Scherer, K; Maeder, P; Uske, A; Bogousslavsky, J

    1997-01-01

    A right-handed man developed a sudden transient amnestic syndrome associated with bilateral hemorrhage of the hippocampi, probably due to Urbach-Wiethe disease. In the 3rd month, despite significant hippocampal structural damage on imaging, only a milder degree of retrograde and anterograde amnesia persisted on detailed neuropsychological examination. On systematic testing of recognition of facial and vocal expression of emotion, we found an impairment of the vocal perception of fear, but not of other emotions such as joy, sadness and anger. Such selective impairment of fear perception was not present in the recognition of facial expression of emotion. Thus emotional perception varies according to the different aspects of emotions and the different modality of presentation (faces versus voices). This is consistent with the idea that there may be multiple emotion systems. The study of emotional perception in this unique case of bilateral involvement of the hippocampus suggests that this structure may play a critical role in the recognition of fear in vocal expression, possibly dissociated from that of other emotions and from that of fear in facial expression. In view of recent data suggesting that the amygdala plays a role in the recognition of fear in the auditory as well as the visual modality, this could suggest that the hippocampus may be part of the auditory pathway of fear recognition.

  5. A Positive Generation Effect on Memory for Auditory Context.

    Science.gov (United States)

    Overman, Amy A; Richard, Alison G; Stephens, Joseph D W

    2017-06-01

    Self-generation of information during memory encoding has large positive effects on subsequent memory for items, but mixed effects on memory for contextual information associated with items. A processing account of generation effects on context memory (Mulligan in Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(4), 838-855, 2004; Mulligan, Lozito, & Rosner in Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(4), 836-846, 2006) proposes that these effects depend on whether the generation task causes any shift in processing of the type of context features for which memory is being tested. Mulligan and colleagues have used this account to predict various negative effects of generation on context memory, but the account also predicts positive generation effects under certain circumstances. The present experiment provided a critical test of the processing account by examining how generation affected memory for auditory rather than visual context. Based on the processing account, we predicted that generation of rhyme words should enhance processing of auditory information associated with the words (i.e., voice gender), whereas generation of antonym words should have no effect. These predictions were confirmed, providing support to the processing account.

  6. The Mythology of Feedback

    Science.gov (United States)

    Adcroft, Andy

    2011-01-01

    Much of the general education and discipline-specific literature on feedback suggests that it is a central and important element of student learning. This paper examines feedback from a social process perspective and suggests that feedback is best understood through an analysis of the interactions between academics and students. The paper argues…

  7. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena that can be directly caused by acute stroke. They are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study was to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed in the period between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after cortical stroke. All of the hallucinations occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  8. A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.

    Science.gov (United States)

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2018-06-01

    The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception. It is of interest to further explore this. To obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, Tinnitus AND Prediction in Article Title, Abstract, and Keywords were extensively searched on four databases: PubMed, Scopus, SpringerLink, and PsychINFO. All study types were selected from 2000-2016 (end of 2016) and had the following exclusion criteria applied: minimum age of participants; article not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were short-listed based on title relevance. After reading the abstracts and with consensus made between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model based on the Bayesian brain hypothesis, attentional modulation and top-down feedback serves as the fundamental framework in the current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as to sequential processing and identification of auditory objects during auditory streaming. Although deviant responses are observable from middle latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomena. Cortical oscillatory band activity may act as

  9. Pre-Attentive Auditory Processing of Lexicality

    Science.gov (United States)

    Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan

    2004-01-01

    The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…

  10. Feature Assignment in Perception of Auditory Figure

    Science.gov (United States)

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  11. Voice congruency facilitates word recognition.

    Directory of Open Access Journals (Sweden)

    Sandra Campeanu

    Full Text Available Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words. Namely, the same speaker condition provided the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same speaker condition compared to the different speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.

  12. Voice congruency facilitates word recognition.

    Science.gov (United States)

    Campeanu, Sandra; Craik, Fergus I M; Alain, Claude

    2013-01-01

    Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words. Namely, the same speaker condition provided the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same speaker condition compared to the different speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.

  13. [Assessment of voice acoustic parameters in female teachers with diagnosed occupational voice disorders].

    Science.gov (United States)

    Niebudek-Bogusz, Ewa; Fiszer, Marta; Sliwińska-Kowalska, Mariola

    2005-01-01

    Laryngovideostroboscopy is the method most frequently used in the assessment of voice disorders. However, the employment of quantitative methods, such as voice acoustic analysis, is essential for evaluating the effectiveness of prophylactic and therapeutic activities as well as for objective medical certification of larynx pathologies. The aim of this study was to examine voice acoustic parameters in female teachers with occupational voice diseases. Acoustic analysis (IRIS software) was performed in 66 female teachers, including 35 teachers with occupational voice diseases and 31 with functional dysphonia. The teachers with occupational voice diseases presented a lower average fundamental frequency (193 Hz) than the group with functional dysphonia (209 Hz) and the normative value (236 Hz), whereas other acoustic parameters did not differ significantly between the groups. Voice acoustic analysis, when applied without vocal loading, cannot be used as a testing method to verify the diagnosis of occupational voice disorders.
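
    The fundamental-frequency averages above (193 vs. 209 vs. 236 Hz) come from acoustic analysis software. As a toy illustration of one classic way F0 is estimated (an autocorrelation peak pick; the IRIS software's actual algorithm is not described in the record), with a synthetic tone standing in for a voice sample:

```python
import math

def estimate_f0(samples, sr, fmin=75.0, fmax=500.0):
    """Estimate F0 by picking the autocorrelation peak within the
    plausible pitch-period range [sr/fmax, sr/fmin]."""
    lag_min = int(sr / fmax)           # shortest period considered
    lag_max = int(sr / fmin)           # longest period considered
    best_lag, best_val = 0, 0.0
    for lag in range(lag_min, min(lag_max, len(samples) - 1) + 1):
        acf = sum(samples[i] * samples[i + lag]
                  for i in range(len(samples) - lag))
        if acf > best_val:
            best_val, best_lag = acf, lag
    return sr / best_lag if best_lag else 0.0

# A synthetic 200 Hz tone sampled at 8 kHz (0.1 s), standing in for a voice.
sr = 8000
tone = [math.sin(2 * math.pi * 200.0 * t / sr) for t in range(800)]
```

    Production tools refine this idea with windowing, normalization, and parabolic interpolation around the peak, but the pitch-period search is the same underlying principle.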

  14. Auditory cortical function during verbal episodic memory encoding in Alzheimer's disease.

    Science.gov (United States)

    Dhanjal, Novraj S; Warren, Jane E; Patel, Maneesh C; Wise, Richard J S

    2013-02-01

    Episodic memory encoding of a verbal message depends upon initial registration, which requires sustained auditory attention followed by deep semantic processing of the message. Motivated by previous data demonstrating modulation of auditory cortical activity during sustained attention to auditory stimuli, we investigated the response of the human auditory cortex during encoding of sentences to episodic memory. Subsequently, we investigated this response in patients with mild cognitive impairment (MCI) and probable Alzheimer's disease (pAD). Using functional magnetic resonance imaging, 31 healthy participants were studied. The response in 18 MCI and 18 pAD patients was then determined, and compared to 18 matched healthy controls. Subjects heard factual sentences, and subsequent retrieval performance indicated successful registration and episodic encoding. The healthy subjects demonstrated that suppression of auditory cortical responses was related to greater success in encoding heard sentences; and that this was also associated with greater activity in the semantic system. In contrast, there was reduced auditory cortical suppression in patients with MCI, and absence of suppression in pAD. Administration of a central cholinesterase inhibitor (ChI) partially restored the suppression in patients with pAD, and this was associated with an improvement in verbal memory. Verbal episodic memory impairment in AD is associated with altered auditory cortical function, reversible with a ChI. Although these results may indicate the direct influence of pathology in auditory cortex, they are also likely to indicate a partially reversible impairment of feedback from neocortical systems responsible for sustained attention and semantic processing. Copyright © 2012 American Neurological Association.

  15. Sex differences in the representation of call stimuli in a songbird secondary auditory area.

    Science.gov (United States)

    Giret, Nicolas; Menardy, Fabien; Del Negro, Catherine

    2015-01-01

    Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered as playing a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differ between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. 
They suggest a sexual dimorphism in the function of the CLM, contributing to transmit information about the self-generated calls in males and to storage of information about the

  16. Sex differences in the representation of call stimuli in a songbird secondary auditory area

    Directory of Open Access Journals (Sweden)

    Nicolas eGiret

    2015-10-01

    Full Text Available Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered as playing a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differ between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird’s own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmit information about the self-generated calls in males and to storage of

  17. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimuli. Correspondingly, differences between responses to auditory stimuli alone and combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields after approximately 110 ms after stimuli. These data indicate that visual influences are most salient in fields P and DCB, which manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Integrating cues of social interest and voice pitch in men's preferences for women's voices

    OpenAIRE

    Jones, Benedict C; Feinberg, David R; DeBruine, Lisa M; Little, Anthony C; Vukovic, Jovana

    2008-01-01

    Most previous studies of vocal attractiveness have focused on preferences for physical characteristics of voices such as pitch. Here we examine the content of vocalizations in interaction with such physical traits, finding that vocal cues of social interest modulate the strength of men's preferences for raised pitch in women's voices. Men showed stronger preferences for raised pitch when judging the voices of women who appeared interested in the listener than when judging the voices of women ...

  19. Contribution of auditory working memory to speech understanding in mandarin-speaking cochlear implant users.

    Science.gov (United States)

    Tao, Duoduo; Deng, Rui; Jiang, Ye; Galvin, John J; Fu, Qian-Jie; Chen, Bing

    2014-01-01

    To investigate how auditory working memory relates to speech perception performance by Mandarin-speaking cochlear implant (CI) users. Auditory working memory and speech perception were measured in Mandarin-speaking CI and normal-hearing (NH) participants. Working memory capacity was measured using forward digit span and backward digit span; working memory efficiency was measured using articulation rate. Speech perception was assessed with: (a) word-in-sentence recognition in quiet, (b) word-in-sentence recognition in speech-shaped steady noise at +5 dB signal-to-noise ratio, (c) Chinese disyllable recognition in quiet, and (d) Chinese lexical tone recognition in quiet. Self-reported school rank regarding performance in schoolwork was also collected. There was large inter-subject variability in auditory working memory and speech performance for CI participants. Working memory and speech performance were significantly poorer for CI than for NH participants. All three working memory measures were strongly correlated with each other for both CI and NH participants. Partial correlation analyses were performed on the CI data while controlling for demographic variables. Working memory efficiency was significantly correlated only with sentence recognition in quiet when working memory capacity was partialled out. Working memory capacity was correlated with disyllable recognition and school rank when efficiency was partialled out. There was no correlation between working memory and lexical tone recognition in the present CI participants. Mandarin-speaking CI users experience significant deficits in auditory working memory and speech performance compared with NH listeners. The present data suggest that auditory working memory may contribute to CI users' difficulties in speech understanding. The present pattern of results with Mandarin-speaking CI users is consistent with previous auditory working memory studies with English-speaking CI users, suggesting that the lexical importance
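
    The partial correlation analyses described above can be sketched numerically. The following is a minimal illustration with synthetic data; the variable names and values are invented and are not the study's:

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order partial correlation of x and y controlling for z:
    r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))."""
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

rng = np.random.default_rng(0)
capacity = rng.normal(6, 1.5, 30)              # e.g., digit-span scores
efficiency = capacity + rng.normal(0, 1, 30)   # correlated second measure
sentence_score = 0.8 * capacity + rng.normal(0, 1, 30)

# Association of capacity with the speech score after partialling out efficiency
print(round(partial_corr(capacity, sentence_score, efficiency), 3))
```

    A dedicated statistics package would normally be used for significance testing; the closed-form expression above suffices for the first-order case.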

  20. Auditory-Perceptual and Acoustic Methods in Measuring Dysphonia Severity of Korean Speech.

    Science.gov (United States)

    Maryn, Youri; Kim, Hyung-Tae; Kim, Jaeock

    2016-09-01

    The purpose of this study was to explore the criterion-related concurrent validity of two standardized auditory-perceptual rating protocols and the Acoustic Voice Quality Index (AVQI) for measuring dysphonia severity in Korean speech. Sixty native Korean subjects with various voice disorders were asked to sustain the vowel [a:] and to read aloud the Korean text "Walk." A 3-second midvowel portion of the sustained vowel and two sentences (with 25 syllables) were edited, concatenated, and analyzed according to methods described elsewhere. For 56 participants, both continuous speech and sustained vowel recordings had sufficiently high signal-to-noise ratios (35.5 dB and 37 dB on average, respectively) and were therefore subjected to further dysphonia severity analysis with (1) "G" or Grade from the GRBAS protocol, (2) "OS" or Overall Severity from the Consensus Auditory-Perceptual Evaluation of Voice protocol, and (3) AVQI. First, high correlations were found between G and OS (rS = 0.955 for sustained vowels; rS = 0.965 for continuous speech). Second, the AVQI showed a strong correlation with G (rS = 0.911) as well as OS (rP = 0.924). These findings are in agreement with similar studies dealing with continuous speech in other languages. The present study highlights the criterion-related concurrent validity of these methods in Korean speech. Furthermore, it supports the cross-linguistic robustness of the AVQI as a valid and objective marker of overall dysphonia severity. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
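
    The rank and product-moment correlations reported above (rS, rP) are standard Spearman and Pearson coefficients. A minimal sketch with invented ratings (not the study's data), assuming scipy is available:

```python
from scipy.stats import pearsonr, spearmanr

# Illustrative severity ratings for ten voices (synthetic values)
grade = [0, 1, 1, 2, 2, 3, 0, 1, 2, 3]                      # GRBAS "G", 0-3
overall_severity = [5, 20, 25, 50, 55, 80, 10, 30, 60, 90]  # CAPE-V "OS", 0-100
avqi = [1.2, 3.0, 3.4, 5.1, 5.6, 7.9, 1.8, 3.9, 6.2, 8.5]   # AVQI scores

r_s, p_s = spearmanr(grade, overall_severity)  # ordinal vs. continuous
r_p, p_p = pearsonr(avqi, overall_severity)    # continuous vs. continuous

print(f"Spearman rS = {r_s:.3f}, Pearson rP = {r_p:.3f}")
```

    Spearman is the natural choice for the ordinal GRBAS scale, Pearson for the continuous AVQI scores, which matches the mixed rS/rP reporting in the abstract.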

  1. Voice disorders in the workplace: productivity in spasmodic dysphonia and the impact of botulinum toxin.

    Science.gov (United States)

    Meyer, Tanya K; Hu, Amanda; Hillel, Allen D

    2013-11-01

    The impact of the disordered voice on standard work productivity measures and employment trends is difficult to quantify; this is in large part due to the heterogeneity of the disease processes. Spasmodic dysphonia (SD), a chronic voice disorder, may be a useful model to study this impact. Self-reported work measures (work missed, work impairment, overall work productivity, and activity impairment) were studied among patients receiving botulinum toxin (BTX) treatments for SD. It was hypothesized that there would be a substantial difference in work-related measures between the best and worst voicing periods. In addition, job types, employment shifts, and vocal requirements during the course of vocal disability from SD were investigated for each individual, and the impact of SD on these patterns was studied. A total of 145 patients with SD, either adductor or abductor, who were established on routine therapeutic BTX injections agreed to participate in a self-administered questionnaire study. Seventy-two participants were currently working and provided highly detailed information on work-related measures. Their answers characterized the effect of SD on their employment status, productivity at work, activity impairment outside of work, employment retention or change, and whether the individual perceived that BTX therapy affected these measures. Patients were asked to complete the Work Productivity and Activity Impairment (WPAI) instrument to determine these measures for their best and worst voicing weeks over the duration since their previous BTX injection. Voice-specific quality-of-life instruments (Voice Handicap Index-10) and perceptual assessments (Consensus Auditory Perceptual Evaluation of Voice) were collected to provide correlations of work measures with patient-perceived voice handicap and clinician-perceived voice quality. Cross-sectional analysis using a self-administered questionnaire. A total of 108 patients reported ever working during their diagnosis and

  2. Voice Onset Time in Azerbaijani Consonants

    Directory of Open Access Journals (Sweden)

    Ali Jahan

    2009-10-01

    Full Text Available Objective: Voice onset time is known to be a cue for the distinction between voiced and voiceless stops, and it can be used to describe or categorize a range of developmental, neuromotor and linguistic disorders. The aim of this study is the determination of standard values of voice onset time for the Azerbaijani language (Tabriz dialect). Materials & Methods: In this descriptive-analytical study, 30 Azeri speakers, selected by convenience sampling, twice uttered 46 monosyllabic words beginning with the 6 Azerbaijani stops. Using Praat software, the voice onset time values were measured in milliseconds from the waveform and wideband spectrogram. The vowel effect, sex differences and the effect of place of articulation on VOT were evaluated, and data were analyzed by one-way ANOVA test. Results: There was no significant difference in voice onset time between male and female Azeri speakers (P>0.05). Vowel and place of articulation had a significant correlation with voice onset time (P<0.001). Voice onset time values for /b/, /p/, /d/, /t/, /g/, /k/, and the [c], [ɟ] allophones were 10.64, 86.88, 13.35, 87.09, 26.25, 100.62, 131.19, and 63.18 milliseconds, respectively. Conclusion: Voice onset time values are the same for Azerbaijani men and women. However, as in many other languages, back and high vowels and a back place of articulation lengthen VOT. Also, voiceless stops are aspirated in this language and voiced stops have positive VOT values.
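
    The place-of-articulation effect described above was tested with a one-way ANOVA; the analysis can be sketched with scipy as below. The VOT samples are invented, loosely centered on the voiceless-stop means reported in the abstract:

```python
from scipy.stats import f_oneway

# Hypothetical voiceless-stop VOT samples in milliseconds, grouped by place
vot_p = [82, 90, 85, 88, 91, 84]      # bilabial /p/
vot_t = [85, 89, 92, 86, 90, 88]      # alveolar /t/
vot_k = [98, 104, 101, 99, 103, 100]  # velar /k/

# One-way ANOVA across the three places of articulation
f_stat, p_value = f_oneway(vot_p, vot_t, vot_k)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

    A significant result here would motivate post-hoc pairwise comparisons to locate which places differ, as is standard after an omnibus ANOVA.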

  3. Singing Voice Analysis, Synthesis, and Modeling

    Science.gov (United States)

    Kim, Youngmoo E.

    The singing voice is the oldest musical instrument, but its versatility and emotional power are unmatched. Through the combination of music, lyrics, and expression, the voice is able to affect us in ways that no other instrument can. The fact that vocal music is prevalent in almost all cultures is indicative of its innate appeal to the human aesthetic. Singing also permeates most genres of music, attesting to the wide range of sounds the human voice is capable of producing. As listeners we are naturally drawn to the sound of the human voice, and, when present, it immediately becomes the focus of our attention.

  4. Familiarity and Voice Representation: From Acoustic-Based Representation to Voice Averages

    Directory of Open Access Journals (Sweden)

    Maureen Fontaine

    2017-07-01

    Full Text Available The ability to recognize an individual from their voice is a widespread ability with a long evolutionary history. Yet, the perceptual representation of familiar voices is ill-defined. In two experiments, we explored the neuropsychological processes involved in the perception of voice identity. We specifically explored the hypothesis that familiar voices (trained-to-familiar voices in Experiment 1, and famous voices in Experiment 2) are represented as a whole complex pattern, well approximated by the average of multiple utterances produced by a single speaker. In Experiment 1, participants learned three voices over several sessions, and performed a three-alternative forced-choice identification task on original voice samples and several “speaker averages,” created by morphing across varying numbers of different vowels (e.g., [a] and [i]) produced by the same speaker. In Experiment 2, the same participants performed the same task on voice samples produced by famous speakers. The two experiments showed that for famous voices, but not for trained-to-familiar voices, identification performance increased and response times decreased as a function of the number of utterances in the averages. This study sheds light on the perceptual representation of familiar voices, and demonstrates the power of averaging in recognizing familiar voices. The speaker average captures the unique characteristics of a speaker, and thus retains the information essential for recognition; it acts as a prototype of the speaker.
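
    The "speaker average" idea, representing a voice by the mean of feature vectors from several utterances and recognizing new samples by their nearest prototype, can be sketched as follows. The feature vectors here are synthetic stand-ins; a real system would use acoustic features extracted from recordings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three "speakers", each a cluster of 5-dimensional utterance feature vectors
centers = rng.normal(0.0, 3.0, size=(3, 5))
utterances = {s: centers[s] + rng.normal(0.0, 0.5, size=(8, 5)) for s in range(3)}

# Speaker average: the prototype is the mean of that speaker's utterances
prototypes = {s: u.mean(axis=0) for s, u in utterances.items()}

def identify(sample):
    """Return the speaker whose average (prototype) lies nearest to the sample."""
    return min(prototypes, key=lambda s: np.linalg.norm(sample - prototypes[s]))

# A fresh utterance drawn near speaker 2's center
probe = centers[2] + rng.normal(0.0, 0.5, size=5)
print(identify(probe))
```

    Averaging cancels utterance-to-utterance variability while preserving what is stable for a speaker, which is the intuition behind the prototype account above.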

  5. "Voice Forum" The Human Voice as Primary Instrument in Music Therapy

    DEFF Research Database (Denmark)

    Pedersen, Inge Nygaard; Storm, Sanne

    2009-01-01

    Aspects will be drawn on the human voice as a tool for embodying our psychological and physiological state, and for attempting the integration of feelings. Presentations and dialogues on different methods and techniques in "therapy-related body- and voice work," as well as the human voice as a tool for non

  6. Am I ready for it? Students’ perceptions of meaningful feedback on entrustable professional activities

    NARCIS (Netherlands)

    Duijn, Chantal C. M. A.; Welink, Lisanne S; Mandoki, Mira; Ten Cate, Olle Th J; Kremer, Wim D. J.; Bok, Harold G. J.

    2017-01-01

    Background Receiving feedback while in the clinical workplace is probably the most frequently voiced desire of students. In clinical learning environments, providing and seeking performance-relevant information is often difficult for both supervisors and students. The use of entrustable professional

  7. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times for auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  8. Auditory evoked potentials to abrupt pitch and timbre change of complex tones: electrophysiological evidence of 'streaming'?

    Science.gov (United States)

    Jones, S J; Longe, O; Vaz Pato, M

    1998-03-01

    Examination of the cortical auditory evoked potentials to complex tones changing in pitch and timbre suggests a useful new method for investigating higher auditory processes, in particular those concerned with 'streaming' and auditory object formation. The main conclusions were: (i) the N1 evoked by a sudden change in pitch or timbre was more posteriorly distributed than the N1 at the onset of the tone, indicating at least partial segregation of the neuronal populations responsive to sound onset and spectral change; (ii) the T-complex was consistently larger over the right hemisphere, consistent with clinical and PET evidence for particular involvement of the right temporal lobe in the processing of timbral and musical material; (iii) responses to timbral change were relatively unaffected by increasing the rate of interspersed changes in pitch, suggesting a mechanism for detecting the onset of a new voice in a constantly modulated sound stream; (iv) responses to onset, offset and pitch change of complex tones were relatively unaffected by interfering tones when the latter were of a different timbre, suggesting these responses must be generated subsequent to auditory stream segregation.

  9. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    Directory of Open Access Journals (Sweden)

    Alexandre Lehmann

    Full Text Available Selective attention is the mechanism that allows one to focus on a particular stimulus while filtering out a range of other stimuli, for instance a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm have indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were discriminable either by frequency content alone or by both frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore, they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.

  10. Molecular approach of auditory neuropathy.

    Science.gov (United States)

    Silva, Magali Aparecida Orate Menezes da; Piatto, Vânia Belintani; Maniglia, Jose Victor

    2015-01-01

    Mutations in the otoferlin gene are responsible for auditory neuropathy. To investigate the prevalence of mutations in the otoferlin gene in patients with and without auditory neuropathy. This original cross-sectional case study evaluated 16 index cases with auditory neuropathy, 13 patients with sensorineural hearing loss, and 20 normal-hearing subjects. DNA was extracted from peripheral blood leukocytes, and the otoferlin gene mutation sites were amplified by polymerase chain reaction/restriction fragment length polymorphism. The 16 index cases included nine (56%) females and seven (44%) males. The 13 deaf patients comprised seven (54%) males and six (46%) females. Among the 20 normal-hearing subjects, 13 (65%) were males and seven (35%) were females. Thirteen (81%) index cases had the wild-type genotype (AA) and three (19%) had the heterozygous AG genotype for the IVS8-2A-G (intron 8) mutation. The 5473C-G (exon 44) mutation was found in a heterozygous state (CG) in seven (44%) index cases, and nine (56%) had the wild-type allele (CC). Of these mutation carriers, two (25%) were compound heterozygotes for the mutations found in intron 8 and exon 44. None of the patients with sensorineural hearing loss or the normal-hearing individuals had mutations (100%). There are differences at the molecular level in patients with and without auditory neuropathy. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  11. Dynamics of auditory working memory

    Directory of Open Access Journals (Sweden)

    Jochen eKaiser

    2015-05-01

    Full Text Available Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow investigating the short-term maintenance of acoustic stimuli at a high temporal resolution. Studies investigating working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions.

  12. Short-term plasticity in auditory cognition.

    Science.gov (United States)

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  13. Follower-Centered Perspective on Feedback: Effects of Feedback Seeking on Identification and Feedback Environment

    OpenAIRE

    Gong, Zhenxing; Li, Miaomiao; Qi, Yaoyuan; Zhang, Na

    2017-01-01

    In the formation mechanism of the feedback environment, the existing research pays attention to external feedback sources and regards individuals as objects passively accepting feedback. Thus, the external source fails to realize the individuals’ need for feedback, and the feedback environment cannot provide them with useful information, leading to a feedback vacuum. The aim of this study is to examine the effect of feedback-seeking by different strategies on the supervisor-feedback environme...

  14. Clinical voice analysis of Carnatic singers.

    Science.gov (United States)

    Arunachalam, Ravikumar; Boominathan, Prakash; Mahalingam, Shenbagavalli

    2014-01-01

    Carnatic singing is a classical South Indian style of music that involves rigorous training to produce an "open-throated," loud, predominantly low-pitched singing, embedded with vocal nuances in higher pitches. Voice problems in singers are not uncommon. The objective was to report the nature of voice problems and apply a routine protocol to assess the voice. Forty-five trained performing singers (females: 36 and males: 9) who reported to a tertiary care hospital with voice problems underwent voice assessment. The study analyzed their problems and the clinical findings. Voice change, difficulty in singing higher pitches, and voice fatigue were the major complaints. Most of the singers suffered laryngopharyngeal reflux that coexisted with muscle tension dysphonia and chronic laryngitis. Speaking voices were rated predominantly as "moderate deviation" on GRBAS (Grade, Rough, Breathy, Asthenia, and Strain). Maximum phonation time ranged from 4 to 29 seconds (females: 10.2, standard deviation [SD]: 5.28 and males: 15.7, SD: 5.79). Singing frequency range was reduced (females: 21.3 semitones and males: 23.99 semitones). Dysphonia severity index (DSI) scores ranged from -3.5 to 4.91 (females: 0.075 and males: 0.64). Singing frequency range and DSI did not show a significant difference between sexes or across clinical diagnoses. Self-perception using the Voice Disorder Outcome Profile revealed an overall severity score of 5.1 (SD: 2.7). Findings are discussed from a clinical intervention perspective. The study highlighted the nature of voice problems (hyperfunctional) and the modifications required in the assessment protocol for Carnatic singers. The need for regular assessments and vocal hygiene education to maintain good vocal health is emphasized. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
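
    The Dysphonia Severity Index values quoted above combine four voice measures in a weighted sum. A sketch using the DSI formula as commonly cited from Wuyts et al. (2000) is below; the input values are invented for illustration:

```python
def dsi(mpt_s, f0_high_hz, i_low_db, jitter_pct):
    """Dysphonia Severity Index (as commonly formulated by Wuyts et al., 2000):
    a weighted sum of maximum phonation time (s), highest attainable F0 (Hz),
    lowest attainable intensity (dB) and jitter (%). Scores run roughly from
    +5 for normal voices down to -5 for severe dysphonia."""
    return (0.13 * mpt_s + 0.0053 * f0_high_hz
            - 0.26 * i_low_db - 1.18 * jitter_pct + 12.4)

# Illustrative values for a mildly dysphonic voice (invented)
print(round(dsi(mpt_s=12.0, f0_high_hz=440.0, i_low_db=55.0, jitter_pct=1.2), 3))
```

    The weighting explains why DSI can fall below zero for singers with reduced phonation time or elevated jitter even when other measures are unremarkable.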

  15. The right hemisphere supports but does not replace left hemisphere auditory function in patients with persisting aphasia.

    Science.gov (United States)

    Teki, Sundeep; Barnes, Gareth R; Penny, William D; Iverson, Paul; Woodhead, Zoe V J; Griffiths, Timothy D; Leff, Alexander P

    2013-06-01

    In this study, we used magnetoencephalography and a mismatch paradigm to investigate speech processing in stroke patients with auditory comprehension deficits and age-matched control subjects. We probed connectivity within and between the two temporal lobes in response to phonemic (different word) and acoustic (same word) oddballs using dynamic causal modelling. We found stronger modulation of self-connections as a function of phonemic differences for control subjects versus aphasics in left primary auditory cortex and bilateral superior temporal gyrus. The patients showed stronger modulation of connections from right primary auditory cortex to right superior temporal gyrus (feed-forward) and from left primary auditory cortex to right primary auditory cortex (interhemispheric). This differential connectivity can be explained on the basis of a predictive coding theory which suggests increased prediction error and decreased sensitivity to phonemic boundaries in the aphasics' speech network in both hemispheres. Within the aphasics, we also found behavioural correlates with connection strengths: a negative correlation between phonemic perception and an inter-hemispheric connection (left superior temporal gyrus to right superior temporal gyrus), and positive correlation between semantic performance and a feedback connection (right superior temporal gyrus to right primary auditory cortex). Our results suggest that aphasics with impaired speech comprehension have less veridical speech representations in both temporal lobes, and rely more on the right hemisphere auditory regions, particularly right superior temporal gyrus, for processing speech. Despite this presumed compensatory shift in network connectivity, the patients remain significantly impaired.

  16. Associations between the Transsexual Voice Questionnaire (TVQMtF) and self-report of voice femininity and acoustic voice measures.

    Science.gov (United States)

    Dacakis, Georgia; Oates, Jennifer; Douglas, Jacinta

    2017-11-01

    The Transsexual Voice Questionnaire (TVQMtF) was designed to capture the voice-related perceptions of individuals whose gender identity as female is the opposite of their birth-assigned gender (MtF women). Evaluation of the psychometric properties of the TVQMtF is ongoing. To investigate associations between TVQMtF scores and (1) self-perceptions of voice femininity and (2) acoustic parameters of voice pitch and voice quality, in order to evaluate further the validity of the TVQMtF. A strong correlation between TVQMtF scores and self-ratings of voice femininity was predicted, but no association between TVQMtF scores and acoustic measures of voice pitch and quality was expected. Participants were 148 MtF women (mean age 48.14 years) recruited from the La Trobe Communication Clinic and the clinics of three doctors specializing in transgender health. All participants completed the TVQMtF, and 34 of these participants also provided a voice sample for acoustic analysis. Pearson product-moment correlation analysis was conducted to examine the associations between TVQMtF scores and (1) self-perceptions of voice femininity and (2) acoustic measures of F0, jitter (%), shimmer (dB) and harmonics-to-noise ratio (HNR). Strong negative correlations between the participants' perceptions of their voice femininity and the TVQMtF scores demonstrated that, for this group of MtF women, a low self-rating of voice femininity was associated with more frequent negative voice-related experiences. This association was strongest with the vocal-functioning component of the TVQMtF. These strong correlations and high levels of shared variance between the TVQMtF and a measure of a related construct provide evidence for the convergent validity of the TVQMtF. The absence of significant correlations between the TVQMtF and the acoustic data is consistent with the equivocal findings of earlier research. This finding indicates that these two measures assess different aspects of the voice
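
    The perturbation measures named above have simple cycle-to-cycle definitions. A sketch of local jitter and shimmer computed Praat-style from glottal period and peak-amplitude sequences is below (synthetic values; F0 and HNR estimation require pitch and spectral analysis and are omitted):

```python
import numpy as np

def local_jitter_pct(periods_ms):
    """Local jitter (%): mean absolute difference between consecutive
    glottal periods, divided by the mean period."""
    p = np.asarray(periods_ms, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(p))) / p.mean()

def shimmer_db(amplitudes):
    """Local shimmer (dB): mean absolute 20*log10 ratio of consecutive
    cycle peak amplitudes."""
    a = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(20.0 * np.log10(a[1:] / a[:-1])))

periods = [5.0, 5.1, 4.95, 5.05, 5.0, 4.9]       # ~200 Hz voice, periods in ms
amplitudes = [0.80, 0.82, 0.79, 0.81, 0.80, 0.78]

print(f"jitter  = {local_jitter_pct(periods):.2f} %")
print(f"shimmer = {shimmer_db(amplitudes):.2f} dB")
```

    In practice these measures are usually obtained directly from analysis software such as Praat, which also handles period detection; the functions above only illustrate the definitions.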

  17. Sound induced activity in voice sensitive cortex predicts voice memory ability

    Directory of Open Access Journals (Sweden)

    Rebecca eWatson

    2012-04-01

    Full Text Available The ‘temporal voice areas’ (TVAs; Belin et al., 2000) of the human brain show greater neuronal activity in response to human voices than to other categories of nonvocal sounds. However, a direct link between TVA activity and voice perception behaviour has not yet been established. Here we show that a functional magnetic resonance imaging (fMRI) measure of activity in the TVAs predicts individual performance at a separately administered voice memory test. This relation holds when general sound memory ability is taken into account. These findings provide the first evidence that the TVAs are specifically involved in voice cognition.

  18. Differential Effectiveness of Electromyograph Feedback, Verbal Relaxation Instructions, and Medication Placebo with Tension Headaches

    Science.gov (United States)

    Cox, Daniel J.; And Others

    1975-01-01

    Adults with chronic tension headaches were assigned to auditory electromyograph (EMG) feedback (N=9), to progressive relaxation (N=9), and to placebo treatment (N=9). Data indicated that biofeedback and verbal relaxation instructions were equally superior to the medicine placebo on all measured variables in the direction of clinical improvement,…

  19. The Effect of Multimodal Feedback on Perceived Exertion on a VR Exercise Setting

    DEFF Research Database (Denmark)

    Bruun-Pedersen, Jon Ram; Andersen, Morten G.; Clemmesen, Mathias M.

    2018-01-01

    This paper seeks to determine if multimodal feedback, from auditory and haptic stimuli, can affect a user’s perceived exertion in a virtual reality setting. A simple virtual environment was created in the style of a desert to minimize the amount of visual distractions; a head mounted display was ...

  20. The Impact of Wireless Technology Feedback on Inventory Management at a Dairy Manufacturing Plant

    Science.gov (United States)

    Goomas, David T.

    2012-01-01

    Replacing the method of counting inventory from paper count sheets to that of wireless reliably reduced the elapsed time to complete a daily inventory of the storage cooler in a dairy manufacturing plant. The handheld computers delivered immediate prompts as well as auditory and visual feedback. Reducing the time to complete the daily inventory…

  1. RF feedback for KEKB

    Energy Technology Data Exchange (ETDEWEB)

    Ezura, Eizi; Yoshimoto, Shin-ichi; Akai, Kazunori [National Lab. for High Energy Physics, Tsukuba, Ibaraki (Japan)

    1996-08-01

    This paper describes the present status of the RF feedback development for the KEK B-Factory (KEKB). A preliminary experiment concerning the RF feedback using a parallel comb-filter was performed through a choke-mode cavity and a klystron. The RF feedback has been tested using the beam of the TRISTAN Main Ring, and has proved to be effective in damping the beam instability. (author)

  2. Neural cryptography with feedback.

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Shacham, Lanir; Kanter, Ido

    2004-04-01

    Neural cryptography is based on a competition between attractive and repulsive stochastic forces. A feedback mechanism is added to neural cryptography which increases the repulsive forces. Using numerical simulations and an analytic approach, the probability of a successful attack is calculated for different model parameters. Scaling laws are derived which show that feedback improves the security of the system. In addition, a network with feedback generates a pseudorandom bit sequence which can be used to encrypt and decrypt a secret message.

  3. Feedback and Incentives:

    DEFF Research Database (Denmark)

    Eriksson, Tor Viking; Poulsen, Anders; Villeval, Marie-Claire

    This paper experimentally investigates the impact of different pay and relative performance information policies on employee effort. We explore three information policies: no feedback about relative performance, feedback given halfway through the production period, and continuously updated feedback... of positive peer effects since the underdogs almost never quit the competition even when lagging significantly behind, and frontrunners do not slack off. Moreover, in both pay schemes information feedback reduces the quality of the low performers' work....

  4. Voices from Around the Globe

    Directory of Open Access Journals (Sweden)

    Birgit Schreiber

    2017-07-01

    Full Text Available JSAA has been seeking to provide an opportunity for Student Affairs professionals and higher education scholars from around the globe to share their research and experiences of student services and student affairs programmes from their respective regional and institutional contexts. This has been given a specific platform with the guest-edited issue “Voices from Around the Globe” which is the result of a collaboration with the International Association of Student Affairs and Services (IASAS), and particularly with the guest editors, Kathleen Callahan and Chinedu Mba.

  5. Voice Disorders: Etiology and Diagnosis.

    Science.gov (United States)

    Martins, Regina Helena Garcia; do Amaral, Henrique Abrantes; Tavares, Elaine Lara Mendes; Martins, Maira Garcia; Gonçalves, Tatiana Maria; Dias, Norimar Hernandes

    2016-11-01

    Voice disorders affect adults and children and have different causes in different age groups. The aim of this study is to present the etiology and diagnosis of dysphonia in a large population of dysphonic patients. We evaluated 2019 patients with dysphonia who attended the Voice Disease ambulatories of a university hospital. Parameters assessed were age, gender, profession, associated symptoms, smoking, and videolaryngoscopy diagnoses. Of the 2019 patients with dysphonia who were included in this study, 786 were male (38.93%) and 1233 were female (61.07%). The age groups were as follows: 1-6 years (n = 100); 7-12 years (n = 187); 13-18 years (n = 92); 19-39 years (n = 494); 41-60 years (n = 811); and >60 years (n = 335). Symptoms associated with dysphonia were vocal overuse (n = 677), gastroesophageal symptoms (n = 535), and nasosinusal symptoms (n = 497). The predominant professions of the patients were domestic workers, students, and teachers. Smoking was reported by 13.6% of patients. With regard to the etiology of dysphonia, in children (1-18 years old), nodules (n = 225; 59.3%), cysts (n = 39; 10.3%), and acute laryngitis (n = 26; 6.8%) prevailed. In adults (19-60 years old), functional dysphonia (n = 268; 20.5%), acid laryngitis (n = 164; 12.5%), and vocal polyps (n = 156; 12%) predominated. In patients older than 60 years, presbyphonia (n = 89; 26.5%), functional dysphonia (n = 59; 17.6%), and Reinke's edema (n = 48; 14%) predominated. In this population of 2019 patients with dysphonia, adults and women predominated. Dysphonia had different etiologies in the age groups studied. Nodules and cysts were predominant in children, functional dysphonia and reflux in adults, and presbyphonia and Reinke's edema in the elderly. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  6. From Out of Our Voices

    Directory of Open Access Journals (Sweden)

    Evangelia Papanikolaou

    2010-01-01

    Full Text Available Note from the interviewer: Diane Austin's new book “The Theory and Practice of Vocal Psychotherapy: Songs of the Self” (2008 which was published recently, has been an excellent opportunity to learn more about the use of voice in therapy, its clinical applications and its enormous possibilities that offers within a psychotherapeutic setting. This interview focuses on introducing some of these aspects based on Austin’s work, and on exploring her background, motivations and considerations towards this pioneer music-therapeutic approach. The interview has been edited by Diane Austin and Evangelia Papanikolaou and took place via a series of emails, dated from September to December 2009.

  7. Muscular tension and body posture in relation to voice handicap and voice quality in teachers with persistent voice complaints.

    Science.gov (United States)

    Kooijman, P G C; de Jong, F I C R S; Oudes, M J; Huinck, W; van Acht, H; Graamans, K

    2005-01-01

    The aim of this study was to investigate the relationship between extrinsic laryngeal muscular hypertonicity and deviant body posture on the one hand and voice handicap and voice quality on the other hand in teachers with persistent voice complaints and a history of voice-related absenteeism. The study group consisted of 25 female teachers. A voice therapist assessed extrinsic laryngeal muscular tension and a physical therapist assessed body posture. The assessed parameters were clustered in categories; the parameters in each category represent the same function. Further, a tension/posture index was created, which is the summation of the different parameters. The individual parameters and the index were related to the Voice Handicap Index (VHI) and the Dysphonia Severity Index (DSI). The VHI scores and the individual parameters differed significantly, except for posterior weight bearing and tension of the sternocleidomastoid muscle. There was also a significant difference between the individual parameters and the DSI, except for tension of the cricothyroid muscle and posterior weight bearing. The score of the tension/posture index correlates significantly with both the VHI and the DSI. In a linear regression analysis, the combination of hypertonicity of the sternocleidomastoid, the geniohyoid muscles and posterior weight bearing is the most important predictor of a high voice handicap. The combination of hypertonicity of the geniohyoid muscle, posterior weight bearing, high position of the hyoid bone, hypertonicity of the cricothyroid muscle and anteroposition of the head is the most important predictor of a low DSI score. The results of this study show that the higher the index score, the higher the voice handicap and the worse the voice quality. Moreover, the results indicate the importance of assessing muscular tension and body posture in the diagnosis of voice disorders.

  8. The Role of Occupational Voice Demand and Patient-Rated Impairment in Predicting Voice Therapy Adherence.

    Science.gov (United States)

    Ebersole, Barbara; Soni, Resha S; Moran, Kathleen; Lango, Miriam; Devarajan, Karthik; Jamal, Nausheen

    2018-05-01

    Examine the relationship among the severity of patient-perceived voice impairment, perceptual dysphonia severity, occupational voice demand, and voice therapy adherence. Identify clinical predictors of increased risk for therapy nonadherence. A retrospective cohort study of patients presenting with a chief complaint of persistent dysphonia at an interdisciplinary voice center was done. The Voice Handicap Index-10 (VHI-10) and the Voice-Related Quality of Life (V-RQOL) survey scores, clinician rating of dysphonia severity using the Grade score from the Grade, Roughness, Breathiness, Asthenia, and Strain scale, occupational voice demand, and patient demographics were tested for associations with therapy adherence, defined as completion of the treatment plan. Classification and Regression Tree (CART) analysis was performed to establish thresholds for nonadherence risk. Of 166 patients evaluated, 111 were recommended for voice therapy. The therapy nonadherence rate was 56%. Occupational voice demand category, VHI-10, and V-RQOL scores were the only factors significantly correlated with therapy adherence. Patients with low occupational voice demand are significantly more likely to be nonadherent with therapy than those with high occupational voice demand. Occupational voice demand and patient perception of impairment are significantly and independently correlated with therapy adherence. A VHI-10 score of ≤9 or a V-RQOL score of >40 is a significant cutoff point for predicting nonadherence risk. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  9. Integrating cues of social interest and voice pitch in men's preferences for women's voices.

    Science.gov (United States)

    Jones, Benedict C; Feinberg, David R; Debruine, Lisa M; Little, Anthony C; Vukovic, Jovana

    2008-04-23

    Most previous studies of vocal attractiveness have focused on preferences for physical characteristics of voices such as pitch. Here we examine the content of vocalizations in interaction with such physical traits, finding that vocal cues of social interest modulate the strength of men's preferences for raised pitch in women's voices. Men showed stronger preferences for raised pitch when judging the voices of women who appeared interested in the listener than when judging the voices of women who appeared relatively disinterested in the listener. These findings show that voice preferences are not determined solely by physical properties of voices and that men integrate information about voice pitch and the degree of social interest expressed by women when forming voice preferences. Women's preferences for raised pitch in women's voices were not modulated by cues of social interest, suggesting that the integration of cues of social interest and voice pitch when men judge the attractiveness of women's voices may reflect adaptations that promote efficient allocation of men's mating effort.

  10. Policy Feedback System (PFS)

    Data.gov (United States)

    Social Security Administration — The Policy Feedback System (PFS) is a web application developed by the Office of Disability Policy Management Information (ODPMI) team that gathers empirical data...

  11. Feedback stabilization initiative

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-06-01

    Much progress has been made in attaining high confinement regimes in magnetic confinement devices. These operating modes tend to be transient, however, due to the onset of MHD instabilities, and their stabilization is critical for improved performance at steady state. This report describes the Feedback Stabilization Initiative (FSI), a broad-based, multi-institutional effort to develop and implement methods for raising the achievable plasma betas through active MHD feedback stabilization. A key element in this proposed effort is the Feedback Stabilization Experiment (FSX), a medium-sized, national facility that would be specifically dedicated to demonstrating beta improvement in reactor relevant plasmas by using a variety of MHD feedback stabilization schemes.

  12. Feedback stabilization initiative

    International Nuclear Information System (INIS)

    1997-06-01

    Much progress has been made in attaining high confinement regimes in magnetic confinement devices. These operating modes tend to be transient, however, due to the onset of MHD instabilities, and their stabilization is critical for improved performance at steady state. This report describes the Feedback Stabilization Initiative (FSI), a broad-based, multi-institutional effort to develop and implement methods for raising the achievable plasma betas through active MHD feedback stabilization. A key element in this proposed effort is the Feedback Stabilization Experiment (FSX), a medium-sized, national facility that would be specifically dedicated to demonstrating beta improvement in reactor relevant plasmas by using a variety of MHD feedback stabilization schemes.

  13. Feedback Loop Gains and Feedback Behavior (1996)

    DEFF Research Database (Denmark)

    Kampmann, Christian Erik

    2012-01-01

    Linking feedback loops and system behavior is part of the foundation of system dynamics, yet the lack of formal tools has so far prevented a systematic application of the concept, except for very simple systems. Having such tools at their disposal would be a great help to analysts in understanding...... large, complicated simulation models. The paper applies tools from graph theory formally linking individual feedback loop strengths to the system eigenvalues. The significance of a link or a loop gain and an eigenvalue can be expressed in the eigenvalue elasticity, i.e., the relative change...... of an eigenvalue resulting from a relative change in the gain. The elasticities of individual links and loops may be found through simple matrix operations on the linearized system. Even though the number of feedback loops can grow rapidly with system size, reaching astronomical proportions even for modest systems...
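    As a concrete illustration of the eigenvalue elasticity described above (a toy example of my own, not taken from the paper): each eigenvalue of a linearized system matrix is a degree-1 homogeneous function of the link gains, so by Euler's theorem the link-gain elasticities of any eigenvalue sum to one, which gives a handy numerical check. The sketch estimates the elasticities by finite differences rather than the paper's analytic matrix operations.

```python
import numpy as np

def link_elasticities(A, h=1e-6):
    """Elasticity of each eigenvalue with respect to each nonzero link
    gain a_ij: eps = (a_ij / lambda) * d(lambda)/d(a_ij), estimated by
    central finite differences."""
    lam = np.sort(np.linalg.eigvals(A))
    elas = {}
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] == 0.0:
                continue
            dA = np.zeros_like(A)
            dA[i, j] = h
            dlam = (np.sort(np.linalg.eigvals(A + dA))
                    - np.sort(np.linalg.eigvals(A - dA))) / (2.0 * h)
            elas[(i, j)] = A[i, j] * dlam / lam
    return lam, elas

# toy linearized system: two balancing (negative) self-loops plus a
# two-link coupling loop between the state variables
A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])
lam, elas = link_elasticities(A)
print("eigenvalues:", lam)
# per eigenvalue, the link elasticities sum to 1 (Euler's theorem)
print("elasticity sums:", sum(elas.values()))
```

    The loop-gain elasticities of the paper are then combinations of these link elasticities taken around each feedback loop.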

  14. Auditory imagery shapes movement timing and kinematics: evidence from a musical task.

    Science.gov (United States)

    Keller, Peter E; Dalla Bella, Simone; Koch, Iring

    2010-04-01

    The role of anticipatory auditory imagery in music-like sequential action was investigated by examining timing accuracy and kinematics using a motion capture system. Musicians responded to metronomic pacing signals by producing three unpaced taps on three vertically aligned keys at the given tempo. Taps triggered tones in two out of three blocked feedback conditions, where key-to-tone mappings were compatible or incompatible in terms of spatial and pitch height. Results indicate that, while timing was most accurate without tones, movements were smaller in amplitude and less forceful (i.e., acceleration prior to impact was lowest) when tones were present. Moreover, timing was more accurate and movements were less forceful with compatible than with incompatible auditory feedback. Observing these effects at the first tap (before tone onset) suggests that anticipatory auditory imagery modulates the temporal kinematics of regularly timed auditory action sequences, like those found in music. Such cross-modal ideomotor processes may function to facilitate planning efficiency and biomechanical economy in voluntary action. Copyright 2010 APA, all rights reserved.

  15. Perception of Paralinguistic Traits in Synthesized Voices

    DEFF Research Database (Denmark)

    Baird, Alice Emily; Hasse Jørgensen, Stina; Parada-Cabaleiro, Emilia

    2017-01-01

    Along with the rise of artificial intelligence and the internet-of-things, synthesized voices are now common in daily–life, providing us with guidance, assistance, and even companionship. From formant to concatenative synthesis, the synthesized voice continues to be defined by the same traits we...

  16. Student Voices in School-Based Assessment

    Science.gov (United States)

    Tong, Siu Yin Annie; Adamson, Bob

    2015-01-01

    The value of student voices in dialogues about learning improvement is acknowledged in the literature. This paper examines how the views of students regarding School-based Assessment (SBA), a significant shift in examination policy and practice in secondary schools in Hong Kong, have largely been ignored. The study captures student voices through…

  17. Analog voicing detector responds to pitch

    Science.gov (United States)

    Abel, R. S.; Watkins, H. E.

    1967-01-01

    Modified electronic voice encoder /Vocoder/ includes an independent analog mode of operation in addition to the conventional digital mode. The Vocoder is bandwidth-compression equipment that permits voice transmission over channels having only a fraction of the bandwidth required for conventional telephone-quality speech transmission.

  18. The Voice of the Technical Writer.

    Science.gov (United States)

    Euler, James S.

    The author's voice is implicit in all writing, even technical writing. It is the expression of the writer's attitude toward audience, subject matter, and self. Effective use of voice is made possible by recognizing the three roles of the technical writer: transmitter, translator, and author. As a transmitter, the writer must consciously apply an…

  19. Student Voice and the Common Core

    Science.gov (United States)

    Yonezawa, Susan

    2015-01-01

    Common Core proponents and detractors debate its merits, but students have voiced their opinion for years. Using a decade's worth of data gathered through design-research on youth voice, this article discusses what high school students have long described as more ideal learning environments for themselves--and how remarkably similar the Common…

  20. Employee voice and engagement : Connections and consequences

    NARCIS (Netherlands)

    Rees, C.; Alfes, K.; Gatenby, M.

    2013-01-01

    This paper considers the relationship between employee voice and employee engagement. Employee perceptions of voice behaviour aimed at improving the functioning of the work group are found to have both a direct impact and an indirect impact on levels of employee engagement. Analysis of data from two

  1. Speaking with the voice of authority

    CERN Multimedia

    2002-01-01

    GPB Consulting has developed a scientific approach to voice coaching. A digital recording of the voice is sent to a lab in Switzerland and analyzed by a computer programme designed by a doctor of psychology and linguistics and a scientist at CERN (1 page).

  2. Managing dysphonia in occupational voice users.

    Science.gov (United States)

    Behlau, Mara; Zambon, Fabiana; Madazio, Glaucya

    2014-06-01

    Recent advances with regard to occupational voice disorders are highlighted, with emphasis on issues warranting consideration when assessing, training, and treating professional voice users. Findings include the many particularities between the various categories of professional voice users, the concept that the environment plays a major role in occupational voice disorders, and that biopsychosocial influences should be analyzed on an individual basis. Assessment via self-evaluation protocols to quantify the impact of these disorders is mandatory, both as a component of an evaluation and to document treatment outcomes. Discomfort or odynophonia has emerged as a critical symptom in this population. Clinical trials are limited, and the complexity of the environment may constrain experiment design. This review reinforced the need for large population studies of professional voice users; new data highlighted important factors specific to each group of voice users. Interventions directed at student teachers are necessary not only to improve the quality of future professionals, but also to avoid the frustration and limitations associated with chronic voice problems. The causative relationship between the work environment and voice disorders has not yet been established. Randomized controlled trials are lacking and must be a focus to enhance treatment paradigms for this population.

  3. Does CPAP treatment affect the voice?

    Science.gov (United States)

    Saylam, Güleser; Şahin, Mustafa; Demiral, Dilek; Bayır, Ömer; Yüceege, Melike Bağnu; Çadallı Tatar, Emel; Korkmaz, Mehmet Hakan

    2016-12-20

    The aim of this study was to investigate alterations in voice parameters among patients using continuous positive airway pressure (CPAP) for the treatment of obstructive sleep apnea syndrome. Patients with an indication for CPAP treatment without any voice problems and with normal laryngeal findings were included, and voice parameters were evaluated before and 1 and 6 months after CPAP. Videolaryngostroboscopic findings, a self-rated scale (Voice Handicap Index-10, VHI-10), perceptual voice quality assessment (GRBAS: grade, roughness, breathiness, asthenia, strain), and acoustic parameters were compared. Data from 70 subjects (48 men and 22 women) with a mean age of 44.2 ± 6.0 years were evaluated. When compared with the pre-CPAP treatment period, there was a significant increase in the VHI-10 score after 1 month of treatment, and in VHI-10 and total GRBAS scores, jitter percent (P = 0.01), shimmer percent, noise-to-harmonic ratio, and voice turbulence index after 6 months of treatment. Subtle negative effects on voice parameters after the first month of CPAP treatment became more evident after 6 months. We demonstrated nonsevere alterations in the voice quality of patients under CPAP treatment. Given that CPAP is a long-term treatment, it is important to keep these alterations in mind.
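    For context on the acoustic measures named in the record: jitter percent and shimmer percent are both commonly defined (e.g., Praat's "local" variants) as the mean absolute cycle-to-cycle difference relative to the mean, computed over glottal periods and peak amplitudes respectively. A minimal sketch with hypothetical cycle data; the function name and sample values are mine, not the study's:

```python
def local_perturbation_percent(values):
    """Mean absolute cycle-to-cycle difference relative to the mean,
    in percent (Praat-style 'local' jitter/shimmer)."""
    if len(values) < 2:
        raise ValueError("need at least two cycles")
    diffs = [abs(b - a) for a, b in zip(values, values[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(values) / len(values))

# hypothetical cycle-to-cycle periods (ms) and peak amplitudes
# extracted from one sustained vowel
periods = [10.0, 10.2, 10.0, 10.2, 10.0]
amplitudes = [1.00, 0.97, 1.00, 0.97, 1.00]
print(f"jitter  (local): {local_perturbation_percent(periods):.2f}%")
print(f"shimmer (local): {local_perturbation_percent(amplitudes):.2f}%")
```

    Higher values indicate more cycle-to-cycle irregularity, which is why increases in these measures are read as a degradation of voice quality.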

  4. Occupational risk factors and voice disorders.

    Science.gov (United States)

    Vilkman, E

    1996-01-01

    From the point of view of occupational health, the field of voice disorders is very poorly developed as compared, for instance, to the prevention and diagnostics of occupational hearing disorders. In fact, voice disorders have not even been recognized in the field of occupational medicine. Hence, it is obviously very rare in most countries that the voice disorder of a professional voice user, e.g. a teacher, a singer or an actor, is accepted as an occupational disease by insurance companies. However, occupational voice problems do not lack significance from the point of view of the patient. We also know from questionnaires and clinical studies that voice complaints are very common. Another example of job-related health problems, which has proved more successful in terms of its occupational health status, is the repetition strain injury of the elbow, i.e. the "tennis elbow". Its textbook definition could be used as such to describe an occupational voice disorder ("dysphonia professionalis"). In the present paper the effects of such risk factors as vocal loading itself, background noise and room acoustics and low relative humidity of the air are discussed. Due to individual factors underlying the development of professional voice disorders, recommendations rather than regulations are called for. There are many simple and even relatively low-cost methods available for the prevention of vocal problems as well as for supporting rehabilitation.

  5. Why Is My Voice Changing? (For Teens)

    Science.gov (United States)

    ... enter puberty earlier or later than others. How Deep Will My Voice Get? How deep a guy's voice gets depends on his genes: ...

  6. Stage Voice Training in the London Schools.

    Science.gov (United States)

    Rubin, Lucille S.

    This report is the result of a six-week study in which the voice training offerings at four schools of drama in London were examined using interviews of teachers and directors, observation of voice classes, and attendance at studio presentations and public performances. The report covers such topics as: textbooks and references being used; courses…

  7. Predictors of Choral Directors' Voice Handicap

    Science.gov (United States)

    Schwartz, Sandra

    2013-01-01

    Vocal demands of teaching are considerable and these challenges are greater for choral directors who depend on the voice as a musical and instructive instrument. The purpose of this study was to (1) examine choral directors' vocal condition using a modified Voice Handicap Index (VHI), and (2) determine the extent to which the major variables…

  8. Rapid Auditory System Adaptation Using a Virtual Auditory Environment

    Directory of Open Access Journals (Sweden)

    Gaëtan Parseihian

    2011-10-01

    Full Text Available Various studies have highlighted plasticity of the auditory system induced by visual stimuli, which limits the trained field of perception. The aim of the present study is to investigate auditory system adaptation using an audio-kinesthetic platform. Participants were placed in a Virtual Auditory Environment allowing the association of the physical position of a virtual sound source with an alternate set of acoustic spectral cues, or Head-Related Transfer Function (HRTF), through the use of a tracked ball manipulated by the subject. This set-up has the advantage of not being limited to the visual field while also offering a natural perception-action coupling through the constant awareness of one's hand position. The adaptation process to non-individualized HRTFs was realized through a spatial search game application. A total of 25 subjects participated, consisting of a group presented with modified cues using non-individualized HRTFs and a control group using individually measured HRTFs to account for any learning effect due to the game itself. The training game lasted 12 minutes and was repeated over 3 consecutive days. Adaptation effects were measured with repeated localization tests. Results showed a significant performance improvement for vertical localization and a significant reduction in the front/back confusion rate after 3 sessions.

  9. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Full Text Available Auditory dysfunction is a common clinical symptom that can induce profound effects on the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  10. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. For example, in stimulus-specific adaptation, neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  11. Effects of tailoring ingredients in auditory persuasive health messages on fruit and vegetable intake

    OpenAIRE

    Elbert, Sarah P.; Dijkstra, Arie; Rozema, Andrea

    2017-01-01

    Objective: Health messages can be tailored by applying different tailoring ingredients, among them personalisation, feedback and adaptation. This experiment investigated the separate effects of these tailoring ingredients on behaviour in auditory health persuasion. Furthermore, the moderating effect of self-efficacy was assessed. Design: The between-participants design consisted of four conditions. A generic health message served as a control condition; personalisation was applied using the r...

  12. Voice disorders in teachers. A review.

    Science.gov (United States)

    Martins, Regina Helena Garcia; Pereira, Eny Regina Bóia Neves; Hidalgo, Caio Bosque; Tavares, Elaine Lara Mendes

    2014-11-01

    Voice disorders are very prevalent among teachers and consequences are serious. Although the literature is extensive, there are differences in the concepts and methodology related to voice problems; most studies are restricted to analyzing the responses of teachers to questionnaires and only a few studies include vocal assessments and videolaryngoscopic examinations to obtain a definitive diagnosis. To review demographic studies related to vocal disorders in teachers to analyze the diverse methodologies, the prevalence rates pointed out by the authors, the main risk factors, the most prevalent laryngeal lesions, and the repercussions of dysphonias on professional activities. The available literature (from 1997 to 2013) was narratively reviewed based on Medline, PubMed, Lilacs, SciELO, and Cochrane library databases. Excluded were articles that specifically analyzed treatment modalities and those that did not make their abstracts available in those databases. The keywords included were teacher, dysphonia, voice disorders, professional voice. Copyright © 2014 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  13. Voice pedagogy-what do we need?

    Science.gov (United States)

    Gill, Brian P; Herbst, Christian T

    2016-12-01

    The final keynote panel of the 10th Pan-European Voice Conference (PEVOC) was concerned with the topic 'Voice pedagogy-what do we need?' In this communication the panel discussion is summarized, and the authors provide a deepening discussion on one of the key questions, addressing the roles and tasks of people working with voice students. In particular, a distinction is made between (1) voice building (derived from the German term 'Stimmbildung'), primarily comprising the functional and physiological aspects of singing; (2) coaching, mostly concerned with performance skills; and (3) singing voice rehabilitation. Both public and private educators are encouraged to apply this distinction to their curricula, in order to arrive at more efficient singing teaching and to reduce the risk of vocal injury to the singers concerned.

  14. Voice Quality Estimation in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Petr Zach

    2015-01-01

    Full Text Available This article deals with the impact of Wireless (Wi-Fi) networks on the perceived quality of voice services. The Quality of Service (QoS) metrics must be monitored in the computer network during voice data transmission to ensure the voice service quality the end-user has paid for, especially in wireless networks. In addition to QoS, the research area called Quality of Experience (QoE) provides metrics and methods for quality evaluation from the end-user’s perspective. This article focuses on QoE estimation of Voice over IP (VoIP) calls in wireless networks using a network simulator. Results contribute to voice quality estimation based on characteristics of the wireless network and the location of a wireless client.
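    The record above estimates VoIP quality from network characteristics. A standard approach for this kind of estimation is the ITU-T G.107 E-model, which combines impairments into a transmission rating factor R and maps R to an estimated mean opinion score (MOS). The sketch below uses the standard G.107 R-to-MOS mapping; the `simplified_r` helper and the impairment values are illustrative assumptions of mine, not the article's model:

```python
def r_to_mos(r):
    """ITU-T G.107 mapping from the transmission rating factor R
    to an estimated mean opinion score (MOS)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

def simplified_r(ie_eff, i_d):
    """Heavily simplified E-model: default base factor 93.2 minus an
    effective equipment impairment (codec + packet loss) and a delay
    impairment; advantage factor assumed to be 0. Illustrative only."""
    return 93.2 - ie_eff - i_d

# hypothetical impairment values for a call over a congested Wi-Fi link
r = simplified_r(ie_eff=10.0, i_d=15.0)
print(f"R = {r:.1f}, estimated MOS = {r_to_mos(r):.2f}")
```

    In a simulation-based study like the one above, packet loss, delay, and jitter measured in the simulator would feed the impairment terms, turning raw network statistics into a per-call QoE estimate.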

  15. Synchrony of auditory brain responses predicts behavioral ability to keep still in children with autism spectrum disorder

    Directory of Open Access Journals (Sweden)

    Yuko Yoshimura

    2016-01-01

    The auditory-evoked P1m, recorded by magnetoencephalography, reflects a central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, based on a previous report of asynchrony of P1m in children with ADHD, to clarify whether P1m right-left hemispheric synchronization is related to the symptom of hyperactivity in children with ASD, we investigated the relationship between voice-evoked P1m right-left hemispheric synchronization and hyperactivity in children with ASD. In addition to synchronization, we investigated right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observation Schedule, Generic (ADOS-G) subscale. However, there was a significant correlation between the degree of hemispheric synchronization and the ability to keep still during the 12-minute MEG recording periods. Our results also suggest that asynchrony in the bilateral auditory processing system is associated with ADHD-like symptoms in children with ASD.

  16. Voice advisory manikin versus instructor facilitated training in cardiopulmonary resuscitation

    DEFF Research Database (Denmark)

    Isbye, Dan L; Høiby, Pernilla; Rasmussen, Maria B

    2008-01-01

    BACKGROUND: Training of healthcare staff in cardiopulmonary resuscitation (CPR) is time-consuming and costly. It has been suggested to replace instructor facilitated (IF) training with an automated voice advisory manikin (VAM), which increases skill level by continuous verbal feedback during...... individual training. AIMS: To compare a VAM (ResusciAnne CPR skills station, Laerdal Medical A/S, Norway) with IF training in CPR using a bag-valve-mask (BVM) in terms of skills retention after 3 months. METHODS: Forty-three second year medical students were included and CPR performance (ERC Guidelines...... for Resuscitation 2005) was assessed in a 2 min test before randomisation to either IF training in groups of 8 or individual VAM training. Immediately after training and after 3 months, CPR performance was assessed in identical 2 min tests. Laerdal PC Skill Reporting System 2.0 was used to collect data. To quantify...

  17. Identifying hidden voice and video streams

    Science.gov (United States)

    Fan, Jieyan; Wu, Dapeng; Nucci, Antonio; Keralapura, Ram; Gao, Lixin

    2009-04-01

    Given the rising popularity of voice and video services over the Internet, accurately identifying the voice and video traffic that traverses their networks has become a critical task for Internet service providers (ISPs). As the number of proprietary applications that deliver voice and video services to end users increases over time, the search for a methodology that can accurately detect such services while remaining application independent is still open. The problem becomes even more complicated when voice and video service providers like Skype, Microsoft, and Google bundle their voice and video services with other services such as file transfer and chat. For example, a bundled Skype session can contain both a voice stream and a file transfer stream in the same layer-3/layer-4 flow. In this context, traditional techniques for identifying voice and video streams do not work. In this paper, we propose a novel self-learning classifier, called VVS-I, that detects the presence of voice and video streams in flows with minimal manual intervention. Our classifier works in two phases: a training phase and a detection phase. In the training phase, VVS-I first extracts the relevant features and then constructs a fingerprint of a flow using power spectral density (PSD) analysis. In the detection phase, it compares the fingerprint of a flow to the fingerprints learned during the training phase and classifies the flow accordingly. Our classifier is not only capable of detecting voice and video streams that are hidden in different flows, but can also identify the different applications (such as Skype and MSN) that generate these voice/video streams. We show that our classifier achieves close to a 100% detection rate while keeping the false positive rate below 1%.
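
    The abstract names the ingredients of the two-phase scheme but not its exact features. A toy sketch under assumed inputs (a per-flow packet-size time series; the real VVS-I features are richer) shows the PSD-fingerprint idea:

```python
import numpy as np

def fingerprint(series):
    """Training phase: normalized power spectral density of a flow's
    packet-size time series (a stand-in for VVS-I's PSD features)."""
    psd = np.abs(np.fft.rfft(series - np.mean(series))) ** 2
    return psd / (np.linalg.norm(psd) + 1e-12)

def classify(series, fingerprints):
    """Detection phase: nearest stored fingerprint by cosine similarity."""
    fp = fingerprint(series)
    return max(fingerprints, key=lambda k: float(fp @ fingerprints[k]))

rng = np.random.default_rng(0)
t = np.arange(512)
voice_train = 160 + 20 * np.sin(2 * np.pi * t / 20)   # regularly paced, RTP-like
file_train = rng.uniform(40, 1500, size=512)          # bursty bulk transfer
fps = {"voice": fingerprint(voice_train), "file": fingerprint(file_train)}

voice_test = 160 + 20 * np.sin(2 * np.pi * t / 20) + rng.normal(0, 5, 512)
print(classify(voice_test, fps))
```

    The periodic packetization of a voice codec concentrates spectral power at its pacing frequency, which is what makes the PSD a usable flow signature even inside a bundled session.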

  18. Reality of auditory verbal hallucinations.

    Science.gov (United States)

    Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-11-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real a hallucination felt to the subjects depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.

  19. Effects of feedback reliability on feedback-related brain activity: A feedback valuation account.

    Science.gov (United States)

    Ernst, Benjamin; Steinhauser, Marco

    2018-04-06

    Adaptive decision making relies on learning from feedback. Because feedback sometimes can be misleading, optimal learning requires that knowledge about the feedback's reliability be utilized to adjust feedback processing. Although previous research has shown that feedback reliability indeed influences feedback processing, the underlying mechanisms through which this is accomplished remain unclear. Here we propose that feedback processing is adjusted by the adaptive, top-down valuation of feedback. We assume that unreliable feedback is devalued relative to reliable feedback, thus reducing the reward prediction errors that underlie feedback-related brain activity and learning. A crucial prediction of this account is that the effects of feedback reliability are susceptible to contrast effects. That is, the effects of feedback reliability should be enhanced when both reliable and unreliable feedback are experienced within the same context, as compared to when only one level of feedback reliability is experienced. To evaluate this prediction, we measured the event-related potentials elicited by feedback in two experiments in which feedback reliability was varied either within or between blocks. We found that the fronto-central valence effect, a correlate of reward prediction errors during reinforcement learning, was reduced for unreliable feedback. But this result was obtained only when feedback reliability was varied within blocks, thus indicating a contrast effect. This suggests that the adaptive valuation of feedback is one mechanism underlying the effects of feedback reliability on feedback processing.
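
    The valuation account can be stated compactly: a reliability weight scales the subjective value of feedback before the reward prediction error is computed, so devalued (unreliable) feedback yields smaller errors. The weight and numbers below are illustrative assumptions, not quantities from the study:

```python
def prediction_error(value_estimate, feedback_reward, reliability_weight):
    """Reward prediction error with top-down devaluation of the feedback:
    unreliable feedback (weight < 1) is worth less, so it produces a
    smaller error signal and, downstream, weaker learning."""
    return reliability_weight * feedback_reward - value_estimate

# Identical objective feedback, different subjective valuation:
pe_reliable = prediction_error(0.5, 1.0, reliability_weight=1.0)
pe_unreliable = prediction_error(0.5, 1.0, reliability_weight=0.6)
```

    On this sketch, the reduced fronto-central valence effect for unreliable feedback corresponds to the smaller `pe_unreliable`, and a contrast effect arises if the weight is set relative to the other reliability levels experienced in the same block.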

  20. Learning effects of dynamic postural control by auditory biofeedback versus visual biofeedback training.

    Science.gov (United States)

    Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-10-01

    Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
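
    The perceptual equalization step can be sketched by inverting Stevens' power law, psi = k * phi**a, separately for each modality. The exponents and constants below are illustrative assumptions; the study does not report the values it used in this abstract:

```python
def percept(phi, k, a):
    """Stevens' power law: perceived magnitude psi = k * phi**a."""
    return k * phi ** a

def stimulus_for_percept(psi, k, a):
    """Invert the law to find the physical magnitude phi giving percept psi."""
    return (psi / k) ** (1.0 / a)

# Assumed exponents for illustration: visual size ~0.7, loudness ~0.67.
target_psi = 2.0
phi_visual = stimulus_for_percept(target_psi, k=1.0, a=0.7)
phi_audio = stimulus_for_percept(target_psi, k=1.0, a=0.67)
# Both stimuli should now be perceived as equally intense,
# even though their physical magnitudes differ.
```

    Equalizing perceived rather than physical magnitude is what makes the visual-versus-auditory comparison fair: any retention difference then reflects the modality, not a louder or brighter error signal.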

  1. The Role of Age and Executive Function in Auditory Category Learning

    Science.gov (United States)

    Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath

    2015-01-01

    Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987
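
    The task-optimal conjunctive rule-based strategy identified by the modeling can be illustrated as two criteria that must hold simultaneously. The dimension names and criterion values below are assumptions for illustration, not the study's fitted parameters:

```python
def conjunctive_rule(spectral_mod, temporal_mod,
                     spec_criterion=0.5, temp_criterion=8.0):
    """Toy conjunctive rule over ripple sounds: respond category A only if
    BOTH stimulus dimensions exceed their criteria, otherwise category B."""
    if spectral_mod > spec_criterion and temporal_mod > temp_criterion:
        return "A"
    return "B"

# Trial-by-trial feedback would be used to adjust the two criteria;
# holding and testing such a two-dimensional rule is what taxes working
# memory and executive attention in the rule-based learning system.
print(conjunctive_rule(0.8, 10.0))
```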

  2. The role of auditory temporal cues in the fluency of stuttering adults

    Directory of Open Access Journals (Sweden)

    Juliana Furini

    Purpose: to compare the frequency of disfluencies and the speech rate in spontaneous speech and reading in adults with and without stuttering under non-altered and delayed auditory feedback (NAF, DAF). Methods: participants were 30 adults: 15 with stuttering (Research Group, RG) and 15 without stuttering (Control Group, CG). The procedures were audiological assessment and speech fluency evaluation under two listening conditions, normal and delayed auditory feedback (delayed by 100 milliseconds using the Fono Tools software). Results: DAF caused a significant improvement in the fluency of spontaneous speech in the RG compared to speech under NAF. The effect of DAF was different in the CG: it increased the common disfluencies and the total disfluencies in spontaneous speech and reading, and also increased the frequency of stuttering-like disfluencies in reading. The intergroup analysis showed significant differences in the two speech tasks for both listening conditions in the frequency of stuttering-like disfluencies and in the total disfluencies, and in the syllable- and word-per-minute rates under NAF. Conclusion: the results demonstrated that delayed auditory feedback promoted fluency in the spontaneous speech of adults who stutter without interfering with speech rate. In non-stuttering adults it increased the number of common disfluencies and total disfluencies and reduced speech rate in spontaneous speech and reading.
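
    The 100-millisecond delay condition can be reproduced offline, in outline, by shifting the signal by delay times sampling-rate samples before playback. A minimal sketch (NumPy assumed; Fono Tools itself operates on the live microphone signal):

```python
import numpy as np

def delayed_feedback(signal, fs, delay_s=0.1):
    """Return the signal delayed by delay_s seconds, zero-padded at the
    start, as a DAF system would play it back into the speaker's ears."""
    n = int(round(delay_s * fs))
    return np.concatenate([np.zeros(n), signal])[: len(signal)]

fs = 16000
speech = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)  # 1 s stand-in signal
daf = delayed_feedback(speech, fs)                     # shifted by 1600 samples
```

    In a real-time system the same shift is implemented as a ring buffer of n samples between the microphone input and the headphone output.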

  3. Feedback For Helpers

    Science.gov (United States)

    Stromer, Walter F.

    1975-01-01

    The author offers some feedback to those in the helping professions in three areas: (1) forms and letters; (2) jumping to conclusions; and (3) blaming and belittling, in hopes of stimulating more feedback as well as more positive ways of performing their services. (HMV)

  4. 'Peer feedback' voor huisartsopleiders

    NARCIS (Netherlands)

    Damoiseaux, R A M J; Truijens, L

    2016-01-01

    In medical specialist training programmes it is common practice for residents to provide feedback to their medical trainers. The problem is that due to its anonymous nature, the feedback often lacks the specificity necessary to improve the performance of trainers. If anonymity is to be abolished,

  5. Feedback og interpersonel kommunikation

    DEFF Research Database (Denmark)

    Dindler, Camilla

    2016-01-01

    As a form of interpersonal communication, feedback is about observing, sensing, and putting into words what concerns the relationship between the conversation partners rather than the topic of conversation. The focus here is on what is said and how the parties communicate with each other. Feedback in this sense is not a corrective response to...

  6. Laterality of basic auditory perception.

    Science.gov (United States)

    Sininger, Yvonne S; Bhatara, Anjali

    2012-01-01

    Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000, and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was that processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow-band noise (NBN) of 450-Hz bandwidth was evaluated using intensity discrimination, and (2) stimulus duration, 200, 500, and 1000 ms duration tones were evaluated using frequency discrimination. A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments, but an expected REA for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterised as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex, which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli.

  7. Classification of voice disorder in children with cochlear implantation and hearing aid using multiple classifier fusion

    Directory of Open Access Journals (Sweden)

    Tayarani Hamid

    2011-01-01

    Abstract Background: Speech production and speech phonetic features gradually improve in children who obtain audio feedback after cochlear implantation or through hearing aids. The aim of this study was to develop and evaluate automated classification of voice disorder in children with cochlear implantation and hearing aids. Methods: We considered four disorder categories in children's voices, using the following definitions. Level 1: children who produce spontaneous phonation and use words spontaneously and imitatively. Level 2: children who produce spontaneous phonation, use words spontaneously, and make short sentences imitatively. Level 3: children who produce spontaneous phonation and use words and arbitrary sentences spontaneously. Level 4: normal children without any history of hearing loss. Thirty Persian children participated in the study: six children in each of levels one to three and 12 children in level four. Voice samples of five isolated Persian words, "mashin", "mar", "moosh", "gav", and "mouz", were analyzed. Four levels of voice quality were considered; the higher the level, the less significant the speech disorder. "Frame-based" and "word-based" features were extracted from the voice signals. The frame-based features include intensity, fundamental frequency, formants, nasality, and approximate entropy; the word-based features include phase-space features and wavelet coefficients. For frame-based features, hidden Markov models were used as classifiers; for word-based features, a neural network was used. Results: After classifier fusion with three methods (Majority Voting Rule, linear combination, and stacked fusion), the best classification rates were obtained using frame-based and word-based features with the MVR rule (level 1: 100%, level 2: 93.75%, level 3: 100%, level 4: 94%).
    Conclusions: The results of this study may help speech pathologists follow up voice disorder recovery in children with cochlear implantation or hearing aids who are
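
    The Majority Voting Rule fusion step can be sketched independently of the underlying HMM and neural-network classifiers, whose outputs are assumed here to be plain level labels:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-classifier labels for one utterance by majority vote;
    ties go to the label proposed by the earliest-listed classifier."""
    counts = Counter(predictions)
    best = max(counts.values())
    for label in predictions:      # preserves classifier order on ties
        if counts[label] == best:
            return label

# e.g., one HMM vote on frame features and two NN votes on
# phase-space and wavelet features (labels are hypothetical):
print(majority_vote(["level_2", "level_2", "level_3"]))  # prints "level_2"
```

    Voting is robust when the base classifiers make uncorrelated errors, which is the rationale for fusing frame-based and word-based feature sets here.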

  8. Velocity Feedback Experiments

    Directory of Open Access Journals (Sweden)

    Chiu Choi

    2017-02-01

    Transient response such as ringing in a control system can be reduced or removed by velocity feedback. It is a useful control technique that should be covered in the relevant engineering laboratory courses. We developed velocity feedback experiments using two different low-cost technologies, namely operational amplifiers and microcontrollers. These experiments can be easily integrated into laboratory courses on feedback control systems or microcontroller applications. The intent of developing these experiments was to illustrate the ringing problem and to offer effective, low-cost solutions for removing it. In this paper the pedagogical approach for these velocity feedback experiments is described, and the advantages and disadvantages of the two implementations of velocity feedback are also discussed.
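
    The ringing-removal effect is easy to reproduce numerically: for a double-integrator plant, position feedback alone leaves the step response oscillatory, while an added velocity term damps it. The plant and gains below are chosen for illustration and are not taken from the paper:

```python
def step_response_peak(kp, kv, dt=0.001, t_end=10.0, setpoint=1.0):
    """Simulate x'' = u with u = kp*(r - x) - kv*x' (semi-implicit Euler)
    and return the peak position reached during the step response."""
    x, v, peak = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        u = kp * (setpoint - x) - kv * v   # position + velocity feedback
        v += u * dt
        x += v * dt
        peak = max(peak, x)
    return peak

print(step_response_peak(kp=4.0, kv=0.0))  # ~2.0: undamped ringing
print(step_response_peak(kp=4.0, kv=4.0))  # ~1.0: critically damped
```

    With kp = 4 and kv = 4 the damping ratio kv / (2 * sqrt(kp)) equals 1, i.e. critical damping, which is why the overshoot disappears; in the op-amp version the velocity term comes from a differentiator or tachometer signal.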

  9. Feedback i matematik

    DEFF Research Database (Denmark)

    Sortkær, Bent

    2017-01-01

    In the literature, feedback is repeatedly highlighted as one of the most effective means of improving students' achievement in school (Hartberg, Dobson, & Gran, 2012; Hattie & Timperley, 2007; Wiliam, 2015). This is despite several researchers pointing out that feedback does not always promote learning (Hattie & Gan, 2011), and some even showing that feedback can have a negative effect on achievement (Kluger & DeNisi, 1996). The article examines these seemingly contradictory results by asking: under what conditions does feedback in mathematics promote learning? This is done by delving into the research literature on feedback across a number of themes in order to answer the question above.

  10. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  11. Auditory Motion Elicits a Visual Motion Aftereffect.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2016-01-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect-an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  12. Your Cheatin' Voice Will Tell on You: Detection of Past Infidelity from Voice.

    Science.gov (United States)

    Hughes, Susan M; Harrison, Marissa A

    2017-01-01

    Evidence suggests that many physical, behavioral, and trait qualities can be detected solely from the sound of a person's voice, irrespective of the semantic information conveyed through speech. This study examined whether raters could accurately assess the likelihood that a person has cheated on committed, romantic partners simply by hearing the speaker's voice. Independent raters heard voice samples of individuals who self-reported that they either had or had never cheated on their romantic partners. To control for aspects that may clue a listener to the speaker's mate value, we used voice samples that did not differ between these groups in voice attractiveness, age, voice pitch, and other acoustic measures. We found that participants indeed rated the voices of those with a history of cheating as more likely to cheat. Male speakers were given higher cheating ratings, and female raters were more likely to ascribe a likelihood of cheating to speakers. Additionally, we manipulated the pitch of the voice samples, and for both sexes the lower-pitched versions were consistently rated as coming from those more likely to have cheated. Regardless of the pitch manipulation, raters were able to assess the speakers' actual history of infidelity; the one exception was that men's accuracy decreased when judging women whose voices were lowered. These findings expand upon the idea that the human voice may be of value as a cheater-detection tool and that very thin slices of vocal information are all that is needed to make certain assessments about others.

  13. A pneumatic Bionic Voice prosthesis-Pre-clinical trials of controlling the voice onset and offset.

    Directory of Open Access Journals (Sweden)

    Farzaneh Ahmadi

    Despite emergent progress in many fields of bionics, a functional Bionic Voice prosthesis for laryngectomy patients (larynx amputees) has not yet been achieved, leading to a lifetime of vocal disability for these patients. This study introduces a novel framework of Pneumatic Bionic Voice Prostheses as an electronic adaptation of the Pneumatic Artificial Larynx (PAL) device. The PAL is a non-invasive mechanical voice source, driven exclusively by respiration, with an exceptionally high voice quality comparable to the existing gold standard of Tracheoesophageal (TE) voice prostheses. Following the PAL design closely as the reference, Pneumatic Bionic Voice Prostheses have strong potential to substitute for the existing gold standard by generating a similar voice quality while remaining non-invasive and non-surgical. This paper designs the first Pneumatic Bionic Voice prosthesis and evaluates its onset and offset control against the PAL device through pre-clinical trials on one laryngectomy patient. The evaluation on a database of more than five hours of continuous/isolated speech recordings shows a close match between the onset/offset control of the Pneumatic Bionic Voice and the PAL, with an accuracy of 98.45 ± 0.54%. When implemented in real time, the Pneumatic Bionic Voice prosthesis controller has an average onset/offset delay of 10 milliseconds compared to the PAL. Hence it addresses a major disadvantage of previous electronic voice prostheses, including the myoelectric Bionic Voice, in meeting the short time frames of controlling voice onset and offset in continuous speech.
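
    The paper's respiration-driven controller is not reproduced here; as a stand-in, a minimal energy-threshold onset/offset detector over 10 ms frames (matching the reported control delay) might look like:

```python
import numpy as np

def onset_offset(signal, fs, frame_ms=10, threshold=0.01):
    """Per-frame voice activity: True while mean frame energy exceeds the
    threshold. The 10 ms frame matches the ~10 ms control delay reported
    for the prosthesis; the threshold value is an assumption."""
    n = int(fs * frame_ms / 1000)
    frames = signal[: len(signal) // n * n].reshape(-1, n)
    return (frames ** 2).mean(axis=1) > threshold

fs = 8000
t = np.arange(fs) / fs
sig = np.where((t > 0.3) & (t < 0.7), np.sin(2 * np.pi * 150 * t), 0.0)
activity = onset_offset(sig, fs)   # False ... True ... False
```

    Rising and falling edges of `activity` would trigger the voice source on and off; the real device derives this decision from respiratory pressure rather than from the acoustic signal itself.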

  14. A pneumatic Bionic Voice prosthesis-Pre-clinical trials of controlling the voice onset and offset.

    Science.gov (United States)

    Ahmadi, Farzaneh; Noorian, Farzad; Novakovic, Daniel; van Schaik, André

    2018-01-01

    Despite emergent progress in many fields of bionics, a functional Bionic Voice prosthesis for laryngectomy patients (larynx amputees) has not yet been achieved, leading to a lifetime of vocal disability for these patients. This study introduces a novel framework of Pneumatic Bionic Voice Prostheses as an electronic adaptation of the Pneumatic Artificial Larynx (PAL) device. The PAL is a non-invasive mechanical voice source, driven exclusively by respiration with an exceptionally high voice quality, comparable to the existing gold standard of Tracheoesophageal (TE) voice prosthesis. Following PAL design closely as the reference, Pneumatic Bionic Voice Prostheses seem to have a strong potential to substitute the existing gold standard by generating a similar voice quality while remaining non-invasive and non-surgical. This paper designs the first Pneumatic Bionic Voice prosthesis and evaluates its onset and offset control against the PAL device through pre-clinical trials on one laryngectomy patient. The evaluation on a database of more than five hours of continuous/isolated speech recordings shows a close match between the onset/offset control of the Pneumatic Bionic Voice and the PAL with an accuracy of 98.45 ±0.54%. When implemented in real-time, the Pneumatic Bionic Voice prosthesis controller has an average onset/offset delay of 10 milliseconds compared to the PAL. Hence it addresses a major disadvantage of previous electronic voice prostheses, including myoelectric Bionic Voice, in meeting the short time-frames of controlling the onset/offset of the voice in continuous speech.

  15. A pneumatic Bionic Voice prosthesis—Pre-clinical trials of controlling the voice onset and offset

    Science.gov (United States)

    Noorian, Farzad; Novakovic, Daniel; van Schaik, André

    2018-01-01

    Despite emergent progress in many fields of bionics, a functional Bionic Voice prosthesis for laryngectomy patients (larynx amputees) has not yet been achieved, leading to a lifetime of vocal disability for these patients. This study introduces a novel framework of Pneumatic Bionic Voice Prostheses as an electronic adaptation of the Pneumatic Artificial Larynx (PAL) device. The PAL is a non-invasive mechanical voice source, driven exclusively by respiration with an exceptionally high voice quality, comparable to the existing gold standard of Tracheoesophageal (TE) voice prosthesis. Following PAL design closely as the reference, Pneumatic Bionic Voice Prostheses seem to have a strong potential to substitute the existing gold standard by generating a similar voice quality while remaining non-invasive and non-surgical. This paper designs the first Pneumatic Bionic Voice prosthesis and evaluates its onset and offset control against the PAL device through pre-clinical trials on one laryngectomy patient. The evaluation on a database of more than five hours of continuous/isolated speech recordings shows a close match between the onset/offset control of the Pneumatic Bionic Voice and the PAL with an accuracy of 98.45 ±0.54%. When implemented in real-time, the Pneumatic Bionic Voice prosthesis controller has an average onset/offset delay of 10 milliseconds compared to the PAL. Hence it addresses a major disadvantage of previous electronic voice prostheses, including myoelectric Bionic Voice, in meeting the short time-frames of controlling the onset/offset of the voice in continuous speech. PMID:29466455

  16. Brain 'talks over' boring quotes: top-down activation of voice-selective areas while listening to monotonous direct speech quotations.

    Science.gov (United States)

    Yao, Bo; Belin, Pascal; Scheepers, Christoph

    2012-04-15

    In human communication, direct speech (e.g., Mary said, "I'm hungry") is perceived as more vivid than indirect speech (e.g., Mary said that she was hungry). This vividness distinction has previously been found to underlie silent reading of quotations: Using functional magnetic resonance imaging (fMRI), we found that direct speech elicited higher brain activity in the temporal voice areas (TVA) of the auditory cortex than indirect speech, consistent with an "inner voice" experience in reading direct speech. Here we show that listening to monotonously spoken direct versus indirect speech quotations also engenders differential TVA activity. This suggests that individuals engage in top-down simulations or imagery of enriched supra-segmental acoustic representations while listening to monotonous direct speech. The findings shed new light on the acoustic nature of the "inner voice" in understanding direct speech. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Hearing and saying. The functional neuro-anatomy of auditory word processing.

    Science.gov (United States)

    Price, C J; Wise, R J; Warburton, E A; Moore, C J; Howard, D; Patterson, K; Frackowiak, R S; Friston, K J

    1996-06-01

    The neural systems involved in hearing and repeating single words were investigated in a series of experiments using PET. Neuropsychological and psycholinguistic studies implicate the involvement of posterior and anterior left perisylvian regions (Wernicke's and Broca's areas). Although previous functional neuroimaging studies have consistently shown activation of Wernicke's area, there has been only variable implication of Broca's area. This study demonstrates that Broca's area is involved in both auditory word perception and repetition but activation is dependent on task (greater during repetition than hearing) and stimulus presentation (greater when hearing words at a slow rate). The peak of frontal activation in response to hearing words is anterior to that associated with repeating words; the former is probably located in Brodmann's area 45, the latter in Brodmann's area 44 and the adjacent precentral sulcus. As Broca's area activation is more subtle and complex than that in Wernicke's area during these tasks, the likelihood of observing it is influenced by both the study design and the image analysis technique employed. As a secondary outcome from the study, the response of bilateral auditory association cortex to 'own voice' during repetition was shown to be the same as when listening to 'other voice' from a prerecorded tape.

  18. Syntactic processing in music and language: Effects of interrupting auditory streams with alternating timbres.

    Science.gov (United States)

    Fiveash, Anna; Thompson, William Forde; Badcock, Nicholas A; McArthur, Genevieve

    2018-07-01

    Music and language both rely on the processing of spectral (pitch, timbre) and temporal (rhythm) information to create structure and meaning from incoming auditory streams. Behavioral results have shown that interrupting a melodic stream with unexpected changes in timbre leads to reduced syntactic processing. Such findings suggest that syntactic processing is conditional on successful streaming of incoming sequential information. The current study used event-related potentials (ERPs) to investigate whether (1) the effect of alternating timbres on syntactic processing is reflected in a reduced brain response to syntactic violations, and (2) the phenomenon is similar for music and language. Participants listened to melodies and sentences with either one timbre (piano or one voice) or three timbres (piano, guitar, and vibraphone, or three different voices). Half the stimuli contained syntactic violations: an out-of-key note in the melodies, and a phrase-structure violation in the sentences. We found smaller ERPs to syntactic violations in music in the three-timbre compared to the one-timbre condition, reflected in a reduced early right anterior negativity (ERAN). A similar but non-significant pattern was observed for language stimuli in both the early left anterior negativity (ELAN) and the left anterior negativity (LAN) ERPs. The results suggest that disruptions to auditory streaming may interfere with syntactic processing, especially for melodic sequences. Copyright © 2018 Elsevier B.V. All rights reserved.
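
The ERAN and LAN effects reported above are measured as difference waves: single-trial epochs are averaged per condition, and the average response to well-formed events is subtracted from the response to violations. A schematic of that computation on synthetic data (sampling rate, time window, and amplitudes are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                              # Hz, hypothetical sampling rate
t = np.arange(-0.1, 0.5, 1 / fs)      # epoch: -100 ms to 500 ms

def simulate_epochs(n_trials, eran_amp):
    """Noisy single-trial epochs with an optional negativity near 200 ms."""
    eran = eran_amp * np.exp(-((t - 0.2) ** 2) / (2 * 0.04 ** 2))
    return eran + rng.normal(scale=2.0, size=(n_trials, t.size))

standard = simulate_epochs(100, eran_amp=0.0)    # in-key / well-formed events
violation = simulate_epochs(100, eran_amp=-3.0)  # syntactic violations

# ERP = average over trials; the difference wave isolates the violation response.
diff_wave = violation.mean(axis=0) - standard.mean(axis=0)
window = (t >= 0.15) & (t <= 0.25)
print(f"mean amplitude 150-250 ms: {diff_wave[window].mean():.2f} (a.u.)")
```

A reduced ERAN, as in the three-timbre condition, would show up as a smaller (less negative) mean amplitude in that window.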

  19. Mindfulness of voices, self-compassion, and secure attachment in relation to the experience of hearing voices.

    Science.gov (United States)

    Dudley, James; Eames, Catrin; Mulligan, John; Fisher, Naomi

    2018-03-01

    Developing compassion towards oneself has been linked to improvement in many areas of psychological well-being, including psychosis. Furthermore, developing a non-judgemental, accepting way of relating to voices is associated with lower levels of distress for people who hear voices. These factors have also been associated with secure attachment. This study explores associations between the constructs of mindfulness of voices, self-compassion, and distress from hearing voices, and how secure attachment style relates to each of these variables. The design was a cross-sectional online survey. One hundred and twenty-eight people (73% female; mean age = 37.5; 87.5% Caucasian) who currently hear voices completed the Self-Compassion Scale, Southampton Mindfulness of Voices Questionnaire, Relationships Questionnaire, and Hamilton Programme for Schizophrenia Voices Questionnaire. Results showed that mindfulness of voices mediated the relationship between self-compassion and severity of voices, and self-compassion mediated the relationship between mindfulness of voices and severity of voices. Self-compassion and mindfulness of voices were significantly positively correlated with each other and negatively correlated with distress and severity of voices. Mindful relating to voices and self-compassion are associated with reduced distress and severity of voices, which supports their proposed potential benefits as therapeutic skills for people experiencing distress from voice hearing. Greater self-compassion and mindfulness of voices were significantly associated with less distress from voices. These findings support theory underlying compassionate mind training. Mindfulness of voices mediated the relationship between self-compassion and distress from voices, indicating a synergistic relationship between the constructs. Although the current findings do not give a direction of causation, consideration is given to the potential impact of mindful and
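
The mediation findings above follow the standard regression logic in which the indirect effect is the product of the X→M path (a) and the M→Y path controlling for X (b). A minimal sketch on simulated data (the variable roles mirror the study, but every number and effect size here is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical standardized scores: X = self-compassion,
# M = mindfulness of voices, Y = distress from voices.
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(scale=0.8, size=n)             # path a
y = -0.6 * m - 0.2 * x + rng.normal(scale=0.8, size=n)  # paths b and c'

def ols(predictors, outcome):
    """Least-squares coefficients (intercept first)."""
    design = np.column_stack([np.ones(len(outcome))] + list(predictors))
    return np.linalg.lstsq(design, outcome, rcond=None)[0]

a = ols([x], m)[1]        # X -> M
b = ols([m, x], y)[1]     # M -> Y, controlling for X
indirect = a * b          # mediated (indirect) effect
print(f"a={a:.2f}  b={b:.2f}  indirect={indirect:.2f}")
```

In practice the indirect effect's significance is usually assessed by bootstrapping rather than read off directly.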

  20. The Effect of Working Memory Training on Auditory Stream Segregation in Auditory Processing Disorders Children

    OpenAIRE

    Abdollah Moossavi; Saeideh Mehrkian; Yones Lotfi; Soghrat Faghih zadeh; Hamed Adjedi

    2015-01-01

    Objectives: This study investigated the efficacy of working memory training for improving working memory capacity and related auditory stream segregation in children with auditory processing disorder. Methods: Fifteen subjects (9-11 years), clinically diagnosed with auditory processing disorder, participated in this non-randomized case-controlled trial. Working memory abilities and auditory stream segregation were evaluated prior to beginning and six weeks after completing the training program...

  1. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    Science.gov (United States)

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Haptic feedback for enhancing realism of walking simulations.

    Science.gov (United States)

    Turchet, Luca; Burelli, Paolo; Serafin, Stefania

    2013-01-01

    In this paper, we describe several experiments whose goal is to evaluate the role of plantar vibrotactile feedback in enhancing the realism of walking experiences in multimodal virtual environments. To achieve this goal we built an interactive and a noninteractive multimodal feedback system. While using the interactive system subjects physically walked, whereas with the noninteractive system locomotion was simulated while subjects sat on a chair. In both configurations subjects were exposed to auditory and audio-visual stimuli presented with and without haptic feedback. Results of the experiments show a clear preference for the simulations enhanced with haptic feedback, indicating that the haptic channel can lead to more realistic experiences in both interactive and noninteractive configurations. The majority of subjects clearly appreciated the added feedback. However, some subjects found the added feedback unpleasant. This might be due, on the one hand, to the limits of the haptic simulation and, on the other hand, to individual differences in the desire to be involved in the simulations. Our findings can be applied to the context of physical navigation in multimodal virtual environments as well as to enhancing the user experience of watching a movie or playing a video game.

  3. Psychological effects of dysphonia in voice professionals.

    Science.gov (United States)

    Salturk, Ziya; Kumral, Tolgar Lutfi; Aydoğdu, Imran; Arslanoğlu, Ahmet; Berkiten, Güler; Yildirim, Güven; Uyar, Yavuz

    2015-08-01

    To evaluate the psychological effects of dysphonia in voice professionals compared to non-voice professionals and in both genders. Cross-sectional analysis. Forty-eight voice professionals and 52 non-voice professionals with dysphonia were included in this study. All participants underwent a complete ear, nose, and throat examination and an evaluation for pathologies that might affect vocal quality. Participants were asked to complete the Turkish versions of the Voice Handicap Index-30 (VHI-30), Perceived Stress Scale (PSS), and the Hospital Anxiety and Depression Scale (HADS). HADS scores were evaluated as HADS-A (anxiety) and HADS-D (depression). Dysphonia status was evaluated perceptually with the grade, roughness, breathiness, asthenia, and strain (GRBAS) scale. The results were compared statistically. Significant differences between the two groups were evident when the VHI-30 and PSS data were compared (P = .00001 for both). However, neither HADS score (HADS-A or HADS-D) differed between groups. An analysis of the scores by sex revealed that females had significantly higher PSS scores (P = .006). The GRBAS scale revealed no difference between groups (P = .819, .931, .803, .655, and .803, respectively). No between-sex differences in the VHI-30 or HADS scores were evident. We found that voice professionals and females experienced more stress and were more dissatisfied with their voices. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
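
Group contrasts like the VHI-30 and PSS comparisons above can be checked distribution-free with a permutation test on the difference of group means. A sketch using invented scores (not the study's data; the function and the values are purely illustrative):

```python
import random
import statistics

def permutation_test(group_a, group_b, n_perm=10_000, seed=42):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    observed = statistics.mean(group_a) - statistics.mean(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the two groups
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_perm

# Invented VHI-30 totals for illustration only.
voice_pros = [44, 52, 38, 61, 47, 55, 49, 58]
controls = [21, 30, 25, 18, 27, 33, 24, 29]
p = permutation_test(voice_pros, controls)
print(p)  # a small p-value indicates the groups differ
```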

  4. Reliability in perceptual analysis of voice quality.

    Science.gov (United States)

    Bele, Irene Velsvik

    2005-12-01

    This study focuses on speaking voice quality in male teachers (n = 35) and male actors (n = 36), who represent untrained and trained voice users, because we wanted to investigate both normal and supranormal voices. Both substantive and methodologic aspects were considered. The study includes a method for perceptual voice evaluation, and a basic issue was rater reliability. A listening group of 10 listeners (7 experienced speech-language therapists and 3 speech-language therapy students) evaluated the voices on 15 vocal characteristics using visual analogue (VA) scales. Two sets of voice signals were investigated: text reading (at 2 loudness levels) and sustained vowel (at 3 levels). The results indicated high interrater reliability for most perceptual characteristics. Both types of voice signals were evaluated reliably, although the reliability for connected speech was somewhat higher than for vowels, especially at the normal loudness level. Experienced listeners tended to be more consistent in their ratings than the student raters. Some vocal characteristics achieved acceptable reliability even with a smaller panel of listeners. The perceptual characteristics grouped into 4 factors reflecting perceptual dimensions.

  5. Muted 'voice': The writing of two groups of postgraduate ...

    African Journals Online (AJOL)

    The purpose of this article is to demonstrate and account for the weak emergence of 'voice' in the writing of students embarking upon their postgraduate studies in Geosciences. The two elements of 'voice' that are emphasised are 'voice' as style of expression and 'voice' as the ability to write distinctly, yet building upon ...

  6. Functional mapping of the primate auditory system.

    Science.gov (United States)

    Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer

    2003-01-24

    Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.

  7. Auditory Modeling for Noisy Speech Recognition

    National Research Council Canada - National Science Library

    2000-01-01

    ... digital filtering for noise cancellation which interfaces to speech recognition software. It uses auditory features in speech recognition training, and provides applications to multilingual spoken language translation...

  8. Human Factors Military Lexicon: Auditory Displays

    National Research Council Canada - National Science Library

    Letowski, Tomasz

    2001-01-01

    .... In addition to definitions specific to auditory displays, speech communication, and audio technology, the lexicon includes several terms unique to military operational environments and human factors...

  9. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
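
The Bland-Altman analysis mentioned above summarizes agreement between two measurement conditions by the mean of the paired differences (bias) and the 95% limits of agreement, bias ± 1.96 × SD of the differences. A minimal computation on made-up paired scores (the data and variable names are hypothetical):

```python
import statistics

def bland_altman(a, b):
    """Bias and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical memory scores via auditory vs. visual presentation.
auditory = [18, 20, 17, 22, 19, 21, 16, 23]
visual = [15, 18, 16, 19, 17, 18, 15, 20]
bias, (lo, hi) = bland_altman(auditory, visual)
print(f"bias={bias:.2f}, LoA=({lo:.2f}, {hi:.2f})")
```

Narrow limits of agreement (relative to the scale of the scores) indicate good agreement between the two modalities.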

  10. Comparison between treadmill training with rhythmic auditory stimulation and ground walking with rhythmic auditory stimulation on gait ability in chronic stroke patients: A pilot study.

    Science.gov (United States)

    Park, Jin; Park, So-yeon; Kim, Yong-wook; Woo, Youngkeun

    2015-01-01

    Treadmill training is generally a very effective intervention, and rhythmic auditory stimulation is designed to provide feedback during gait training in stroke patients. The purpose of this study was to compare the gait abilities of chronic stroke patients following either treadmill walking training with rhythmic auditory stimulation (TRAS) or overground walking training with rhythmic auditory stimulation (ORAS). Nineteen subjects were divided into two groups: a TRAS group (9 subjects) and an ORAS group (10 subjects). Temporal and spatial gait parameters and motor recovery ability were measured before and after the training period. Gait ability was measured by the Biodex Gait Trainer treadmill system, the Timed Up and Go test (TUG), 6-meter walking distance (6MWD), and Functional Gait Assessment (FGA). After the training period, the TRAS group showed a significant improvement in walking speed, step cycle, step length of the unaffected limb, coefficient of variation, 6MWD, and FGA when compared to the ORAS group (p < 0.05). Treadmill walking training with rhythmic auditory stimulation may be useful for rehabilitation of patients with chronic stroke.
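
Among the gait outcomes, the coefficient of variation expresses step-time variability as a percentage of the mean step time; lower values indicate a more regular gait. A toy computation (the step times are invented):

```python
import statistics

def coefficient_of_variation(step_times):
    """Step-to-step variability as a percentage of the mean step time."""
    return 100 * statistics.stdev(step_times) / statistics.mean(step_times)

pre = [0.72, 0.58, 0.81, 0.60, 0.77, 0.55]   # s, irregular steps
post = [0.66, 0.64, 0.67, 0.63, 0.65, 0.66]  # s, after training
cv_pre = coefficient_of_variation(pre)
cv_post = coefficient_of_variation(post)
print(f"CV pre: {cv_pre:.1f}%  post: {cv_post:.1f}%")
```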

  11. Feedback and efficient behavior.

    Directory of Open Access Journals (Sweden)

    Sandro Casal

    Full Text Available Feedback is an effective tool for promoting efficient behavior: it enhances individuals' awareness of choice consequences in complex settings. Our study aims to isolate the mechanisms underlying the effects of feedback on achieving efficient behavior in a controlled environment. We design a laboratory experiment in which individuals are not aware of the consequences of different alternatives and, thus, cannot easily identify the efficient ones. We introduce feedback as a mechanism to enhance the awareness of consequences and to stimulate exploration and search for efficient alternatives. We assess the efficacy of three different types of intervention: provision of social information, manipulation of the frequency, and framing of feedback. We find that feedback is most effective when it is framed in terms of losses, that it reduces efficiency when it includes information about inefficient peers' behavior, and that a lower frequency of feedback does not disrupt efficiency. By quantifying the effect of different types of feedback, our study suggests useful insights for policymakers.

  12. Feedback - fra et elevperspektiv

    DEFF Research Database (Denmark)

    Petersen, Benedikte Vilslev; Pedersen, Bent Sortkær

    In the literature, feedback is highlighted again and again as one of the most effective means of promoting students' achievement in school (Hattie and Timperley, 2007). Other studies, however, note that feedback is not always conducive to learning, and some even show that feedback can have a negative effect on achievement (Kluger & DeNisi, 1996). In the attempt to explain how and why feedback works (differently), several dimensions of and conditions surrounding feedback have been examined (see, among others, Black and Wiliam, 1998; Hattie and Timperley, 2007; Shute, 2008). Yet few studies examine how feedback is experienced from a student perspective (Ruiz-Primo and Li, 2013). At the same time, the feedback literature lacks qualitative studies that get close to the phenomenon of feedback as it appears in the classroom (Ruiz-Primo and Li, 2013) in natural settings (Black and Wiliam, 1998), and how...

  13. Voicing children's critique and utopias

    DEFF Research Database (Denmark)

    Husted, Mia; Lind, Unni

    This study engages children to raise and render visible their own critique and wishes related to their everyday life in daycare. Research on how and why to engage children as participants in research and in institutional developments addresses overall interests in democratization and humanization that can be traced back to strategies for Nordic welfare developments and the Conventions on Children's Rights. The theoretical and methodological framework follows the lines of how to form and learn democracy of Lewin (1948) and Dewey (1916). The study is carried out as action research involving 50 children at age three to five ... Themes raised by the children included ... and restrictions, calls for aesthetics and sensuality, longings for home and parents, and longings for better social relations. Making children's voices visible allows preschool teachers to reflect children's knowledge and life-world in pedagogical practice. Keywords: empowerment and participation, action research ...

  14. His Master’s Voice?

    DEFF Research Database (Denmark)

    Sörbom, Adrienne; Garsten, Christina

    This paper departs from an interest in the involvement of business leaders in the sphere of politics, in the broad sense. Many global business leaders today do much more than engage narrowly in their own corporation and its search for profit. At a general level, we are seeing a proliferation ... as political. What is the role of business in the World Economic Forum, and how do business corporations advance their interests through the WEF? The results show that corporations find a strategically positioned amplifier for their non-market interests in the WEF. The WEF functions to enhance and gain leverage for their ideas and priorities in a highly selective and resourceful environment. In the long run, both the market priorities and the political interests of business may be served by engagement in the WEF. However, the WEF cannot only be conceived as the extended voice of corporations. The WEF ...

  15. Giving the Customer a Voice

    DEFF Research Database (Denmark)

    Van der Hoven, Christopher; Michea, Adela; Varnes, Claus

    ... the voice of the customer (VoC) through market research is well documented (Davis, 1993; Mullins and Sutherland, 1998; Cooper et al., 2002; Flint, 2002; Davilla et al., 2006; Cooper and Edgett, 2008; Cooper and Dreher, 2010; Goffin and Mitchell, 2010). However, not all research methods are well received; for example, there are studies that have strongly criticized focus groups, interviews and surveys (e.g. Ulwick, 2002; Goffin et al, 2010; Sandberg, 2002). In particular, a point is made that "…traditional market research and development approaches proved to be particularly ill-suited to breakthrough products" (Deszca et al, 2010, p. 613). Therefore, in situations where traditional techniques (interviews and focus groups) are ineffective, the question is which market research techniques are appropriate, particularly for developing breakthrough products. To investigate this, an attempt was made to access ...

  16. Dangertalk: Voices of abortion providers.

    Science.gov (United States)

    Martin, Lisa A; Hassinger, Jane A; Debbink, Michelle; Harris, Lisa H

    2017-07-01

    Researchers have described the difficulties of doing abortion work, including the psychosocial costs to individual providers. Some have discussed the self-censorship in which providers engage to protect themselves and the pro-choice movement. However, few have examined the costs of this self-censorship to public discourse and social movements in the US. Using qualitative data collected during abortion providers' discussions of their work, we explore the tensions between their narratives and pro-choice discourse, and examine the types of stories that are routinely silenced - narratives we name "dangertalk". Using these data, we theorize about the ways in which giving voice to these tensions might transform current abortion discourse by disrupting false dichotomies and better reflecting the complex realities of abortion. We present a conceptual model for dangertalk in abortion discourse, connecting it to functions of dangertalk in social movements more broadly. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Mediatization: a concept, multiple voices

    Directory of Open Access Journals (Sweden)

    Pedro Gilberto GOMES

    2016-12-01

    Full Text Available Mediatization has increasingly become a key, fundamental, essential concept for describing the present situation and the history of media and communicative change. Media have thus become part of a whole; one can no longer see them as a separate sphere. In this perspective, mediatization is used as a concept to describe the process of expansion of the different technical means of communication and to consider the interrelationships between communicative change, media, and sociocultural change. However, although many researchers use the concept of mediatization, each gives it the meaning that best suits their needs. Thus, the concept of mediatization is treated with multiple voices. This paper discusses this problem and presents a preliminary position on the matter.

  18. Robust matching for voice recognition

    Science.gov (United States)

    Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.

    1994-10-01

    This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.
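
The frame pruning and channel equalization steps can be illustrated in a highly simplified form (this is a sketch of the general ideas, not the authors' algorithm): discard low-energy frames, subtract each recording's mean feature vector to remove a fixed channel offset, and compare recordings by the similarity of their dominant feature directions. All data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

def make_recording(voice_dir, channel, n=200, dim=12):
    """Frames = time-varying speaker pattern + fixed channel offset + noise,
    with some near-silent frames mixed in (to be pruned later)."""
    coeff = np.sin(np.linspace(0, 8 * np.pi, n))
    frames = np.outer(coeff, voice_dir) + channel + rng.normal(scale=0.1, size=(n, dim))
    frames[::10] = rng.normal(scale=0.01, size=(n // 10, dim))  # "silence"
    return frames

def voiceprint(frames):
    # Frame pruning: discard low-energy (silent) frames.
    energy = (frames ** 2).sum(axis=1)
    kept = frames[energy > 0.1 * energy.mean()]
    # Channel equalization: subtract the per-recording mean frame.
    equalized = kept - kept.mean(axis=0)
    # Use the dominant singular direction as a crude voiceprint.
    _, _, vt = np.linalg.svd(equalized, full_matrices=False)
    return vt[0]

def match_score(rec_a, rec_b):
    """Sign-invariant cosine similarity between two voiceprints."""
    return abs(voiceprint(rec_a) @ voiceprint(rec_b))

dim = 12
speaker_a, speaker_b = np.eye(dim)[0], np.eye(dim)[1]
channel_1, channel_2 = np.full(dim, 0.3), np.full(dim, -0.2)

same = match_score(make_recording(speaker_a, channel_1),
                   make_recording(speaker_a, channel_2))
diff = match_score(make_recording(speaker_a, channel_1),
                   make_recording(speaker_b, channel_2))
print(f"same-speaker score: {same:.3f}, different-speaker score: {diff:.3f}")
```

Because the channel offset is removed before scoring, the same-speaker comparison scores high even though the two recordings pass through different "channels".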

  19. Disability: a voice in Australian bioethics?

    Science.gov (United States)

    Newell, Christopher

    2003-06-01

    The rise of research and advocacy over the years to establish a disability voice in Australia with regard to bioethical issues is explored. This includes an analysis of some of the political processes and engagement in mainstream bioethical debate. An understanding of the politics of rejected knowledge is vital to understanding the muted disability voices in Australian bioethics and public policy. It is also suggested that the voices of those who are marginalised or oppressed in society, such as people with disability, have a particular contribution to make in fostering critical bioethics.

  20. Unfamiliar voice identification: Effect of post-event information on accuracy and voice ratings

    Directory of Open Access Journals (Sweden)

    Harriet Mary Jessica Smith

    2014-04-01

    Full Text Available This study addressed the effect of misleading post-event information (PEI on voice ratings, identification accuracy, and confidence, as well as the link between verbal recall and accuracy. Participants listened to a dialogue between male and female targets, then read misleading information about voice pitch. Participants engaged in verbal recall, rated voices on a feature checklist, and made a lineup decision. Accuracy rates were low, especially on target-absent lineups. Confidence and accuracy were unrelated, but the number of facts recalled about the voice predicted later lineup accuracy. There was a main effect of misinformation on ratings of target voice pitch, but there was no effect on identification accuracy or confidence ratings. As voice lineup evidence from earwitnesses is used in courts, the findings have potential applied relevance.