WorldWideScience

Sample records for voice auditory feedback

  1. Analysis of the Auditory Feedback and Phonation in Normal Voices.

    Science.gov (United States)

    Arbeiter, Mareike; Petermann, Simon; Hoppe, Ulrich; Bohr, Christopher; Doellinger, Michael; Ziethe, Anke

    2018-02-01

    The aim of this study was to investigate auditory feedback mechanisms and voice quality during phonation in response to a spontaneous pitch change in the auditory feedback. Does the pitch shift reflex (PSR) change voice pitch and voice quality? Quantitative and qualitative voice characteristics were analyzed during the PSR. Twenty-eight healthy subjects underwent transnasal high-speed videoendoscopy (HSV) at 8000 fps during sustained phonation of [a]. While phonating, the subjects heard their own voice pitched up by 700 cents (the interval of a fifth) for 300 milliseconds in their auditory feedback. Electroencephalography (EEG), electroglottography (EGG), the acoustic voice signal, and HSV were analyzed to statistically compare feedback mechanisms between the pitch-shifted and unshifted conditions of the phonation paradigm. Furthermore, quantitative and qualitative voice characteristics were analyzed. The PSR was successfully detected in the signals of all experimental tools (EEG, EGG, acoustic voice signal, HSV). A significant increase in the perturbation measures and in the values of the acoustic parameters was observed during the PSR, especially for the audio signal. The auditory feedback mechanism thus appears to control not only voice pitch but also aspects of voice quality.
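
    The size of such a shift maps to a frequency ratio through the standard cents formula, ratio = 2^(cents/1200); the sketch below (an illustrative helper, not part of the study) confirms that +700 cents is almost exactly the 3:2 ratio of a perfect fifth.

```python
def cents_to_ratio(cents: float) -> float:
    """Convert a pitch shift in cents to a frequency ratio."""
    return 2.0 ** (cents / 1200.0)

# +700 cents is nearly a just perfect fifth (3/2 = 1.5)
print(cents_to_ratio(700.0))            # ~1.4983
# A 200 Hz voice fed back at +700 cents is heard at ~299.7 Hz
print(200.0 * cents_to_ratio(700.0))
```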

  2. Sensory Processing: Advances in Understanding Structure and Function of Pitch-Shifted Auditory Feedback in Voice Control

    OpenAIRE

    Charles R Larson; Donald A Robin

    2016-01-01

    The pitch-shift paradigm has become a widely used method for studying the role of voice pitch auditory feedback in voice control. This paradigm introduces small, brief pitch shifts in voice auditory feedback to vocalizing subjects. The perturbations trigger a reflexive mechanism that counteracts the change in pitch. The underlying mechanisms of the vocal responses are thought to reflect a negative feedback control system that is similar to constructs developed to explain other forms of motor ...

  3. Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback

    Directory of Open Access Journals (Sweden)

    Larson, Charles R

    2011-06-01

    Background: The motor-driven predictions about expected sensory feedback (efference copies) have been proposed to play an important role in recognition of sensory consequences of self-produced motor actions. In the auditory system, this effect was suggested to result in suppression of sensory neural responses to self-produced voices that are predicted by the efference copies during vocal production in comparison with passive listening to the playback of the identical self-vocalizations. In the present study, event-related potentials (ERPs) were recorded in response to upward pitch shift stimuli (PSS) with five different magnitudes (0, +50, +100, +200 and +400 cents) at voice onset during active vocal production and passive listening to the playback. Results: The suppression of the N1 component during vocal production was largest for unaltered voice feedback (PSS: 0 cents), became smaller as the magnitude of PSS increased to 200 cents, and was almost completely eliminated in response to 400 cents stimuli. Conclusions: Findings of the present study suggest that the brain utilizes the motor predictions (efference copies) to determine the source of incoming stimuli and maximally suppresses the auditory responses to unaltered feedback of self-vocalizations. The reduction of suppression for 50, 100 and 200 cents and its elimination for 400 cents pitch-shifted voice auditory feedback support the idea that motor-driven suppression of voice feedback leads to distinctly different sensory neural processing of self vs. non-self vocalizations. This characteristic may enable the audio-vocal system to more effectively detect and correct for unexpected errors in the feedback of self-produced voice pitch compared with externally-generated sounds.

  4. Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback.

    Science.gov (United States)

    Behroozmand, Roozbeh; Larson, Charles R

    2011-06-06

    The motor-driven predictions about expected sensory feedback (efference copies) have been proposed to play an important role in recognition of sensory consequences of self-produced motor actions. In the auditory system, this effect was suggested to result in suppression of sensory neural responses to self-produced voices that are predicted by the efference copies during vocal production in comparison with passive listening to the playback of the identical self-vocalizations. In the present study, event-related potentials (ERPs) were recorded in response to upward pitch shift stimuli (PSS) with five different magnitudes (0, +50, +100, +200 and +400 cents) at voice onset during active vocal production and passive listening to the playback. Results indicated that the suppression of the N1 component during vocal production was largest for unaltered voice feedback (PSS: 0 cents), became smaller as the magnitude of PSS increased to 200 cents, and was almost completely eliminated in response to 400 cents stimuli. Findings of the present study suggest that the brain utilizes the motor predictions (efference copies) to determine the source of incoming stimuli and maximally suppresses the auditory responses to unaltered feedback of self-vocalizations. The reduction of suppression for 50, 100 and 200 cents and its elimination for 400 cents pitch-shifted voice auditory feedback support the idea that motor-driven suppression of voice feedback leads to distinctly different sensory neural processing of self vs. non-self vocalizations. This characteristic may enable the audio-vocal system to more effectively detect and correct for unexpected errors in the feedback of self-produced voice pitch compared with externally-generated sounds.
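
    A common way to quantify the N1 suppression reported here is to average the EEG epochs per condition and take the most negative deflection in an early post-stimulus window. The sketch below is a minimal illustration of that measure on a trials-by-samples array; the sampling rate, time window, and function names are assumptions, not details from the paper.

```python
import numpy as np

FS = 1000  # assumed EEG sampling rate, Hz

def n1_amplitude(epochs: np.ndarray, onset_idx: int) -> float:
    """N1 amplitude: most negative point 80-150 ms after stimulus onset
    in the average ERP (epochs: trials x samples)."""
    erp = epochs.mean(axis=0)
    lo = onset_idx + int(0.080 * FS)
    hi = onset_idx + int(0.150 * FS)
    return float(erp[lo:hi].min())

def n1_suppression(n1_speaking: float, n1_listening: float) -> float:
    """Positive when the speaking N1 is smaller (less negative) than the
    listening N1, i.e., when vocalization suppresses the response."""
    return n1_speaking - n1_listening
```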

  5. Effects of voice harmonic complexity on ERP responses to pitch-shifted auditory feedback.

    Science.gov (United States)

    Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R

    2011-12-01

    The present study investigated the neural mechanisms of voice pitch control for different levels of harmonic complexity in the auditory feedback. Event-related potentials (ERPs) were recorded in response to +200 cents pitch perturbations in the auditory feedback of self-produced natural human vocalizations, complex and pure tone stimuli during active vocalization and passive listening conditions. During active vocal production, ERP amplitudes were largest in response to pitch shifts in the natural voice, moderately large for non-voice complex stimuli and smallest for the pure tones. However, during passive listening, neural responses were equally large for pitch shifts in voice and non-voice complex stimuli but still larger than that for pure tones. These findings suggest that pitch change detection is facilitated for spectrally rich sounds such as natural human voice and non-voice complex stimuli compared with pure tones. Vocalization-induced increase in neural responses for voice feedback suggests that sensory processing of naturally-produced complex sounds such as human voice is enhanced by means of motor-driven mechanisms (e.g. efference copies) during vocal production. This enhancement may enable the audio-vocal system to more effectively detect and correct for vocal errors in the feedback of natural human vocalizations to maintain an intended vocal output for speaking. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  6. Sensory Processing: Advances in Understanding Structure and Function of Pitch-Shifted Auditory Feedback in Voice Control

    Directory of Open Access Journals (Sweden)

    Charles R Larson

    2016-02-01

    The pitch-shift paradigm has become a widely used method for studying the role of voice pitch auditory feedback in voice control. This paradigm introduces small, brief pitch shifts in voice auditory feedback to vocalizing subjects. The perturbations trigger a reflexive mechanism that counteracts the change in pitch. The underlying mechanisms of the vocal responses are thought to reflect a negative feedback control system that is similar to constructs developed to explain other forms of motor control. Another use of this technique requires subjects to voluntarily change the pitch of their voice when they hear a pitch shift stimulus. Under these conditions, short latency responses are produced that change voice pitch to match that of the stimulus. The pitch-shift technique has been used with magnetoencephalography (MEG) and electroencephalography (EEG) recordings, and has shown that at vocal onset there is normally a suppression of neural activity related to vocalization. However, if a pitch shift is also presented at voice onset, there is a cancellation of this suppression, which has been interpreted to mean that one way in which a person distinguishes self-vocalization from the vocalization of others is by a comparison of the intended voice and the actual voice. Studies of the pitch shift reflex in the fMRI environment show that the superior temporal gyrus (STG) plays an important role in the process of controlling voice F0 based on auditory feedback. Additional studies using fMRI for effective connectivity modeling show that the left and right STG play critical roles in correcting for an error in voice production. While both the left and right STG are involved in this process, a feedback loop develops between the left and right STG during perturbations, in which the left-to-right connection becomes stronger and a new negative right-to-left connection emerges, along with the emergence of other feedback loops within the cortical network tested.
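
    The negative feedback construct described above can be illustrated with a toy discrete-time loop in which the system hears its own F0 shifted by some amount and opposes a fraction of the perceived error on each step. The gain, step count, and full convergence are illustrative choices, not claims about human behavior (real compensatory responses are typically only partial).

```python
import math

def pitch_feedback_loop(f0_target: float, shift_cents: float,
                        gain: float = 0.3, steps: int = 10) -> list:
    """Toy negative-feedback controller for voice F0 under pitch-shifted
    auditory feedback: each step corrects a fraction of the heard error."""
    produced = f0_target
    trace = []
    for _ in range(steps):
        heard = produced * 2.0 ** (shift_cents / 1200.0)   # perturbed feedback
        error_cents = 1200.0 * math.log2(heard / f0_target)
        produced *= 2.0 ** (-gain * error_cents / 1200.0)  # opposing change
        trace.append(produced)
    return trace

# A +100 cent shift drives produced F0 down toward ~188.8 Hz in this toy model
print(pitch_feedback_loop(200.0, +100.0))
```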

  7. Reliance on auditory feedback in children with childhood apraxia of speech.

    Science.gov (United States)

    Iuzzini-Seigel, Jenya; Hogan, Tiffany P; Guarino, Anthony J; Green, Jordan R

    2015-01-01

    Children with childhood apraxia of speech (CAS) have been hypothesized to continuously monitor their speech through auditory feedback to minimize speech errors. We used an auditory masking paradigm to determine the effect of attenuating auditory feedback on speech in 30 children: 9 with CAS, 10 with speech delay, and 11 with typical development. The masking only affected the speech of children with CAS as measured by voice onset time and vowel space area. These findings provide preliminary support for greater reliance on auditory feedback among children with CAS. Readers of this article should be able to (i) describe the motivation for investigating the role of auditory feedback in children with CAS; (ii) report the effects of feedback attenuation on speech production in children with CAS, speech delay, and typical development, and (iii) understand how the current findings may support a feedforward program deficit in children with CAS. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal motor sensation and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific DAF delay times (0, 66, 133 ms) for three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between the motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by averaging delay times in the adaptation phase. CONCLUSIONS: These findings suggest vocalization is finely tuned by a temporal recalibration mechanism that acutely monitors the integration of temporal delays between motor sensation and vocal sound.
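
    A fixed feedback delay like the 0, 66, and 133 ms conditions used here can be produced by reading the microphone signal back out of a buffer shifted by a whole number of samples. The sketch below shows the arithmetic; the sample rate, signal, and function names are illustrative, not the authors' apparatus.

```python
import numpy as np

def delayed_feedback(signal: np.ndarray, fs: int, delay_ms: float) -> np.ndarray:
    """Return the signal delayed by delay_ms, zero-padded at the start,
    as a DAF system would play back the talker's voice."""
    d = int(round(fs * delay_ms / 1000.0))
    out = np.zeros_like(signal)
    if d < len(signal):
        out[d:] = signal[: len(signal) - d]
    return out

fs = 44100
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 200 * t)       # stand-in for a 200 Hz voice
daf_66 = delayed_feedback(voice, fs, 66)  # one of the adaptation conditions
```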

  9. Auditory feedback of one’s own voice is used for high-level semantic monitoring: the self-comprehension hypothesis

    Directory of Open Access Journals (Sweden)

    Andreas Lind

    2014-03-01

    What would it be like if we said one thing, and heard ourselves saying something else? Would we notice something was wrong? Or would we believe we said the thing we heard? Is feedback of our own speech only used to detect errors, or does it also help to specify the meaning of what we say? Comparator models of self-monitoring favor the first alternative, and hold that our sense of agency is given by the comparison between intentions and outcomes, while inferential models argue that agency is a more fluent construct, dependent on contextual inferences about the most likely cause of an action. In this paper, we present a theory about the use of feedback during speech. Specifically, we discuss inferential models of speech production that question the standard comparator assumption that the meaning of our utterances is fully specified before articulation. We then argue that auditory feedback provides speakers with a channel for high-level, semantic self-comprehension. In support of this we discuss results using a method we recently developed called Real-time Speech Exchange (RSE). In our first study using RSE (Lind et al., submitted) participants were fitted with headsets and performed a computerized Stroop task. We surreptitiously recorded words they said, and later in the test we played them back at the exact same time that the participants uttered something else, while blocking the actual feedback of their voice. Thus, participants said one thing, but heard themselves saying something else. The results showed that when timing conditions were ideal, more than two thirds of the manipulations went undetected. Crucially, in a large proportion of the non-detected manipulated trials, the inserted words were experienced as self-produced by the participants. This indicates that our sense of agency for speech has a strong inferential component, and that auditory feedback of our own voice acts as a pathway for semantic monitoring.

  10. Multivariate sensitivity to voice during auditory categorization.

    Science.gov (United States)

    Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-09-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. Copyright © 2015 the American Physiological Society.

  11. Adaptation to Delayed Speech Feedback Induces Temporal Recalibration between Vocal Sensory and Auditory Modalities

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    2011-10-01

    We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal motor sensation and voice sounds under DAF with an adaptation technique. Participants read sentences with specific DAF delay times (0, 30, 75, 120 ms) for three minutes to induce ‘Lag Adaptation’. After the adaptation, they judged the simultaneity between the motor sensation and the vocal sound given as feedback while producing a simple voice sound rather than speech. We found that speech production with lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. These findings suggest vocalization is finely tuned by a temporal recalibration mechanism that acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  12. Different auditory feedback control for echolocation and communication in horseshoe bats.

    Directory of Open Access Journals (Sweden)

    Ying Liu

    Auditory feedback from the animal's own voice is essential during bat echolocation: to optimize signal detection, bats continuously adjust various call parameters in response to changing echo signals. Auditory feedback seems also necessary for controlling many bat communication calls, although it remains unclear how auditory feedback control differs in echolocation and communication. We tackled this question by analyzing echolocation and communication in greater horseshoe bats, whose echolocation pulses are dominated by a constant frequency component that matches the frequency range they hear best. To maintain echoes within this "auditory fovea", horseshoe bats constantly adjust their echolocation call frequency depending on the frequency of the returning echo signal. This Doppler-shift compensation (DSC) behavior represents one of the most precise forms of sensory-motor feedback known. We examined the variability of echolocation pulses emitted at rest (resting frequencies, RFs) and of one type of communication signal which resembles an echolocation pulse but is much shorter (short constant frequency communication calls, SCFs), produced only during social interactions. We found that while RFs varied from day to day, corroborating earlier studies in other constant frequency bats, SCF frequencies remained unchanged. In addition, RFs overlapped for some bats whereas SCF frequencies were always distinctly different. This indicates that auditory feedback during echolocation changed with varying RFs but remained constant or may have been absent during emission of SCF calls for communication. This fundamentally different feedback mechanism for echolocation and communication may have enabled these bats to use SCF calls for individual recognition, whereas they adjusted RF calls to accommodate the daily shifts of their auditory fovea.
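
    Doppler-shift compensation can be illustrated with the classical two-way Doppler relation: for a bat flying at speed v toward a target, the echo returns at f_echo = f_call * (c + v) / (c - v), so keeping the echo on the fovea requires lowering the call by the inverse factor. A minimal sketch follows; the roughly 83 kHz fovea and 5 m/s flight speed are illustrative values, not data from the study.

```python
C = 343.0  # speed of sound in air, m/s

def echo_frequency(f_call: float, v: float, c: float = C) -> float:
    """Two-way Doppler-shifted echo frequency for flight speed v toward a target."""
    return f_call * (c + v) / (c - v)

def compensated_call(f_fovea: float, v: float, c: float = C) -> float:
    """Call frequency that places the returning echo at the auditory fovea."""
    return f_fovea * (c - v) / (c + v)

f_call = compensated_call(83_000.0, 5.0)
print(f_call)                        # ~80.6 kHz emitted ...
print(echo_frequency(f_call, 5.0))   # ... returns at ~83 kHz, on the fovea
```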

  13. Perceiving a stranger's voice as being one's own: a 'rubber voice' illusion?

    Directory of Open Access Journals (Sweden)

    Zane Z Zheng

    2011-04-01

    We describe an illusion in which a stranger's voice, when presented as the auditory concomitant of a participant's own speech, is perceived as a modified version of their own voice. When the congruence between utterance and feedback breaks down, the illusion is also broken. Compared to a baseline condition in which participants heard their own voice as feedback, hearing a stranger's voice induced robust changes in the fundamental frequency (F0) of their production. Moreover, the shift in F0 appears to be feedback dependent, since shift patterns depended reliably on the relationship between the participant's own F0 and the stranger-voice F0. The shift in F0 was evident both when the illusion was present and after it was broken, suggesting that auditory feedback from production may be used separately for self-recognition and for vocal motor control. Our findings indicate that self-recognition of voices, like other body attributes, is malleable and context dependent.

  14. Tactile feedback improves auditory spatial localization

    Directory of Open Access Journals (Sweden)

    Monica Gori

    2014-10-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three-sound sequence was spatially closer to the first or the third sound. The tactile-feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal-feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially coherent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.

  15. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    Science.gov (United States)

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: the Stuttering Group with Auditory Processing Disorders (SGAPD), 10 individuals with central auditory processing disorders, and the Stuttering Group (SG), 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to introduce a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. DAF caused a statistically significant reduction in the SG: in the frequency score of stuttering-like disfluencies in the Stuttering Severity Instrument analysis, in the number of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. Delayed auditory feedback did not cause statistically significant effects on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter thus differed between the groups: fluency improved only in the individuals without auditory processing disorder.
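
    The two tests named above are standard nonparametric comparisons. A minimal sketch with made-up disfluency counts (not the study's data) shows how the paired intragroup and independent intergroup analyses would be run with scipy.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

rng = np.random.default_rng(0)
# Hypothetical stuttering-like disfluency counts for 10 children per group
sg_naf = rng.poisson(12, 10) + 1                     # SG, non-altered feedback
sg_daf = np.maximum(sg_naf - rng.poisson(3, 10), 0)  # SG, delayed feedback

# Intragroup: paired Wilcoxon signed-rank test, NAF vs DAF within the SG
print(wilcoxon(sg_naf, sg_daf))

# Intergroup: Mann-Whitney test comparing DAF scores across the two groups
sgapd_daf = rng.poisson(12, 10)
print(mannwhitneyu(sg_daf, sgapd_daf))
```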

  16. The self or the voice? Relative contributions of self-esteem and voice appraisal in persistent auditory hallucinations.

    Science.gov (United States)

    Fannon, Dominic; Hayward, Peter; Thompson, Neil; Green, Nicola; Surguladze, Simon; Wykes, Til

    2009-07-01

    Persistent auditory hallucinations are common, disabling and difficult to treat. Cognitive behavioural therapy is recommended in their treatment though there is limited empirical evidence of the role of cognitive factors in the formation and persistence of voices. Low self-esteem is thought to play a causal and maintaining role in a range of clinical disorders, particularly depression, which is prevalent and disabling in schizophrenia. It was hypothesized that low self-esteem is prominent in, and contributes to, depression in voice hearers. Beliefs about persistent auditory hallucinations were investigated in 82 patients using the Beliefs About Voices Questionnaire--revised in a cross-sectional design. Self-esteem and depression were assessed using standardized measures. Depression and low self-esteem were prominent as were beliefs about the omnipotence and malevolence of auditory hallucinations. Beliefs about the uncontrollability and dominance of auditory hallucinations and low self-esteem were significantly correlated with depression. Low self-esteem did not mediate the effect of beliefs about auditory hallucinations--both acted independently to contribute to depression in this sample of patients with schizophrenia and persistent auditory hallucinations. Low self-esteem is of fundamental importance to the understanding of affective disturbance in voice hearers. Therapeutic interventions need to address both the appraisal of self and hallucinations in schizophrenia. Measures which ameliorate low self-esteem can be expected to improve depressed mood in this patient group. Further elucidation of the mechanisms involved can strengthen existing models of positive psychotic symptoms and provide targets for more effective treatments.

  17. [Design of standard voice sample text for subjective auditory perceptual evaluation of voice disorders].

    Science.gov (United States)

    Li, Jin-rang; Sun, Yan-yan; Xu, Wen

    2010-09-01

    To design a speech voice sample text containing all phonemes in Mandarin for subjective auditory perceptual evaluation of voice disorders. The design principles were: the short text should include the 21 initials and 39 finals, so as to cover all the phonemes in Mandarin, and it should also be meaningful. A short text was composed. It contained 155 Chinese words and included 21 initials and 38 finals (the final ê was not included because it is rarely used in Mandarin). The text also covered 17 light tones and one "Erhua". The constituent ratios of the initials and finals presented in this short text were statistically similar to those in Mandarin according to the method of similarity between sample and population (r = 0.742), whereas the ratios of the tones were statistically not similar to those in Mandarin (r = 0.731, P > 0.05). A speech voice sample text with all the phonemes in Mandarin was thus produced, with constituent ratios of initials and finals similar to those in Mandarin. Its value for subjective auditory perceptual evaluation of voice disorders needs further study.

  18. Auditory hallucinations: A review of the ERC "VOICE" project.

    Science.gov (United States)

    Hugdahl, Kenneth

    2015-06-22

    In this invited review I provide a selective overview of recent research on brain mechanisms and cognitive processes involved in auditory hallucinations. The review is focused on research carried out in the "VOICE" ERC Advanced Grant Project, funded by the European Research Council, but I also review and discuss the literature in general. Auditory hallucinations are suggested to be perceptual phenomena, with a neuronal origin in the speech perception areas in the temporal lobe. The phenomenology of auditory hallucinations is conceptualized along three domains, or dimensions: a perceptual dimension, experienced as someone speaking to the patient; a cognitive dimension, experienced as an inability to inhibit or ignore the voices; and an emotional dimension, experienced as the "voices" having a primarily negative, or sinister, emotional tone. I review cognitive, imaging, and neurochemistry data related to these dimensions, primarily the first two. The reviewed data are summarized in a model that sees auditory hallucinations as initiated from temporal lobe neuronal hyper-activation that draws attentional focus inward and is not inhibited due to frontal lobe hypo-activation. It is further suggested that this is maintained through abnormal glutamate and possibly gamma-aminobutyric acid transmitter mediation, which could point towards new pathways for pharmacological treatment. A final section discusses new methods of acquiring quantitative data on the phenomenology and subjective experience of auditory hallucinations that go beyond standard interview questionnaires, by suggesting an iPhone/iPod app.

  19. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre

  20. Speakers' acceptance of real-time speech exchange indicates that we use auditory feedback to specify the meaning of what we say.

    Science.gov (United States)

    Lind, Andreas; Hall, Lars; Breidegard, Björn; Balkenius, Christian; Johansson, Petter

    2014-06-01

    Speech is usually assumed to start with a clearly defined preverbal message, which provides a benchmark for self-monitoring and a robust sense of agency for one's utterances. However, an alternative hypothesis states that speakers often have no detailed preview of what they are about to say, and that they instead use auditory feedback to infer the meaning of their words. In the experiment reported here, participants performed a Stroop color-naming task while we covertly manipulated their auditory feedback in real time so that they said one thing but heard themselves saying something else. Under ideal timing conditions, two thirds of these semantic exchanges went undetected by the participants, and in 85% of all nondetected exchanges, the inserted words were experienced as self-produced. These findings indicate that the sense of agency for speech has a strong inferential component, and that auditory feedback of one's own voice acts as a pathway for semantic monitoring, potentially overriding other feedback loops. © The Author(s) 2014.

  1. Perceptual-Auditory and Acoustical Analysis of the Voices of Transgender Women.

    Science.gov (United States)

    Schwarz, Karine; Fontanari, Anna Martha Vaitses; Costa, Angelo Brandelli; Soll, Bianca Machado Borba; da Silva, Dhiordan Cardoso; de Sá Villas-Bôas, Anna Paula; Cielo, Carla Aparecida; Bastilha, Gabriele Rodrigues; Ribeiro, Vanessa Veis; Dorfman, Maria Elza Kazumi Yamaguti; Lobato, Maria Inês Rodrigues

    2017-09-28

    Voice is an important gender marker in the transition process as a transgender individual accepts a new gender identity. The objectives of this study were to describe and relate aspects of a perceptual-auditory analysis and the fundamental frequency (F0) of male-to-female (MtF) transsexual individuals. A case-control study was carried out with individuals aged 19-52 years who attended the Gender Identity Program of the Hospital de Clínicas of Porto Alegre. Vocal recordings from the MtF transgender and cisgender individuals (the vowel /a:/ and six phrases of the Consensus Auditory-Perceptual Evaluation of Voice [CAPE-V]) were edited and randomly coded before storage in a Dropbox folder. The voices (vowel /a:/) were analyzed by consensus on the same day by two speech therapist judges, each with more than 10 years of experience in the voice area, using the GRBASI perceptual-auditory vocal evaluation scale. Acoustic analysis of the voices was performed using the advanced Multi-Dimensional Voice Program software. The resonance focus and the degrees of masculinity and femininity of each voice recording were determined by the same judges by listening to the CAPE-V phrases. There were significant differences between the groups: a greater frequency of subjects with F0 between 80 and 150 Hz (P = 0.003) and of hypernasal resonant focus (P < 0.001) in the MtF cases, and a greater frequency of subjects with absence of roughness (P = 0.031) in the control group. The MtF group showed altered vertical resonant focus, more masculine voices, and lower fundamental frequencies. The control group showed a significant absence of roughness. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  2. Task-irrelevant auditory feedback facilitates motor performance in musicians

    Directory of Open Access Journals (Sweden)

    Virginia Conde

    2012-05-01

    An efficient and fast auditory–motor network is a basic resource for trained musicians due to the importance of motor anticipation of sound production in musical performance. When playing an instrument, motor performance always goes along with the production of sounds, and the integration between both modalities plays an essential role in the course of musical training. The aim of the present study was to investigate the role of task-irrelevant auditory feedback during motor performance in musicians using a serial reaction time task (SRTT). Our hypothesis was that musicians, due to their extensive auditory–motor practice routine during musical training, show superior performance and learning when receiving auditory feedback during the SRTT relative to musicians performing the SRTT without any auditory feedback. Here we provide novel evidence that task-irrelevant auditory feedback is capable of reinforcing SRTT performance but not learning, a finding that might provide further insight into auditory–motor integration in musicians on a behavioral level.

  3. Auditory comprehension: from the voice up to the single word level

    OpenAIRE

    Jones, Anna Barbara

    2016-01-01

    Auditory comprehension, the ability to understand spoken language, consists of a number of different auditory processing skills. In the five studies presented in this thesis I investigated both intact and impaired auditory comprehension at different levels: voice versus phoneme perception, as well as single word auditory comprehension in terms of phonemic and semantic content. In the first study, using sounds from different continua of ‘male’-/pæ/ to ‘female’-/tæ/ and ‘male’...

  4. Auditory reafferences: The influence of real-time feedback on movement control

    Directory of Open Access Journals (Sweden)

    Christian Kennel

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person’s own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with nonartificial auditory cues. Our results support the existing theoretical understanding of action–perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.

  5. Auditory reafferences: the influence of real-time feedback on movement control.

    Science.gov (United States)

    Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action-perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.

  6. Age Differences in Voice Evaluation: From Auditory-Perceptual Evaluation to Social Interactions

    Science.gov (United States)

    Lortie, Catherine L.; Deschamps, Isabelle; Guitton, Matthieu J.; Tremblay, Pascale

    2018-01-01

    Purpose: The factors that influence the evaluation of voice in adulthood, as well as the consequences of such evaluation on social interactions, are not well understood. Here, we examined the effect of listeners' age and the effect of talker age, sex, and smoking status on the auditory-perceptual evaluation of voice, voice-related psychosocial…

  7. Top-Down Modulation of Auditory-Motor Integration during Speech Production: The Role of Working Memory.

    Science.gov (United States)

    Guo, Zhiqiang; Wu, Xiuqin; Li, Weifeng; Jones, Jeffery A; Yan, Nan; Sheft, Stanley; Liu, Peng; Liu, Hanjun

    2017-10-25

    Although working memory (WM) is considered as an emergent property of the speech perception and production systems, the role of WM in sensorimotor integration during speech processing is largely unknown. We conducted two event-related potential experiments with female and male young adults to investigate the contribution of WM to the neurobehavioural processing of altered auditory feedback during vocal production. A delayed match-to-sample task that required participants to indicate whether the pitch feedback perturbations they heard during vocalizations in test and sample sequences matched, elicited significantly larger vocal compensations, larger N1 responses in the left middle and superior temporal gyrus, and smaller P2 responses in the left middle and superior temporal gyrus, inferior parietal lobule, somatosensory cortex, right inferior frontal gyrus, and insula compared with a control task that did not require memory retention of the sequence of pitch perturbations. On the other hand, participants who underwent extensive auditory WM training produced suppressed vocal compensations that were correlated with improved auditory WM capacity, and enhanced P2 responses in the left middle frontal gyrus, inferior parietal lobule, right inferior frontal gyrus, and insula that were predicted by pretraining auditory WM capacity. These findings indicate that WM can enhance the perception of voice auditory feedback errors while inhibiting compensatory vocal behavior to prevent voice control from being excessively influenced by auditory feedback. This study provides the first evidence that auditory-motor integration for voice control can be modulated by top-down influences arising from WM, rather than modulated exclusively by bottom-up and automatic processes. SIGNIFICANCE STATEMENT One outstanding question that remains unsolved in speech motor control is how the mismatch between predicted and actual voice auditory feedback is detected and corrected. The present study

  8. Altered Sensory Feedbacks in Pianist's Dystonia: the altered auditory feedback paradigm and the glove effect

    Directory of Open Access Journals (Sweden)

    Felicia Pei-Hsin Cheng

    2013-12-01

    Background: This study investigates the effect of altered auditory feedback (AAF) in musician's dystonia (MD) and discusses whether altered auditory feedback can be considered a sensory trick in MD. Furthermore, the effect of AAF is compared with altered tactile feedback, which can serve as a sensory trick in several other forms of focal dystonia. Methods: The method is based on scale analysis (Jabusch et al., 2004). Experiment 1 employed a synchronization paradigm: 12 MD patients and 25 healthy pianists had to repeatedly play C-major scales in synchrony with a metronome on a MIDI piano under 3 auditory feedback conditions: 1. normal feedback; 2. no feedback; 3. constantly delayed feedback. Experiment 2 employed a synchronization-continuation paradigm: 12 MD patients and 12 healthy pianists had to repeatedly play C-major scales in two phases: first in synchrony with a metronome, then continuing the established tempo without the metronome. There were 4 experimental conditions, of which 3 used the same altered auditory feedback as in Experiment 1 and 1 involved altered tactile sensory input. The coefficient of variation of the inter-onset intervals of the key depressions was calculated to evaluate fine motor control. Results: In both experiments, the healthy controls and the patients behaved very similarly. There was no difference in the regularity of playing between the two groups under any condition, and neither AAF nor altered tactile feedback had a beneficial effect on the patients' fine motor control. Conclusions: The results of the two experiments suggest that, in the context of our experimental designs, AAF and altered tactile feedback play a minor role in motor coordination in patients with musician's dystonia. We propose that altered auditory and tactile feedback do not serve as effective sensory tricks and may not temporarily reduce the symptoms of patients suffering from MD in this experimental context.
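
    The regularity measure used here, the coefficient of variation (CV) of inter-onset intervals (IOIs), is simply the standard deviation of the intervals between successive key depressions divided by their mean. The sketch below computes it on hypothetical MIDI onset times; the tempo and timing jitter are made up for illustration.

```python
import numpy as np

def cv_of_iois(onset_times_s: np.ndarray) -> float:
    """Coefficient of variation (SD/mean) of inter-onset intervals,
    an evenness measure for scale playing."""
    iois = np.diff(onset_times_s)
    return float(np.std(iois) / np.mean(iois))

# Hypothetical onsets: a 16-note scale at 8 notes/s with ~5 ms timing jitter
rng = np.random.default_rng(1)
onsets = np.cumsum(np.full(16, 0.125) + rng.normal(0.0, 0.005, 16))
print(cv_of_iois(onsets))  # ~0.04; lower values mean more regular playing
```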

  9. Effect- and Performance-Based Auditory Feedback on Interpersonal Coordination

    Directory of Open Access Journals (Sweden)

    Tong-Hun Hwang

    2018-03-01

    When two individuals interact in a collaborative task, such as carrying a sofa or a table, spatiotemporal coordination of individual motor behavior will usually emerge. In many cases, interpersonal coordination can arise independently of verbal communication, based on the observation of the partners' movements and/or the object's movements. In this study, we investigate how social coupling between two individuals can emerge in a collaborative task under different modes of perceptual information. A visual reference condition was compared with three different conditions with new types of additional auditory feedback provided in real time: effect-based auditory feedback, performance-based auditory feedback, and combined effect/performance-based auditory feedback. We developed a new paradigm in which the actions of both participants continuously result in a seamlessly merged effect on an object simulated by a tablet computer application. Participants had to synchronize their movements temporally with a 90° phase difference and precisely adjust the finger dynamics in order to keep the object (a ball) accurately rotating on a given circular trajectory on the tablet. Results demonstrate that interpersonal coordination in a joint task can be altered by different kinds of additional auditory information in various ways.

  10. Formant compensation for auditory feedback with English vowels

    DEFF Research Database (Denmark)

    Mitsuya, Takashi; MacDonald, Ewen N; Munhall, Kevin G

    2015-01-01

    Past studies have shown that speakers spontaneously adjust their speech acoustics in response to their auditory feedback perturbed in real time. In the case of formant perturbation, the majority of studies have examined speakers' compensatory production using the English vowel /ɛ/ as in the word "head." Consistent behavioral observations have been reported, and there is lively discussion as to how the production system integrates auditory versus somatosensory feedback to control vowel production. However, different vowels have different oral sensation and proprioceptive information due to differences in the degree of lingual contact or jaw openness. This may in turn influence the ways in which speakers compensate for auditory feedback. The aim of the current study was to examine speakers' compensatory behavior with six English monophthongs. Specifically, the current study tested to see...

  11. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

    We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.

  12. Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models.

    Science.gov (United States)

    Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M

    2014-02-01

    Natural auditory environments contain multiple simultaneously sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias in processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. New modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve firing patterns might account for the higher-voice superiority effect. Simulations show that both place and temporal AN coding schemes predict a high-voice superiority well across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the higher-voice superiority observed in human

  13. Auditory Masking Effects on Speech Fluency in Apraxia of Speech and Aphasia: Comparison to Altered Auditory Feedback

    Science.gov (United States)

    Jacks, Adam; Haley, Katarina L.

    2015-01-01

    Purpose: To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not…

  14. [Distinguishing the voice of self from others: the self-monitoring hypothesis of auditory hallucination].

    Science.gov (United States)

    Asai, Tomohisa; Tanno, Yoshihiko

    2010-08-01

    Auditory hallucinations (AH), a psychopathological phenomenon in which a person hears non-existent voices, commonly occur in schizophrenia. Recent cognitive and neuroscience studies suggest that AH may be the misattribution of one's own inner speech. Self-monitoring through neural feedback mechanisms allows individuals to distinguish between their own and others' actions, including speech. AH may be the result of an individual's inability to discriminate between their own speech and that of others. The present paper tries to integrate the three theories (behavioral, brain, and model approaches) proposed to explain the self-monitoring hypothesis of AH. In addition, we investigate the lateralization of self-other representation in the brain, as suggested by recent studies, and discuss future research directions.

  15. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ɛ]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p < 0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
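
    Response magnitudes in such formant-perturbation studies are typically expressed as the produced formant change opposing the imposed shift, as a percentage of the shift itself. The sketch below illustrates that arithmetic on invented F1 values; the 580 Hz baseline and +100 Hz shift are examples, not the study's parameters.

```python
import numpy as np

def compensation_pct(baseline_f1_hz: float, perturbed_f1_hz: np.ndarray,
                     shift_hz: float) -> float:
    """Percent compensation: mean produced F1 change opposing the imposed
    feedback shift, relative to the shift size."""
    response = float(np.mean(perturbed_f1_hz)) - baseline_f1_hz
    return 100.0 * (-response / shift_hz)

# Feedback F1 shifted up by 100 Hz; talker lowers produced F1 by ~30 Hz
print(compensation_pct(580.0, np.array([548.0, 552.0, 551.0]), 100.0))  # ~29.7%
```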

  16. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    Science.gov (United States)

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of

  17. Multivoxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    DEFF Research Database (Denmark)

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.

    2013-01-01

    The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection, and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations...... within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while...... human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during...

  18. The written voice: implicit memory effects of voice characteristics following silent reading and auditory presentation.

    Science.gov (United States)

    Abramson, Marianne

    2007-12-01

    After being familiarized with two voices, either implicit memory (auditory lexical decision) or explicit memory (auditory recognition) for words from silently read sentences was assessed among 32 men and 32 women volunteers. In the silently read sentences, the sex of the speaker was implied in the initial words, e.g., "He said, ..." or "She said...". Tone (question versus statement) was also manipulated by appropriate punctuation. Auditory lexical decision priming was found for sex- and tone-consistent items following silent reading, but only up to 5 min. after silent reading. In a second study, similar lexical decision priming was found following listening to the sentences, although these effects remained reliable after a 2-day delay. The effect sizes for lexical decision priming showed that tone-consistency and sex-consistency were strong following both silent reading and listening 5 min. after studying. These results suggest that readers create episodic traces of text from auditory images of silently read sentences, as they do during listening.

  19. Rhythmic walking interaction with auditory feedback

    DEFF Research Database (Denmark)

    Maculewicz, Justyna; Jylhä, Antti; Serafin, Stefania

    2015-01-01

    We present an interactive auditory display for walking with sinusoidal tones or ecological, physically-based synthetic walking sounds. The feedback is either step-based or rhythmic, with constant or adaptive tempo. In a tempo-following experiment, we investigate different interaction modes...

  20. Effect of task-related continuous auditory feedback during learning of tracking motion exercises

    Directory of Open Access Journals (Sweden)

    Rosati Giulio

    2012-10-01

    Full Text Available Abstract Background This paper presents the results of a set of experiments in which we used continuous auditory feedback to augment motor training exercises. This feedback modality is mostly underexploited in current robotic rehabilitation systems, which usually implement only very basic auditory interfaces. Our hypothesis is that properly designed continuous auditory feedback could be used to represent temporal and spatial information that could, in turn, improve performance and motor learning. Methods We implemented three different experiments on healthy subjects, who were asked to track a target on a screen by moving an input device (controller) with their hand. Different visual and auditory feedback modalities were envisaged. The first experiment investigated whether continuous task-related auditory feedback can help improve performance to a greater extent than error-related audio feedback, or visual feedback alone. In the second experiment we used sensory substitution to compare different types of auditory feedback with equivalent visual feedback, in order to find out whether mapping the same information on a different sensory channel (the visual channel) yielded comparable effects with those gained in the first experiment. The final experiment applied a continuously changing visuomotor transformation between the controller and the screen and mapped kinematic information, computed in either coordinate system (controller or video), to the audio channel, in order to investigate which information was more relevant to the user. Results Task-related audio feedback significantly improved performance with respect to visual feedback alone, whilst error-related feedback did not. Secondly, performance in audio tasks was significantly better with respect to the equivalent sensory-substituted visual tasks. Finally, with respect to visual feedback alone, video-task-related sound feedback decreased the tracking error during the learning of a novel
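
    As a rough illustration of the task-related continuous feedback described above, a Python parameter-mapping sketch that converts the instantaneous tracking error into the pitch of a sine tone; the mapping range and synthesis details are assumptions, not the authors' implementation:

        import numpy as np

        FS = 44100  # audio sample rate

        def error_to_frequency(error, max_error=1.0, f_lo=220.0, f_hi=880.0):
            """Linearly map |error| in [0, max_error] onto a pitch in [f_lo, f_hi] Hz."""
            e = min(abs(error) / max_error, 1.0)
            return f_lo + e * (f_hi - f_lo)

        def tone(freq_hz, duration_s=0.05, amplitude=0.3):
            """A short sine segment; a real system would stream these continuously,
            keeping phase continuity between successive segments."""
            t = np.arange(int(FS * duration_s)) / FS
            return amplitude * np.sin(2 * np.pi * freq_hz * t)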

  1. Delayed Auditory Feedback and Movement

    Science.gov (United States)

    Pfordresher, Peter Q.; Dalla Bella, Simone

    2011-01-01

    It is well known that timing of rhythm production is disrupted by delayed auditory feedback (DAF), and that disruption varies with delay length. We tested the hypothesis that disruption depends on the state of the movement trajectory at the onset of DAF. Participants tapped isochronous rhythms at a rate specified by a metronome while hearing DAF…
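
    At its core, a DAF system is a delay line between the microphone and the headphones; a minimal Python ring-buffer sketch, with the sample rate, default delay, and per-sample processing model as illustrative assumptions:

        import numpy as np

        FS = 44100  # audio sample rate

        class DelayLine:
            """Feed the input back to the listener delay_ms later."""
            def __init__(self, delay_ms=200.0):
                self.buf = np.zeros(max(1, int(FS * delay_ms / 1000.0)), dtype=np.float32)
                self.i = 0

            def process(self, x):
                """Return the sample captured delay_ms ago, then store the current one."""
                y = self.buf[self.i]
                self.buf[self.i] = x
                self.i = (self.i + 1) % len(self.buf)
                return y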

  2. Auditory and Visual Modulation of Temporal Lobe Neurons in Voice-Sensitive and Association Cortices

    Science.gov (United States)

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K.

    2014-01-01

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies. PMID:24523543

  3. Auditory and visual modulation of temporal lobe neurons in voice-sensitive and association cortices.

    Science.gov (United States)

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K; Petkov, Christopher I

    2014-02-12

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies.

  4. Auditory feedback and memory for music performance: sound evidence for an encoding effect.

    Science.gov (United States)

    Finney, Steven A; Palmer, Caroline

    2003-01-01

    Research on the effects of context and task on learning and memory has included approaches that emphasize processes during learning (e.g., Craik & Tulving, 1975) and approaches that emphasize a match of conditions during learning with conditions during a later test of memory (e.g., Morris, Bransford, & Franks, 1977; Proteau, 1992; Tulving & Thomson, 1973). We investigated the effects of auditory context on learning and retrieval in three experiments on memorized music performance (a form of serial recall). Auditory feedback (presence or absence) was manipulated while pianists learned musical pieces from notation and when they later played the pieces from memory. Auditory feedback during learning significantly improved later recall. However, auditory feedback at test did not significantly affect recall, nor was there an interaction between conditions at learning and test. Auditory feedback in music performance appears to be a contextual factor that affects learning but is relatively independent of retrieval conditions.

  5. Auditory feedback perturbation in children with developmental speech disorders

    NARCIS (Netherlands)

    Terband, H.R.; van Brenk, F.J.; van Doornik-van der Zee, J.C.

    2014-01-01

    Background/purpose: Several studies indicate a close relation between auditory and speech motor functions in children with speech sound disorders (SSD). The aim of this study was to investigate the ability to compensate and adapt for perturbed auditory feedback in children with SSD compared to

  6. Correlation of the Dysphonia Severity Index (DSI), Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V), and Gender in Brazilians With and Without Voice Disorders.

    Science.gov (United States)

    Nemr, Katia; Simões-Zenari, Marcia; de Souza, Glaucia S; Hachiya, Adriana; Tsuji, Domingos H

    2016-11-01

    This study aims to analyze the Dysphonia Severity Index (DSI) in Brazilians with or without voice disorders and investigate DSI's correlation with gender and auditory-perceptual evaluation data obtained via the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) protocol. A total of 66 Brazilian adults from both genders participated in the study, including 24 patients with dysphonia confirmed on laryngeal examination (dysphonic group [DG]) and 42 volunteers without voice or hearing complaints and without auditory-perceptual voice disorders (nondysphonic group [NDG]). The vocal tasks included in CAPE-V and DSI were performed and recorded. Data were analyzed by means of the independent t test, the Mann-Whitney U test, and Pearson correlation at the 5% significance level. Differences were found in the mean DSI values between the DG and the NDG. Differences were also found in all DSI items between the groups, except for the highest frequency parameter. In the DG, a moderate negative correlation was detected between overall dysphonia severity (CAPE-V) and DSI value, and between breathiness and DSI value, and a weak negative correlation was detected between DSI value and roughness. In the NDG, the maximum phonation time was higher among males. In both groups, the highest frequency parameter was higher among females. The DSI discriminated among Brazilians with or without voice disorders. A correlation was found between some aspects of the DSI and the CAPE-V but not between DSI and gender. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
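
    For reference, the DSI is conventionally computed as a fixed weighting of four measures (Wuyts et al., 2000); a small Python sketch of that published formula (the argument names are ours):

        def dysphonia_severity_index(mpt_s, f0_high_hz, i_low_db, jitter_pct):
            """DSI = 0.13*MPT + 0.0053*F0-High - 0.26*I-Low - 1.18*Jitter + 12.4,
            combining maximum phonation time (s), highest attainable F0 (Hz),
            lowest attainable intensity (dB), and jitter (%). Scores near +5
            indicate normal voices; scores near -5, severe dysphonia."""
            return (0.13 * mpt_s + 0.0053 * f0_high_hz
                    - 0.26 * i_low_db - 1.18 * jitter_pct + 12.4)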

  7. The impact of auditory feedback on neuronavigation

    NARCIS (Netherlands)

    Willems, PWA; Noordmans, HJ; van Overbeeke, JJ; Viergever, MA; Tulleken, CAF; van der Sprenkel, JWB

    Object. We aimed to develop an auditory feedback system to be used in addition to regular neuronavigation, in an attempt to improve the usefulness of the information offered by neuronavigation systems. Instrumentation. Using a serial connection, instrument co-ordinates determined by a commercially

  8. Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction.

    Science.gov (United States)

    Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard

    2018-01-01

    The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.

  9. Investigating the Role of Auditory Feedback in a Multimodal Biking Experience

    DEFF Research Database (Denmark)

    Bruun-Pedersen, Jon Ram; Grani, Francesco; Serafin, Stefania

    2017-01-01

    In this paper, we investigate the role of auditory feedback in affecting perception of effort while biking in a virtual environment. Subjects were biking on a stationary chair bike, while exposed to 3D renditions of a recumbent bike inside a virtual environment (VE). The VE simulated a park...... and was created in the Unity5 engine. While biking, subjects were exposed to 9 kinds of auditory feedback (3 amplitude levels with three different filters) which were continuously triggered corresponding to pedal speed, representing the sound of the wheels and bike/chain mechanics. Subjects were asked to rate...... the perception of exertion using the Borg RPE scale. Results of the experiment showed that most subjects perceived a difference in mechanical resistance from the bike between conditions, but did not consciously notice the variations of the auditory feedback, although these were significantly varied. This points...

  10. Effect of auditory feedback differs according to side of hemiparesis: a comparative pilot study

    Directory of Open Access Journals (Sweden)

    Bensmail Djamel

    2009-12-01

    Full Text Available Abstract Background Following stroke, patients frequently demonstrate loss of motor control and function and altered kinematic parameters of reaching movements. Feedback is an essential component of rehabilitation and auditory feedback of kinematic parameters may be a useful tool for rehabilitation of reaching movements at the impairment level. The aim of this study was to investigate the effect of 2 types of auditory feedback on the kinematics of reaching movements in hemiparetic stroke patients and to compare differences between patients with right (RHD) and left hemisphere damage (LHD). Methods 10 healthy controls, 8 stroke patients with LHD and 8 with RHD were included. Patient groups had similar levels of upper limb function. Two types of auditory feedback (spatial and simple) were developed and provided online during reaching movements to 9 targets in the workspace. Kinematics of the upper limb were recorded with an electromagnetic system. Kinematics were compared between groups (Mann-Whitney test) and the effect of auditory feedback on kinematics was tested within each patient group (Friedman test). Results In the patient groups, peak hand velocity was lower, the number of velocity peaks was higher and movements were more curved than in the healthy group. Despite having a similar clinical level, kinematics differed between LHD and RHD groups. Peak velocity was similar but LHD patients had fewer velocity peaks and less curved movements than RHD patients. The addition of auditory feedback improved the curvature index in patients with RHD and deteriorated peak velocity, the number of velocity peaks and curvature index in LHD patients. No difference between types of feedback was found in either patient group. Conclusion In stroke patients, side of lesion should be considered when examining arm reaching kinematics. Further studies are necessary to evaluate differences in responses to auditory feedback between patients with lesions in opposite

  11. Logarithmic temporal axis manipulation and its application for measuring auditory contributions in F0 control using a transformed auditory feedback procedure

    Science.gov (United States)

    Yanaga, Ryuichiro; Kawahara, Hideki

    2003-10-01

    A new parameter extraction procedure based on logarithmic transformation of the temporal axis was applied to investigate auditory effects on voice F0 control, to overcome artifacts due to natural fluctuations and nonlinearities in speech production mechanisms. The proposed method may add complementary information to recent findings reported using the frequency-shift feedback method [Burnett and Larson, J. Acoust. Soc. Am. 112 (2002)], in terms of dynamic aspects of F0 control. In a series of experiments, dependencies of system parameters in F0 control on subjects, F0 and style (musical expressions and speaking) were tested using six participants. They were three male and three female students specialized in musical education. They were asked to sustain a Japanese vowel /a/ for about 10 s repeatedly, up to 2 min in total, while hearing feedback speech whose F0 was modulated using an M-sequence. The results qualitatively replicated the previous finding [Kawahara and Williams, Vocal Fold Physiology (1995)] and provided more accurate estimates. Relations to designing an artificial singer will also be discussed. [Work partly supported by the Grant-in-Aid for Scientific Research (B) 14380165 and Wakayama University.]
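
    An M-sequence of the kind used to drive such F0 modulation can be generated with a linear feedback shift register; a Python sketch assuming a 7-bit register with standard maximal-length taps and an illustrative modulation depth (the study's actual register length and depth are not stated here):

        import numpy as np

        def m_sequence(n_bits=7, taps=(7, 6)):
            """Binary +/-1 maximum-length sequence of length 2**n_bits - 1."""
            state = [1] * n_bits              # any non-zero seed works
            out = []
            for _ in range(2 ** n_bits - 1):
                out.append(1 if state[-1] else -1)
                feedback = state[taps[0] - 1] ^ state[taps[1] - 1]
                state = [feedback] + state[:-1]
            return np.array(out)

        # e.g. modulate the feedback F0 by +/- 25 cents following the sequence:
        modulation_cents = 25 * m_sequence()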

  12. Tap Arduino: An Arduino microcontroller for low-latency auditory feedback in sensorimotor synchronization experiments.

    Science.gov (United States)

    Schultz, Benjamin G; van Vugt, Floris T

    2016-12-01

    Timing abilities are often measured by having participants tap their finger along with a metronome and presenting tap-triggered auditory feedback. These experiments predominantly use electronic percussion pads combined with software (e.g., FTAP or Max/MSP) that records responses and delivers auditory feedback. However, these setups involve unknown latencies between tap onset and auditory feedback and can sometimes miss responses or record multiple, superfluous responses for a single tap. These issues may distort measurements of tapping performance or affect the performance of the individual. We present an alternative setup using an Arduino microcontroller that addresses these issues and delivers low-latency auditory feedback. We validated our setup by having participants (N = 6) tap on a force-sensitive resistor pad connected to the Arduino and on an electronic percussion pad with various levels of force and tempi. The Arduino delivered auditory feedback through a pulse-width modulation (PWM) pin connected to a headphone jack or a wave shield component. The Arduino's PWM (M = 0.6 ms, SD = 0.3) and wave shield (M = 2.6 ms, SD = 0.3) demonstrated significantly lower auditory feedback latencies than the percussion pad (M = 9.1 ms, SD = 2.0), FTAP (M = 14.6 ms, SD = 2.8), and Max/MSP (M = 15.8 ms, SD = 3.4). The PWM and wave shield latencies were also significantly less variable than those from FTAP and Max/MSP. The Arduino missed significantly fewer taps, and recorded fewer superfluous responses, than the percussion pad. The Arduino captured all responses, whereas at lower tapping forces, the percussion pad missed more taps. Regardless of tapping force, the Arduino outperformed the percussion pad. Overall, the Arduino is a high-precision, low-latency, portable, and affordable tool for auditory experiments.
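
    A hedged Python sketch of this kind of validation analysis: match each logged tap onset to the first feedback onset within a short window to estimate latency, and count unmatched taps as missed. The window size and variable names are assumptions, not the authors' code:

        import numpy as np

        def latency_stats(tap_onsets_ms, feedback_onsets_ms, window_ms=50.0):
            """Mean and SD of feedback latency (ms), plus the number of missed taps."""
            fb = np.asarray(feedback_onsets_ms, dtype=float)
            latencies, missed = [], 0
            for t in tap_onsets_ms:
                candidates = fb[(fb >= t) & (fb <= t + window_ms)]
                if candidates.size:
                    latencies.append(candidates[0] - t)
                else:
                    missed += 1
            lat = np.asarray(latencies)
            return lat.mean(), lat.std(ddof=1), missed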

  13. Bottom-up influences of voice continuity in focusing selective auditory attention.

    Science.gov (United States)

    Bressler, Scott; Masud, Salwa; Bharadwaj, Hari; Shinn-Cunningham, Barbara

    2014-01-01

    Selective auditory attention causes a relative enhancement of the neural representation of important information and suppression of the neural representation of distracting sound, which enables a listener to analyze and interpret information of interest. Some studies suggest that in both vision and in audition, the "unit" on which attention operates is an object: an estimate of the information coming from a particular external source out in the world. In this view, which object ends up in the attentional foreground depends on the interplay of top-down, volitional attention and stimulus-driven, involuntary attention. Here, we test the idea that auditory attention is object based by exploring whether continuity of a non-spatial feature (talker identity, a feature that helps acoustic elements bind into one perceptual object) also influences selective attention performance. In Experiment 1, we show that perceptual continuity of target talker voice helps listeners report a sequence of spoken target digits embedded in competing reversed digits spoken by different talkers. In Experiment 2, we provide evidence that this benefit of voice continuity is obligatory and automatic, as if voice continuity biases listeners by making it easier to focus on a subsequent target digit when it is perceptually linked to what was already in the attentional foreground. Our results support the idea that feature continuity enhances streaming automatically, thereby influencing the dynamic processes that allow listeners to successfully attend to objects through time in the cacophony that assails our ears in many everyday settings.

  14. Psychological Therapies for Auditory Hallucinations (Voices): Current Status and Key Directions for Future Research

    NARCIS (Netherlands)

    Thomas, N.; Hayward, M.; Peters, E; van der Gaag, M.; Bentall, R.P.; Jenner, J.; Strauss, C.; Sommer, I.E.; Johns, L.C.; Varese, F.; Gracia-Montes, J.M.; Waters, F.; Dodgson, G.; McCarthy-Jones, S.

    2014-01-01

    This report from the International Consortium on Hallucinations Research considers the current status and future directions in research on psychological therapies targeting auditory hallucinations (hearing voices). Therapy approaches have evolved from behavioral and coping-focused interventions,

  15. Ring a bell? Adaptive Auditory Game Feedback to Sustain Performance in Stroke Rehabilitation

    DEFF Research Database (Denmark)

    Hald, Kasper; Knoche, Hendrik

    2016-01-01

    This paper investigates the effect of adaptive auditory feedback on continued player performance for stroke patients in a Whack-a-Mole style tablet game. The feedback consisted of accumulatively increasing the pitch of positive feedback sounds on tasks with fast reaction time and resetting...... it after slow reaction times. The analysis was based on data obtained in a field trial with lesion patients during their regular rehabilitation. The auditory feedback events were categorized by feedback type (positive/negative) and the associated pitch change of either high or low magnitude. Both...... feedback type and magnitude had a significant effect on player performance. Negative feedback improved reaction time on the subsequent hit by 0.42 seconds and positive feedback impaired performance by 0.15 seconds....
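
    A minimal Python sketch of the adaptive rule described above; the base pitch, step size, and fast/slow reaction-time cutoff are assumptions, since the abstract does not state them:

        class AdaptivePitch:
            """Pitch of the positive feedback sound: rises cumulatively after
            fast hits, resets after slow ones."""
            BASE_MIDI = 60      # starting pitch (middle C) -- assumed
            STEP = 1            # semitones added per fast hit -- assumed
            RT_CUTOFF_S = 0.8   # fast/slow reaction-time boundary -- assumed

            def __init__(self):
                self.pitch = self.BASE_MIDI

            def next_pitch(self, reaction_time_s):
                if reaction_time_s <= self.RT_CUTOFF_S:
                    self.pitch += self.STEP        # accumulate on fast reactions
                else:
                    self.pitch = self.BASE_MIDI    # reset after a slow reaction
                return self.pitch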

  16. Effect of auditory feedback differs according to side of hemiparesis: a comparative pilot study

    OpenAIRE

    Robertson, Johanna VG; Hoellinger, Thomas; Lindberg, Påvel; Bensmail, Djamel; Hanneton, Sylvain; Roby-Brami, Agnès

    2009-01-01

    Abstract Background Following stroke, patients frequently demonstrate loss of motor control and function and altered kinematic parameters of reaching movements. Feedback is an essential component of rehabilitation and auditory feedback of kinematic parameters may be a useful tool for rehabilitation of reaching movements at the impairment level. The aim of this study was to investigate the effect of 2 types of auditory feedback on the kinematics of reaching movements in hemiparetic stroke pati...

  17. Silent reading of direct versus indirect speech activates voice-selective areas in the auditory cortex.

    Science.gov (United States)

    Yao, Bo; Belin, Pascal; Scheepers, Christoph

    2011-10-01

    In human communication, direct speech (e.g., Mary said: "I'm hungry") is perceived to be more vivid than indirect speech (e.g., Mary said [that] she was hungry). However, for silent reading, the representational consequences of this distinction are still unclear. Although many of us share the intuition of an "inner voice," particularly during silent reading of direct speech statements in text, there has been little direct empirical confirmation of this experience so far. Combining fMRI with eye tracking in human volunteers, we show that silent reading of direct versus indirect speech engenders differential brain activation in voice-selective areas of the auditory cortex. This suggests that readers are indeed more likely to engage in perceptual simulations (or spontaneous imagery) of the reported speaker's voice when reading direct speech as opposed to meaning-equivalent indirect speech statements as part of a more vivid representation of the former. Our results may be interpreted in line with embodied cognition and form a starting point for more sophisticated interdisciplinary research on the nature of auditory mental simulation during reading.

  18. Selective and divided attention modulates auditory-vocal integration in the processing of pitch feedback errors.

    Science.gov (United States)

    Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun

    2015-08-01

    Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
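
    For orientation, the two-semitone (200-cent) shift used here corresponds to a fixed frequency ratio; a one-line Python conversion:

        def cents_to_ratio(cents):
            """Frequency ratio for a pitch shift given in cents (100 cents = 1 semitone)."""
            return 2.0 ** (cents / 1200.0)

        # a 200-cent upward shift: 200 Hz * cents_to_ratio(200) is roughly 224.5 Hz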

  19. Psychological Therapies for Auditory Hallucinations (Voices): Current Status and Key Directions for Future Research

    Science.gov (United States)

    Thomas, Neil; Hayward, Mark; Peters, Emmanuelle; van der Gaag, Mark; Bentall, Richard P.; Jenner, Jack; Strauss, Clara; Sommer, Iris E.; Johns, Louise C.; Varese, Filippo; García-Montes, José Manuel; Waters, Flavie; Dodgson, Guy; McCarthy-Jones, Simon

    2014-01-01

    This report from the International Consortium on Hallucinations Research considers the current status and future directions in research on psychological therapies targeting auditory hallucinations (hearing voices). Therapy approaches have evolved from behavioral and coping-focused interventions, through formulation-driven interventions using methods from cognitive therapy, to a number of contemporary developments. Recent developments include the application of acceptance- and mindfulness-based approaches, and consolidation of methods for working with connections between voices and views of self, others, relationships and personal history. In this article, we discuss the development of therapies for voices and review the empirical findings. This review shows that psychological therapies are broadly effective for people with positive symptoms, but that more research is required to understand the specific application of therapies to voices. Six key research directions are identified: (1) moving beyond the focus on overall efficacy to understand specific therapeutic processes targeting voices, (2) better targeting psychological processes associated with voices such as trauma, cognitive mechanisms, and personal recovery, (3) more focused measurement of the intended outcomes of therapy, (4) understanding individual differences among voice hearers, (5) extending beyond a focus on voices and schizophrenia into other populations and sensory modalities, and (6) shaping interventions for service implementation. PMID:24936081

  20. The auditory dorsal stream plays a crucial role in projecting hallucinated voices into external space

    NARCIS (Netherlands)

    Looijestijn, Jasper; Diederen, Kelly M. J.; Goekoop, Rutger; Sommer, Iris E. C.; Daalman, Kirstin; Kahn, Rene S.; Hoek, Hans W.; Blom, Jan Dirk

    Introduction: Verbal auditory hallucinations (VAHs) are experienced as spoken voices which seem to originate in the extracorporeal environment or inside the head. Animal and human research has identified a 'where' pathway for sound processing comprising the planum temporale, the middle frontal gyrus

  1. Hear today, not gone tomorrow? An exploratory longitudinal study of auditory verbal hallucinations (hearing voices).

    Science.gov (United States)

    Hartigan, Nicky; McCarthy-Jones, Simon; Hayward, Mark

    2014-01-01

    Despite an increasing volume of cross-sectional work on auditory verbal hallucinations (hearing voices), there remains a paucity of work on how the experience may change over time. The first aim of this study was to attempt replication of a previous finding that beliefs about voices are enduring and stable, irrespective of changes in the severity of voices, and do not change without a specific intervention. The second aim was to examine whether voice-hearers' interrelations with their voices change over time, without a specific intervention. A 12-month longitudinal examination of these aspects of voices was undertaken with hearers in routine clinical treatment (N = 18). We found beliefs about voices' omnipotence and malevolence were stable over a 12-month period, as were styles of interrelating between voice and hearer, despite trends towards reductions in voice-related distress and disruption. However, there was a trend for beliefs about the benevolence of voices to decrease over time. Styles of interrelating between voice and hearer appear relatively stable and enduring, as are beliefs about the voices' malevolent intent and power. Although there was some evidence that beliefs about benevolence may reduce over time, the reasons for this were not clear. Our exploratory study was limited by only being powered to detect large effect sizes. Implications for clinical practice and future research are discussed.

  2. Combined mirror visual and auditory feedback therapy for upper limb phantom pain: a case report

    Directory of Open Access Journals (Sweden)

    Yan Kun

    2011-01-01

    Full Text Available Abstract Introduction Phantom limb sensation and phantom limb pain are very common issues after amputations. In recent years there has been accumulating data implicating 'mirror visual feedback' or 'mirror therapy' as helpful in the treatment of phantom limb sensation and phantom limb pain. Case presentation We present the case of a 24-year-old Caucasian man, a left upper limb amputee, treated with mirror visual feedback combined with auditory feedback, with improved pain relief. Conclusion This case may suggest that auditory feedback might enhance the effectiveness of mirror visual feedback and serve as a valuable addition to the complex multi-sensory processing of body perception in patients who are amputees.

  3. Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates.

    Science.gov (United States)

    Liu, Ying; Fan, Hao; Li, Jingting; Jones, Jeffery A; Liu, Peng; Zhang, Baofeng; Liu, Hanjun

    2018-01-01

    When people hear unexpected perturbations in auditory feedback, they produce rapid compensatory adjustments of their vocal behavior. Recent evidence has shown enhanced vocal compensations and cortical event-related potentials (ERPs) in response to attended pitch feedback perturbations, suggesting that this reflex-like behavior is influenced by selective attention. Less is known, however, about auditory-motor integration for voice control during divided attention. The present cross-modal study investigated the behavioral and ERP correlates of auditory feedback control of vocal pitch production during divided attention. During the production of sustained vowels, 32 young adults were instructed to simultaneously attend to both pitch feedback perturbations they heard and flashing red lights they saw. The presentation rate of the visual stimuli was varied to produce a low, intermediate, and high attentional load. The behavioral results showed that the low-load condition elicited significantly smaller vocal compensations for pitch perturbations than the intermediate-load and high-load conditions. As well, the cortical processing of vocal pitch feedback was also modulated as a function of divided attention. When compared to the low-load and intermediate-load conditions, the high-load condition elicited significantly larger N1 responses and smaller P2 responses to pitch perturbations. These findings provide the first neurobehavioral evidence that divided attention can modulate auditory feedback control of vocal pitch production.

  4. Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions.

    Science.gov (United States)

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2016-12-01

    Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Exploring the use of tactile feedback in an ERP-based auditory BCI.

    Science.gov (United States)

    Schreuder, Martijn; Thurlings, Marieke E; Brouwer, Anne-Marie; Van Erp, Jan B F; Tangermann, Michael

    2012-01-01

    Giving direct, continuous feedback on a brain state is common practice in motor imagery based brain-computer interfaces (BCI), but has not been reported for BCIs based on event-related potentials (ERP), where feedback is only given once after a sequence of stimuli. Potentially, direct feedback could allow the user to adjust his strategy during a running trial to obtain the required response. In order to test the usefulness of such feedback, directionally congruent vibrotactile feedback was given during an online auditory BCI experiment. Users received either no feedback, short feedback pulses or continuous feedback. The feedback conditions showed reduced performance both on a behavioral task and in terms of classification accuracy. Several explanations are discussed that give interesting starting points for further research on this topic.

  6. Stuttering Inhibition via Altered Auditory Feedback during Scripted Telephone Conversations

    Science.gov (United States)

    Hudock, Daniel; Kalinowski, Joseph

    2014-01-01

    Background: Overt stuttering is inhibited by approximately 80% when people who stutter read aloud as they hear an altered form of their speech feedback to them. However, levels of stuttering inhibition vary from 60% to 100% depending on speaking situation and signal presentation. For example, binaural presentations of delayed auditory feedback…

  7. Effects of consensus training on the reliability of auditory perceptual ratings of voice quality.

    Science.gov (United States)

    Iwarsson, Jenny; Reinholt Petersen, Niels

    2012-05-01

    This study investigates the effect of consensus training of listeners on intrarater and interrater reliability and agreement of perceptual voice analysis. The use of such training, including a reference voice sample, could be assumed to make the internal standards held in memory common and more robust, which is of great importance to reduce the variability of auditory perceptual ratings. The study used a prospective design with testing before and after training. Thirteen students of audiologopedics served as listening subjects. The ratings were made using a multidimensional protocol with four-point equal-appearing interval scales. The stimuli consisted of text reading by authentic dysphonic patients. The consensus training for each perceptual voice parameter included (1) definition, (2) underlying physiology, (3) presentation of carefully selected sound examples representing the parameter in three different grades followed by group discussions of perceived characteristics, and (4) practical exercises including imitation to make use of the listeners' proprioception. Intrarater reliability and agreement showed a marked improvement for intermittent aphonia but not for vocal fry. Interrater reliability was high for most parameters before training with a slight increase after training. Interrater agreement showed marked increases for most voice quality parameters as a result of the training. The results support the recommendation of specific consensus training, including use of a reference voice sample material, to calibrate, equalize, and stabilize the internal standards held in memory by the listeners. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
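
    Interrater agreement on such four-point scales is often summarized with percent agreement and Cohen's kappa; a generic Python sketch of that computation (not necessarily the statistic used in this study):

        import numpy as np

        def cohens_kappa(r1, r2, n_categories=4):
            """Chance-corrected agreement between two raters' ratings (coded 0..3)."""
            r1, r2 = np.asarray(r1), np.asarray(r2)
            p_obs = np.mean(r1 == r2)                        # observed agreement
            p_exp = sum(np.mean(r1 == c) * np.mean(r2 == c)  # chance agreement
                        for c in range(n_categories))
            return (p_obs - p_exp) / (1.0 - p_exp)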

  8. The role of auditory feedback in music-supported stroke rehabilitation: A single-blinded randomised controlled intervention.

    Science.gov (United States)

    van Vugt, F T; Kafczyk, T; Kuhn, W; Rollnik, J D; Tillmann, B; Altenmüller, E

    2016-01-01

    Learning to play musical instruments such as the piano was previously shown to benefit post-stroke motor rehabilitation. Previous work hypothesised that the mechanism of this rehabilitation is that patients use auditory feedback to correct their movements and therefore show motor learning. We tested this hypothesis by manipulating the auditory feedback timing in a way that should disrupt such error-based learning. We contrasted a patient group undergoing music-supported therapy on a piano that emits sounds immediately (as in previous studies) with a group whose sounds were presented after a jittered delay. The delay was not noticeable to patients. Thirty-four patients in early stroke rehabilitation with moderate motor impairment and no previous musical background learned to play the piano using simple finger exercises and familiar children's songs. Rehabilitation outcome was not impaired in the jitter group relative to the normal group. In fact, some clinical tests suggest the jitter group outperformed the normal group. Auditory feedback-based motor learning therefore appears not to be the beneficial mechanism of music-supported therapy, and immediate auditory feedback therapy may be suboptimal. Jittered delay may increase the efficacy of the proposed therapy and allow patients to fully benefit from the motivational factors of music training. Our study shows a novel way to test hypotheses concerning music training in a single-blinded way, which is an important improvement over existing unblinded tests of music interventions.

  9. Auditory interfaces in automated driving: an international survey

    Directory of Open Access Journals (Sweden)

    Pavlo Bazilinskyy

    2015-08-01

    Full Text Available This study investigated people's opinions on auditory interfaces in contemporary cars and their willingness to be exposed to auditory feedback in automated driving. We used an Internet-based survey to collect 1,205 responses from 91 countries. The respondents stated their attitudes towards two existing auditory driver assistance systems, a parking assistant (PA) and a forward collision warning system (FCWS), as well as towards a futuristic augmented sound system (FS) proposed for fully automated driving. The respondents were positive towards the PA and FCWS, and rated the willingness to have automated versions of these systems as 3.87 and 3.77, respectively (on a scale from 1 = disagree strongly to 5 = agree strongly). The respondents tolerated the FS (the mean willingness to use it was 3.00 on the same scale). The results showed that among the available response options, the female voice was the most preferred feedback type for takeover requests in highly automated driving, regardless of whether the respondents' country was English speaking or not. The present results could be useful for designers of automated vehicles and other stakeholders.

  10. Hearing the unheard: An interdisciplinary, mixed methodology study of women’s experiences of hearing voices (auditory verbal hallucinations)

    Directory of Open Access Journals (Sweden)

    Simon McCarthy-Jones

    2015-12-01

    Full Text Available This paper explores the experiences of women who ‘hear voices’ (auditory verbal hallucinations). We begin by examining historical understandings of women hearing voices, showing these have been driven by androcentric theories of how women’s bodies functioned, leading to women being viewed as requiring their voices to be interpreted by men. We show that the twentieth century was associated with recognition that the mental violation of women’s minds (represented by some voice-hearing) was often a consequence of the physical violation of women’s bodies. We next report the results of a qualitative study into voice-hearing women’s experiences (N=8). This found similarities between women’s relationships with their voices and their relationships with others and the wider social context. Finally, we present results from a quantitative study comparing voice-hearing in women (n=65) and men (n=132) in a psychiatric setting. Women were more likely than men to have certain forms of voice-hearing (voices conversing) and to have antecedent events of trauma, physical illness, and relationship problems. Voices identified as female may have more positive affect than male voices. We conclude that women voice-hearers have faced and continue to face specific challenges necessitating research and activism, and hope this paper will act as a stimulus to such work.

  11. Psycho-physiological assessment of a prosthetic hand sensory feedback system based on an auditory display: a preliminary study.

    Science.gov (United States)

    Gonzalez, Jose; Soma, Hirokazu; Sekine, Masashi; Yu, Wenwei

    2012-06-09

    Prosthetic hand users have to rely extensively on visual feedback, which seems to lead to a high conscious burden for the users, in order to manipulate their prosthetic devices. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods were explored mainly by looking at the performance results, not taking into account measurements of the user's mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without the usage of a sensory substitution system based on auditory feedback, and how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. 10 male subjects (26+/- years old) participated in this study and were asked to come for 2 consecutive days. On the first day the experiment objective, tasks, and experiment setting were explained. Then, they completed a 30-minute guided training session. On the second day each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject had to answer the NASA TLX questionnaire. Also, during the test the subject's EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal performance and better grasping performance were obtained in the audiovisual modality. The performance improvements when using auditory cues, along with vision

  12. Psycho-physiological assessment of a prosthetic hand sensory feedback system based on an auditory display: a preliminary study

    Directory of Open Access Journals (Sweden)

    Gonzalez Jose

    2012-06-01

    Full Text Available Abstract Background Prosthetic hand users have to rely extensively on visual feedback, which seems to lead to a high conscious burden for the users, in order to manipulate their prosthetic devices. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods were explored mainly by looking at the performance results, not taking into account measurements of the user’s mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without the usage of a sensory substitution system based on auditory feedback, and how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. Methods 10 male subjects (26+/- years old) participated in this study and were asked to come for 2 consecutive days. On the first day the experiment objective, tasks, and experiment setting were explained. Then, they completed a 30-minute guided training session. On the second day each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject had to answer the NASA TLX questionnaire. Also, during the test the subject’s EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. Results The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal performance and better grasping performance were obtained in the audiovisual modality. Conclusions The performance

  13. Auditory feedback improves heart rate moderation during moderate-intensity exercise.

    Science.gov (United States)

    Shaykevich, Alex; Grove, J Robert; Jackson, Ben; Landers, Grant J; Dimmock, James

    2015-05-01

    The objective of this study was to determine whether exposure to automated HR feedback can produce improvements in the ability to regulate HR during moderate-intensity exercise, and to evaluate the persistence of these improvements after feedback is removed. Twenty healthy adults performed 10 indoor exercise sessions on cycle ergometers over 5 wk on a twice-weekly schedule. During these sessions (FB), participants received auditory feedback designed to maintain HR within a personalized, moderate-intensity training zone between 70% and 80% of estimated maximum HR. All feedback was delivered via a custom mobile software application. Participants underwent an initial assessment (PREFB) to measure their ability to maintain exercise intensity defined by the training zone without use of feedback. After completing the feedback training, participants performed three additional assessments identical to PREFB at 1 wk (POST1), 2 wk (POST2), and 4 wk (POST3) after their last feedback session. Time in zone (TIZ), defined as the ratio of the time spent within the training zone divided by the overall time of exercise, rate of perceived exertion, instrumental attitudes, and affective attitudes were then evaluated using two-way, mixed-model ANOVA with session and gender as factors. Training with feedback significantly improved TIZ, suggesting that automated auditory feedback can improve the ability to regulate HR during moderate-intensity exercise in healthy adults.
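
    A small Python sketch of the TIZ measure as defined above, assuming uniformly sampled heart-rate data; the 220-minus-age HRmax estimate in the usage note is the common rule of thumb, used here only for illustration:

        import numpy as np

        def time_in_zone(hr_samples, hr_max):
            """Fraction of exercise time spent within 70-80% of estimated HRmax."""
            hr = np.asarray(hr_samples, dtype=float)
            lo, hi = 0.70 * hr_max, 0.80 * hr_max
            return float(np.mean((hr >= lo) & (hr <= hi)))

        # e.g. for a 30-year-old participant: time_in_zone(recorded_hr, hr_max=220 - 30)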

  14. Auditory vocal analysis and factors associated with voice disorders among teachers.

    Science.gov (United States)

    de Ceballos, Albanita Gomes da Costa; Carvalho, Fernando Martins; de Araújo, Tânia Maria; Dos Reis, Eduardo José Farias Borges

    2011-06-01

    Teachers are professionals who demand much of their voices and, consequently, present a high risk of developing vocal disorders during the course of employment. The aim was to identify factors associated with vocal disorders among teachers. An exploratory cross-sectional study investigated 476 teachers in primary and secondary schools in the city of Salvador, Bahia. Teachers answered a questionnaire and were submitted to auditory vocal analysis. The GRBAS scale was used for the diagnosis of vocal disorders. The study population comprised 82.8% women, with an average age of 40.7 years; most had higher education (88.4%); and they reported an average workload of 38 hours per week, an average of 11.5 years of professional practice, and an average monthly income of R$1,817.18. The prevalence of voice disorders was 53.6% (255 teachers). The bivariate analysis showed statistically significant associations between vocal disorders and age above 40 years (PR = 1.83; 95% CI: 1.27-2.64), family history of dysphonia (PR = 1.72; 95% CI: 1.06-2.80), more than 20 working hours per week (PR = 1.66; 95% CI: 1.09-2.52) and presence of chalk dust in the classroom (PR = 1.70; 95% CI: 1.14-2.53). The study concluded that teachers aged 40 years and over, with a family history of dysphonia, working over 20 hours weekly, and teaching in classrooms with chalk dust are more likely to develop voice disorders than others.
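
    The prevalence ratios reported above follow the standard epidemiological definition; a generic Python sketch with a log-normal 95% CI (illustrative, not the authors' analysis code):

        import numpy as np

        def prevalence_ratio(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
            """PR and its 95% CI from case counts among exposed and unexposed groups."""
            pr = (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)
            se_log = np.sqrt(1 / cases_exposed - 1 / n_exposed
                             + 1 / cases_unexposed - 1 / n_unexposed)
            return pr, (pr * np.exp(-1.96 * se_log), pr * np.exp(1.96 * se_log))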

  15. Auditory feedback affects perception of effort when exercising with a Pulley machine

    DEFF Research Database (Denmark)

    Bordegoni, Monica; Ferrise, Francesco; Grani, Francesco

    2013-01-01

    In this paper we describe an experiment that investigates the role of auditory feedback in affecting the perception of effort when using a physical pulley machine. Specifically, we investigated whether variations in the amplitude and frequency content of the pulley sound affect perception of effort…

  16. Comparing the experience of voices in borderline personality disorder with the experience of voices in a psychotic disorder: A systematic review.

    Science.gov (United States)

    Merrett, Zalie; Rossell, Susan L; Castle, David J

    2016-07-01

    There is substantial clinical and empirical evidence to suggest that, in clinical settings, approximately 50% of individuals with borderline personality disorder experience auditory verbal hallucinations. However, there is limited research investigating the phenomenology of these voices. The aim of this study was to review and compare our current understanding of auditory verbal hallucinations in borderline personality disorder with auditory verbal hallucinations in patients with a psychotic disorder, to critically analyse existing studies investigating auditory verbal hallucinations in borderline personality disorder, and to identify gaps in current knowledge that will help direct future research. The literature was searched using the electronic databases Scopus, PubMed and MEDLINE. Relevant studies were included if they were written in English, were empirical studies specifically addressing auditory verbal hallucinations and borderline personality disorder, were peer reviewed, used only adult human participants and samples with borderline personality disorder as the primary diagnosis, and included a comparison group with a primary psychotic disorder such as schizophrenia. Our search strategy revealed a total of 16 articles investigating the phenomenology of auditory verbal hallucinations in borderline personality disorder. Some studies provided evidence to suggest that the voice experiences in borderline personality disorder are similar to those experienced by people with schizophrenia, for example, occurring inside the head and often involving persecutory voices. Other studies revealed some differences between schizophrenia and borderline personality disorder voice experiences, with the borderline personality disorder voices sounding more derogatory and self-critical in nature and the voice-hearers' responses to the voices being more emotionally resistive. Furthermore, in one study, the schizophrenia group's voices resulted in more disruption in daily functioning

  17. Auditory feedback blocks memory benefits of cueing during sleep.

    Science.gov (United States)

    Schreiner, Thomas; Lehmann, Mick; Rasch, Björn

    2015-10-28

    It is now widely accepted that re-exposure to memory cues during sleep reactivates memories and can improve later recall. However, the underlying mechanisms are still unknown. As reactivation during wakefulness renders memories sensitive to updating, it remains an intriguing question whether reactivated memories during sleep also become susceptible to incorporating further information after the cue. Here we show that the memory benefits of cueing Dutch vocabulary during sleep are in fact completely blocked when memory cues are directly followed by either correct or conflicting auditory feedback, or a pure tone. In addition, immediate (but not delayed) auditory stimulation abolishes the characteristic increases in oscillatory theta and spindle activity typically associated with successful reactivation during sleep as revealed by high-density electroencephalography. We conclude that plastic processes associated with theta and spindle oscillations occurring during a sensitive period immediately after the cue are necessary for stabilizing reactivated memory traces during sleep.

  18. Continuous Auditory Feedback of Eye Movements: An Exploratory Study toward Improving Oculomotor Control

    Directory of Open Access Journals (Sweden)

    Eric O. Boyer

    2017-04-01

    Full Text Available As eye movements are mostly automatic and overtly generated to attain visual goals, individuals have poor metacognitive knowledge of their own eye movements. We present an exploratory study on the effects of real-time continuous auditory feedback generated by eye movements. We considered both a tracking task and a production task in which smooth pursuit eye movements (SPEM) can be endogenously generated. In particular, we used a visual paradigm which makes it possible to generate and control SPEM in the absence of a moving visual target. We investigated whether real-time auditory feedback of eye movement dynamics might improve learning in both tasks, through a training protocol over 8 days. The results indicate that real-time sonification of eye movements can actually modify oculomotor behavior and reinforce intrinsic oculomotor perception. Nevertheless, large inter-individual differences were observed, preventing us from reaching a strong conclusion on sensorimotor learning improvements.

  19. Speaker's voice as a memory cue.

    Science.gov (United States)

    Campeanu, Sandra; Craik, Fergus I M; Alain, Claude

    2015-02-01

    Speaker's voice occupies a central role as the cornerstone of auditory social interaction. Here, we review the evidence suggesting that speaker's voice constitutes an integral context cue in auditory memory. Investigation into the nature of voice representation as a memory cue is essential to understanding auditory memory and the neural correlates which underlie it. Evidence from behavioral and electrophysiological studies suggests that while specific voice reinstatement (i.e., same speaker) often appears to facilitate word memory even without attention to voice at study, the presence of a partial benefit of similar voices between study and test is less clear. In terms of explicit memory experiments utilizing unfamiliar voices, encoding methods appear to play a pivotal role. Voice congruency effects have been found when voice is specifically attended at study (i.e., when relatively shallow, perceptual encoding takes place). These behavioral findings coincide with neural indices of memory performance such as the parietal old/new recollection effect and the late right frontal effect. The former distinguishes between correctly identified old words and correctly identified new words, and reflects voice congruency only when voice is attended at study. Characterization of the latter likely depends upon voice memory, rather than word memory. There is also evidence to suggest that voice effects can be found in implicit memory paradigms. However, the presence of voice effects appears to depend greatly on the task employed. Using a word identification task, perceptual similarity between study and test conditions is, like for explicit memory tests, crucial. In addition, the type of noise employed appears to have a differential effect. While voice effects have been observed when white noise is used at both study and test, using multi-talker babble does not produce the same results. In terms of neuroimaging research modulations, characterization of an implicit memory effect

  20. Face the voice

    DEFF Research Database (Denmark)

    Lønstrup, Ansa

    2014-01-01

    will be based on a reception aesthetic and phenomenological approach, the latter as presented by Don Ihde in his book Listening and Voice: Phenomenologies of Sound, and my analytical sketches will be related to theoretical statements concerning the understanding of voice and media (Cavarero, Dolar, LaBelle, Neumark). Finally, the article will discuss the specific artistic combination and our auditory experience of mediated human voices and sculpturally projected faces in an art museum context under the general conditions of the societal panophonia of disembodied and mediated voices, as promoted by Steven

  1. A software module for implementing auditory and visual feedback on a video-based eye tracking system

    Science.gov (United States)

    Rosanlall, Bharat; Gertner, Izidor; Geri, George A.; Arrington, Karl F.

    2016-05-01

    We describe here the design and implementation of a software module that provides both auditory and visual feedback of the eye position measured by a commercially available eye tracking system. The present audio-visual feedback module (AVFM) serves as an extension to the Arrington Research ViewPoint EyeTracker, but it can be easily modified for use with other similar systems. Two modes of audio feedback and one mode of visual feedback are provided in reference to a circular area-of-interest (AOI). Auditory feedback can be either a click tone emitted when the user's gaze point enters or leaves the AOI, or a sinusoidal waveform with frequency inversely proportional to the distance from the gaze point to the center of the AOI. Visual feedback is in the form of a small circular light patch that is presented whenever the gaze-point is within the AOI. The AVFM processes data that are sent to a dynamic-link library by the EyeTracker. The AVFM's multithreaded implementation also allows real-time data collection (1 kHz sampling rate) and graphics processing that allows display of the current/past gaze-points as well as the AOI. The feedback provided by the AVFM described here has applications in military target acquisition and personnel training, as well as in visual experimentation, clinical research, marketing research, and sports training.
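    A rough sketch of the per-sample feedback logic this description implies; the constants (tone band, inverse-distance gain) are assumptions, not values from the AVFM:

    ```python
    import math

    def aoi_feedback(gaze, center, radius, prev_inside,
                     k=2.0e4, f_lo=100.0, f_hi=2000.0):
        """One update of the audio-visual feedback described above.

        Returns (click, tone_hz, show_patch): a click tone on AOI entry or
        exit, a sinusoid whose frequency is inversely proportional to the
        distance from the gaze point to the AOI centre (clamped to an
        audible band), and a flag for the visual light patch shown while
        the gaze point is inside the AOI.
        """
        dist = math.hypot(gaze[0] - center[0], gaze[1] - center[1])
        inside = dist <= radius
        click = inside != prev_inside                       # entry/exit event
        tone_hz = max(f_lo, min(f_hi, k / max(dist, 1.0)))  # inverse mapping
        return click, tone_hz, inside
    ```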

  2. Comparisons of Stuttering Frequency during and after Speech Initiation in Unaltered Feedback, Altered Auditory Feedback and Choral Speech Conditions

    Science.gov (United States)

    Saltuklaroglu, Tim; Kalinowski, Joseph; Robbins, Mary; Crawcour, Stephen; Bowers, Andrew

    2009-01-01

    Background: Stuttering is prone to strike during speech initiation more so than at any other point in an utterance. The use of altered auditory feedback (AAF) has been found to produce robust decreases in stuttering frequency by creating an electronic rendition of choral speech (i.e., speaking in unison). However, AAF requires users to self-initiate…

  3. [Hearing voices does not always constitute a psychosis].

    Science.gov (United States)

    Sommer, I E C; van der Spek, D W

    2016-01-01

    Hearing voices (i.e. auditory verbal hallucinations) is mainly known as part of schizophrenia and other psychotic disorders. However, hearing voices is a symptom that can occur in many psychiatric, neurological and general medical conditions. We present three cases of non-psychotic patients with auditory verbal hallucinations caused by different disorders. The first patient is a 74-year-old male with voices due to hearing loss; the second is a 20-year-old woman with voices due to traumatisation. The third patient is a 27-year-old woman with voices caused by temporal lobe epilepsy. Hearing voices is a phenomenon that occurs in a variety of disorders. Therefore, identification of the underlying disorder is essential to indicate treatment. Improvement of coping with the voices can reduce their impact on a patient. Antipsychotic drugs are especially effective when hearing voices is accompanied by delusions or disorganization. When this is not the case, the efficacy of antipsychotic drugs will probably not outweigh the side-effects.

  4. Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke.

    Science.gov (United States)

    Secoli, Riccardo; Milot, Marie-Helene; Rosati, Giulio; Reinkensmeyer, David J

    2011-04-23

    Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis and fourteen non-impaired healthy control participants, tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Visual distraction decreased participants' effort during a standard robot-assisted movement training task. This effect was greater for the hemiparetic arm, suggesting that the increased demands associated
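    The error-to-sound mapping in this record (and in its open-access duplicate below) is stated only qualitatively: the beep repetition rate grows with tracking error. A sketch under that assumption; the linear form and all constants are illustrative:

    ```python
    def beep_interval(tracking_error, base_rate=1.0, gain=4.0, max_rate=8.0):
        """Map tracking error to the interval (s) between feedback beeps.

        Repetition rate increases with error, as described above; the
        linear mapping and its constants are assumptions.
        """
        rate_hz = min(max_rate, base_rate + gain * tracking_error)
        return 1.0 / rate_hz
    ```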

  5. Object discrimination using optimized multi-frequency auditory cross-modal haptic feedback.

    Science.gov (United States)

    Gibson, Alison; Artemiadis, Panagiotis

    2014-01-01

    As the field of brain-machine interfaces and neuro-prosthetics continues to grow, there is a high need for sensor and actuation mechanisms that can provide haptic feedback to the user. Current technologies employ expensive, invasive and often inefficient force feedback methods, resulting in an unrealistic solution for individuals who rely on these devices. This paper responds through the development, integration and analysis of a novel feedback architecture where haptic information during the neural control of a prosthetic hand is perceived through multi-frequency auditory signals. Through representing force magnitude with volume and force location with frequency, the feedback architecture can translate the haptic experiences of a robotic end effector into the alternative sensory modality of sound. Previous research with the proposed cross-modal feedback method confirmed its learnability, so the current work aimed to investigate which frequency map (i.e. frequency-specific locations on the hand) is optimal in helping users distinguish between hand-held objects and tasks associated with them. After short use with the cross-modal feedback during the electromyographic (EMG) control of a prosthetic hand, testing results show that users are able to use auditory feedback alone to discriminate between everyday objects. While users showed adaptation to three different frequency maps, the simplest map containing only two frequencies was found to be the most useful in discriminating between objects. This outcome provides support for the feasibility and practicality of the cross-modal feedback method during the neural control of prosthetics.
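    A toy version of the cross-modal mapping described above (force magnitude → volume, force location → frequency), using the simplest two-frequency map the study found most useful; the specific frequencies and the normalization constant are assumptions:

    ```python
    # Hypothetical two-frequency map: contact location selects the pitch.
    FREQ_MAP = {"thumb": 440.0, "fingers": 880.0}

    def haptic_to_audio(location, force_newtons, max_force=20.0):
        """Return (frequency_hz, volume) for a sensed contact.

        Force magnitude is normalized into loudness; location picks the tone.
        """
        volume = min(1.0, force_newtons / max_force)
        return FREQ_MAP[location], volume
    ```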

  6. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
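    A very rough analogue (an assumption, not the authors' algorithm) of the sketching process: keep only the N strongest time-frequency elements per second of an acoustic (Fourier) spectrogram and discard the rest. The study additionally used an auditory (cochlear) spectrogram and a dedicated peak picker:

    ```python
    import numpy as np
    from scipy.signal import stft

    def acoustic_sketch(x, fs, peaks_per_second=10):
        """Keep the strongest spectrogram elements, zero everything else."""
        f, t, Z = stft(x, fs=fs)
        mag = np.abs(Z)
        n_keep = max(1, int(peaks_per_second * len(x) / fs))
        thresh = np.sort(mag.ravel())[-n_keep]      # n-th largest magnitude
        return f, t, np.where(mag >= thresh, Z, 0)  # resynthesis (istft) omitted
    ```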

  7. Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke

    Directory of Open Access Journals (Sweden)

    Reinkensmeyer David J

    2011-04-01

    Full Text Available Abstract Background Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Methods Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis and fourteen non-impaired healthy control participants, tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Results Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Conclusions Visual distraction decreased participants' effort during a standard robot-assisted movement training task. This effect was greater for

  8. Fast negative feedback enables mammalian auditory nerve fibers to encode a wide dynamic range of sound intensities.

    Directory of Open Access Journals (Sweden)

    Mark Ospeck

    Full Text Available Mammalian auditory nerve fibers (ANF) are remarkable for being able to encode a 40 dB, or hundredfold, range of sound pressure levels into their firing rate. Most of the fibers are very sensitive and raise their quiescent spike rate by a small amount for a faint sound at auditory threshold. Then as the sound intensity is increased, they slowly increase their spike rate, with some fibers going up as high as ∼300 Hz. In this way mammals are able to combine sensitivity and wide dynamic range. They are also able to discern sounds embedded within background noise. ANF receive efferent feedback, which suggests that the fibers are readjusted according to the background noise in order to maximize the information content of their auditory spike trains. Inner hair cells activate currents in the unmyelinated distal dendrites of ANF, where sound intensity is rate-coded into action potentials. We model this spike generator compartment as an attenuator that employs fast negative feedback: input current induces rapid and proportional leak currents. This way ANF are able to have a linear frequency-to-input-current (f-I) curve with a wide dynamic range. The ANF spike generator remains very sensitive to threshold currents, but efferent feedback is able to lower its gain in response to noise.
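    A toy rendering of the attenuator idea sketched above: fast feedback subtracts a leak proportional to the input current, lowering the gain so the f-I curve stays linear over a wide range before saturating near ~300 Hz. All constants are illustrative, not fitted values from the paper:

    ```python
    def anf_rate(i_input, attenuation=0.9, gain=3000.0,
                 r_spont=5.0, r_max=300.0):
        """Firing rate (Hz) of the toy spike generator for input current i_input."""
        i_eff = (1.0 - attenuation) * i_input       # proportional leak current
        return min(r_max, r_spont + gain * i_eff)   # linear f-I, capped near 300 Hz
    ```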

  9. Effects of Delayed Auditory Feedback in Stuttering Patterns

    Directory of Open Access Journals (Sweden)

    Janeth Hernández Jaramillo

    2014-05-01

    Full Text Available The present study, a single-subject design, analyzes the patterns of stuttering in a speech corpus across various oral language tasks, under conditions of use or non-use of Delayed Auditory Feedback (DAF), in order to establish the effect of DAF on the frequency of occurrence and the type of dysrhythmia. The study concludes that DAF has a positive effect, reducing fluency errors by 25%, with variation depending on the type of oral production task. This in turn suggests that the remaining 75% of disfluencies are linked to higher-level encoding failures that cannot be resolved or compensated for by DAF. The authors discuss the implications of these findings for therapeutic intervention in stuttering.

  10. Visual attention modulates brain activation to angry voices.

    Science.gov (United States)

    Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas

    2011-06-29

    In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.

  11. The processing of auditory and visual recognition of self-stimuli.

    Science.gov (United States)

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

    This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  12. Auditory N1 reveals planning and monitoring processes during music performance.

    Science.gov (United States)

    Mathias, Brian; Gehring, William J; Palmer, Caroline

    2017-02-01

    The current study investigated the relationship between planning processes and feedback monitoring during music performance, a complex task in which performers prepare upcoming events while monitoring their sensory outcomes. Theories of action planning in auditory-motor production tasks propose that the planning of future events co-occurs with the perception of auditory feedback. This study investigated the neural correlates of planning and feedback monitoring by manipulating the contents of auditory feedback during music performance. Pianists memorized and performed melodies at a cued tempo in a synchronization-continuation task while the EEG was recorded. During performance, auditory feedback associated with single melody tones was occasionally substituted with tones corresponding to future (next), present (current), or past (previous) melody tones. Only future-oriented altered feedback disrupted behavior: Future-oriented feedback caused pianists to slow down on the subsequent tone more than past-oriented feedback, and amplitudes of the auditory N1 potential elicited by the tone immediately following the altered feedback were larger for future-oriented than for past-oriented or noncontextual (unrelated) altered feedback; larger N1 amplitudes were associated with greater slowing following altered feedback in the future condition only. Feedback-related negativities were elicited in all altered feedback conditions. In sum, behavioral and neural evidence suggests that future-oriented feedback disrupts performance more than past-oriented feedback, consistent with planning theories that posit similarity-based interference between feedback and planning contents. Neural sensory processing of auditory feedback, reflected in the N1 ERP, may serve as a marker for temporal disruption caused by altered auditory feedback in auditory-motor production tasks. © 2016 Society for Psychophysiological Research.

  13. A pilot study of the relations within which hearing voices participates: Towards a functional distinction between voice hearers and controls

    NARCIS (Netherlands)

    McEnteggart, C.; Barnes-Holmes, Y.; Egger, J.I.M.; Barnes-Holmes, D.

    2016-01-01

    The current research used the Implicit Relational Assessment Procedure (IRAP) as a preliminary step toward bringing a broad, functional approach to understanding psychosis, by focusing on the specific phenomenon of auditory hallucinations of voices and sounds (often referred to as hearing voices).

  14. Shop 'til you hear it drop - Influence of Interactive Auditory Feedback in a Virtual Reality Supermarket

    DEFF Research Database (Denmark)

    Sikström, Erik; Høeg, Emil Rosenlund; Mangano, Luca

    2016-01-01

    In this paper we describe an experiment aiming to investigate the impact of auditory feedback in a virtual reality supermarket scenario. The participants were asked to read a shopping list and collect items one by one and place them into a shopping cart. Three conditions were presented randomly...

  15. Auditory prediction during speaking and listening.

    Science.gov (United States)

    Sato, Marc; Shiller, Douglas M

    2018-02-02

    In the present EEG study, the role of auditory prediction in speech was explored through the comparison of auditory cortical responses during active speaking and passive listening to the same acoustic speech signals. Two manipulations of sensory prediction accuracy were used during the speaking task: (1) a real-time change in vowel F1 feedback (reducing prediction accuracy relative to unaltered feedback) and (2) presenting a stable auditory target rather than a visual cue to speak (enhancing auditory prediction accuracy during baseline productions, and potentially enhancing the perturbing effect of altered feedback). While subjects compensated for the F1 manipulation, no difference between the auditory-cue and visual-cue conditions was found. Under visually-cued conditions, reduced N1/P2 amplitude was observed during speaking vs. listening, reflecting a motor-to-sensory prediction. In addition, a significant correlation was observed between the magnitude of the behavioral compensatory F1 response and the magnitude of this speaking-induced suppression (SIS) for P2 during the altered auditory feedback phase, where a stronger compensatory decrease in F1 was associated with a stronger SIS effect. Finally, under the auditory-cued condition, an auditory repetition-suppression effect was observed in N1/P2 amplitude during the listening task but not active speaking, suggesting that auditory predictive processes during speaking and passive listening are functionally distinct. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Noise perception in the workplace and auditory and extra-auditory symptoms referred by university professors.

    Science.gov (United States)

    Servilha, Emilse Aparecida Merlin; Delatti, Marina de Almeida

    2012-01-01

    To investigate the correlation between noise in the work environment and auditory and extra-auditory symptoms referred by university professors. Eighty-five professors answered a questionnaire about identification, functional status, and health. The relationship between occupational noise and auditory and extra-auditory symptoms was investigated. Statistical analysis considered a significance level of 5%. None of the professors indicated absence of noise. Responses were grouped into Always (A) (n=21) and Not Always (NA) (n=63). Significant sources of noise were the schoolyard and other classes, which were classified as high intensity, as well as poor acoustics and echo. There was no association between referred noise and health complaints, such as digestive, hormonal, osteoarticular, dental, circulatory, respiratory and emotional complaints. There was also no association between referred noise and hearing complaints, but group A showed a higher occurrence of responses regarding noise nuisance, hearing difficulty, dizziness/vertigo, tinnitus, and earache. There was an association between referred noise and voice alterations, and group NA presented a higher percentage of cases with voice alterations than group A. The university environment was considered noisy; however, there was no association with auditory and extra-auditory symptoms. The hearing complaints were more evident among professors in group A. Professors' health is a multi-dimensional product and, therefore, noise cannot be considered the only aggravating factor.

  17. Recovering from Hallucinations: A Qualitative Study of Coping with Voices Hearing of People with Schizophrenia in Hong Kong

    Directory of Open Access Journals (Sweden)

    Petrus Ng

    2012-01-01

    Full Text Available Auditory hallucination is a positive symptom of schizophrenia and has significant impacts on the lives of individuals. People with auditory hallucination require considerable assistance from mental health professionals. Apart from medications, they may apply different lay methods to cope with their voice hearing. Results from qualitative interviews showed that people with schizophrenia in the Chinese sociocultural context of Hong Kong were coping with auditory hallucination in different ways, including (a) changing social contacts, (b) manipulating the voices, and (c) changing perception and meaning towards the voices. Implications for recovery from psychiatric illness of individuals with auditory hallucinations are discussed.

  18. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  19. The effect of background music in auditory health persuasion

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    2013-01-01

    In auditory health persuasion, threatening information regarding health is communicated by voice only. One relevant context of auditory persuasion is the addition of background music. There are different mechanisms through which background music might influence persuasion, for example through mood

  20. Utility estimation of the application of auditory-visual-tactile sense feedback in respiratory gated radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Jung Hun; Kim, Byeong Jin; Roh, Shi Won; Lee, Hyeon Chan; Jang, Hyeong Jun; Kim, Hoi Nam [Dept. of Radiation Oncology, Biomedical Engineering, Seoul St. Mary's Hospital, Seoul (Korea, Republic of); Song, Jae Hoon [Dept. of Biomedical Engineering, Seoul St. Mary's Hospital, Seoul (Korea, Republic of); Kim, Young Jae [Dept. of Radiological Technology, Gwang Yang Health College, Gwangyang (Korea, Republic of)

    2013-03-15

    The purpose of this study was to evaluate the possibility of optimizing gated treatment delivery time and maintaining stable respiration by guiding breathing with auditory-visual-tactile assistance. The experimenters' respiration was measured with the ANZAI 4D system. We obtained a natural breathing signal, a monitor-guided breathing signal, a monitor- and ventilator-guided breathing signal, and a breath-hold signal using a real-time monitor during 10 minutes of beam-on time. To check the stability of the respiratory signals in each group, their means, standard deviations, variation values, and beam-times were compared. The stability of each respiratory signal was assessed from the change in deviation over the course of respiration. The analysis of the respiratory signals showed that, for all experimenters, the breathing signal obtained using both the real-time monitor and the ventilator was the most stable and required the shortest time. In this study, respiratory-gated radiation therapy was evaluated with and without auditory-visual-tactile feedback. The study showed that respiratory-gated radiation therapy delivery time could be significantly improved by the application of video feedback combined with audio-tactile assistance. This delivery technique proved its feasibility to limit tumor motion during treatment delivery for all patients to a defined value while maintaining accuracy, and proved the applicability of the technique in a conventional clinical schedule.
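    A small sketch of the per-trace comparison described above; the record lists means, standard deviations, "variation values" and beam-time without formulas, so the coefficient of variation below is an assumption:

    ```python
    import statistics

    def respiratory_stability(signal, beam_on, dt=1.0):
        """Summarize one respiratory trace: mean, SD, CV, and beam-on time.

        'signal' holds sampled respiratory amplitudes; 'beam_on' is a
        parallel list of gating flags (True while the beam is on), one
        per sample of duration dt seconds.
        """
        mean = statistics.mean(signal)
        sd = statistics.stdev(signal)
        return {"mean": mean, "sd": sd,
                "cv": sd / mean,                  # assumed "variation value"
                "beam_time_s": sum(beam_on) * dt}
    ```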

  1. Utility estimation of the application of auditory-visual-tactile sense feedback in respiratory gated radiation therapy

    International Nuclear Information System (INIS)

    Jo, Jung Hun; Kim, Byeong Jin; Roh, Shi Won; Lee, Hyeon Chan; Jang, Hyeong Jun; Kim, Hoi Nam; Song, Jae Hoon; Kim, Young Jae

    2013-01-01

    The purpose of this study was to evaluate the possibility of optimizing gated treatment delivery time and maintaining stable respiration by guiding breathing with auditory-visual-tactile assistance. The experimenters' respiration was measured with the ANZAI 4D system. We obtained a natural breathing signal, a monitor-guided breathing signal, a monitor- and ventilator-guided breathing signal, and a breath-hold signal using a real-time monitor during 10 minutes of beam-on time. To check the stability of the respiratory signals in each group, their means, standard deviations, variation values, and beam-times were compared. The stability of each respiratory signal was assessed from the change in deviation over the course of respiration. The analysis of the respiratory signals showed that, for all experimenters, the breathing signal obtained using both the real-time monitor and the ventilator was the most stable and required the shortest time. In this study, respiratory-gated radiation therapy was evaluated with and without auditory-visual-tactile feedback. The study showed that respiratory-gated radiation therapy delivery time could be significantly improved by the application of video feedback combined with audio-tactile assistance. This delivery technique proved its feasibility to limit tumor motion during treatment delivery for all patients to a defined value while maintaining accuracy, and proved the applicability of the technique in a conventional clinical schedule.

  2. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training […]. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  3. Investigations of Hemispheric Specialization of Self-Voice Recognition

    Science.gov (United States)

    Rosa, Christine; Lassonde, Maryse; Pinard, Claudine; Keenan, Julian Paul; Belin, Pascal

    2008-01-01

    Three experiments investigated functional asymmetries related to self-recognition in the domain of voices. In Experiment 1, participants were asked to identify one of three presented voices (self, familiar or unknown) by responding with either the right or the left-hand. In Experiment 2, participants were presented with auditory morphs between the…

  4. Using voice input and audio feedback to enhance the reality of a virtual experience

    Energy Technology Data Exchange (ETDEWEB)

    Miner, N.E.

    1994-04-01

    Virtual Reality (VR) is a rapidly emerging technology which allows participants to experience a virtual environment through stimulation of the participant's senses. Intuitive and natural interactions with the virtual world help to create a realistic experience. Typically, a participant is immersed in a virtual environment through the use of a 3-D viewer. Realistic, computer-generated environment models and accurate tracking of a participant's view are important factors for adding realism to a virtual experience. Stimulating a participant's sense of sound and providing a natural form of communication for interacting with the virtual world are equally important. This paper discusses the advantages and importance of incorporating voice recognition and audio feedback capabilities into a virtual world experience. Various approaches and levels of complexity are discussed. Examples of the use of voice and sound are presented through the description of a research application developed in the VR laboratory at Sandia National Laboratories.

  5. Feedback Valence Affects Auditory Perceptual Learning Independently of Feedback Probability

    Science.gov (United States)

    Amitay, Sygal; Moore, David R.; Molloy, Katharine; Halliday, Lorna F.

    2015-01-01

    Previous studies have suggested that negative feedback is more effective in driving learning than positive feedback. We investigated the effect on learning of providing varying amounts of negative and positive feedback while listeners attempted to discriminate between three identical tones, an impossible task that nevertheless produces robust learning. Four feedback conditions were compared during training: 90% positive feedback or 10% negative feedback informed the participants that they were doing equally well, while 10% positive or 90% negative feedback informed them they were doing equally badly. In all conditions the feedback was random in relation to the listeners' responses (because the task was to discriminate three identical tones), yet both the valence (negative vs. positive) and the probability of feedback (10% vs. 90%) affected learning. Feedback that informed listeners they were doing badly resulted in better post-training performance than feedback that informed them they were doing well, independent of valence. In addition, positive feedback during training resulted in better post-training performance than negative feedback, but only positive feedback indicating listeners were doing badly on the task resulted in learning. As we have previously speculated, feedback that better reflected the difficulty of the task was more effective in driving learning than feedback that suggested performance was better than it should have been given perceived task difficulty. But contrary to expectations, positive feedback was more effective than negative feedback in driving learning. Feedback thus had two separable effects on learning: feedback valence affected motivation on a subjectively difficult task, and learning occurred only when feedback probability reflected the subjective difficulty. To optimize learning, training programs need to take into consideration both feedback valence and probability. PMID:25946173

  6. Prevalence and correlates of auditory vocal hallucinations in middle childhood

    NARCIS (Netherlands)

    Bartels-Velthuis, A.A.; Jenner, J.A.; van de Willige, G.; van Os, J.; Wiersma, D.

    Background Hearing voices occurs in middle childhood, but little is known about prevalence, aetiology and immediate consequences. Aims To investigate prevalence, developmental risk factors and behavioural correlates of auditory vocal hallucinations in 7- and 8-year-olds. Method Auditory vocal

  7. Using Facebook to Reach People Who Experience Auditory Hallucinations

    OpenAIRE

    Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging...

  8. Feedforward and feedback projections of caudal belt and parabelt areas of auditory cortex: refining the hierarchical model

    Directory of Open Access Journals (Sweden)

    Troy A Hackett

    2014-04-01

    Full Text Available Our working model of the primate auditory cortex recognizes three major regions (core, belt, parabelt), subdivided into thirteen areas. The connections between areas are topographically ordered in a manner consistent with information flow along two major anatomical axes: core-belt-parabelt and caudal-rostral. Remarkably, most of the connections supporting this model were revealed using retrograde tracing techniques. Little is known about laminar circuitry, as anterograde tracing of axon terminations has rarely been used. The purpose of the present study was to examine the laminar projections of three areas of auditory cortex, pursuant to analysis of all areas. The selected areas were: middle lateral belt (ML), caudomedial belt (CM), and caudal parabelt (CPB). Injections of anterograde tracers yielded data consistent with major features of our model, and also new findings that compel modifications. Results supporting the model were: (1) feedforward projection from ML and CM terminated in CPB; (2) feedforward projections from ML and CPB terminated in rostral areas of the belt and parabelt; and (3) feedback projections typified inputs to the core region from belt and parabelt. At odds with the model was the convergence of feedforward inputs into rostral medial belt from ML and CPB. This was unexpected since CPB is at a higher stage of the processing hierarchy, with mainly feedback projections to all other belt areas. Lastly, extending the model, feedforward projections from CM, ML, and CPB overlapped in the temporal parietal occipital area (TPO) in the superior temporal sulcus, indicating significant auditory influence on sensory processing in this region. The combined results refine our working model and highlight the need to complete studies of the laminar inputs to all areas of auditory cortex. Their documentation is essential for developing informed hypotheses about the neurophysiological influences of inputs to each layer and area.

  9. Speaker-Sex Discrimination for Voiced and Whispered Vowels at Short Durations

    OpenAIRE

    Smith, David R. R.

    2016-01-01

    Whispered vowels, produced with no vocal fold vibration, lack the periodic temporal fine structure which in voiced vowels underlies the perceptual attribute of pitch (a salient auditory cue to speaker sex). Voiced vowels possess no temporal fine structure at very short durations (below two glottal cycles). The prediction was that speaker-sex discrimination performance for whispered and voiced vowels would be similar for very short durations but, as stimulus duration increases, voiced vowel pe...

  10. Foetal response to music and voice.

    Science.gov (United States)

    Al-Qahtani, Noura H

    2005-10-01

    To examine whether prenatal exposure to music and voice alters foetal behaviour and whether foetal response to music differs from that to the human voice. A prospective observational study was conducted in 20 normal term pregnant mothers. Ten foetuses were exposed to music and voice for 15 s at different sound pressure levels to determine the optimal setting for the auditory stimulation. Music, voice and sham were played to another 10 foetuses via a headphone on the maternal abdomen. The sound pressure level was 105 dB and 94 dB for music and voice, respectively. Computerised assessments of foetal heart rate and activity were recorded; 90 actocardiograms were obtained for the whole group. One-way ANOVA followed by post hoc analysis (Student-Newman-Keuls method) was used to determine whether there was a significant difference in foetal response to music and voice versus sham. Foetuses responded with heart rate acceleration and a motor response to both music and voice. This was statistically significant compared to sham. There was no significant difference between the foetal heart rate acceleration to music and voice. Prenatal exposure to music and voice alters foetal behaviour. No difference was detected in foetal response to music versus voice.

  11. Superior voice recognition in a patient with acquired prosopagnosia and object agnosia.

    Science.gov (United States)

    Hoover, Adria E N; Démonet, Jean-François; Steeves, Jennifer K E

    2010-11-01

    Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person identity cues such as hair, gait or the voice. Are they therefore superior at the use of non-face cues, specifically voices, to person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is perfectly able to recognize people's voices and car horns and bimodal stimuli. These data show a reverse shift in the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared to the controls revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia it is advantageous to develop a superior use of voices for person identity recognition in everyday life. Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. The Effects of Computerized Auditory Feedback on Electronic Article Surveillance Tag Placement in an Auto-Parts Distribution Center

    Science.gov (United States)

    Goomas, David T.

    2008-01-01

    In this report from the field, computerized auditory feedback was used to inform order selectors and order selector auditors in a distribution center to add an electronic article surveillance (EAS) adhesive tag. This was done by programming handheld computers to emit a loud beep for high-priced items upon scanning the item's bar-coded Universal…

  13. Effect of an auditory feedback substitution, tactilo-kinesthetic, or visual feedback on kinematics of pouring water from kettle into cup.

    Science.gov (United States)

    Portnoy, Sigal; Halaby, Orli; Dekel-Chen, Dotan; Dierick, Frédéric

    2015-11-01

    Pouring hot water from a kettle into a cup may prove a hazardous task, especially for the elderly or the visually-impaired. Individuals with deteriorating eyesight may endanger their hands by performing this task with both hands, relying on tactilo-kinesthetic feedback (TKF). Auditory feedback (AF) may allow them to perform the task singlehandedly, thereby reducing the risk of injury. However, since relying on an AF is not intuitive and requires practice, we aimed to determine whether AF supplied during the task of pouring water can be used as naturally as visual feedback (VF) following practice. For this purpose, we quantified, in young healthy sighted subjects (n = 20), the performance and kinematics of pouring water in the presence of three isolated feedbacks: visual, tactilo-kinesthetic, or auditory. There were no significant differences between the weights of spilled water in the AF condition compared to the TKF condition in the first, fifth or thirteenth trials. The subjectively-reported difficulty levels of using the TKF and the AF were significantly reduced between the first and thirteenth trials for both TKF (p = 0.01) and AF (p = 0.001). Trunk rotation during the first trial using the TKF was significantly lower than the trunk rotation while using VF. Also, shoulder adduction during the first trial using the TKF was significantly higher than the shoulder adduction while using the VF. During the AF trials, the median travel distance of the tip of the kettle was significantly reduced over the first trials, so that by the thirteenth trial it did not differ significantly from the median travel distance using TKF and VF. The maximal velocity of the tip of the kettle was constant for each of the feedback conditions, but was higher by 10 cm s(-1) using VF than TKF, which in turn was higher by 10 cm s(-1) than using AF. The smoothness of movement in the TKF and AF conditions, expressed by the normalized jerk score (NJSM), was one and two orders
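    The abstract names a normalized jerk score (NJSM) without giving its formula; a common dimensionless-jerk definition, offered here as an assumption, is sqrt(1/2 · ∫jerk² dt · T⁵ / L²), where T is movement duration and L is path length (lower values mean smoother movement):

    ```python
    import numpy as np

    def normalized_jerk(positions, dt):
        """Dimensionless jerk of a sampled 3-D path (e.g., the kettle tip)."""
        pos = np.asarray(positions, dtype=float)          # shape (n, 3)
        jerk = np.diff(pos, n=3, axis=0) / dt**3          # third time derivative
        duration = (len(pos) - 1) * dt
        path_len = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))
        integral = np.sum(jerk**2) * dt
        return np.sqrt(0.5 * integral * duration**5 / path_len**2)
    ```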

  14. Auditory Hallucinations as Translational Psychiatry: Evidence from Magnetic Resonance Imaging.

    Science.gov (United States)

    Hugdahl, Kenneth

    2017-12-01

    In this invited review article, I present a translational perspective and overview of our research on auditory hallucinations in schizophrenia at the University of Bergen, Norway, with a focus on the neuronal mechanisms underlying the phenomenology of experiencing "hearing voices". An auditory verbal hallucination (i.e. hearing a voice) is defined as a sensory experience in the absence of a corresponding external sensory source that could explain the phenomenological experience. I suggest a general frame or scheme for the study of auditory verbal hallucinations, called Levels of Explanation. Using a Levels of Explanation approach, mental phenomena can be described and explained at different levels (cultural, clinical, cognitive, brain-imaging, cellular and molecular). Another way of saying this is that, to advance knowledge in a research field, it is not only necessary to replicate findings, but also to show how evidence obtained with one method, and at one level of explanation, converges with evidence obtained with another method at another level. To achieve breakthroughs in our understanding of auditory verbal hallucinations, we have to advance vertically through the various levels, rather than the more common approach of staying at our favourite level and advancing horizontally (e.g., more advanced techniques and data acquisition analyses). The horizontal expansion will, however, not advance a deeper understanding of how an auditory verbal hallucination spontaneously starts and stops. Finally, I present data from the clinical, cognitive, brain-imaging, and cellular levels, where data from one level validate and support data at another level, called converging of evidence. Using a translational approach, the current status of auditory verbal hallucinations is that they implicate speech perception areas in the left temporal lobe, impairing perception of and attention to external sounds. Preliminary results also show that amygdala is implicated in the emotional

  15. Auditory Hallucinations as Translational Psychiatry: Evidence from Magnetic Resonance Imaging

    Directory of Open Access Journals (Sweden)

    Kenneth Hugdahl

    2017-12-01

    Full Text Available In this invited review article, I present a translational perspective and overview of our research on auditory hallucinations in schizophrenia at the University of Bergen, Norway, with a focus on the neuronal mechanisms underlying the phenomenology of experiencing "hearing voices". An auditory verbal hallucination (i.e. hearing a voice) is defined as a sensory experience in the absence of a corresponding external sensory source that could explain the phenomenological experience. I suggest a general frame or scheme for the study of auditory verbal hallucinations, called Levels of Explanation. Using a Levels of Explanation approach, mental phenomena can be described and explained at different levels (cultural, clinical, cognitive, brain-imaging, cellular and molecular). Another way of saying this is that, to advance knowledge in a research field, it is not only necessary to replicate findings, but also to show how evidence obtained with one method, and at one level of explanation, converges with evidence obtained with another method at another level. To achieve breakthroughs in our understanding of auditory verbal hallucinations, we have to advance vertically through the various levels, rather than the more common approach of staying at our favourite level and advancing horizontally (e.g., more advanced techniques and data acquisition analyses). The horizontal expansion will, however, not advance a deeper understanding of how an auditory verbal hallucination spontaneously starts and stops. Finally, I present data from the clinical, cognitive, brain-imaging, and cellular levels, where data from one level validate and support data at another level, called converging of evidence. Using a translational approach, the current status of auditory verbal hallucinations is that they implicate speech perception areas in the left temporal lobe, impairing perception of and attention to external sounds. Preliminary results also show that amygdala is implicated in

  16. Cognitive biases and auditory verbal hallucinations in healthy and clinical individuals

    NARCIS (Netherlands)

    Daalman, K.; Sommer, I. E. C.; Derks, E. M.; Peters, E. R.

    2013-01-01

    Background. Several cognitive biases are related to psychotic symptoms, including auditory verbal hallucinations (AVH). It remains unclear whether these biases differ in voice-hearers with and without a 'need-for-care'. Method. A total of 72 healthy controls, 72 healthy voice-hearers and 72 clinical

  17. The Effect of Delayed Auditory Feedback on Activity in the Temporal Lobe while Speaking: A Positron Emission Tomography Study

    Science.gov (United States)

    Takaso, Hideki; Eisner, Frank; Wise, Richard J. S.; Scott, Sophie K.

    2010-01-01

    Purpose: Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many nonstuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Method: Positron emission…

  18. Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback.

    Science.gov (United States)

    Cler, Gabriel J; Lee, Jackson C; Mittelman, Talia; Stepp, Cara E; Bohland, Jason W

    2017-06-22

    Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory-perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Eight typical speakers produced nonsense syllable sequences under normal auditory feedback and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. https://doi.org/10.23641/asha.5103067.
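
    As an aside for readers unfamiliar with the apparatus: a DAF rig simply plays the speaker's microphone signal back over headphones after a fixed lag. Below is a minimal sketch of such a 200-ms delay line; it assumes the third-party Python sounddevice library and a mono audio device, and illustrates the general technique rather than the equipment used in this study.

    ```python
    import numpy as np
    import sounddevice as sd  # assumed available; wraps PortAudio

    FS = 44100                       # sample rate (Hz)
    DELAY_SAMPLES = int(FS * 0.200)  # 200 ms delay, as in the study

    # Ring buffer: buf[i] holds the sample recorded DELAY_SAMPLES frames ago.
    buf = np.zeros((DELAY_SAMPLES, 1), dtype="float32")
    idx = 0

    def callback(indata, outdata, frames, time, status):
        """Emit each microphone sample 200 ms after it was captured."""
        global idx
        for n in range(frames):
            outdata[n] = buf[idx]    # play the delayed sample
            buf[idx] = indata[n]     # store the current sample
            idx = (idx + 1) % DELAY_SAMPLES

    # Run the feedback loop for ten seconds (headphones recommended).
    with sd.Stream(samplerate=FS, channels=1, dtype="float32", callback=callback):
        sd.sleep(10_000)
    ```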

  19. Voice disorders in teachers: self-report, auditory-perceptual assessment of voice and vocal fold assessment

    Directory of Open Access Journals (Sweden)

    Maria Fabiana Bonfim de Lima-Silva

    2012-12-01

    Full Text Available PURPOSE: To analyze the presence of voice disorders in teachers through the agreement between self-report, auditory-perceptual assessment of voice, and vocal fold assessment. METHODS: The subjects of this cross-sectional study were 60 public elementary, middle and high school teachers from two schools. After answering a self-awareness questionnaire (Voice Production Condition of the Teacher - CPV-P) to characterize the sample and to collect data on self-reported voice disorders, participants underwent speech sample collection and nasofibrolaryngoscopic examination. Three speech-language pathologist judges used the GRBASI scale to classify the voices, and an otorhinolaryngologist described the vocal fold (VF) alterations found. Data were analyzed descriptively and then submitted to association tests. RESULTS: In the questionnaire, 63.3% of the participants reported having, or having had, a voice disorder. Of the total, 43.3% were diagnosed with a voice alteration and 46.7% with a vocal fold alteration. There was no association between self-report and voice assessment, nor between self-report and VF assessment, with low agreement among the three assessments. However, there was an association between the voice and VF assessments, with intermediate agreement between them. CONCLUSION: Voice disorders are self-reported more often than they are confirmed by auditory-perceptual assessment of voice and of the vocal folds. The intermediate agreement between the latter two assessments indicates that at least one of them should be performed when screening teachers.

  20. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems.

    Science.gov (United States)

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-03-01

    One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers' visual and manual distractions with 'infotainment' technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual-manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox 'one-shot' voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory-vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers' interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation.

  1. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    Science.gov (United States)

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line

  2. Auditory-Perceptual Evaluation of Dysphonia: A Comparison Between Narrow and Broad Terminology Systems

    DEFF Research Database (Denmark)

    Iwarsson, Jenny

    2017-01-01

    of the terminology used in the multiparameter Danish Dysphonia Assessment (DDA) approach into the five-parameter GRBAS system. Methods. Voice samples illustrating type and grade of the voice qualities included in DDA were rated by five speech language pathologists using the GRBAS system with the aim of estimating...... terms and antagonists, reflecting muscular hypo- and hyperfunction. Key Words: Auditory-perceptual voice analysis–Dysphonia–GRBAS–Listening test–Voice ratings....

  3. Hearing an Illusory Vowel in Noise : Suppression of Auditory Cortical Activity

    NARCIS (Netherlands)

    Riecke, Lars; Vanbussel, Mieke; Hausfeld, Lars; Baskent, Deniz; Formisano, Elia; Esposito, Fabrizio

    2012-01-01

    Human hearing is constructive. For example, when a voice is partially replaced by an extraneous sound (e.g., on the telephone due to a transmission problem), the auditory system may restore the missing portion so that the voice can be perceived as continuous (Miller and Licklider, 1950; for review,

  4. The Speaker Behind The Voice: Therapeutic Practice from the Perspective of Pragmatic Theory

    Directory of Open Access Journals (Sweden)

    Felicity Deamer

    2015-06-01

    Full Text Available Many attempts at understanding auditory verbal hallucinations (AVHs) have tried to explain why there is an auditory experience in the absence of an appropriate stimulus. We suggest that many instances of voice-hearing should be approached differently. More specifically, they could be viewed primarily as hallucinated acts of communication, rather than hallucinated sounds. We suggest that this change of perspective is reflected in, and helps to explain, the successes of two recent therapeutic techniques: Relating Therapy for Voices and Avatar Therapy.

  5. Finding your mate at a cocktail party: frequency separation promotes auditory stream segregation of concurrent voices in multi-species frog choruses.

    Directory of Open Access Journals (Sweden)

    Vivek Nityananda

    Full Text Available Vocal communication in crowded social environments is a difficult problem for both humans and nonhuman animals. Yet many important social behaviors require listeners to detect, recognize, and discriminate among signals in a complex acoustic milieu comprising the overlapping signals of multiple individuals, often of multiple species. Humans exploit a relatively small number of acoustic cues to segregate overlapping voices (as well as other mixtures of concurrent sounds, like polyphonic music). By comparison, we know little about how nonhuman animals are adapted to solve similar communication problems. One important cue enabling source segregation in human speech communication is that of frequency separation between concurrent voices: differences in frequency promote perceptual segregation of overlapping voices into separate "auditory streams" that can be followed through time. In this study, we show that frequency separation (ΔF) also enables frogs to segregate concurrent vocalizations, such as those routinely encountered in mixed-species breeding choruses. We presented female gray treefrogs (Hyla chrysoscelis) with a pulsed target signal (simulating an attractive conspecific call) in the presence of a continuous stream of distractor pulses (simulating an overlapping, unattractive heterospecific call). When the ΔF between target and distractor was small (e.g., ≤3 semitones), females exhibited low levels of responsiveness, indicating a failure to recognize the target as an attractive signal when the distractor had a similar frequency. Subjects became increasingly more responsive to the target, as indicated by shorter latencies for phonotaxis, as the ΔF between target and distractor increased (e.g., ΔF = 6-12 semitones). These results support the conclusion that gray treefrogs, like humans, can exploit frequency separation as a perceptual cue to segregate concurrent voices in noisy social environments. The ability of these frogs to segregate
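
    For readers who want the arithmetic behind the ΔF manipulation: a separation of n semitones corresponds to a frequency ratio of 2^(n/12). The short sketch below computes distractor frequencies at the separations tested; the 2 kHz target frequency is a made-up illustration, not a value taken from the study.

    ```python
    import math

    def semitone_separation(f1_hz: float, f2_hz: float) -> float:
        """Frequency separation in semitones: |12 * log2(f2 / f1)|."""
        return abs(12.0 * math.log2(f2_hz / f1_hz))

    target_hz = 2000.0  # hypothetical target pulse frequency
    for delta_f in (3, 6, 12):
        distractor_hz = target_hz * 2 ** (delta_f / 12)
        print(f"dF = {delta_f:>2} st -> distractor at {distractor_hz:.0f} Hz "
              f"(check: {semitone_separation(target_hz, distractor_hz):.1f} st)")
    ```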

  6. The Effect of Anchors and Training on the Reliability of Voice Quality Ratings for Different Types of Speech Stimuli.

    Science.gov (United States)

    Brinca, Lilia; Batista, Ana Paula; Tavares, Ana Inês; Pinto, Patrícia N; Araújo, Lara

    2015-11-01

    The main objective of the present study was to investigate whether the type of voice stimulus (sustained vowel, oral reading, or connected speech) results in good intrarater and interrater agreement/reliability. A short-term panel study was performed. Voice samples from 30 native European Portuguese speakers were used in the present study. The speech materials used were (1) the sustained vowel /a/, (2) oral reading of the European Portuguese version of "The Story of Arthur the Rat," and (3) connected speech. After extensive training with textual and auditory anchors, the judges were asked to rate the severity of dysphonic voice stimuli using the phonation dimensions G, R, and B from the GRBAS scale. The voice samples were judged 6 months and 1 year after the training. Intrarater agreement and reliability were generally very good for all the phonation dimensions and voice stimuli. The highest interrater reliability was obtained using the oral reading stimulus, particularly for the phonation dimensions grade (G) and breathiness (B). Roughness (R) was the voice quality that was the most difficult to evaluate, leading to interrater unreliability in all voice quality ratings. Extensive training using textual and auditory anchors, and the use of anchors during the voice evaluations, appear to be good methods for auditory-perceptual evaluation of dysphonic voices. The best results for interrater reliability were obtained when the oral reading stimulus was used. Breathiness appears to be a voice quality that is easier to evaluate than roughness. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
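
    The abstract does not name the agreement statistic used; for categorical or ordinal ratings such as the 0-3 GRBAS scores, Cohen's kappa is one common choice for a pair of raters. The sketch below implements unweighted kappa from scratch; the two judges' scores are invented for illustration only.

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b, categories=(0, 1, 2, 3)):
        """Unweighted Cohen's kappa: chance-corrected agreement for two raters."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
        return (observed - expected) / (1 - expected)  # undefined if expected == 1

    # Hypothetical grade (G) ratings of ten voice samples by two judges.
    judge_1 = [0, 1, 1, 2, 3, 2, 0, 1, 2, 3]
    judge_2 = [0, 1, 2, 2, 3, 2, 0, 0, 2, 3]
    print(f"kappa = {cohens_kappa(judge_1, judge_2):.2f}")  # ~0.73 here
    ```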

  7. Multidimensional assessment of strongly irregular voices such as in substitution voicing and spasmodic dysphonia: a compilation of own research.

    Science.gov (United States)

    Moerman, Mieke; Martens, Jean-Pierre; Dejonckere, Philippe

    2015-04-01

    This article is a compilation of our own research performed during the European COoperation in Science and Technology (COST) Action 2103, 'Advanced Voice Function Assessment', an initiative of voice and speech processing teams consisting of physicists, engineers, and clinicians. This manuscript concerns the analysis of strongly irregular voicing types, namely substitution voicing (SV) and adductor spasmodic dysphonia (AdSD). A specific perceptual rating scale (IINFVo) was developed, and the Auditory Model Based Pitch Extractor (AMPEX), a piece of software that automatically analyses running speech and generates pitch values in background noise, was applied. The IINFVo perceptual rating scale has been shown to be useful in evaluating SV. The analysis of strongly irregular voices stimulated a modification of the European Laryngological Society's assessment protocol, which was originally designed for the common types of (less severe) dysphonia. Acoustic analysis with AMPEX demonstrates that the most informative features are, for SV, the voicing-related acoustic features and, for AdSD, the perturbation measures. Poor correlations between self-assessment and the acoustic and perceptual dimensions in the assessment of highly irregular voices argue for a multidimensional approach.

  8. The role of emotions in the development of voice

    Directory of Open Access Journals (Sweden)

    Anna Maria Disanto

    2014-06-01

    Full Text Available In this paper, the authors consider the voice as an expressive sphere of communication between two people. The voice expresses a symbolic meaning whose function is to represent our feelings, and thus our emotional life. The emission of sounds weaves an unconscious communication of affect, expressing the archaic nature of the links between body and language and the presence of strong auditory, olfactory, tactile and visual sensory components.

  9. Neural basis of the time window for subjective motor-auditory integration

    Directory of Open Access Journals (Sweden)

    Koichi Toida

    2016-01-01

    Full Text Available Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant's action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and an N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence with the reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components.
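
    The "50% detection point" reported above is the midpoint of a psychometric function relating delay length to detection probability. One standard way to estimate such a midpoint is a logistic fit, sketched below with scipy; the detection proportions are fabricated for illustration and are not the study's data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(delay_ms, midpoint, slope):
        """Probability of consciously detecting a feedback delay of delay_ms."""
        return 1.0 / (1.0 + np.exp(-(delay_ms - midpoint) / slope))

    delays_ms = np.array([50, 100, 150, 200, 250, 300, 400], dtype=float)
    p_detect = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.98])  # invented

    (midpoint, slope), _ = curve_fit(logistic, delays_ms, p_detect, p0=[200.0, 50.0])
    print(f"50% detection point = {midpoint:.0f} ms")
    ```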

  10. Emotional feedback for mobile devices

    CERN Document Server

    Seebode, Julia

    2015-01-01

    This book investigates the functional adequacy as well as the affective impression made by feedback messages on mobile devices. It presents an easily adoptable experimental setup to examine context effects on various feedback messages, and applies it to auditory, tactile and auditory-tactile feedback messages. This approach provides insights into the relationship between the affective impression and functional applicability of these messages as well as an understanding of the influence of unimodal components on the perception of multimodal feedback messages. The developed paradigm can also be extended to investigate other aspects of context and used to investigate feedback messages in modalities other than those presented. The book uses questionnaires implemented on a Smartphone, which can easily be adopted for field studies to broaden the scope even wider. Finally, the book offers guidelines for the design of system feedback.

  11. Performance of Phonatory Deviation Diagrams in Synthesized Voice Analysis.

    Science.gov (United States)

    Lopes, Leonardo Wanderley; da Silva, Karoline Evangelista; da Silva Evangelista, Deyverson; Almeida, Anna Alice; Silva, Priscila Oliveira Costa; Lucero, Jorge; Behlau, Mara

    2018-05-02

    To analyze the performance of a phonatory deviation diagram (PDD) in discriminating the presence and severity of voice deviation and the predominant voice quality of synthesized voices. A speech-language pathologist performed the auditory-perceptual analysis of the synthesized voice (n = 871). The PDD distribution of voice signals was analyzed according to area, quadrant, shape, and density. Differences in signal distribution regarding the PDD area and quadrant were detected when differentiating the signals with and without voice deviation and with different predominant voice quality. Differences in signal distribution were found in all PDD parameters as a function of the severity of voice disorder. The PDD area and quadrant can differentiate normal voices from deviant synthesized voices. There are differences in signal distribution in PDD area and quadrant as a function of the severity of voice disorder and the predominant voice quality. However, the PDD area and quadrant do not differentiate the signals as a function of severity of voice disorder and differentiated only the breathy and rough voices from the normal and strained voices. PDD density is able to differentiate only signals with moderate and severe deviation. PDD shape shows differences between signals with different severities of voice deviation. © 2018 S. Karger AG, Basel.

  12. Syllogisms delivered in an angry voice lead to improved performance and engagement of a different neural system compared to neutral voice

    OpenAIRE

    Kathleen Walton Smith; Laura-Lee Balkwill; Oshin Vartanian; Vinod Goel

    2015-01-01

    Despite the fact that most real-world reasoning occurs in some emotional context, very little is known about the underlying behavioral and neural implications of such context. To further understand the role of emotional context in logical reasoning, we scanned 15 participants with fMRI while they engaged in logical reasoning about neutral syllogisms presented through the auditory channel in a sad, angry, or neutral tone of voice. Exposure to an angry voice led to improved reasoning performance co...

  13. Developmental programming of auditory learning

    Directory of Open Access Journals (Sweden)

    Melania Puddu

    2012-10-01

    Full Text Available The basic structures involved in the development of auditory function, and consequently in language acquisition, are directed by the genetic code, but the expression of individual genes may be altered by exposure to environmental factors. If favorable, these factors orient development in the proper direction, leading it towards normality; if unfavorable, they deviate it from its physiological course. Early sensory experience during the foetal period (i.e. the intrauterine noise floor, sounds coming from the outside and attenuated by the uterine filter, particularly the mother's voice, and the modifications induced by it at the cochlear level) represents the first example of programming in one of the earliest critical periods in the development of the auditory system. This review will examine the factors that influence the developmental programming of auditory learning from the womb to infancy. In particular, it focuses on the following points: the prenatal auditory experience and the plastic phenomena presumably induced by it in the auditory system, from the basilar membrane to the cortex; the involvement of these phenomena in language acquisition and in the perception of the communicative intention of language after birth; and the consequences of auditory deprivation in critical periods of auditory development (i.e. premature interruption of foetal life).

  14. Auditory short-term memory activation during score reading.

    Science.gov (United States)

    Simoens, Veerle L; Tervaniemi, Mari

    2013-01-01

    Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as it is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion according to which during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.

  15. Auditory Verbal Experience and Agency in Waking, Sleep Onset, REM, and Non-REM Sleep.

    Science.gov (United States)

    Speth, Jana; Harley, Trevor A; Speth, Clemens

    2017-04-01

    We present one of the first quantitative studies on auditory verbal experiences ("hearing voices") and auditory verbal agency (inner speech, and specifically "talking to (imaginary) voices or characters") in healthy participants across states of consciousness. Tools of quantitative linguistic analysis were used to measure participants' implicit knowledge of auditory verbal experiences (VE) and auditory verbal agencies (VA), displayed in mentation reports from four different states. Analysis was conducted on a total of 569 mentation reports from rapid eye movement (REM) sleep, non-REM sleep, sleep onset, and waking. Physiology was controlled with the Nightcap sleep-wake mentation monitoring system. Sleep-onset hallucinations, traditionally at the focus of scientific attention on auditory verbal hallucinations, showed the lowest degree of VE and VA, whereas REM sleep showed the highest degrees. Degrees of different linguistic-pragmatic aspects of VE and VA likewise depend on the physiological state. The quantity and pragmatics of VE and VA are a function of the physiologically distinct state of consciousness in which they are conceived. Copyright © 2016 Cognitive Science Society, Inc.

  16. Audiovisual speech facilitates voice learning.

    Science.gov (United States)

    Sheffert, Sonya M; Olson, Elizabeth

    2004-02-01

    In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

  17. The Effect of Learning Modality and Auditory Feedback on Word Memory: Cochlear-Implanted versus Normal-Hearing Adults.

    Science.gov (United States)

    Taitelbaum-Swead, Riki; Icht, Michal; Mama, Yaniv

    2017-03-01

    In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers in cognitive tasks are modality specific, and occur only in auditory tasks. The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 yr, and their NH peers. A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying aloud) or by no-production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice: once with the implant ON and once with it OFF. All conditions were followed by free recall tests. Twelve young adults, long-term CI users, implanted between ages 1.7 and 4.5 yr, and who scored ≥50% on a monosyllabic consonant-vowel-consonant open-set test with their implants, were enrolled. A group of 14 age-matched NH young adults served as the comparison group. For each condition, we calculated the proportion of study words recalled. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable and learning condition (aloud or silent reading) as a within-subject variable. Following this, paired-sample t tests were used to evaluate the PE size (differences between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each of the learning conditions. With visual word presentation, young adults with CIs (regardless of implant status, CI-ON or CI-OFF) showed comparable memory performance (and a similar PE) to NH peers. However, with auditory presentation, young adults with CIs showed poorer memory for nonproduced words (hence a larger PE) relative to their NH peers. The

  18. Effect of classic uvulopalatopharyngoplasty and laser-assisted uvulopalatopharyngoplasty on voice acoustics and speech nasalance

    International Nuclear Information System (INIS)

    Mahmoud Y Abu El-ella

    2010-01-01

    Uvulopalatopharyngoplasty (UPPP) is a commonly used surgical technique for oropharyngeal reconstruction in patients with obstructive sleep apnea (OSA). This procedure can be done either through the classic or the laser-assisted uvulopalatopharyngoplasty (LAUP) technique. The purpose of this study was to evaluate the effect of classic UPPP and LAUP on acoustics of voice and speech nasalance, and to compare the effect of each operation on these two domains. Patients and Methods: The study included 27 patients with a mean age of 46 years. All patients were diagnosed with OSA based on polysomnographic examination. Patients were divided into two groups according to the type of surgical procedure. Fifteen patients underwent classic UPPP, whereas 12 patients were subjected to LAUP. A full assessment was done for all patients preoperatively and postoperatively, including auditory perceptual assessment (APA) of voice and speech, and objective assessment using acoustic voice analysis and nasometry. Auditory perceptual assessment of speech and voice, acoustic analysis of voice and nasometric analysis of speech did not show statistically significant differences between the preoperative and postoperative evaluations in either group (P>.05). The results of this study demonstrated that in patients with OSA, the surgical technique, whether classic UPPP or LAUP, does not have significant effects on the patients' voice quality or their speech outcomes (Author).

  19. Glottal inverse filtering analysis of human voice production — A ...

    Indian Academy of Sciences (India)

    A (grossly) simplified manner to study the functioning of the human speech production ...

  20. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 'training' steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration, corresponding to training) with all feedback removed. Visual cues produced a mean percentage error of 11.5% (SD ±7.0%); auditory cues, 12.9% (SD ±11.8%). Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, their mean accuracy approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.

  1. Movement goals and feedback and feedforward control mechanisms in speech production.

    Science.gov (United States)

    Perkell, Joseph S

    2012-09-01

    Studies of speech motor control are described that support a theoretical framework in which fundamental control variables for phonemic movements are multi-dimensional regions in auditory and somatosensory spaces. Auditory feedback is used to acquire and maintain auditory goals and in the development and function of feedback and feedforward control mechanisms. Several lines of evidence support the idea that speakers with more acute sensory discrimination acquire more distinct goal regions and therefore produce speech sounds with greater contrast. Feedback modification findings indicate that fluently produced sound sequences are encoded as feedforward commands, and feedback control serves to correct mismatches between expected and produced sensory consequences.
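
    The feedback half of the control scheme described above can be caricatured as a negative feedback loop acting on a delayed copy of the produced signal. The toy simulation below is a sketch of that idea only; the target, gain, delay, and starting value are arbitrary assumptions, not parameters of the author's model.

    ```python
    import numpy as np

    TARGET_F0 = 200.0   # target fundamental frequency (Hz); arbitrary
    GAIN = 0.3          # fraction of the heard error corrected per step
    DELAY = 3           # auditory feedback loop delay, in simulation steps
    STEPS = 50

    f0 = np.full(STEPS, 190.0)  # feedforward command starts 10 Hz low
    for t in range(1, STEPS):
        heard = f0[max(0, t - DELAY)]          # delayed auditory feedback
        error = TARGET_F0 - heard              # sensed production error
        f0[t] = f0[t - 1] + GAIN * error       # corrective update

    print(f"f0 after {STEPS} steps: {f0[-1]:.1f} Hz (target {TARGET_F0:.0f} Hz)")
    ```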

  2. Is the auditory evoked P2 response a biomarker of learning?

    Directory of Open Access Journals (Sweden)

    Kelly eTremblay

    2014-02-01

    Full Text Available Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography and magnetoencephalography have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEPs), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences, including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared them to participants, matched in time and stimulus exposure, who did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What's more, these effects were retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN) wave 600-900 ms post-stimulus onset, post-training, exclusively for the group that learned to identify the pre-voiced

  3. Duration reproduction with sensory feedback delay: Differential involvement of perception and action time

    Directory of Open Access Journals (Sweden)

    Stephanie Ganzenmüller

    2012-10-01

    Full Text Available Previous research has shown that voluntary action can attract subsequent, delayed feedback events towards the action, and adaptation to the sensorimotor delay can even reverse motor-sensory temporal-order judgments. However, whether and how sensorimotor delay affects duration reproduction is still unclear. To investigate this, we injected an onset- or offset-delay into the sensory feedback signal from a duration reproduction task. We compared duration reproductions within modality (visual, auditory) and across modalities (audiovisual) under feedback signal onset- and offset-delay manipulations. We found that the reproduced duration was lengthened in both visual and auditory feedback signal onset-delay conditions. The lengthening effect was evident immediately, on the first trial with the onset delay. However, when the onset of the feedback signal was prior to the action, the lengthening effect was diminished. In contrast, a shortening effect was found with feedback signal offset-delay, though the effect was weaker and manifested only in the auditory offset-delay condition. These findings indicate that participants tend to mix the onset of action and the feedback signal more when the feedback is delayed, and that they rely heavily on motor-stop signals for the duration reproduction. Furthermore, auditory duration was overestimated compared to visual duration in crossmodal feedback conditions, and the overestimation of auditory duration (or the underestimation of visual duration) was independent of the delay manipulation.

  4. Longitudinal variations of laryngeal overpressure and voice-related quality of life in spasmodic dysphonia.

    Science.gov (United States)

    Yeung, Jeffrey C; Fung, Kevin; Davis, Eric; Rai, Sunita K; Day, Adam M B; Dzioba, Agnieszka; Bornbaum, Catherine; Doyle, Philip C

    2015-03-01

    Adductor spasmodic dysphonia (AdSD) is a voice disorder characterized by variable symptom severity and voice disability. Those with the disorder experience a wide spectrum of symptom severity over time, resulting in varied degrees of perceived voice disability. This study investigated the longitudinal variability of AdSD, with a focus on auditory-perceptual judgments of a dimension termed laryngeal overpressure (LO) and patient self-assessments of voice-related quality of life (V-RQOL). Longitudinal, correlational study. Ten adults with AdSD were followed over three time periods. At each time point, both voice samples and self-ratings of V-RQOL were gathered prior to the scheduled Botox injection. Voice recordings were subsequently evaluated perceptually by eight listeners for LO using a visual analog scale. LO ratings for all-voiced (AV) and Rainbow Passage sentence stimuli were found to be highly correlated. However, only the LO ratings obtained from judgments of AV stimuli were found to correlate moderately with self-ratings of voice disability for both the physical functioning and social-emotional subscores, as well as the total V-RQOL score. Based on perceptual judgments, LO appears to provide a reliable means of quantifying the severity of voice abnormalities in AdSD. Variability in self-ratings of the V-RQOL suggests that perceived disability related to AdSD should be actively monitored. Further, auditory-perceptual judgments may provide an accurate index of the potential impact of the disorder on the speaker. Similarly, LO was supported as a simple clinical measure that serves as a reliable index of voice change over time. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  5. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies) and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  6. Speaker-Sex Discrimination for Voiced and Whispered Vowels at Short Durations.

    Science.gov (United States)

    Smith, David R R

    2016-01-01

    Whispered vowels, produced with no vocal fold vibration, lack the periodic temporal fine structure which in voiced vowels underlies the perceptual attribute of pitch (a salient auditory cue to speaker sex). Voiced vowels possess no temporal fine structure at very short durations (below two glottal cycles). The prediction was that speaker-sex discrimination performance for whispered and voiced vowels would be similar for very short durations but, as stimulus duration increases, voiced vowel performance would improve relative to whispered vowel performance as pitch information becomes available. This pattern of results was shown for women's but not for men's voices. A whispered vowel needs to have a duration three times longer than a voiced vowel before listeners can reliably tell whether it's spoken by a man or woman (∼30 ms vs. ∼10 ms). Listeners were half as sensitive to information about speaker-sex when it is carried by whispered compared with voiced vowels.
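
    The "two glottal cycles" criterion is easy to put in numbers: one glottal cycle lasts 1/f0 seconds, so its duration depends on the speaker's fundamental frequency. The snippet below uses textbook-typical f0 values (illustrative assumptions, not measurements from this study) to show why roughly 10 ms of a voiced vowel can already contain the periodicity cue.

    ```python
    # Typical speaking fundamental frequencies; illustrative values only.
    for label, f0_hz in [("adult male, ~120 Hz", 120.0),
                         ("adult female, ~220 Hz", 220.0)]:
        cycle_ms = 1000.0 / f0_hz
        print(f"{label}: one glottal cycle = {cycle_ms:.1f} ms, "
              f"two cycles = {2 * cycle_ms:.1f} ms")
    ```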

  7. Auditory interfaces in automated driving: an international survey

    NARCIS (Netherlands)

    Bazilinskyy, P.; de Winter, J.C.F.

    2015-01-01

    This study investigated people's opinions on auditory interfaces in contemporary cars and their willingness to be exposed to auditory feedback in automated driving. We used an Internet-based survey to collect 1,205 responses from 91 countries. The respondents stated their attitudes towards two

  8. Auditory and motor imagery modulate learning in music performance.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  10. The addition of voice prompts to audiovisual feedback and debriefing does not modify CPR quality or outcomes in out of hospital cardiac arrest--a prospective, randomized trial.

    Science.gov (United States)

    Bohn, Andreas; Weber, Thomas P; Wecker, Sascha; Harding, Ulf; Osada, Nani; Van Aken, Hugo; Lukas, Roman P

    2011-03-01

    Chest compression quality is a determinant of survival from out-of-hospital cardiac arrest (OHCA). ERC 2005 guidelines recommend the use of technical devices to support rescuers giving compressions. This prospective randomized study reviewed the influence of different feedback configurations on survival and compression quality. 312 patients suffering an OHCA were randomly allocated to two different feedback configurations. In the limited feedback group, a metronome and visual feedback were used. In the extended feedback group, voice prompts were added. A training program was completed prior to implementation, and performance debriefing was conducted throughout the study. Survival did not differ between the extended and limited feedback groups (47.8% vs 43.9%, p = 0.49). Average compression depth (mean ± SD: 4.74 ± 0.86 cm vs 4.84 ± 0.93 cm, p = 0.31) was similar in both groups. There were no differences in compression rate (103 ± 7 vs 102 ± 5 min⁻¹, p = 0.74) or hands-off fraction (16.16% ± 0.07 to 17.04% ± 0.07, p = 0.38). Bystander CPR, public arrest location, presenting rhythm and chest compression depth were predictors of short-term survival (ROSC to ED). Even limited CPR feedback, combined with training and ongoing debriefing, leads to high chest compression quality. Bystander CPR, location, rhythm and chest compression depth are determinants of survival from out-of-hospital cardiac arrest. The addition of voice prompts modifies neither CPR quality nor outcome in OHCA. Chest compression depth significantly influences survival, and therefore more focus should be put on correct delivery. Further studies are needed to examine the best configuration of feedback to improve CPR quality and survival. ClinicalTrials.gov (NCT00449969), http://www.clinicalTrials.gov. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
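
    To make concrete what a "voice prompt" configuration adds on top of metronome and visual feedback, here is a sketch of the decision logic such a device might run per measurement window. The thresholds are illustrative guideline-style values, not the ones used in the study, and the function is hypothetical rather than any vendor's API.

    ```python
    def compression_prompts(depth_cm: float, rate_per_min: float) -> list[str]:
        """Return voice prompts when compressions drift out of target ranges."""
        prompts = []
        if depth_cm < 4.0:          # illustrative depth targets (cm)
            prompts.append("Press deeper")
        elif depth_cm > 6.0:
            prompts.append("Press softer")
        if rate_per_min < 100:      # illustrative rate targets (min^-1)
            prompts.append("Push faster")
        elif rate_per_min > 120:
            prompts.append("Push slower")
        return prompts or ["Good compressions"]

    # Mean values reported in the study fall inside these ranges:
    print(compression_prompts(4.74, 103))  # -> ['Good compressions']
    ```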

  11. Voice loops as coordination aids in space shuttle mission control.

    Science.gov (United States)

    Patterson, E S; Watts-Perotti, J; Woods, D D

    1999-01-01

    Voice loops, an auditory groupware technology, are essential coordination support tools for experienced practitioners in domains such as air traffic management, aircraft carrier operations and space shuttle mission control. They support synchronous communication on multiple channels among groups of people who are spatially distributed. In this paper, we suggest reasons for why the voice loop system is a successful medium for supporting coordination in space shuttle mission control based on over 130 hours of direct observation. Voice loops allow practitioners to listen in on relevant communications without disrupting their own activities or the activities of others. In addition, the voice loop system is structured around the mission control organization, and therefore directly supports the demands of the domain. By understanding how voice loops meet the particular demands of the mission control environment, insight can be gained for the design of groupware tools to support cooperative activity in other event-driven domains.

  12. Show and Tell: Video Modeling and Instruction Without Feedback Improves Performance but Is Not Sufficient for Retention of a Complex Voice Motor Skill.

    Science.gov (United States)

    Look, Clarisse; McCabe, Patricia; Heard, Robert; Madill, Catherine J

    2018-02-02

    Modeling and instruction are frequent components of both traditional and technology-assisted voice therapy. This study investigated the value of video modeling and instruction in the early acquisition and short-term retention of a complex voice task learned without external feedback. Thirty participants were randomized to two conditions and trained to produce a vocal siren over 40 trials. One group received a model and verbal instructions; the other group received a model only. Sirens were analyzed for phonation time, vocal intensity, cepstral peak prominence, peak-to-peak time, and root-mean-square error at five time points. The model-and-instruction group showed significant improvement on more outcome measures than the model-only group. There was an interaction effect for vocal intensity, which showed that instructions facilitated greater improvement when they were first introduced. However, neither group reproduced the model's siren performance across all parameters or retained the skill 1 day later. Providing verbal instruction with a model appears more beneficial than providing a model only in the prepractice phase of acquiring a complex voice skill. Improved performance was observed; however, the higher level of performance was not retained after 40 trials in either condition. Other prepractice variables may need to be considered. Findings have implications for traditional and technology-assisted voice therapy. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  13. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch

    Science.gov (United States)

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.

    2014-01-01

    The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis whether the neural mechanisms of the left-hemisphere enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right-hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs and also for the AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left-hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in the AP compared with RP musicians. These findings indicate that the left hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545

  14. Embodiment in a Child-Like Talking Virtual Body Influences Object Size Perception, Self-Identification, and Subsequent Real Speaking.

    Science.gov (United States)

    Tajadura-Jiménez, Ana; Banakou, Domna; Bianchi-Berthouze, Nadia; Slater, Mel

    2017-08-29

    People's mental representations of their own body are malleable and continuously updated through sensory cues. Altering one's body-representation can lead to changes in object perception and implicit attitudes. Virtual reality has been used to embody adults in the body of a 4-year-old child or a scaled-down adult body. Child embodiment was found to cause an overestimation of object sizes, approximately double that during adult embodiment, and identification of the self with child-like attributes. Here we tested the contribution of auditory cues related to one's own voice to these visually-driven effects. In a 2 × 2 factorial design, visual and auditory feedback on one's own body were varied across conditions, which included embodiment in a child or scaled-down adult body, and real (undistorted) or child-like voice feedback. The results replicated, in an older population, previous findings regarding size estimations and implicit attitudes. Further, although auditory cues were not found to enhance these effects, we show that the strength of the embodiment illusion depends on whether the voice feedback is congruent with the age of the virtual body. Results also showed the positive emotional impact of the illusion of owning a child's body, opening up possibilities for health applications.

  15. A Comprehensive Review of Auditory Verbal Hallucinations: Lifetime Prevalence, Correlates and Mechanisms in Healthy and Clinical Individuals

    Directory of Open Access Journals (Sweden)

    Saskia de Leede-Smith

    2013-07-01

    Over the years, the prevalence of auditory verbal hallucinations (AVH) has been documented across the lifespan in varied contexts, and with a range of potential long-term outcomes. Initially the emphasis focused on whether AVHs conferred risk for psychosis. However, recent research has identified significant differences in the presentation and outcomes of AVH in patients compared to those in non-clinical populations. For this reason, it has been suggested that auditory hallucinations are an entity by themselves and not necessarily indicative of transition along the psychosis continuum. This review will examine the presentation of auditory hallucinations across the life span. The stages described include childhood, adolescence, adult non-clinical populations, hypnagogic/hypnopompic experiences, high schizotypal traits, schizophrenia, substance-induced AVH, AVH in epilepsy and AVH in the elderly. In children, need for care depends upon whether the child associates the voice with negative beliefs, appraisals and other symptoms of psychosis. This theme appears to carry right through to healthy voice hearers in adulthood, in which a negative impact of the voice usually only exists if the individual has negative experiences as a result of their voice(s). This includes features of the voices such as the negative content, frequency and emotional valence, as well as anxiety and depression, independently of or caused by the voices' presence. It seems possible that the mechanisms which maintain AVH in non-clinical populations are different from those which are behind AVH presentations in psychotic illness. For example, the existence of maladaptive coping strategies in patient populations is one significant difference between clinical and non-clinical groups which is associated with a need for care. Whether or not these mechanisms start out the same and have differential trajectories is not yet evidenced. Future research needs to focus on the comparison of underlying

  16. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  17. Rehabilitation of the Upper Extremity after Stroke: A Case Series Evaluating REO Therapy and an Auditory Sensor Feedback for Trunk Control

    Directory of Open Access Journals (Sweden)

    G. Thielman

    2012-01-01

    Background and Purpose. Training in the virtual environment in post-stroke rehabilitation is being established as a new approach for neurorehabilitation, specifically ReoTherapy (REO), a robot-assisted virtual training device. Trunk stabilization strapping has been part of the concept with this device, and literature is lacking to support this for long-term functional changes in individuals after stroke. The purpose of this case series was to measure the feasibility of auditory trunk sensor feedback during REO therapy in moderately to severely impaired individuals after stroke. Case Description. Using an open-label crossover comparison design, 3 chronic stroke subjects were trained for 12 sessions over six weeks on either the REO or the control condition of task-related training (TRT); after a washout period of 4 weeks, the alternative therapy was given. Outcomes. With both interventions, clinically relevant improvements were found for measures of body function and structure, as well as for activity, for two participants. Providing auditory feedback during REO training for trunk control was found to be feasible. Discussion. The degree of change varied by protocol and may be due to the appropriateness of the technique chosen, as well as to each patient's impaired arm motor control.

  18. Auditory Peripheral Processing of Degraded Speech

    National Research Council Canada - National Science Library

    Ghitza, Oded

    2003-01-01

    ...". The underlying thesis is that the auditory periphery contributes to the robust performance of humans in speech reception in noise through a concerted contribution of the efferent feedback system...

  19. Implicit multisensory associations influence voice recognition.

    Directory of Open Access Journals (Sweden)

    Katharina von Kriegstein

    2006-10-01

    Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, during, and after participants learned to associate either sensory redundant stimuli, i.e. voices and faces, or arbitrary multimodal combinations, i.e. voices and written names, ring tones, and cell phones or brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in an increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations become thereafter available for unimodal perception and facilitate object recognition. These data suggest that for natural objects effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules.

  20. Auditory midbrain processing is differentially modulated by auditory and visual cortices: An auditory fMRI study.

    Science.gov (United States)

    Gao, Patrick P; Zhang, Jevin W; Fan, Shu-Juan; Sanes, Dan H; Wu, Ed X

    2015-12-01

    The cortex contains extensive descending projections, yet the impact of cortical input on brainstem processing remains poorly understood. In the central auditory system, the auditory cortex contains direct and indirect pathways (via brainstem cholinergic cells) to nuclei of the auditory midbrain, called the inferior colliculus (IC). While these projections modulate auditory processing throughout the IC, single neuron recordings have sampled only a small fraction of cells during stimulation of the corticofugal pathway. Furthermore, assessments of cortical feedback have not been extended to sensory modalities other than audition. To address these issues, we devised blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) paradigms to measure the sound-evoked responses throughout the rat IC and investigated the effects of bilateral ablation of either auditory or visual cortices. Auditory cortex ablation increased the gain of IC responses to noise stimuli (primarily in the central nucleus of the IC) and decreased response selectivity to forward species-specific vocalizations (versus temporally reversed ones, most prominently in the external cortex of the IC). In contrast, visual cortex ablation decreased the gain and induced a much smaller effect on response selectivity. The results suggest that auditory cortical projections normally exert a large-scale and net suppressive influence on specific IC subnuclei, while visual cortical projections provide a facilitatory influence. Meanwhile, auditory cortical projections enhance the midbrain response selectivity to species-specific vocalizations. We also probed the role of the indirect cholinergic projections in the auditory system in the descending modulation process by pharmacologically blocking muscarinic cholinergic receptors. This manipulation did not affect the gain of IC responses but significantly reduced the response selectivity to vocalizations. The results imply that auditory cortical

  1. Acoustic cues for the recognition of self-voice and other-voice

    Directory of Open Access Journals (Sweden)

    Mingdi Xu

    2013-10-01

    Self-recognition, being indispensable for successful social communication, has become a major focus in current social neuroscience. The physical aspects of the self are most typically manifested in the face and voice. Compared with the wealth of studies on self-face recognition, self-voice recognition (SVR) has not gained much attention. Converging evidence has suggested that the fundamental frequency (F0) and formant structures serve as the key acoustic cues for other-voice recognition (OVR). However, little is known about which, and how, acoustic cues are utilized for SVR as opposed to OVR. To address this question, we independently manipulated the F0 and formant information of recorded voices and investigated their contributions to SVR and OVR. Japanese participants were presented with recorded vocal stimuli and were asked to identify the speaker—either themselves or one of their peers. Six groups of 5 peers of the same sex participated in the study. Under conditions where the formant information was fully preserved and where only the frequencies lower than the third formant (F3) were retained, accuracies of SVR deteriorated significantly with the modulation of the F0, and the results were comparable for OVR. By contrast, under a condition where only the frequencies higher than F3 were retained, the accuracy of SVR was significantly higher than that of OVR throughout the range of F0 modulations, and the F0 scarcely affected the accuracies of SVR and OVR. Our results indicate that while both F0 and formant information are involved in SVR, as well as in OVR, the advantage of SVR is manifested only when major formant information for speech intelligibility is absent. These findings imply the robustness of self-voice representation, possibly by virtue of auditory familiarity and other factors such as its association with motor/articulatory representation.
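
    Independent manipulation of F0 and formant information, as described above, is typically done with an analysis-resynthesis vocoder that separates the voice source (F0) from the spectral envelope. A minimal sketch using the WORLD vocoder via the pyworld package; the file names and the +2 semitone shift are illustrative, and this is not the study's own processing pipeline:

      import numpy as np
      import soundfile as sf
      import pyworld as pw

      x, fs = sf.read("voice.wav")              # mono recording of the speaker
      x = np.ascontiguousarray(x, dtype=np.float64)

      # Decompose into F0, spectral envelope (formant structure), aperiodicity
      f0, sp, ap = pw.wav2world(x, fs)

      # Shift F0 up 2 semitones while leaving the formant structure untouched
      y = pw.synthesize(f0 * 2 ** (2 / 12), sp, ap, fs)
      sf.write("voice_f0_up.wav", y, fs)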

  2. The role of auditory temporal cues in the fluency of stuttering adults

    OpenAIRE

    Furini, Juliana; Picoloto, Luana Altran; Marconato, Eduarda; Bohnen, Anelise Junqueira; Cardoso, Ana Claudia Vieira; Oliveira, Cristiane Moço Canhetti de

    2017-01-01

    Purpose: to compare the frequency of disfluencies and speech rate in spontaneous speech and reading in adults with and without stuttering under non-altered and delayed auditory feedback (NAF, DAF). Methods: participants were 30 adults: 15 with stuttering (Research Group - RG), and 15 without stuttering (Control Group - CG). The procedures were: audiological assessment and speech fluency evaluation in two listening conditions, normal and delayed auditory feedback (100 milliseconds delay...
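
    Delayed auditory feedback of the kind used above plays the talker's own voice back with a fixed lag (here 100 milliseconds). A minimal real-time sketch with the sounddevice library, assuming headphone playback so the delayed signal does not re-enter the microphone; the per-sample loop favors clarity over efficiency:

      import numpy as np
      import sounddevice as sd

      FS = 44100
      DELAY_MS = 100                                # lag of the DAF condition
      delay = int(FS * DELAY_MS / 1000)

      ring = np.zeros((delay, 1), dtype="float32")  # last DELAY_MS of input
      idx = 0

      def callback(indata, outdata, frames, time, status):
          global idx
          for i in range(frames):
              outdata[i] = ring[idx]                # sample captured 100 ms ago
              ring[idx] = indata[i]                 # store the current sample
              idx = (idx + 1) % delay

      with sd.Stream(samplerate=FS, channels=1, callback=callback):
          input("DAF running; press Enter to stop.")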

  3. Temporal control and compensation for perturbed voicing feedback

    DEFF Research Database (Denmark)

    Mitsuya, Takashi; MacDonald, Ewen; Munhall, Kevin G.

    2014-01-01

    Previous research employing a real-time auditory perturbation paradigm has shown that talkers monitor their own speech attributes such as fundamental frequency, vowel intensity, vowel formants, and fricative noise as part of speech motor control. In the case of vowel formants or fricative noise...

  4. Collaboration and conquest: MTD as viewed by voice teacher (singing voice specialist) and speech-language pathologist.

    Science.gov (United States)

    Goffi-Fynn, Jeanne C; Carroll, Linda M

    2013-05-01

    This study was designed as a qualitative case study to demonstrate the process of diagnosis and treatment by a voice team managing a singer diagnosed with muscular tension dysphonia (MTD). Traditionally, literature suggests that MTD is challenging to treat, and little in the literature directly addresses singers with MTD. Data collected included initial medical screening with a laryngologist, referral to a speech-language pathologist (SLP) specializing in voice disorders among singers, and adjunctive voice training with a voice teacher trained in vocology (singing voice specialist or SVS). Initial target goals with the SLP included reducing extrinsic laryngeal tension, using a relaxed laryngeal posture, and effective abdominal-diaphragmatic support for all phonation events. Balance of respiratory forces, laryngeal coordination, and use of optimum filtering of the source signal through resonance and articulatory awareness was emphasized. Further work with the SVS addressed three main goals: a lowered breathing pattern to aid in decreasing subglottic air pressure, a lowered vertical laryngeal position to allow for a relaxed larynx, and a top-down singing approach to encourage an easier, more balanced registration and better resonance. Initial results also emphasize the retraining of the subject toward a sensory rather than auditory mode of monitoring. Other areas of consideration include singers' training and vocal use, the psychological effects of MTD, the personalities potentially associated with it, and its relationship with stress. Finally, the results emphasize that a positive rapport with the subject and collaboration between all professionals involved in a singer's care are essential for recovery. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  5. Precise auditory-vocal mirroring in neurons for learned vocal communication.

    Science.gov (United States)

    Prather, J F; Peters, S; Nowicki, S; Mooney, R

    2008-01-17

    Brain mechanisms for communication must establish a correspondence between sensory and motor codes used to represent the signal. One idea is that this correspondence is established at the level of single neurons that are active when the individual performs a particular gesture or observes a similar gesture performed by another individual. Although neurons that display a precise auditory-vocal correspondence could facilitate vocal communication, they have yet to be identified. Here we report that a certain class of neurons in the swamp sparrow forebrain displays a precise auditory-vocal correspondence. We show that these neurons respond in a temporally precise fashion to auditory presentation of certain note sequences in this songbird's repertoire and to similar note sequences in other birds' songs. These neurons display nearly identical patterns of activity when the bird sings the same sequence, and disrupting auditory feedback does not alter this singing-related activity, indicating it is motor in nature. Furthermore, these neurons innervate striatal structures important for song learning, raising the possibility that singing-related activity in these cells is compared to auditory feedback to guide vocal learning.

  6. Identification of neural structures involved in stuttering using vibrotactile feedback.

    Science.gov (United States)

    Cheadle, Oliver; Sorger, Clarissa; Howell, Peter

    Feedback delivered over auditory and vibratory afferent pathways has different effects on the fluency of people who stutter (PWS). These features were exploited to investigate the neural structures involved in stuttering. The speech signal was used to vibrate locations on the body (vibrotactile feedback, VTF). Eleven PWS read passages under VTF and control (no-VTF) conditions. All combinations of vibration amplitude, synchronous or delayed VTF, and vibrator position (hand, sternum or forehead) were presented. Control conditions were performed at the beginning, middle and end of test sessions. Stuttering rate, but not speaking rate, differed between the control and VTF conditions. Notably, speaking rate did not change when VTF was delayed versus synchronous, in contrast with what happens with auditory feedback. This showed that cerebellar mechanisms, which are affected when auditory feedback is delayed, were not implicated in the fluency-enhancing effects of VTF, suggesting that there is a second fluency-enhancing mechanism. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Cognitive and behavioural therapy of voices for patients with intellectual disability: Two case reports

    Directory of Open Access Journals (Sweden)

    Pernier Sophie

    2007-08-01

    Background Two case studies are presented to examine how cognitive behavioural therapy (CBT) of auditory hallucinations can be fitted to mild and moderate intellectual disability. Methods A 38-year-old female patient with mild intellectual disability and a 44-year-old male patient with moderate intellectual disability, both suffering from persistent auditory hallucinations, were treated with CBT. Patients were assessed on beliefs about their voices and their inappropriate coping behaviours in response to them. The traditional CBT techniques were modified to reduce the emphasis placed on cognitive abilities. Verbal strategies were replaced by more concrete tasks using role-playing, figurines and touch-and-feel experimentation. Results Both patients improved on selected variables. They both gradually managed to reduce the power they attributed to the voice after the introduction of the therapy, and maintained their progress at follow-up. Their inappropriate behaviour arising from their beliefs about the voices diminished in both cases. Conclusion These two case studies illustrate the feasibility of CBT for psychotic symptoms with intellectually disabled people, but need to be confirmed by more stringent studies.

  8. Syllogisms delivered in an angry voice lead to improved performance and engagement of a different neural system compared to neutral voice

    Directory of Open Access Journals (Sweden)

    Kathleen Walton Smith

    2015-05-01

    Despite the fact that most real-world reasoning occurs in some emotional context, very little is known about the underlying behavioral and neural implications of such context. To further understand the role of emotional context in logical reasoning, we scanned 15 participants with fMRI while they engaged in logical reasoning about neutral syllogisms presented through the auditory channel in a sad, angry, or neutral tone of voice. Exposure to angry voice led to improved reasoning performance compared to exposure to sad and neutral voice. A likely explanation for this effect is that exposure to expressions of anger increases selective attention toward the relevant features of target stimuli, in this case the reasoning task. Supporting this interpretation, reasoning in the context of angry voice was accompanied by activation in the superior frontal gyrus—a region known to be associated with selective attention. Our findings contribute to a greater understanding of the neural processes that underlie reasoning in an emotional context by demonstrating that two emotional contexts, despite being of the same (negative) valence, have different effects on reasoning.

  9. Effects of first formant onset frequency on [-voice] judgments result from auditory processes not specific to humans.

    Science.gov (United States)

    Kluender, K R; Lotto, A J

    1994-02-01

    When F1-onset frequency is lower, longer F1 cut-back (VOT) is required for human listeners to perceive synthesized stop consonants as voiceless. K. R. Kluender [J. Acoust. Soc. Am. 90, 83-96 (1991)] found comparable effects of F1-onset frequency on the "labeling" of stop consonants by Japanese quail (Coturnix coturnix japonica) trained to distinguish stop consonants varying in F1 cut-back. In that study, CVs were synthesized with natural-like rising F1 transitions, and endpoint training stimuli differed in the onset frequency of F1 because a longer cut-back resulted in a higher F1 onset. In order to assess whether the earlier results were due to auditory predispositions or to the animals having learned the natural covariance between F1 cut-back and F1-onset frequency, the present experiment was conducted with synthetic continua having either a relatively low (375 Hz) or high (750 Hz) constant-frequency F1. Six birds were trained to respond differentially to endpoint stimuli from three series of synthesized /CV/s varying in duration of F1 cut-back. Second and third formant transitions were appropriate for labial, alveolar, or velar stops. Despite the fact that there was no opportunity for the animal subjects to use experienced covariation of F1-onset frequency and F1 cut-back, quail typically exhibited shorter labeling boundaries (more voiceless stops) for intermediate stimuli of the continua when F1 frequency was higher. Responses by human subjects listening to the same stimuli were also collected. Results lend support to the earlier conclusion that part or all of the effect of F1 onset frequency on perception of voicing may be adequately explained by general auditory processes. (ABSTRACT TRUNCATED AT 250 WORDS)
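
    The labeling boundary referred to above is the F1 cut-back duration at which identification crosses 50% "voiceless" responses; it is conventionally estimated by fitting a logistic function to identification proportions. A hedged sketch with scipy; the identification data below are hypothetical, not values from the study:

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(x, x0, k):
          """Proportion of 'voiceless' responses vs. F1 cut-back (ms)."""
          return 1.0 / (1.0 + np.exp(-k * (x - x0)))

      # Hypothetical identification proportions along one continuum
      cutback_ms = np.array([5, 10, 15, 20, 25, 30, 35], dtype=float)
      p_voiceless = np.array([0.02, 0.08, 0.25, 0.55, 0.82, 0.95, 0.99])

      (x0, k), _ = curve_fit(logistic, cutback_ms, p_voiceless, p0=[20.0, 0.5])
      print(f"labeling boundary ~ {x0:.1f} ms of F1 cut-back")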

  10. Stigma and need for care in individuals who hear voices.

    Science.gov (United States)

    Vilhauer, Ruvanee P

    2017-02-01

    Voice hearing experiences, or auditory verbal hallucinations, occur in healthy individuals as well as in individuals who need clinical care, but news media depict voice hearing primarily as a symptom of mental illness, particularly schizophrenia. This article explores whether, and how, public perception of an exaggerated association between voice hearing and mental illness might influence individuals' need for clinical care. A narrative literature review was conducted, using relevant peer-reviewed research published in the English language. Stigma may prevent disclosure of voice hearing experiences. Non-disclosure can prevent access to sources of normalizing information and lead to isolation, loss of social support and distress. Internalization of stigma and concomitantly decreased self-esteem could potentially affect features of voices such as perceived voice power, controllability, negativity and frequency, as well as distress. Increased distress may result in a decrease in functioning and increased need for clinical care. The literature reviewed suggests that stigma has the potential to increase need for care through many interrelated pathways. However, the ability to draw definitive conclusions was constrained by the designs of the studies reviewed. Further research is needed to confirm the findings of this review.

  11. Contextual modulation of primary visual cortex by auditory signals.

    Science.gov (United States)

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  12. Gay- and Lesbian-Sounding Auditory Cues Elicit Stereotyping and Discrimination.

    Science.gov (United States)

    Fasoli, Fabio; Maass, Anne; Paladino, Maria Paola; Sulpizio, Simone

    2017-07-01

    The growing body of literature on the recognition of sexual orientation from voice ("auditory gaydar") is silent on the cognitive and social consequences of having a gay-/lesbian- versus heterosexual-sounding voice. We investigated this issue in four studies (overall N = 276), conducted in the Italian language, in which heterosexual listeners were exposed to single-sentence voice samples of gay/lesbian and heterosexual speakers. In all four studies, listeners were found to make gender-typical inferences about traits and preferences of heterosexual speakers, but gender-atypical inferences about those of gay or lesbian speakers. Behavioral intention measures showed that listeners considered lesbian and gay speakers less suitable for a leadership position, and male (but not female) listeners distanced themselves from gay speakers. Together, this research demonstrates that having a gay-/lesbian- rather than heterosexual-sounding voice has tangible consequences for stereotyping and discrimination.

  13. A comprehensive review of auditory verbal hallucinations: lifetime prevalence, correlates and mechanisms in healthy and clinical individuals.

    Science.gov (United States)

    de Leede-Smith, Saskia; Barkus, Emma

    2013-01-01

    Over the years, the prevalence of auditory verbal hallucinations (AVHs) has been documented across the lifespan in varied contexts, and with a range of potential long-term outcomes. Initially the emphasis focused on whether AVHs conferred risk for psychosis. However, recent research has identified significant differences in the presentation and outcomes of AVH in patients compared to those in non-clinical populations. For this reason, it has been suggested that auditory hallucinations are an entity by themselves and not necessarily indicative of transition along the psychosis continuum. This review will examine the presentation of auditory hallucinations across the life span, as well as in various clinical groups. The stages described include childhood, adolescence, adult non-clinical populations, hypnagogic/hypnopompic experiences, high schizotypal traits, schizophrenia, substance-induced AVH, AVH in epilepsy, and AVH in the elderly. In children, need for care depends upon whether the child associates the voice with negative beliefs, appraisals and other symptoms of psychosis. This theme appears to carry right through to healthy voice hearers in adulthood, in which a negative impact of the voice usually only exists if the individual has negative experiences as a result of their voice(s). This includes features of the voices such as the negative content, frequency, and emotional valence, as well as anxiety and depression, independently of or caused by the voices' presence. It seems possible that the mechanisms which maintain AVH in non-clinical populations are different from those which are behind AVH presentations in psychotic illness. For example, the existence of maladaptive coping strategies in patient populations is one significant difference between clinical and non-clinical groups which is associated with a need for care. Whether or not these mechanisms start out the same and have differential trajectories is not yet evidenced. Future research needs to focus on the

  15. Speech-Language Pathology production regarding voice in popular singing.

    Science.gov (United States)

    Drumond, Lorena Badaró; Vieira, Naymme Barbosa; Oliveira, Domingos Sávio Ferreira de

    2011-12-01

    To present a literature review of the Brazilian scientific production in Speech-Language Pathology and Audiology regarding voice in popular singing in the last decade, in terms of number of publications, musical styles studied, focus of the research, and instruments used for data collection. Cross-sectional descriptive study carried out in two stages: a search of databases and publications encompassing the last decade of research in this area in Brazil, and reading of the material obtained for subsequent categorization. The databases LILACS and SciELO, the database of dissertations and theses organized by CAPES, the online version of Acta ORL, and the online version of OPUS were searched, using the following keywords: voice, professional voice, singing voice, dysphonia, voice disorders, voice training, music, dysodia. Articles published between the years 2000 and 2010 were selected. The studies found were classified and categorized after reading their abstracts and, when necessary, the whole study. Twenty studies within the proposed theme were selected, all of which were descriptive, involving several musical styles. Twelve focused on the evaluation of the popular singer's voice, and the most frequently used data collection instrument was auditory-perceptual evaluation. The results of the publications were consistent with the objectives proposed by the authors and the different methodologies. The number of studies published is still limited when compared to the diversity of musical genres and the uniqueness of the popular singer.

  16. Auditory white noise reduces postural fluctuations even in the absence of vision.

    Science.gov (United States)

    Ross, Jessica Marie; Balasubramaniam, Ramesh

    2015-08-01

    The contributions of somatosensory, vestibular, and visual feedback to balance control are well documented, but the influence of auditory information, especially acoustic noise, on balance is less clear. Because somatosensory noise has been shown to reduce postural sway, we hypothesized that noise from the auditory modality might have a similar effect. Given that the nervous system uses noise to optimize signal transfer, adding mechanical or auditory noise should lead to increased feedback about sensory frames of reference used in balance control. In the present experiment, postural sway was analyzed in healthy young adults while they were presented with continuous white noise, in the presence and absence of visual information. Our results show reduced postural sway variability (as indexed by the body's center of pressure) in the presence of auditory noise, even when visual information was not present. Nonlinear time series analysis revealed that auditory noise has an additive effect, independent of vision, on postural stability. Further analysis revealed that auditory noise reduced postural sway variability in both low- and high-frequency regimes relative to the no-noise condition. Our results support the idea that auditory white noise reduces postural sway, suggesting that auditory noise might be used for therapeutic and rehabilitation purposes in older individuals and those with balance disorders.
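
    Two quantitative ingredients of the study above lend themselves to a short sketch: generating a continuous auditory white-noise stimulus and summarizing sway as center-of-pressure (COP) variability. The sampling rate, amplitude, and the use of the standard deviation of radial COP displacement are illustrative assumptions, not the authors' exact analysis:

      import numpy as np

      FS_AUDIO = 44100
      DURATION_S = 60
      # Gaussian white noise, scaled down to a safe playback level
      noise = 0.1 * np.random.randn(FS_AUDIO * DURATION_S).astype("float32")

      def sway_variability(cop_xy):
          """SD of radial COP displacement about its mean position.

          cop_xy: (n_samples, 2) array of anteroposterior and
          mediolateral center-of-pressure coordinates (cm).
          """
          centered = cop_xy - cop_xy.mean(axis=0)
          radial = np.linalg.norm(centered, axis=1)  # distance from mean COP
          return radial.std()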

  17. How Do Batters Use Visual, Auditory, and Tactile Information about the Success of a Baseball Swing?

    Science.gov (United States)

    Gray, Rob

    2009-01-01

    Bat/ball contact produces visual (the ball leaving the bat), auditory (the "crack" of the bat), and tactile (bat vibration) feedback about the success of the swing. We used a batting simulation to investigate how college baseball players use visual, tactile, and auditory feedback. In Experiment 1, swing accuracy (i.e., the lateral separation…

  18. Effects of Consensus Training on the Reliability of Auditory Perceptual Ratings of Voice Quality

    DEFF Research Database (Denmark)

    Iwarsson, Jenny; Petersen, Niels Reinholt

    2012-01-01

    Objectives/Hypothesis: This study investigates the effect of consensus training of listeners on intrarater and interrater reliability and agreement of perceptual voice analysis. The use of such training, including a reference voice sample, could be assumed to make the internal standards held in m...

  19. Internal versus External Auditory Hallucinations in Schizophrenia: Symptom and Course Correlates

    Science.gov (United States)

    Docherty, Nancy M.; Dinzeo, Thomas J.; McCleery, Amanda; Bell, Emily K.; Shakeel, Mohammed K.; Moe, Aubrey

    2015-01-01

    Introduction The auditory hallucinations associated with schizophrenia are phenomenologically diverse. “External” hallucinations classically have been considered to reflect more severe psychopathology than “internal” hallucinations, but empirical support has been equivocal. Methods We examined associations of “internal” v. “external” hallucinations with (a) other characteristics of the hallucinations, (b) severity of other symptoms, and (c) course of illness variables, in a sample of 97 stable outpatients with schizophrenia or schizoaffective disorder who experienced auditory hallucinations. Results Patients with internal hallucinations did not differ from those with external hallucinations on severity of other symptoms. However, they reported their hallucinations to be more emotionally negative, distressing, and long-lasting, less controllable, and less likely to remit over time. They also were more likely to experience voices commenting, conversing, or commanding. However, they also were more likely to have insight into the self-generated nature of their voices. Patients with internal hallucinations were not older, but had a later age of illness onset. Conclusions Differences in characteristics of auditory hallucinations are associated with differences in other characteristics of the disorder, and hence may be relevant to identifying subgroups of patients that are more homogeneous with respect to their underlying disease processes. PMID:25530157

  20. Group climate in the voice therapy of patients with Parkinson's Disease.

    Science.gov (United States)

    Diaféria, Giovana; Madazio, Glaucya; Pacheco, Claudia; Takaki, Patricia Barbarini; Behlau, Mara

    2017-09-04

    To verify the impact that group dynamics and coaching strategies have on the voice, speech and communication of patients with Parkinson's disease (PD), as well as on the group climate. Sixteen individuals with mild to moderate dysarthria due to PD were divided into two groups: the CG (8 patients), submitted to traditional therapy with 12 regular therapy sessions plus 4 additional support sessions; and the EG (8 patients), submitted to traditional therapy with 12 regular therapy sessions plus 4 sessions with group dynamics and coaching strategies. The Living with Dysarthria questionnaire (LwD), self-evaluation of voice, speech and communication, and auditory-perceptual analysis of vocal quality were assessed at 3 time points: pre-traditional therapy (pre); post-traditional therapy (post 1); and post support sessions/coaching strategies (post 2); at post 1 and post 2, the Group Climate Questionnaire (GCQ) was also applied. The CG and EG showed an improvement in the LwD from pre to post 1 and post 2. Voice self-evaluation was better for the EG - when pre was compared with post 2 and when post 1 was compared with post 2 - ranging from regular to very good; both groups presented improvement in the communication self-evaluation. The auditory-perceptual evaluation of vocal quality was better for the EG at post 1. No difference was found for the GCQ; however, the EG presented lower avoidance scores at post 2. All patients showed improvement in the voice, speech and communication self-evaluation; the EG showed lower avoidance scores, creating a more collaborative environment propitious for speech therapy.

  1. Effect of singing training on total laryngectomees wearing a tracheoesophageal voice prosthesis.

    Science.gov (United States)

    Onofre, Fernanda; Ricz, Hilton Marcos Alves; Takeshita-Monaretti, Telma Kioko; Prado, Maria Yuka de Almeida; Aguiar-Ricz, Lílian Neto

    2013-02-01

    To assess the effect of a program of singing training on the voice of total laryngectomees wearing a tracheoesophageal voice prosthesis, considering the quality of alaryngeal phonation, vocal range, and the musical elements of tuning and legato. Five laryngectomees wearing a tracheoesophageal voice prosthesis completed the singing training program over a period of three months, which explored strengthening of the respiratory muscles and vocalization; auditory-perceptual evaluation of the speaking and singing voice was performed before and after 12 sessions of singing therapy. After the program of singing voice training, the quality of the tracheoesophageal voice showed improvement or persistence of the general degree of dysphonia for the emitted vowels and for the parameters of roughness and breathiness. For the vowel "a", the pitch shifted lower in two participants and higher in one, and remained adequate in the others. A similar situation was observed for the vowel "i". After the singing program, all participants presented tuning and most of them showed a greater presence of legato. Vocal range improved in all participants. Singing training seems to have a favorable effect on the quality of tracheoesophageal phonation and on the singing voice.

  2. Auditory Selective Attention: an introduction and evidence for distinct facilitation and inhibition mechanisms

    OpenAIRE

    Mikyska, Constanze Elisabeth Anna

    2012-01-01

    Objective Auditory selective attention is a complex brain function that is still not completely understood. The classic example is the so-called “cocktail party effect” (Cherry, 1953), which describes the impressive ability to focus one’s attention on a single voice from a multitude of voices. This means that particular stimuli in the environment are enhanced in contrast to other ones of lower priority that are ignored. To be able to understand how attention can influence the perception and p...

  3. Guided self-help cognitive-behaviour Intervention for VoicEs (GiVE): Results from a pilot randomised controlled trial in a transdiagnostic sample.

    Science.gov (United States)

    Hazell, Cassie M; Hayward, Mark; Cavanagh, Kate; Jones, Anna-Marie; Strauss, Clara

    2017-10-12

    Few patients have access to cognitive behaviour therapy for psychosis (CBTp) even though at least 16 sessions of CBTp are recommended in treatment guidelines. Briefer CBTp could improve access, as the same number of therapists could see more patients. In addition, focusing on single psychotic symptoms, such as auditory hallucinations ('voices'), rather than on psychosis more broadly, may yield greater benefits. This pilot RCT recruited 28 participants (with a range of diagnoses) from NHS mental health services who were distressed by hearing voices. The study compared an 8-session guided self-help CBT intervention for distressing voices with a wait-list control. Data were collected at baseline and at 12 weeks, with post-therapy assessments conducted blind to allocation. Voice-impact was the pre-determined primary outcome. Secondary outcomes were depression, anxiety, wellbeing and recovery. Mechanism measures were self-esteem, beliefs about self, beliefs about voices and voice-relating. Recruitment and retention were feasible with low study (3.6%) and therapy (14.3%) dropout. There were large, statistically significant between-group effects on the primary outcome of voice-impact (d=1.78; 95% CIs: 0.86-2.70), which exceeded the minimum clinically important difference. Large, statistically significant effects were found on a number of secondary and mechanism measures. Large effects on the pre-determined primary outcome of voice-impact are encouraging, and criteria for progressing to a definitive trial are met. Significant between-group effects on measures of self-esteem, negative beliefs about self and beliefs about voice omnipotence are consistent with these being mechanisms of change, and this requires testing in a future trial. Copyright © 2017. Published by Elsevier B.V.
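
    The between-group effect reported above (d = 1.78) is a standardized mean difference. A minimal sketch of Cohen's d with a pooled standard deviation; the change scores below are hypothetical placeholders, not trial data:

      import numpy as np

      def cohens_d(a, b):
          """Cohen's d: mean difference standardized by the pooled SD."""
          a, b = np.asarray(a, float), np.asarray(b, float)
          na, nb = len(a), len(b)
          pooled_var = (((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
          return (a.mean() - b.mean()) / np.sqrt(pooled_var)

      # Hypothetical voice-impact change scores: intervention vs. wait-list
      print(cohens_d([12, 9, 15, 11, 8], [3, 5, 1, 4, 2]))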

  4. Making social robots more attractive: the effects of voice pitch, humor and empathy

    NARCIS (Netherlands)

    Niculescu, A.I.; Ge, S.S.; van Dijk, Elisabeth M.A.G.; Nijholt, Antinus; Li, Haizhou; See, Swan Lan

    2013-01-01

    In this paper we explore how simple auditory/verbal features of spoken language, such as voice characteristics (pitch) and language cues (empathy/humor expression), influence the quality of interaction with a social robot receptionist. For our experiment two robot characters were created: Olivia,

  5. Self-Voice, but Not Self-Face, Reduces the McGurk Effect

    Directory of Open Access Journals (Sweden)

    Christopher Aruffo

    2011-10-01

    The McGurk effect represents a perceptual illusion resulting from the integration of an auditory syllable dubbed onto an incongruous visual syllable. The involuntary and impenetrable nature of the illusion is frequently used to support the multisensory nature of audiovisual speech perception. Here we show that both self-speech and familiarized speech reduce the effect. When self-speech was separated into self-voice and self-face mismatched with different faces and voices, only self-voice weakened the illusion. Thus, a familiar vocal identity automatically confers a processing advantage to multisensory speech, while a familiar facial identity does not. When another group of participants were familiarized with the speakers, participants' ability to take advantage of that familiarization was inversely correlated with their overall susceptibility to the McGurk illusion.

  6. Behavioural evidence of a dissociation between voice gender categorization and phoneme categorization using auditory morphed stimuli

    Directory of Open Access Journals (Sweden)

    Cyril R Pernet

    2014-01-01

    Both voice gender and speech perception rely on neuronal populations located in the peri-sylvian areas. However, whilst functional imaging studies suggest a left versus right hemisphere and anterior versus posterior dissociation between voice and speech categorization, psycholinguistic studies on talker variability suggest that these two processes (voice and speech categorization) share common mechanisms. In this study, we investigated the categorical perception of voice gender (male vs. female) and phonemes (/pa/ vs. /ta/) using the same stimulus continua generated by morphing. This allowed the investigation of behavioural differences while controlling acoustic characteristics, since the same stimuli were used in both tasks. Despite a higher acoustic dissimilarity between items during the phoneme categorization task (a male and a female voice producing the same phonemes) than during the gender task (the same person producing 2 phonemes), results showed that speech information is processed much faster than voice information. In addition, f0 or timbre equalization did not affect RT, which disagrees with the classical psycholinguistic models in which voice information is stripped away or normalized to access phonetic content. Also, despite similar response (percentage) and perceptual (d') curves, a reverse correlation analysis on acoustic features revealed, as expected, that the formant frequencies of the consonant distinguished stimuli in the phoneme task, but that only the vowel formant frequencies distinguished stimuli in the gender task. This second set of results thus also disagrees with models postulating that the same acoustic information is used for voice and speech. Altogether these results suggest that voice gender categorization and phoneme categorization are dissociated at an early stage on the basis of different enhanced acoustic features that are diagnostic to the task at hand.
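
    The perceptual (d') curves mentioned above index categorization sensitivity independently of response bias. A hedged sketch of the standard signal-detection computation from response counts; the counts are hypothetical, and the small additive correction (a common log-linear adjustment) is an assumption rather than the authors' method:

      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          """d' = z(hit rate) - z(false-alarm rate)."""
          # Keep rates away from 0 and 1 so the inverse normal stays finite
          hr = (hits + 0.5) / (hits + misses + 1.0)
          far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          return norm.ppf(hr) - norm.ppf(far)

      print(d_prime(45, 5, 8, 42))  # hypothetical counts for one continuum step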

  7. Vocal Acoustic and Auditory-Perceptual Characteristics During Fluctuations in Estradiol Levels During the Menstrual Cycle: A Longitudinal Study.

    Science.gov (United States)

    Arruda, Polyanna; Diniz da Rosa, Marine Raquel; Almeida, Larissa Nadjara Alves; de Araujo Pernambuco, Leandro; Almeida, Anna Alice

    2018-03-07

    Estradiol production varies cyclically, and changes in its levels are hypothesized to affect the voice. The main objective of this study was to investigate vocal acoustic and auditory-perceptual characteristics during fluctuations in the levels of the hormone estradiol during the menstrual cycle. A total of 44 volunteers aged between 18 and 45 were selected. Of these, 27 women with regular menstrual cycles comprised the test group (TG) and 17 combined oral contraceptive users comprised the control group (CG). The study was performed in two phases. In phase 1, anamnesis was performed. Subsequently, the TG underwent blood sample collection for measurement of estradiol levels and voice recording for later acoustic and auditory-perceptual analysis. The CG underwent only voice recording. Phase 2 involved the same measurements as phase 1 for each group. Variables were evaluated using descriptive and inferential analysis to compare groups and phases and to determine relationships between variables. Voice changes were found during the menstrual cycle, and such changes were determined to be related to variations in estradiol levels. Impaired voice quality was observed to be associated with decreased levels of estradiol. The CG did not demonstrate significant vocal changes during phases 1 and 2. The TG showed significant increases in the vocal parameters of roughness, tension, and instability during phase 2 (the period of low estradiol levels) when compared with the CG. Low estradiol levels were also found to be negatively correlated with the parameters of tension, instability, and jitter and positively correlated with fundamental voice frequency. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
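
    The acoustic measures discussed above (fundamental frequency, jitter) can be extracted with Praat, for example through the parselmouth Python interface. A minimal sketch under that assumption; the file name and the 75-500 Hz pitch floor/ceiling are illustrative, not the study's settings:

      import parselmouth
      from parselmouth.praat import call

      snd = parselmouth.Sound("vowel_a.wav")     # sustained vowel recording

      pitch = snd.to_pitch()
      mean_f0 = call(pitch, "Get mean", 0, 0, "Hertz")

      # Praat's local jitter from a point process of glottal pulses
      pulses = call(snd, "To PointProcess (periodic, cc)", 75, 500)
      jitter_local = call(pulses, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)

      print(f"mean F0 = {mean_f0:.1f} Hz, local jitter = {jitter_local:.4f}")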

  8. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems

    Science.gov (United States)

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-01-01

    One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers’ visual and manual distractions with ‘infotainment’ technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual–manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox ‘one-shot’ voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory–vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers’ interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation. PMID:26269281

  9. Acute effects of radioiodine therapy on the voice and larynx of Basedow-Graves patients

    International Nuclear Information System (INIS)

    Isolan-Cury, Roberta Werlang; Cury, Adriano Namo; Monte, Osmar; Silva, Marta Assumpcao de Andrada e; Duprat, Andre; Marone, Marilia; Almeida, Renata de; Iglesias, Alexandre

    2008-01-01

    Graves' disease is the most common cause of hyperthyroidism. There are three current therapeutic options: anti-thyroid medication, surgery, and radioactive iodine (I-131). There are few data in the literature regarding the effects of radioiodine therapy on the larynx and voice. The aim of this study was to assess the effect of radioiodine therapy on the voice of Basedow-Graves patients. Material and method: A prospective study was done. Following the diagnosis of Graves' disease, patients underwent investigation of their voice, measurement of maximum phonatory time (/a/) and the s/z ratio, fundamental frequency analysis (Praat software), laryngoscopy, and auditory-perceptual analysis in three different conditions: pre-treatment, 4 days, and 20 days post-radioiodine therapy. Conditions are based on the inflammatory pattern of thyroid tissue (Jones et al. 1999). Results: No statistically significant differences were found in voice characteristics across these three conditions. Conclusion: Radioiodine therapy does not affect voice quality. (author)

  10. Acute effects of radioiodine therapy on the voice and larynx of Basedow-Graves patients

    Energy Technology Data Exchange (ETDEWEB)

    Isolan-Cury, Roberta Werlang; Cury, Adriano Namo [Sao Paulo Santa Casa de Misericordia, SP (Brazil). Medical Science School (FCMSCSP); Monte, Osmar [Sao Paulo Santa Casa de Misericordia, SP (Brazil). Physiology Department; Silva, Marta Assumpcao de Andrada e [Sao Paulo Santa Casa de Misericordia, SP (Brazil). Medical Science School (FCMSCSP). Speech Therapy School; Duprat, Andre [Sao Paulo Santa Casa de Misericordia, SP (Brazil). Medical Science School (FCMSCSP). Otorhinolaryngology Department; Marone, Marilia [Nuclimagem - Irmanity of the Sao Paulo Santa Casa de Misericordia, SP (Brazil). Nuclear Medicine Unit; Almeida, Renata de; Iglesias, Alexandre [Sao Paulo Santa Casa de Misericordia, SP (Brazil). Medical Science School (FCMSCSP). Otorhinolaryngology Department. Endocrinology and Metabology Unit

    2008-07-01

    Graves' disease is the most common cause of hyperthyroidism. There are three current therapeutic options: anti-thyroid medication, surgery, and radioactive iodine (I-131). There are few data in the literature regarding the effects of radioiodine therapy on the larynx and voice. The aim of this study was to assess the effect of radioiodine therapy on the voice of Basedow-Graves patients. Material and method: A prospective study was done. Following the diagnosis of Graves' disease, patients underwent investigation of their voice, measurement of maximum phonatory time (/a/) and the s/z ratio, fundamental frequency analysis (Praat software), laryngoscopy, and auditory-perceptual analysis in three different conditions: pre-treatment, 4 days, and 20 days post-radioiodine therapy. Conditions are based on the inflammatory pattern of thyroid tissue (Jones et al. 1999). Results: No statistically significant differences were found in voice characteristics across these three conditions. Conclusion: Radioiodine therapy does not affect voice quality. (author)

  11. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation.

    Directory of Open Access Journals (Sweden)

    Maria eHerrojo Ruiz

    2014-09-01

    Full Text Available Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback.As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS.Overall, the present investigations are the first to demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN

  12. Auditory-Perceptual and Acoustic Methods in Measuring Dysphonia Severity of Korean Speech.

    Science.gov (United States)

    Maryn, Youri; Kim, Hyung-Tae; Kim, Jaeock

    2016-09-01

    The purpose of this study was to explore the criterion-related concurrent validity of two standardized auditory-perceptual rating protocols and the Acoustic Voice Quality Index (AVQI) for measuring dysphonia severity in Korean speech. Sixty native Korean subjects with various voice disorders were asked to sustain the vowel [a:] and to read aloud the Korean text "Walk." A 3-second midvowel portion of the sustained vowel and two sentences (with 25 syllables) were edited, concatenated, and analyzed according to methods described elsewhere. From 56 participants, both continuous speech and sustained vowel recordings had sufficiently high signal-to-noise ratios (35.5 dB and 37 dB on average, respectively) and were therefore subjected to further dysphonia severity analysis with (1) "G" or Grade from the GRBAS protocol, (2) "OS" or Overall Severity from the Consensus Auditory-Perceptual Evaluation of Voice protocol, and (3) AVQI. First, high correlations were found between G and OS (rS = 0.955 for sustained vowels; rS = 0.965 for continuous speech). Second, the AVQI showed a strong correlation with G (rS = 0.911) as well as OS (rP = 0.924). These findings are in agreement with similar studies dealing with continuous speech in other languages. The present study highlights the criterion-related concurrent validity of these methods in Korean speech. Furthermore, it supports the cross-linguistic robustness of the AVQI as a valid and objective marker of overall dysphonia severity. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  13. The neural control of singing

    Directory of Open Access Journals (Sweden)

    Jean Mary eZarate

    2013-06-01

    Full Text Available Singing provides a unique opportunity to examine music performance—the musical instrument is contained wholly within the body, thus eliminating the need for creating artificial instruments or tasks in neuroimaging experiments. Here, more than two decades of voice and singing research will be reviewed to give an overview of the sensory-motor control of the singing voice, starting from the vocal tract and leading up to the brain regions involved in singing. Additionally, to demonstrate how sensory feedback is integrated with vocal motor control, recent functional magnetic resonance imaging (fMRI research on somatosensory and auditory feedback processing during singing will be presented. The relationship between the brain and singing behavior will be explored also by examining: 1 neuroplasticity as a function of various lengths and types of training, 2 vocal amusia due to a compromised singing network, and 3 singing performance in individuals with congenital amusia. Finally, the auditory-motor control network for singing will be considered alongside dual-stream models of auditory processing in music and speech to refine both these theoretical models and the singing network itself.

  14. [Review of Talking voices: Repetition, dialogue, and imagery in conversational discourse. 2nd edition. By Deborah Tannen

    OpenAIRE

    Dingemanse, M.

    2010-01-01

    Reviews the book, Talking voices: Repetition, dialogue, and imagery in conversational discourse. 2nd edition by Deborah Tannen. This book is the same as the 1989 original except for an added introduction. This introduction situates TV in the context of intertextuality and gives a survey of relevant research since the book first appeared. The strength of the book lies in its insightful analysis of the auditory side of conversation. Yet talking voices have always been embedded in richly context...

  15. Auditory imagery shapes movement timing and kinematics: evidence from a musical task.

    Science.gov (United States)

    Keller, Peter E; Dalla Bella, Simone; Koch, Iring

    2010-04-01

    The role of anticipatory auditory imagery in music-like sequential action was investigated by examining timing accuracy and kinematics using a motion capture system. Musicians responded to metronomic pacing signals by producing three unpaced taps on three vertically aligned keys at the given tempo. Taps triggered tones in two out of three blocked feedback conditions, where key-to-tone mappings were compatible or incompatible in terms of spatial and pitch height. Results indicate that, while timing was most accurate without tones, movements were smaller in amplitude and less forceful (i.e., acceleration prior to impact was lowest) when tones were present. Moreover, timing was more accurate and movements were less forceful with compatible than with incompatible auditory feedback. Observing these effects at the first tap (before tone onset) suggests that anticipatory auditory imagery modulates the temporal kinematics of regularly timed auditory action sequences, like those found in music. Such cross-modal ideomotor processes may function to facilitate planning efficiency and biomechanical economy in voluntary action. Copyright 2010 APA, all rights reserved.

  16. Multimodal information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays

    Science.gov (United States)

    Begault, Durand R.; Bittner, Rachel M.; Anderson, Mark R.

    2012-01-01

    Auditory communication displays within the NextGen data link system may use multiple synthetic speech messages replacing traditional ATC and company communications. The design of an interface for selecting amongst multiple incoming messages can impact both performance (time to select, audit and release a message) and preference. Two design factors were evaluated: physical pressure-sensitive switches versus flat panel "virtual switches", and the presence or absence of auditory feedback from switch contact. Performance with stimuli using physical switches was 1.2 s faster than virtual switches (2.0 s vs. 3.2 s); auditory feedback provided a 0.54 s performance advantage (2.33 s vs. 2.87 s). There was no interaction between these variables. Preference data were highly correlated with performance.

  17. Differential sensory cortical involvement in auditory and visual sensorimotor temporal recalibration: Evidence from transcranial direct current stimulation (tDCS).

    Science.gov (United States)

    Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk

    2017-02-01

    Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In 3 experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, produced decreased visual TRE. However, both auditory and visual cortex tDCS did not produce any measurable effects on auditory TRE. Our study revealed different nature of TRE in auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon, modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, by providing a frame of reference in the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. An EMG Study of the Lip Muscles during Covert Auditory Verbal Hallucinations in Schizophrenia

    Science.gov (United States)

    Rapin, Lucile; Dohen, Marion; Polosan, Mircea; Perrier, Pascal; Loevenbruck, Hélène

    2013-01-01

    Purpose: "Auditory verbal hallucinations" (AVHs) are speech perceptions in the absence of external stimulation. According to an influential theoretical account of AVHs in schizophrenia, a deficit in inner-speech monitoring may cause the patients' verbal thoughts to be perceived as external voices. The account is based on a…

  19. Bringing voice in policy building.

    Science.gov (United States)

    Lotrecchiano, Gaetano R; Kane, Mary; Zocchi, Mark S; Gosa, Jessica; Lazar, Danielle; Pines, Jesse M

    2017-07-03

    Purpose The purpose of this paper is to describe the use of group concept mapping (GCM) as a tool for developing a conceptual model of an episode of acute, unscheduled care from illness or injury to outcomes such as recovery, death and chronic illness. Design/methodology/approach After generating a literature review drafting an initial conceptual model, GCM software (CS Global MAX TM ) is used to organize and identify strengths and directionality between concepts generated through feedback about the model from several stakeholder groups: acute care and non-acute care providers, patients, payers and policymakers. Through online and in-person population-specific focus groups, the GCM approach seeks feedback, assigned relationships and articulated priorities from participants to produce an output map that described overarching concepts and relationships within and across subsamples. Findings A clustered concept map made up of relational data points that produced a taxonomy of feedback was used to update the model for use in soliciting additional feedback from two technical expert panels (TEPs), and finally, a public comment exercise was performed. The results were a stakeholder-informed improved model for an acute care episode, identified factors that influence process and outcomes, and policy recommendations, which were delivered to the Department of Health and Human Services's (DHHS) Assistant Secretary for Preparedness and Response. Practical implications This study provides an example of the value of cross-population multi-stakeholder input to increase voice in shared problem health stakeholder groups. Originality/value This paper provides GCM results and a visual analysis of the relational characteristics both within and across sub-populations involved in the study. It also provides an assessment of observational key factors supporting how different stakeholder voices can be integrated to inform model development and policy recommendations.

  20. Computer-aided voice training in higher education: participants ...

    African Journals Online (AJOL)

    The training of performance singing in a multi lingual, multi cultural educational context presents unique problems and requires inventive teaching strategies. Computer-aided training offers objective visual feedback of the voice production that can be implemented as a teaching aid in higher education. This article reports on ...

  1. Listening to Schneiderian Voices: A Novel Phenomenological Analysis.

    Science.gov (United States)

    Rosen, Cherise; Chase, Kayla A; Jones, Nev; Grossman, Linda S; Gin, Hannah; Sharma, Rajiv P

    This paper reports on analyses designed to elucidate phenomenological characteristics, content and experience specifically targeting participants with Schneiderian voices conversing/commenting (VC) while exploring differences in clinical presentation and quality of life compared to those with voices not conversing (VNC). This mixed-method investigation of Schneiderian voices included standardized clinical metrics and exploratory phenomenological interviews designed to elicit in-depth information about the characteristics, content, meaning, and personification of auditory verbal hallucinations. The subjective experience shows a striking pattern of VC, as they are experienced as internal at initial onset and during the longer-term course of illness when compared to VNC. Participants in the VC group were more likely to attribute the origin of their voices to an external source such as God, telepathic communication, or mediumistic sources. VC and VNC were described as characterological entities that were distinct from self (I/we vs. you). We also found an association between VC and the positive, cognitive, and depression symptom profile. However, we did not find a significant group difference in overall quality of life. The clinical portrait of VC is complex, multisensory, and distinct, and suggests a need for further research into the biopsychosocial interface between subjective experience, socioenvironmental constraints, individual psychology, and the biological architecture of intersecting symptoms. © 2016 S. Karger AG, Basel.

  2. Comparison Between Vocal Function Exercises and Voice Amplification.

    Science.gov (United States)

    Teixeira, Letícia Caldas; Behlau, Mara

    2015-11-01

    To compare the effectiveness of vocal function exercises (VFEs) versus voice amplification (VA) after a 6-week therapy for teachers diagnosed with behavioral dysphonia. A total of 162 teachers with behavioral dysphonia were randomly allocated into two intervention groups and one control group (CG). Outcomes were assessed using auditory-perceptual evaluation of voice, laryngeal status assessment, self-ratings of the impact of dysphonia, and acoustic analysis. The VFE group showed effective changes across treatment outcome measures: overall severity of dysphonia relative to the CG, laryngeal evaluation, and self-perceived dysphonia. The VA group showed positive outcomes in some measures of self-rated dysphonia. The CG had poorer outcomes across self-assessment dimensions. The VFE method is effective in treating the behavioral dysphonia of teachers, can change the overall severity and the self-perception of the impact of dysphonia, and the laryngeal evaluation outcomes. The use of a voice amplifier is effective as a preventive measure because it results in an improved self-perception of dysphonia, especially in the work-related dimension. One case of dysphonia aggravation can be prevented in every three patients with behavioral dysphonia engaged in VFE, and one case in every five patients using VA. The lack of a therapeutic intervention worsens teachers' behavioral dysphonia in a period of 6 weeks. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  3. Continuous vs. intermittent neurofeedback to regulate auditory cortex activity of tinnitus patients using real-time fMRI - A pilot study

    Directory of Open Access Journals (Sweden)

    Kirsten Emmert

    2017-01-01

    Overall, these results show that continuous feedback is suitable for long-term neurofeedback experiments while intermittent feedback presentation promises good results for single session experiments when using the auditory cortex as a target region. In particular, the down-regulation effect is more pronounced in the secondary auditory cortex, which might be more susceptible to voluntary modulation in comparison to a primary sensory region.

  4. Subjective Loudness and Reality of Auditory Verbal Hallucinations and Activation of the Inner Speech Processing Network

    NARCIS (Netherlands)

    Vercammen, Ans; Knegtering, Henderikus; Bruggeman, Richard; Aleman, Andre

    Background: One of the most influential cognitive models of auditory verbal hallucinations (AVH) suggests that a failure to adequately monitor the production of one's own inner speech leads to verbal thought being misidentified as an alien voice. However, it is unclear whether this theory can

  5. Spectral distribution of solo voice and accompaniment in pop music.

    Science.gov (United States)

    Borch, Daniel Zangger; Sundberg, Johan

    2002-01-01

    Singers performing in popular styles of music mostly rely on feedback provided by monitor loudspeakers on the stage. The highest sound level that these loudspeakers can provide without feedback noise is often too low to be heard over the ambient sound level on the stage. Long-term-average spectra of some orchestral accompaniments typically used in pop music are compared with those of classical symphonic orchestras. In loud pop accompaniment the sound level difference between 0.5 and 2.5 kHz is similar to that of a Wagner orchestra. Long-term-average spectra of pop singers' voices showed no signs of a singer's formant but a peak near 3.5 kHz. It is suggested that pop singers' difficulties to hear their own voices may be reduced if the frequency range 3-4 kHz is boosted in the monitor sound.

  6. The auditory scene: an fMRI study on melody and accompaniment in professional pianists.

    Science.gov (United States)

    Spada, Danilo; Verga, Laura; Iadanza, Antonella; Tettamanti, Marco; Perani, Daniela

    2014-11-15

    The auditory scene is a mental representation of individual sounds extracted from the summed sound waveform reaching the ears of the listeners. Musical contexts represent particularly complex cases of auditory scenes. In such a scenario, melody may be seen as the main object moving on a background represented by the accompaniment. Both melody and accompaniment vary in time according to harmonic rules, forming a typical texture with melody in the most prominent, salient voice. In the present sparse acquisition functional magnetic resonance imaging study, we investigated the interplay between melody and accompaniment in trained pianists, by observing the activation responses elicited by processing: (1) melody placed in the upper and lower texture voices, leading to, respectively, a higher and lower auditory salience; (2) harmonic violations occurring in either the melody, the accompaniment, or both. The results indicated that the neural activation elicited by the processing of polyphonic compositions in expert musicians depends upon the upper versus lower position of the melodic line in the texture, and showed an overall greater activation for the harmonic processing of melody over accompaniment. Both these two predominant effects were characterized by the involvement of the posterior cingulate cortex and precuneus, among other associative brain regions. We discuss the prominent role of the posterior medial cortex in the processing of melodic and harmonic information in the auditory stream, and propose to frame this processing in relation to the cognitive construction of complex multimodal sensory imagery scenes. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Presentation of dynamically overlapping auditory messages in user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Papp, III, Albert Louis [Univ. of California, Davis, CA (United States)

    1997-09-01

    This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on the research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made from all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between over-lapping sounds. Low scores indicate fewer and less serious conflicts. A user study was conducted to validate that the perceptual difficulties predicted by

  8. Auditory hallucinations in adults with hearing impairment: a large prevalence study.

    Science.gov (United States)

    Linszen, M M J; van Zanten, G A; Teunisse, R J; Brouwer, R M; Scheltens, P; Sommer, I E

    2018-03-20

    Similar to visual hallucinations in visually impaired patients, auditory hallucinations are often suggested to occur in adults with hearing impairment. However, research on this association is limited. This observational, cross-sectional study tested whether auditory hallucinations are associated with hearing impairment, by assessing their prevalence in an adult population with various degrees of objectified hearing impairment. Hallucination presence was determined in 1007 subjects aged 18-92, who were referred for audiometric testing to the Department of ENT-Audiology, University Medical Center Utrecht, the Netherlands. The presence and severity of hearing impairment were calculated using mean air conduction thresholds from the most recent pure tone audiometry. Out of 829 participants with hearing impairment, 16.2% (n = 134) had experienced auditory hallucinations in the past 4 weeks; significantly more than the non-impaired group [5.8%; n = 10/173; p impairment, with rates up to 24% in the most profoundly impaired group (p impairment in the best ear. Auditory hallucinations mostly consisted of voices (51%), music (36%), and doorbells or telephones (24%). Our findings reveal that auditory hallucinations are common among patients with hearing impairment, and increase with impairment severity. Although more research on potential confounding factors is necessary, clinicians should be aware of this phenomenon, by inquiring after hallucinations in hearing-impaired patients and, conversely, assessing hearing impairment in patients with auditory hallucinations, since it may be a treatable factor.

  9. Literature review of voice recognition and generation technology for Army helicopter applications

    Science.gov (United States)

    Christ, K. A.

    1984-08-01

    This report is a literature review on the topics of voice recognition and generation. Areas covered are: manual versus vocal data input, vocabulary, stress and workload, noise, protective masks, feedback, and voice warning systems. Results of the studies presented in this report indicate that voice data entry has less of an impact on a pilot's flight performance, during low-level flying and other difficult missions, than manual data entry. However, the stress resulting from such missions may cause the pilot's voice to change, reducing the recognition accuracy of the system. The noise present in helicopter cockpits also causes the recognition accuracy to decrease. Noise-cancelling devices are being developed and improved upon to increase the recognition performance in noisy environments. Future research in the fields of voice recognition and generation should be conducted in the areas of stress and workload, vocabulary, and the types of voice generation best suited for the helicopter cockpit. Also, specific tasks should be studied to determine whether voice recognition and generation can be effectively applied.

  10. Detection of Membrane Puncture with Haptic Feedback using a Tip-Force Sensing Needle.

    Science.gov (United States)

    Elayaperumal, Santhi; Bae, Jung Hwa; Daniel, Bruce L; Cutkosky, Mark R

    2014-09-01

    This paper presents calibration and user test results of a 3-D tip-force sensing needle with haptic feedback. The needle is a modified MRI-compatible biopsy needle with embedded fiber Bragg grating (FBG) sensors for strain detection. After calibration, the needle is interrogated at 2 kHz, and dynamic forces are displayed remotely with a voice coil actuator. The needle is tested in a single-axis master/slave system, with the voice coil haptic display at the master, and the needle at the slave end. Tissue phantoms with embedded membranes were used to determine the ability of the tip-force sensors to provide real-time haptic feedback as compared to external sensors at the needle base during needle insertion via the master/slave system. Subjects were able to determine the position of the embedded membranes with significantly better accuracy using FBG tip feedback than with base feedback using a commercial force/torque sensor (p = 0.045) or with no added haptic feedback (p = 0.0024).

  11. [Psychological effects of preventive voice care training in student teachers].

    Science.gov (United States)

    Nusseck, M; Richter, B; Echternach, M; Spahn, C

    2017-07-01

    Studies on the effectiveness of preventive voice care programs have focused mainly on voice parameters. Psychological parameters, however, have not been investigated in detail so far. The effect of a voice training program for German student teachers on psychological health parameters was investigated in a longitudinal study. The sample of 204 student teachers was divided into the intervention group (n = 123), who participated in the voice training program, and the control group (n = 81), who received no voice training. Voice training contained ten 90-min group courses and an individual visit by the voice trainer in a teaching situation with feedback afterwards. Participants were asked to fill out questionnaires (self-efficacy, Short-Form Health Survey, self-consciousness, voice self-concept, work-related behaviour and experience patterns) at the beginning and the end of their student teacher training period. The training program showed significant positive influences on psychological health, voice self-concept (i.e. more positive perception and increased awareness of one's own voice) and work-related coping behaviour in the intervention group. On average, the mental health status of all participants reduced over time, whereas the status in the trained group diminished significantly less than in the control group. Furthermore, the trained student teachers gained abilities to cope with work-related stress better than those without training. The training program clearly showed a positive impact on mental health. The results maintain the importance of such a training program not only for voice health, but also for wide-ranging aspects of constitutional health.

  12. Investigation of the mechanism of soft tissue conduction explains several perplexing auditory phenomena.

    Science.gov (United States)

    Adelman, Cahtia; Chordekar, Shai; Perez, Ronen; Sohmer, Haim

    2014-09-01

    Soft tissue conduction (STC) is a recently expounded mode of auditory stimulation in which the clinical bone vibrator delivers auditory frequency vibratory stimuli to skin sites on the head, neck, and thorax. Investigation of the mechanism of STC stimulation has served as a platform for the elucidation of the mechanics of cochlear activation, in general, and to a better understanding of several perplexing auditory phenomena. This review demonstrates that it is likely that the cochlear hair cells can be directly activated at low sound intensities by the fluid pressures initiated in the cochlea; that the fetus in utero, completely enveloped in amniotic fluid, hears by STC; that a speaker hears his/her own voice by air conduction and by STC; and that pulsatile tinnitus is likely due to pulsatile turbulent blood flow producing fluid pressures that reach the cochlea through the soft tissues.

  13. Perceptual adaptation of voice gender discrimination with spectrally shifted vowels.

    Science.gov (United States)

    Li, Tianhao; Fu, Qian-Jie

    2011-08-01

    To determine whether perceptual adaptation improves voice gender discrimination of spectrally shifted vowels and, if so, which acoustic cues contribute to the improvement. Voice gender discrimination was measured for 10 normal-hearing subjects, during 5 days of adaptation to spectrally shifted vowels, produced by processing the speech of 5 male and 5 female talkers with 16-channel sine-wave vocoders. The subjects were randomly divided into 2 groups; one subjected to 50-Hz, and the other to 200-Hz, temporal envelope cutoff frequencies. No preview or feedback was provided. There was significant adaptation in voice gender discrimination with the 200-Hz cutoff frequency, but significant improvement was observed only for 3 female talkers with F(0) > 180 Hz and 3 male talkers with F(0) gender discrimination under spectral shift conditions with perceptual adaptation, but spectral shift may limit the exclusive use of spectral information and/or the use of formant structure on voice gender discrimination. The results have implications for cochlear implant users and for understanding voice gender discrimination.

  14. Developmental Changes in Locating Voice and Sound in Space

    Science.gov (United States)

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (only present in the 7 month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  15. Reality Monitoring and Feedback Control of Speech Production Are Related Through Self-Agency.

    Science.gov (United States)

    Subramaniam, Karuna; Kothare, Hardik; Mizuiri, Danielle; Nagarajan, Srikantan S; Houde, John F

    2018-01-01

    Self-agency is the experience of being the agent of one's own thoughts and motor actions. The intact experience of self-agency is necessary for successful interactions with the outside world (i.e., reality monitoring) and for responding to sensory feedback of our motor actions (e.g., speech feedback control). Reality monitoring is the ability to distinguish internally self-generated information from outside reality (externally-derived information). In the present study, we examined the relationship of self-agency between lower-level speech feedback monitoring (i.e., monitoring what we hear ourselves say) and a higher-level cognitive reality monitoring task. In particular, we examined whether speech feedback monitoring and reality monitoring were driven by the capacity to experience self-agency-the ability to make reliable predictions about the outcomes of self-generated actions. During the reality monitoring task, subjects made judgments as to whether information was previously self-generated (self-agency judgments) or externally derived (external-agency judgments). During speech feedback monitoring, we assessed self-agency by altering environmental auditory feedback so that subjects listened to a perturbed version of their own speech. When subjects heard minimal perturbations in their auditory feedback while speaking, they made corrective responses, indicating that they judged the perturbations as errors in their speech output. We found that self-agency judgments in the reality-monitoring task were higher in people who had smaller corrective responses ( p = 0.05) and smaller inter-trial variability ( p = 0.03) during minimal pitch perturbations of their auditory feedback. These results provide support for a unitary process for the experience of self-agency governing low-level speech control and higher level reality monitoring.

  16. Hearing and saying. The functional neuro-anatomy of auditory word processing.

    Science.gov (United States)

    Price, C J; Wise, R J; Warburton, E A; Moore, C J; Howard, D; Patterson, K; Frackowiak, R S; Friston, K J

    1996-06-01

    The neural systems involved in hearing and repeating single words were investigated in a series of experiments using PET. Neuropsychological and psycholinguistic studies implicate the involvement of posterior and anterior left perisylvian regions (Wernicke's and Broca's areas). Although previous functional neuroimaging studies have consistently shown activation of Wernicke's area, there has been only variable implication of Broca's area. This study demonstrates that Broca's area is involved in both auditory word perception and repetition but activation is dependent on task (greater during repetition than hearing) and stimulus presentation (greater when hearing words at a slow rate). The peak of frontal activation in response to hearing words is anterior to that associated with repeating words; the former is probably located in Brodmann's area 45, the latter in Brodmann's area 44 and the adjacent precentral sulcus. As Broca's area activation is more subtle and complex than that in Wernicke's area during these tasks, the likelihood of observing it is influenced by both the study design and the image analysis technique employed. As a secondary outcome from the study, the response of bilateral auditory association cortex to 'own voice' during repetition was shown to be the same as when listening to "other voice' from a prerecorded tape.

  17. Intentional preparation of auditory attention-switches: Explicit cueing and sequential switch-predictability.

    Science.gov (United States)

    Seibold, Julia C; Nolden, Sophie; Oberem, Josefa; Fels, Janina; Koch, Iring

    2018-06-01

    In an auditory attention-switching paradigm, participants heard two simultaneously spoken number-words, each presented to one ear, and decided whether the target number was smaller or larger than 5 by pressing a left or right key. An instructional cue in each trial indicated which feature had to be used to identify the target number (e.g., female voice). Auditory attention-switch costs were found when this feature changed compared to when it repeated in two consecutive trials. Earlier studies employing this paradigm showed mixed results when they examined whether such cued auditory attention-switches can be prepared actively during the cue-stimulus interval. This study systematically assessed which preconditions are necessary for the advance preparation of auditory attention-switches. Three experiments were conducted that controlled for cue-repetition benefits, modality switches between cue and stimuli, as well as for predictability of the switch-sequence. Only in the third experiment, in which predictability for an attention-switch was maximal due to a pre-instructed switch-sequence and predictable stimulus onsets, active switch-specific preparation was found. These results suggest that the cognitive system can prepare auditory attention-switches, and this preparation seems to be triggered primarily by the memorised switching-sequence and valid expectations about the time of target onset.

  18. From sensory to long-term memory: evidence from auditory memory reactivation studies.

    Science.gov (United States)

    Winkler, István; Cowan, Nelson

    2005-01-01

    Everyday experience tells us that some types of auditory sensory information are retained for long periods of time. For example, we are able to recognize friends by their voice alone or identify the source of familiar noises even years after we last heard the sounds. It is thus somewhat surprising that the results of most studies of auditory sensory memory show that acoustic details, such as the pitch of a tone, fade from memory in ca. 10-15 s. One should, therefore, ask (1) what types of acoustic information can be retained for a longer term, (2) what circumstances allow or help the formation of durable memory records for acoustic details, and (3) how such memory records can be accessed. The present review discusses the results of experiments that used a model of auditory recognition, the auditory memory reactivation paradigm. Results obtained with this paradigm suggest that the brain stores features of individual sounds embedded within representations of acoustic regularities that have been detected for the sound patterns and sequences in which the sounds appeared. Thus, sounds closely linked with their auditory context are more likely to be remembered. The representations of acoustic regularities are automatically activated by matching sounds, enabling object recognition.

  19. PoLAR Voices: Informing Adult Learners about the Science and Story of Climate Change in the Polar Regions Through Audio Podcast

    Science.gov (United States)

    Quinney, A.; Murray, M. S.; Gobroski, K. A.; Topp, R. M.; Pfirman, S. L.

    2015-12-01

    The resurgence of audio programming with the advent of podcasting in the early 2000s spawned a new medium for communicating advances in science, research, and technology. To capitalize on this informal educational outlet, the Arctic Institute of North America partnered with the International Arctic Research Center, the University of Alaska Fairbanks, and the UA Museum of the North to develop a podcast series called PoLAR Voices for the Polar Learning and Responding (PoLAR) Climate Change Education Partnership. PoLAR Voices is a public education initiative that uses creative storytelling and novel narrative structures to immerse the listener in an auditory depiction of climate change. The programs will feature the science and story of climate change, approaching topics from both the points of view of researchers and Arctic indigenous peoples. This approach will engage the listener in the holistic story of climate change, addressing both scientific and personal perspectives, resulting in a program that is at once educational, entertaining and accessible. Feedback is being collected at each stage of development to ensure the content and format of the program satisfies listener interests and preferences. Once complete, the series will be released on thepolarhub.org and on iTunes. Additionally, blanket distribution of the programs will be accomplished via radio broadcast in urban, rural and remote areas, and in multiple languages to increase distribution and enhance accessibility.

  20. Long-Term Follow-Up of Patients with Spasmodic Dysphonia and Improved Voice despite Discontinuation of Treatment.

    Science.gov (United States)

    Geneid, Ahmed; Lindestad, Per-Åke; Granqvist, Svante; Möller, Riitta; Södersten, Maria

    2016-01-01

    To evaluate voice function in patients with adductor spasmodic dysphonia (AdSD) who discontinued botulinum toxin (BTX) treatment because they felt that their voice had improved sufficiently. Twenty-eight patients quit treatment in 2004, of whom 20 fulfilled the inclusion criteria for the study, with 3 subsequently excluded because of return of symptoms, leaving 17 patients (11 males, 6 females) included in this follow-up study. A questionnaire concerning current voice function and the Voice Handicap Index were completed. Audio-perceptual voice assessments were done by 3 listeners. The inter- and intrarater reliabilities were r > 0.80. All patients had a subjectively good stable voice, but with differences in their audio-perceptual voice assessment scores. Based on the pre-/posttreatment auditory scores on the overall degree of AdSD, patients were divided into 2 subgroups showing more and less improvement, with 10 and 7 patients, respectively. The subgroup with more improvement had shorter duration from the onset of symptoms until the start of BTX treatment, and included 7 males compared to only 4 males in the subgroup with less improvement. It seems plausible that the symptoms of spasmodic dysphonia may decrease over time. Early intervention and male gender seem to be important factors for long-term reduction of the voice symptoms of AdSD. © 2016 S. Karger AG, Basel.

  1. Gender differences in identifying emotions from auditory and visual stimuli.

    Science.gov (United States)

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to get a better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or shared native language of the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual stimuli than auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.

  2. Tuned with a tune: Talker normalization via general auditory processes

    Directory of Open Access Journals (Sweden)

    Erika J C Laing

    2012-06-01

    Full Text Available Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker’s speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS of a talker’s speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences’ LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by nonspeech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.

  3. Effects of Written Peer-Feedback Content and Sender's Competence on Perceptions, Performance, and Mindful Cognitive Processing

    Science.gov (United States)

    Berndt, Markus; Strijbos, Jan-Willem; Fischer, Frank

    2018-01-01

    Peer-feedback efficiency might be influenced by the oftentimes voiced concern of students that they perceive their peers' competence to provide feedback as inadequate. Feedback literature also identifies mindful processing of (peer)feedback and (peer)feedback content as important for its efficiency, but lacks systematic investigation. In a 2 × 2…

  4. A Randomized, Controlled Trial of Behavioral Voice Therapy for Dysphonia Related to Prematurity of Birth.

    Science.gov (United States)

    Reynolds, Victoria; Meldrum, Suzanne; Simmer, Karen; Vijayasekaran, Shyan; French, Noel

    2017-03-01

    Dysphonia is a potential complication of prematurity. Preterm children may sustain iatrogenic laryngeal damage from medical intervention in the neonatal period, and further, adopt compensatory, maladaptive voicing behaviors. This pilot study aimed to evaluate the effects of a voice therapy protocol on voice quality in school-aged, very preterm (VP) children. Twenty-seven VP children with dysphonia were randomized to an immediate intervention group (n = 7) or a delayed-intervention, waiting list control group (n = 14). Following analysis of these data, a secondary analysis was conducted on the pooled intervention data (n = 21). Six participants did not complete the trial. Change to voice quality was measured via pre- and posttreatment assessments using the Consensus Auditory Perceptual Evaluation of Voice. The intervention group did not demonstrate statistically significant improvements in voice quality, whereas this was observed in the control group (P = 0.026). However, when intervention data were pooled including both the immediate and delayed groups following intervention, dysphonia severity was significantly lower (P = 0.026) in the treatment group. Dysphonia in most VP children in this cohort was persistent. These pilot data indicate that some participants experienced acceptable voice outcomes on spontaneous recovery, whereas others demonstrated a response to behavioral intervention. Further research is needed to identify the facilitators of and barriers to intervention success, and to predict those who may experience spontaneous recovery. Copyright © 2017 The Voice Foundation. All rights reserved.

  5. Hearing voices: does it give your patient a headache? A case of auditory hallucinations as acoustic aura in migraine

    Directory of Open Access Journals (Sweden)

    Van der Feltz-Cornelis CM

    2012-03-01

    Full Text Available Christina M van der Feltz-Cornelis1–3, Henk Biemans1, Jan Timmer11Clinical Centre for Body, Mind and Health, GGz Breburg, Tilburg, The Netherlands; 2Faculty of Social and Behavioral Sciences, Tilburg University, Tilburg, The Netherlands; 3Trimbos Instituut, Utrecht, The NetherlandsObjective: Auditory hallucinations are generally considered to be a psychotic symptom. However, they do occur without other psychotic symptoms in a substantive number of cases in the general population and can cause a lot of individual distress because of the supposed association with schizophrenia. We describe a case of nonpsychotic auditory hallucinations occurring in the context of migraine.Method: Case report and literature review.Results: A 40-year-old man presented with imperative auditory hallucinations that caused depressive and anxiety symptoms. He reported migraine with visual aura as well which started at the same time as the auditory hallucinations. The auditory hallucinations occurred in the context of nocturnal migraine attacks, preceding them as aura. No psychotic disorder was present. After treatment of the migraine with propranolol 40 mg twice daily, explanation of the etiology of the hallucinations, and mirtazapine 45 mg daily, the migraine subsided and no further hallucinations occurred. The patient recovered.Discussion: Visual auras have been described in migraine and occur quite often. Auditory hallucinations as aura in migraine have been described in children without psychosis, but this is the first case describing auditory hallucinations without psychosis as aura in migraine in an adult. For description of this kind of hallucination, DSM-IV lacks an appropriate category.Conclusion: Psychiatrists should consider migraine with acoustic aura as a possible etiological factor in patients without further psychotic symptoms presenting with auditory hallucinations, and they should ask for headache symptoms when they take the history. Prognosis may be

  6. Using Facebook to Reach People Who Experience Auditory Hallucinations.

    Science.gov (United States)

    Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror

    2016-06-14

    Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Recruitment netted N=264 total sample over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience auditory hallucinations. Women, people

  7. Using Facebook to Reach People Who Experience Auditory Hallucinations

    Science.gov (United States)

    Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. Methods We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Results Recruitment netted N=264 total sample over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience

  8. The role of auditory temporal cues in the fluency of stuttering adults

    Directory of Open Access Journals (Sweden)

    Juliana Furini

    Full Text Available ABSTRACT Purpose: to compare the frequency of disfluencies and speech rate in spontaneous speech and reading in adults with and without stuttering in non-altered and delayed auditory feedback (NAF, DAF. Methods: participants were 30 adults: 15 with Stuttering (Research Group - RG, and 15 without stuttering (Control Group - CG. The procedures were: audiological assessment and speech fluency evaluation in two listening conditions, normal and delayed auditory feedback (100 milliseconds delayed by Fono Tools software. Results: the DAF caused a significant improvement in the fluency of spontaneous speech in RG when compared to speech under NAF. The effect of DAF was different in CG, because it increased the common disfluencies and the total of disfluencies in spontaneous speech and reading, besides showing an increase in the frequency of stuttering-like disfluencies in reading. The intergroup analysis showed significant differences in the two speech tasks for the two listening conditions in the frequency of stuttering-like disfluencies and in the total of disfluencies, and in the flows of syllable and word-per-minute in the NAF. Conclusion: the results demonstrated that delayed auditory feedback promoted fluency in spontaneous speech of adults who stutter, without interfering in the speech rate. In non-stuttering adults an increase occurred in the number of common disfluencies and total of disfluencies as well as reduction of speech rate in spontaneous speech and reading.

  9. Auditory white noise reduces age-related fluctuations in balance.

    Science.gov (United States)

    Ross, J M; Will, O J; McGann, Z; Balasubramaniam, R

    2016-09-06

    Fall prevention technologies have the potential to improve the lives of older adults. Because of the multisensory nature of human balance control, sensory therapies, including some involving tactile and auditory noise, are being explored that might reduce increased balance variability due to typical age-related sensory declines. Auditory white noise has previously been shown to reduce postural sway variability in healthy young adults. In the present experiment, we examined this treatment in young adults and typically aging older adults. We measured postural sway of healthy young adults and adults over the age of 65 years during silence and auditory white noise, with and without vision. Our results show reduced postural sway variability in young and older adults with auditory noise, even in the absence of vision. We show that vision and noise can reduce sway variability for both feedback-based and exploratory balance processes. In addition, we show changes with auditory noise in nonlinear patterns of sway in older adults that reflect what is more typical of young adults, and these changes did not interfere with the typical random walk behavior of sway. Our results suggest that auditory noise might be valuable for therapeutic and rehabilitative purposes in older adults with typical age-related balance variability.

  10. Bottom-up influences of voice continuity in focusing selective auditory attention

    OpenAIRE

    Bressler, Scott; Masud, Salwa; Bharadwaj, Hari; Shinn-Cunningham, Barbara

    2014-01-01

    Selective auditory attention causes a relative enhancement of the neural representation of important information and suppression of the neural representation of distracting sound, which enables a listener to analyze and interpret information of interest. Some studies suggest that in both vision and in audition, the “unit” on which attention operates is an object: an estimate of the information coming from a particular external source out in the world. In this view, which object ends up in the...

  11. Connections between voice ergonomic risk factors and voice symptoms, voice handicap, and respiratory tract diseases.

    Science.gov (United States)

    Rantala, Leena M; Hakala, Suvi J; Holmqvist, Sofia; Sala, Eeva

    2012-11-01

    The aim of the study was to investigate the connections between voice ergonomic risk factors found in classrooms and voice-related problems in teachers. Voice ergonomic assessment was performed in 39 classrooms in 14 elementary schools by means of a Voice Ergonomic Assessment in Work Environment--Handbook and Checklist. The voice ergonomic risk factors assessed included working culture, noise, indoor air quality, working posture, stress, and access to a sound amplifier. Teachers from the above-mentioned classrooms reported their voice symptoms and respiratory tract diseases and completed a Voice Handicap Index (VHI). The more voice ergonomic risk factors found in the classroom, the higher the teachers' total scores on voice symptoms and the VHI. Stress was the factor that correlated most strongly with voice symptoms. Poor indoor air quality increased the occurrence of laryngitis. Voice ergonomics were poor in the classrooms studied, and voice ergonomic risk factors affected the voice. It is important to convey information on voice ergonomics to education administrators and those responsible for school planning and for the care of school buildings.

  12. Performance of the phonatory deviation diagram in the evaluation of rough and breathy synthesized voices.

    Science.gov (United States)

    Lopes, Leonardo Wanderley; Freitas, Jonas Almeida de; Almeida, Anna Alice; Silva, Priscila Oliveira Costa; Alves, Giorvan Ânderson Dos Santos

    2017-07-05

    Voice disorders alter the sound signal in several ways, combining several types of vocal emission disturbance and noise. The Phonatory Deviation Diagram (PDD) is a two-dimensional chart that allows the evaluation of the vocal signal based on the combination of periodicity (jitter, shimmer, and correlation coefficient) and noise (Glottal to Noise Excitation - GNE) measurements. The use of synthesized signals, in which the production conditions are known and controlled, may allow a better understanding of the physiological and acoustic mechanisms underlying vocal emission and of its main auditory-perceptual correlates regarding the intensity of the deviation and types of vocal quality. The objective was to analyze the performance of the PDD in the discrimination of the presence and degree of roughness and breathiness in synthesized voices. A total of 871 synthesized vocal signals corresponding to the vowel /ɛ/ were used. The auditory-perceptual analysis of the degree of roughness and breathiness of the synthesized signals was performed using a Visual Analogue Scale (VAS). Subsequently, the signals were categorized regarding the presence/absence of these parameters based on the VAS cutoff values. Acoustic analysis was performed by assessing the distribution of vocal signals according to the PDD area, quadrant, shape, and density. The equality-of-proportions and chi-square tests were performed to compare the variables. Rough and breathy vocal signals were located predominantly outside the normal range and in the lower right quadrant of the PDD. Voices with higher degrees of roughness and breathiness were located outside the area of normality in the lower right quadrant and had concentrated density. The normality area and the PDD quadrant can discriminate healthy voices from rough and breathy ones. Voices with higher degrees of roughness and breathiness are proportionally located outside the area of normality, in the lower right quadrant, and with concentrated density.
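
The periodicity measures the PDD combines can be estimated from cycle-level data. Below is a minimal sketch of local jitter and shimmer, assuming glottal cycle periods and peak amplitudes have already been extracted; the PDD's actual axes and normal-range boundary are defined in the underlying literature and are not reproduced here.

```python
import numpy as np

def local_jitter(periods):
    """Mean absolute difference of consecutive cycle periods, relative to mean period."""
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes):
    """Mean absolute difference of consecutive peak amplitudes, relative to mean amplitude."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Hypothetical cycle data from a sustained vowel (~200 Hz with mild perturbation)
periods = 1.0 / (200 + 5 * np.random.randn(100))
amps = 1.0 + 0.05 * np.random.randn(100)
print(f"jitter  = {local_jitter(periods):.4f}")
print(f"shimmer = {local_shimmer(amps):.4f}")
```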

  13. External Validation of the Acoustic Voice Quality Index Version 03.01 With Extended Representativity.

    Science.gov (United States)

    Barsties, Ben; Maryn, Youri

    2016-07-01

    The Acoustic Voice Quality Index (AVQI) is an objective method to quantify the severity of overall voice quality in concatenated continuous speech and sustained phonation segments. Recently, the AVQI was successfully modified to be more representative and ecologically valid, balancing its internal consistency through an equal proportion of the two speech types. The present investigation aims to explore its external validation in a large data set. An expert panel of 12 speech-language therapists rated the voice quality of 1058 concatenated voice samples varying from normophonia to severe dysphonia. Spearman rank-order correlation coefficients (r) were used to measure concurrent validity. The AVQI's diagnostic accuracy was evaluated with several estimates of its receiver operating characteristics (ROC). Eight of the 12 experts were retained on the basis of reliability criteria. A strong correlation was identified between the AVQI and the auditory-perceptual rating (r = 0.815, P < .001), indicating that 66.4% of the variation in the auditory-perceptual rating was explained by the AVQI. Additionally, the ROC results again showed the best diagnostic outcome at a threshold of AVQI = 2.43. This study highlights the external validation and diagnostic precision of the AVQI version 03.01 as a robust and ecologically valid measurement for objectifying voice quality.
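
A threshold such as AVQI = 2.43 is typically found by scanning the ROC curve for the point that best balances sensitivity and specificity. A minimal sketch with synthetic scores, assuming scikit-learn is available and using Youden's J (sensitivity + specificity - 1) as the balance criterion:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical AVQI scores: dysphonic voices tend to score higher than normophonic ones
labels = np.concatenate([np.zeros(500), np.ones(500)])
scores = np.concatenate([rng.normal(1.5, 0.8, 500), rng.normal(3.5, 1.0, 500)])

fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)  # index maximizing Youden's J
print(f"AUC = {roc_auc_score(labels, scores):.3f}")
print(f"cutoff = {thresholds[best]:.2f}, sens = {tpr[best]:.2f}, spec = {1 - fpr[best]:.2f}")
```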

  14. Audio-visual identification of place of articulation and voicing in white and babble noise.

    Science.gov (United States)

    Alm, Magnus; Behne, Dawn M; Wang, Yue; Eg, Ragnhild

    2009-07-01

    Research shows that noise and phonetic attributes influence the degree to which auditory and visual modalities are used in audio-visual speech perception (AVSP). Research has, however, mainly focused on white noise and single phonetic attributes, thus neglecting the more common babble noise and possible interactions between phonetic attributes. This study explores whether white and babble noise differentially influence AVSP and whether these differences depend on phonetic attributes. White and babble noise of 0 and -12 dB signal-to-noise ratio were added to congruent and incongruent audio-visual stop consonant-vowel stimuli. The audio (A) and video (V) of incongruent stimuli differed either in place of articulation (POA) or voicing. Responses from 15 young adults show that, compared to white noise, babble resulted in more audio responses for POA stimuli, and fewer for voicing stimuli. Voiced syllables received more audio responses than voiceless syllables. Results can be attributed to discrepancies in the acoustic spectra of both the noise and speech target. Voiced consonants may be more auditorily salient than voiceless consonants which are more spectrally similar to white noise. Visual cues contribute to identification of voicing, but only if the POA is visually salient and auditorily susceptible to the noise type.

  15. Comparison of Perceptual Signs of Voice before and after Vocal Hygiene Program in Adults with Dysphonia

    Directory of Open Access Journals (Sweden)

    Seyyedeh Maryam Khoddami

    2011-12-01

    Full Text Available Background and Aim: Vocal abuse and misuse are the most frequent causes of voice disorders, and some form of therapy is needed to stop or modify such behaviors. This research was performed to study the effectiveness of a vocal hygiene program on perceptual signs of voice in people with dysphonia. Methods: A vocal hygiene program was delivered to 8 adults with dysphonia over 6 weeks. First, the Consensus Auditory-Perceptual Evaluation of Voice was used to assess perceptual signs. The program was then delivered, and individuals were followed up at visits in the second and fourth weeks. In the last session, the perceptual assessment was repeated and the individuals' opinions were collected. Perceptual findings before and after the therapy were compared. Results: After the program, the mean score of the perceptual assessment decreased. The mean score of every perceptual sign differed significantly before and after the therapy (p≤0.0001). «Loudness» had the maximum score, and coordination between speech and respiration the minimum. All participants confirmed the efficiency of the therapy. Conclusion: The vocal hygiene program improved all perceptual signs of voice, although not equally. This conclusion is supported by both clinician-based and patient-based assessments. A vocal hygiene program is therefore a necessary part of comprehensive voice therapy, but is not by itself sufficient to resolve all voice problems.

  16. Impaired Feedforward Control and Enhanced Feedback Control of Speech in Patients with Cerebellar Degeneration.

    Science.gov (United States)

    Parrell, Benjamin; Agnew, Zarinah; Nagarajan, Srikantan; Houde, John; Ivry, Richard B

    2017-09-20

    The cerebellum has been hypothesized to form a crucial part of the speech motor control network. Evidence for this comes from patients with cerebellar damage, who exhibit a variety of speech deficits, as well as imaging studies showing cerebellar activation during speech production in healthy individuals. To date, the precise role of the cerebellum in speech motor control remains unclear, as it has been implicated in both anticipatory (feedforward) and reactive (feedback) control. Here, we assess both anticipatory and reactive aspects of speech motor control, comparing the performance of patients with cerebellar degeneration and matched controls. Experiment 1 tested feedforward control by examining speech adaptation across trials in response to a consistent perturbation of auditory feedback. Experiment 2 tested feedback control, examining online corrections in response to inconsistent perturbations of auditory feedback. Both male and female patients and controls were tested. The patients were impaired in adapting their feedforward control system relative to controls, exhibiting an attenuated anticipatory response to the perturbation. In contrast, the patients produced even larger compensatory responses than controls, suggesting an increased reliance on sensory feedback to guide speech articulation in this population. Together, these results suggest that the cerebellum is crucial for maintaining accurate feedforward control of speech, but relatively uninvolved in feedback control. SIGNIFICANCE STATEMENT Speech motor control is a complex activity that is thought to rely on both predictive, feedforward control as well as reactive, feedback control. While the cerebellum has been shown to be part of the speech motor control network, its functional contribution to feedback and feedforward control remains controversial. Here, we use real-time auditory perturbations of speech to show that patients with cerebellar degeneration are impaired in adapting feedforward control of speech.
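
The adaptation deficit in Experiment 1 is often described with a simple state-space learning model. The sketch below is an illustrative model, not the authors': the learning-rate parameter stands in for cerebellar feedforward updating, and lowering it reproduces an attenuated anticipatory response.

```python
import numpy as np

def simulate_adaptation(n_trials=50, perturbation=1.0, rate=0.2):
    """x[t+1] = x[t] + rate * (perturbation - x[t]); x is the feedforward correction."""
    x = np.zeros(n_trials)
    for t in range(n_trials - 1):
        error = perturbation - x[t]       # residual auditory error on trial t
        x[t + 1] = x[t] + rate * error    # trial-to-trial feedforward update
    return x

controls = simulate_adaptation(rate=0.2)   # faster feedforward learning
patients = simulate_adaptation(rate=0.05)  # attenuated adaptation
print(f"after 50 trials: controls {controls[-1]:.2f}, patients {patients[-1]:.2f}")
```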

  17. Voice disorders in mucosal leishmaniasis.

    Directory of Open Access Journals (Sweden)

    Ana Cristina Nunes Ruas

    Full Text Available INTRODUCTION: Leishmaniasis is considered one of the six most important infectious diseases because of its high detection coefficient and ability to produce deformities. In most cases, mucosal leishmaniasis (ML) occurs as a consequence of cutaneous leishmaniasis. If left untreated, mucosal lesions can leave sequelae, interfering with swallowing, breathing, voice, and speech and requiring rehabilitation. OBJECTIVE: To describe the anatomical characteristics and voice quality of ML patients. MATERIALS AND METHODS: A descriptive transversal study was conducted in a cohort of ML patients treated at the Laboratory for Leishmaniasis Surveillance of the Evandro Chagas National Institute of Infectious Diseases-Fiocruz, between 2010 and 2013. The patients underwent otorhinolaryngologic clinical examination by endoscopy of the upper airways and digestive tract, and speech-language assessment through directed anamnesis, auditory perception, phonation times, and vocal acoustic analysis. The variables of interest were epidemiologic (sex and age) and clinical (lesion location, associated symptoms, and voice quality). RESULTS: 26 patients under ML treatment and monitored by speech therapists were studied; 21 (81%) were male and five (19%) female, with ages ranging from 15 to 78 years (54.5 ± 15.0 years). The lesions were distributed across the following structures: 88.5% nasal, 38.5% oral, 34.6% pharyngeal, and 19.2% laryngeal, with some patients presenting lesions in more than one anatomic site. The main complaint was nasal obstruction (73.1%), followed by dysphonia (38.5%), odynophagia (30.8%), and dysphagia (26.9%). 23 patients (84.6%) presented voice quality perturbations. Dysphonia was significantly associated with lesions in the larynx, pharynx, and oral cavity. CONCLUSION: We observed that vocal quality perturbations are frequent in patients with mucosal leishmaniasis, even without laryngeal lesions; they are probably associated with disorders of some

  18. The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention.

    Science.gov (United States)

    Forte, Antonio Elia; Etard, Octave; Reichenbach, Tobias

    2017-10-10

    Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation occurs already in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. Here we develop a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has a high ecological validity. We employ this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and show that the brainstem response is consistently modulated by attention.

  19. Emotional expressions in voice and music: same code, same effect?

    Science.gov (United States)

    Escoffier, Nicolas; Zhong, Jidan; Schirmer, Annett; Qiu, Anqi

    2013-08-01

    Scholars have documented similarities in the way voice and music convey emotions. By using functional magnetic resonance imaging (fMRI) we explored whether these similarities imply overlapping processing substrates. We asked participants to trace changes in either the emotion or pitch of vocalizations and music using a joystick. Compared to music, vocalizations more strongly activated superior and middle temporal cortex, cuneus, and precuneus. However, despite these differences, overlapping rather than differing regions emerged when comparing emotion with pitch tracing for music and vocalizations, respectively. Relative to pitch tracing, emotion tracing activated medial superior frontal and anterior cingulate cortex regardless of stimulus type. Additionally, we observed emotion specific effects in primary and secondary auditory cortex as well as in medial frontal cortex that were comparable for voice and music. Together these results indicate that similar mechanisms support emotional inferences from vocalizations and music and that these mechanisms tap on a general system involved in social cognition.

  20. Effects of acoustic feedback training in elite-standard Para-Rowing.

    Science.gov (United States)

    Schaffert, Nina; Mattes, Klaus

    2015-01-01

    Assessment and feedback devices have been regularly used in technique training in high-performance sports. Biomechanical analysis is mainly visually based and so can exclude athletes with visual impairments. The aim of this study was to examine the effects of auditory feedback on mean boat speed during on-water training of visually impaired athletes. The German National Para-Rowing team (six athletes; mean ± s: age 34.8 ± 10.6 years, body mass 76.5 ± 13.5 kg, stature 179.3 ± 8.6 cm) participated in the study. Kinematics included boat acceleration and distance travelled, collected with Sofirow at two intensities of training. The boat acceleration-time traces were converted online into acoustic feedback and presented via speakers during rowing (sections with and without feedback presented alternately). Repeated-measures within-participant factorial ANOVA showed greater boat speed with acoustic feedback than at baseline (0.08 ± 0.01 m·s⁻¹). The time structure of rowing cycles was improved (extended time of positive acceleration). Questioning of the athletes showed acoustic feedback to be a supportive training aid, as it provided important functional information about the boat motion independent of vision. It gave visually impaired athletes access to biomechanical analysis via auditory information. The concept for adaptive athletes has been successfully integrated into the preparation for the Para-Rowing World Championships and Paralympics.
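
Converting an acceleration-time trace into sound is a parameter-mapping sonification. The sketch below is illustrative only; the actual Sofirow mapping is not described here, so the frequency range and mapping are assumptions.

```python
import numpy as np

def sonify(acc, sr=44100, seconds=2.0, f_lo=200.0, f_hi=800.0):
    """Map an acceleration trace onto instantaneous pitch and synthesize a tone."""
    n = int(sr * seconds)
    # Resample the trace to audio rate, then normalize to 0..1
    a = np.interp(np.linspace(0, len(acc) - 1, n), np.arange(len(acc)), acc)
    a = (a - a.min()) / (np.ptp(a) + 1e-12)
    freq = f_lo + (f_hi - f_lo) * a                # acceleration -> frequency
    phase = 2 * np.pi * np.cumsum(freq) / sr       # integrate frequency for synthesis
    return np.sin(phase).astype(np.float32)

acc = np.sin(np.linspace(0, 4 * np.pi, 200))       # stand-in rowing-cycle acceleration
audio = sonify(acc)                                # ready to write or play back
```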

  1. Changes in brain activity following intensive voice treatment in children with cerebral palsy.

    Science.gov (United States)

    Bakhtiari, Reyhaneh; Cummine, Jacqueline; Reed, Alesha; Fox, Cynthia M; Chouinard, Brea; Cribben, Ivor; Boliek, Carol A

    2017-09-01

    Eight children (3 females; 8-16 years) with motor speech disorders secondary to cerebral palsy underwent 4 weeks of an intensive neuroplasticity-principled voice treatment protocol, LSVT LOUD®, followed by a structured 12-week maintenance program. Children were asked to overtly produce phonation (ah) at conversational loudness, cued phonation at perceived twice-conversational loudness, a series of single words, and a prosodic imitation task while being scanned using fMRI, immediately pre- and post-treatment and 12 weeks following the maintenance program. Eight age- and sex-matched controls were scanned at the same three time points. Based on the speech and language literature, 16 bilateral regions of interest were selected a priori to detect potential neural changes following treatment. Reduced neural activity in the motor areas (decreased motor system effort) before and immediately after treatment, and increased activity in the anterior cingulate gyrus after treatment (increased contribution of decision-making processes), were observed in the group with cerebral palsy compared to the control group. Using graphical models, post-treatment changes in connectivity were observed between the left supramarginal gyrus and the right supramarginal gyrus and the left precentral gyrus for the children with cerebral palsy, suggesting that LSVT LOUD enhanced the contribution of the feedback system in the speech production network rather than a high reliance on the feedforward control system and the somatosensory target map for regulating vocal effort. Network pruning indicates greater processing efficiency and the recruitment of the auditory and somatosensory feedback control systems following intensive treatment. Hum Brain Mapp 38:4413-4429, 2017.

  2. Auditory-perceptual, computerized acoustic, and laryngological analysis of young smokers' and nonsmokers' voices

    Directory of Open Access Journals (Sweden)

    Daniele C. de Figueiredo

    2003-12-01

    Full Text Available AIM: To perform laryngological evaluation and auditory-perceptual and computerized acoustic analyses of the voices of young adult smokers and non-smokers without vocal complaints, to compare them, and to verify the incidence of laryngeal alterations. STUDY DESIGN: Case-control. MATERIAL AND METHOD: The voices of 80 individuals aged 20 to 40 years were analyzed. They were divided into four groups: 20 male smokers, 20 male non-smokers, 20 female smokers, and 20 female non-smokers. The study involved laryngoscopy, performed and interpreted by an otolaryngologist, and cassette tape recordings of the sustained vowels /a/, /m/, /i/ and /u/, counting from 1 to 20, naming the days of the week and the months of the year, and singing the song "Parabéns a você" (the Brazilian "Happy Birthday"). The recordings were edited for subsequent spectrographic analysis and auditory-perceptual evaluation by four raters experienced in voice. RESULTS: The analysis showed a slight decrease in the fundamental frequency of the voice in smokers of both sexes, as well as a higher incidence of hoarseness and laryngeal alterations among the smokers.

  3. Extracting the Neural Representation of Tone Onsets for Separate Voices of Ensemble Music Using Multivariate EEG Analysis

    DEFF Research Database (Denmark)

    Sturm, Irene; Treder, Matthias S.; Miklody, Daniel

    2015-01-01

    responses to tone onsets, such as N1/P2 ERP components. Music clips (resembling minimalistic electro-pop) were presented to 11 subjects, either in an ensemble version (drums, bass, keyboard) or in the corresponding three solo versions. For each instrument we train a spatio-temporal regression filter...... at the level of early auditory ERPs parallels the perceptual segregation of multi-voiced music....

  4. Quantifying stimulus-response rehabilitation protocols by auditory feedback in Parkinson's disease gait pattern

    Science.gov (United States)

    Pineda, Gustavo; Atehortúa, Angélica; Iregui, Marcela; García-Arteaga, Juan D.; Romero, Eduardo

    2017-11-01

    External auditory cues stimulate motor-related areas of the brain, activating motor pathways parallel to the basal ganglia circuits and providing a temporal pattern for gait. In effect, patients may re-learn motor skills mediated by compensatory neuroplasticity mechanisms. However, long-term functional gains depend on the nature of the pathology, follow-up is usually limited, and reinforcement by healthcare professionals is crucial. Aiming to cope with these challenges, several studies and device implementations provide auditory or visual stimulation to improve the Parkinsonian gait pattern, inside and outside clinical scenarios. The current work presents a semiautomated strategy for spatiotemporal feature extraction to study the relations between auditory temporal stimulation and the spatiotemporal gait response. A protocol for auditory stimulation was built to evaluate how well the strategy integrates into clinical practice. The method was evaluated in a transversal measurement with an exploratory group of people with Parkinson's (n = 12, in stages 1, 2 and 3) and control subjects (n = 6). The results showed a strong linear relation between auditory stimulation and cadence response in control subjects (R = 0.98 ± 0.008) and in PD subjects in stage 2 (R = 0.95 ± 0.03) and stage 3 (R = 0.89 ± 0.05). Normalized step length showed a variable response between low and high gait velocity (R ranging from 0.2 to 0.97). The correlation between normalized mean velocity and the stimulus was strong in PD stage 2 (R > 0.96), PD stage 3 (R > 0.84), and controls (R > 0.91) for all experimental conditions. Among participants, the largest variation from baseline was found in PD subjects in stage 3 (53.61 ± 39.2 steps/min, 0.12 ± 0.06 in step length, and 0.33 ± 0.16 in mean velocity); in this group these values were higher than their own baseline. These variations are related to the direct effect of metronome frequency on cadence and velocity. The variation of step length involves different regulation strategies and
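
The reported stimulus-response relations are linear correlations between cueing frequency and each gait parameter. A minimal sketch with made-up numbers, assuming SciPy:

```python
from scipy.stats import pearsonr

# Hypothetical cueing frequencies (steps/min) and measured cadences for one subject
stimulus = [80, 90, 100, 110, 120]
cadence = [82, 91, 99, 112, 119]

r, p = pearsonr(stimulus, cadence)
print(f"R = {r:.2f}, p = {p:.3f}")  # strong linear stimulus-response coupling
```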

  5. Translation and adaptation of functional auditory performance indicators (FAPI

    Directory of Open Access Journals (Sweden)

    Karina Ferreira

    2011-12-01

    Full Text Available Work with deaf children has gained new attention since the expectation and goal of therapy has expanded to language development and subsequent language learning. Many clinical tests have been developed to evaluate speech sound perception in young children, responding to the need for accurate assessment of the hearing skills that develop with the use of individual hearing aids or cochlear implants. These tests also allow the evaluation of the rehabilitation program. However, few of these tests are available in Portuguese. Evaluation with the Functional Auditory Performance Indicators (FAPI) generates a child's functional auditory skills profile, which lists auditory skills in an integrated and hierarchical order. It has seven hierarchical categories: sound awareness, meaningful sound, auditory feedback, sound source localization, auditory discrimination, short-term auditory memory, and linguistic auditory processing. FAPI evaluation allows the therapist to map the child's hearing performance profile, determine targets for increasing hearing abilities, and develop an effective therapeutic plan. Objective: Since the FAPI is an American test, the inventory was adapted for application in the Brazilian population. Material and Methods: The translation followed the steps of translation and back-translation, and reproducibility was evaluated. Four translated versions (two original and two back-translated) were compared, and revisions were made to ensure language adaptation and grammatical and idiomatic equivalence. Results: The inventory was duly translated and adapted. Conclusion: Further studies on the application of the translated FAPI are necessary to make the test practicable for Brazilian clinical use.

  6. Students' Perceived Preference for Visual and Auditory Assessment with E-Handwritten Feedback

    Science.gov (United States)

    Crews, Tena B.; Wilkinson, Kelly

    2010-01-01

    Undergraduate business communication students were surveyed to determine their perceived most effective method of assessment on writing assignments. The results indicated students' preference for a process that incorporates visual, auditory, and e-handwritten presentation via a tablet PC. Students also identified this assessment process would…

  7. The effect of escalating feedback on the acquisition of psychomotor skills for laparoscopy.

    Science.gov (United States)

    Van Sickle, K R; Gallagher, A G; Smith, C D

    2007-02-01

    In the acquisition of new skills that are difficult to master, such as those required for laparoscopy, feedback is a crucial component of the learning experience. Optimally, feedback should accurately reflect the task performance to be improved and be proximal to the training experience. In surgery, however, feedback typically is in vivo. The development of virtual reality training systems currently offers new training options. This study investigated the effect of feedback type and quality on laparoscopic skills acquisition. For this study, 32 laparoscopic novices were prospectively randomized into four training conditions, with 8 in each group. Group 1 (control) had no feedback. Group 2 (buzzer) had audio feedback when the edges were touched. Group 3 (voiced error) had an examiner voicing the word "error" each time the walls were touched. Group 4 (both) received both the audio buzzer and "error" voiced by the examiner. All the subjects performed a maze-tracking task with a laparoscopic stylus inserted through a 5-mm port to simulate the fulcrum effect in minimally invasive surgery (MIS). A computer connected to the stylus scored an error each time the edge of the maze was touched, and the subjects were made aware of the error in the aforementioned manner. Ten 2-min trials were performed by the subjects while viewing a monitor. At the conclusion of training, all the subjects completed a 2-min trial of a simple laparoscopic cutting task, with the number of correct and incorrect incisions recorded. Group 4 (both) made significantly more correct incisions than the other three groups (F = 12.13; df = 3, 28; p < 0.001) and also made significantly fewer errors or incorrect incisions (F = 14.4; p < 0.0001). Group 4 also made three times more correct incisions and 7.4 times fewer incorrect incisions than group 1 (control). The type and quality of feedback during psychomotor skill acquisition for MIS have a large effect on the strength of skills generalization to a simple laparoscopic cutting task.

  8. A Qualitative Analysis of Student Pharmacists’ Response after an Auditory Hallucination Simulation

    Directory of Open Access Journals (Sweden)

    Genevieve L Ness

    2017-08-01

    Full Text Available Objectives: The goal of this research was to evaluate pharmacy students' experiences and reactions when exposed to an auditory hallucination simulator. Methods: A convenience sample of 16 pharmacy students enrolled in the Advanced Psychiatry Elective at a private, faith-based university in the southeastern United States was selected. Students participated in an activity in which they listened to an auditory hallucination simulator from their personal laptop computers and completed a variety of tasks. Following the conclusion of the simulation, students composed a reflection guided by a five-question prompt. Qualitative analysis of the reflections was then completed to identify and categorize overarching themes. Results: The overarching themes identified included: (1) students mentioned strategies they used to overcome the distraction; (2) students discussed how the voices affected their ability to complete the activities; (3) students discussed the mental/physical toll they experienced; (4) students identified methods to assist patients with schizophrenia; (5) students mentioned an increase in their empathy for patients; (6) students reported their reactions to the voices; (7) students recognized how schizophrenia could affect the lives of these patients; and (8) students expressed how their initial expectations and reactions to the voices changed throughout the course of the simulation. Overall, the use of this simulator as a teaching aid was well received by students. Summary: Pharmacy students were impacted by the hallucination simulator and expressed an increased awareness of the challenges faced by these patients on a daily basis.

  9. Anti-voice adaptation suggests prototype-based coding of voice identity

    Directory of Open Access Journals (Sweden)

    Marianne Latinus

    2011-07-01

    Full Text Available We used perceptual aftereffects induced by adaptation with anti-voice stimuli to investigate voice identity representations. Participants learned a set of voices and were then tested on a voice identification task with vowel stimuli morphed between identities, after different conditions of adaptation. In Experiment 1, participants chose the identity opposite to the adapting anti-voice significantly more often than the other two identities (e.g., after being adapted to anti-A, they identified the average voice as A). In Experiment 2, participants showed a bias for identities opposite to the adaptor specifically for anti-voice adaptors, but not for non-anti-voice adaptors. These results are strikingly similar to adaptation aftereffects observed for facial identity. They are compatible with a representation of individual voice identities in a multidimensional perceptual voice space referenced on a voice prototype.
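
In a prototype-referenced voice space, an anti-voice is the reflection of an identity through the average voice, and morphs are linear interpolations. The sketch below illustrates that geometry over a hypothetical feature vector; the actual stimuli were built with dedicated voice-morphing software, not this arithmetic.

```python
import numpy as np

prototype = np.array([200.0, 1.2, 0.5])   # hypothetical average voice (e.g., F0 and two other features)
voice_a = np.array([230.0, 1.0, 0.8])     # learned identity A in the same feature space

anti_a = 2 * prototype - voice_a          # reflection of A through the prototype

def morph(v1, v2, alpha):
    """Linear morph: alpha=0 -> v1, alpha=1 -> v2."""
    return (1 - alpha) * v1 + alpha * v2

# The midpoint of the anti-A-to-A continuum recovers the prototype itself
print(morph(anti_a, voice_a, 0.5))
```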

  10. Temporal Sequence of Visuo-Auditory Interaction in Multiple Areas of the Guinea Pig Visual Cortex

    Science.gov (United States)

    Nishimura, Masataka; Song, Wen-Jie

    2012-01-01

    Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences can be generated in V1, either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activity was recorded using optical imaging at high spatiotemporal resolution from multiple areas of the guinea pig visual cortex in response to visual and/or acoustic stimulation. Visuo-auditory interactions were evaluated according to differences between responses evoked by combined auditory and visual stimulation and the sum of responses evoked by separate visual and auditory stimulation. Simultaneous presentation of visual and acoustic stimulation resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction. PMID:23029483

  11. Comparison between treadmill training with rhythmic auditory stimulation and ground walking with rhythmic auditory stimulation on gait ability in chronic stroke patients: A pilot study.

    Science.gov (United States)

    Park, Jin; Park, So-yeon; Kim, Yong-wook; Woo, Youngkeun

    2015-01-01

    Treadmill training is generally a very effective intervention, and rhythmic auditory stimulation is designed to provide feedback during gait training in stroke patients. The purpose of this study was to compare gait abilities in chronic stroke patients following either treadmill walking training with rhythmic auditory stimulation (TRAS) or overground walking training with rhythmic auditory stimulation (ORAS). Nineteen subjects were divided into two groups: a TRAS group (9 subjects) and an ORAS group (10 subjects). Temporal and spatial gait parameters and motor recovery ability were measured before and after the training period. Gait ability was measured by the Biodex Gait Trainer treadmill system, the Timed Up and Go test (TUG), 6-meter walking distance (6MWD), and the Functional Gait Assessment (FGA). After the training period, the TRAS group showed a significant improvement in walking speed, step cycle, step length of the unaffected limb, coefficient of variation, 6MWD, and FGA when compared to the ORAS group (p < 0.05). Treadmill walking training with rhythmic auditory stimulation may be useful for the rehabilitation of patients with chronic stroke.

  12. Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.

    Science.gov (United States)

    Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C

    2015-11-04

    Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition
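
FM tones that mimic prosodic contours can be synthesized by modulating a carrier around a base pitch (F0M) with a chosen excursion (F0SD). A minimal sketch with illustrative parameter values, not the study's stimulus specifications:

```python
import numpy as np

def fm_tone(f0m=200.0, f0sd=20.0, mod_hz=3.0, seconds=0.5, sr=44100):
    """Tone whose pitch oscillates around f0m with excursion set by f0sd."""
    t = np.arange(int(sr * seconds)) / sr
    inst_freq = f0m + f0sd * np.sin(2 * np.pi * mod_hz * t)  # prosody-like contour
    phase = 2 * np.pi * np.cumsum(inst_freq) / sr            # integrate instantaneous frequency
    return np.sin(phase).astype(np.float32)

flat = fm_tone(f0sd=0.0)       # monotone control (no pitch variability)
prosodic = fm_tone(f0sd=40.0)  # larger pitch variability (higher F0SD)
```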

  13. Validation of the Cepstral Spectral Index of Dysphonia (CSID) as a Screening Tool for Voice Disorders: Development of Clinical Cutoff Scores.

    Science.gov (United States)

    Awan, Shaheen N; Roy, Nelson; Zhang, Dong; Cohen, Seth M

    2016-03-01

    The purposes of this study were to (1) evaluate the performance of the Cepstral Spectral Index of Dysphonia (CSID; a multivariate estimate of dysphonia severity) as a potential screening tool for voice disorder identification and (2) identify potential clinical cutoff scores to classify voice-disordered cases versus controls. Subjects were 332 men and women (116 men, 216 women), comprising subjects who presented to a physician with a voice-related complaint and a group of non-voice-related control subjects. Voice-disordered cases versus controls were initially defined via three reference standards: (1) auditory-perceptual judgment (dysphonia +/-); (2) Voice Handicap Index (VHI) score (VHI +/-); and (3) laryngoscopic description (laryngoscopic +/-). Speech samples were analyzed using the Analysis of Dysphonia in Speech and Voice program. Cepstral and spectral measures were combined into a CSID multivariate formula that estimated dysphonia severity for Rainbow Passage samples (the CSIDR). The ability of the CSIDR to accurately classify cases versus controls in relation to each reference standard was evaluated via a combination of logistic regression and receiver operating characteristic (ROC) analyses, with discrimination between cases and controls represented by the area under the ROC curve (AUC). ROC classification of dysphonia-positive cases versus controls resulted in a strong AUC = 0.85. A CSIDR cutoff of ≈24 achieved the best balance between sensitivity and specificity, whereas a more liberal cutoff score of ≈19 resulted in higher sensitivity while maintaining respectable specificity, which may be preferred for screening purposes. Weaker but adequate AUCs of 0.75 and 0.73 were observed for the classification of VHI-positive and laryngoscopic-positive cases versus controls, respectively. Logistic regression analyses indicated that subject age may be a significant covariate in the discrimination of dysphonia-positive and VHI-positive cases.
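
Given CSID-style severity scores, the trade-off between the two reported cutoffs is a direct calculation. A minimal sketch in which synthetic scores stand in for CSIDR values:

```python
import numpy as np

rng = np.random.default_rng(1)
controls = rng.normal(15, 6, 200)  # hypothetical CSIDR scores, non-voice controls
cases = rng.normal(32, 10, 200)    # hypothetical scores, dysphonia-positive cases

for cutoff in (19, 24):
    sens = np.mean(cases >= cutoff)      # cases correctly flagged
    spec = np.mean(controls < cutoff)    # controls correctly passed
    print(f"cutoff {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
# The lower cutoff trades specificity for sensitivity, as a screening tool prefers.
```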

  14. Effects of tailoring ingredients in auditory persuasive health messages on fruit and vegetable intake

    NARCIS (Netherlands)

    Elbert, Sarah P.; Dijkstra, Arie; Rozema, Andrea

    2017-01-01

    Objective: Health messages can be tailored by applying different tailoring ingredients, including personalisation, feedback, and adaptation. This experiment investigated the separate effects of these tailoring ingredients on behaviour in auditory health persuasion. Furthermore, the moderating

  15. Feedforward and Feedback Control in Apraxia of Speech: Effects of Noise Masking on Vowel Production

    Science.gov (United States)

    Maas, Edwin; Mailend, Marja-Liisa; Guenther, Frank H.

    2015-01-01

    Purpose: This study was designed to test two hypotheses about apraxia of speech (AOS) derived from the Directions Into Velocities of Articulators (DIVA) model (Guenther et al., 2006): the feedforward system deficit hypothesis and the feedback system deficit hypothesis. Method: The authors used noise masking to minimize auditory feedback during…

  16. Audio Feedback to Physiotherapy Students for Viva Voce: How Effective Is "The Living Voice"?

    Science.gov (United States)

    Munro, Wendy; Hollingworth, Linda

    2014-01-01

    Assessment and feedback remains one of the categories that students are least satisfied with within the United Kingdom National Student Survey. The Student Charter promotes the use of various formats of feedback to enhance student learning. This study evaluates the use of audio MP3 as an alternative feedback mechanism to written feedback for…

  17. Voice deviation, dysphonia risk screening and quality of life in individuals with various laryngeal diagnoses

    Science.gov (United States)

    Nemr, Katia; Cota, Ariane; Tsuji, Domingos; Simões-Zenari, Marcia

    2018-01-01

    OBJECTIVES: To characterize the voice quality of individuals with dysphonia and to investigate possible correlations between the degree of voice deviation (D) and scores on the Dysphonia Risk Screening Protocol-General (DRSP), the Voice-Related Quality of Life (V-RQOL) measure, and the Voice Handicap Index, short version (VHI-10). METHODS: The sample included 200 individuals with dysphonia. Following laryngoscopy, the participants completed the DRSP, the V-RQOL measure, and the VHI-10; subsequently, voice samples were recorded for auditory-perceptual and acoustic analyses. The correlation between the score for each questionnaire and the overall degree of vocal deviation was analyzed, as was the correlation among the scores for the three questionnaires. RESULTS: Most of the participants (62%) were female, and the mean age of the sample was 49 years. The most common laryngeal diagnosis was organic dysphonia (79.5%). The mean D was 59.54, with roughness the predominant quality (mean 54.74). All the participants exhibited at least one abnormal acoustic aspect. The mean questionnaire scores were DRSP, 44.7; V-RQOL, 57.1; and VHI-10, 16. An inverse correlation was found between the V-RQOL score and D, whereas positive correlations were found between both the VHI-10 and DRSP scores and D. CONCLUSION: A predominance of adult women, organic dysphonia, moderate voice deviation, high dysphonia risk, and low to moderate quality-of-life impact characterized our sample. There were correlations between the scores of each of the three questionnaires and the degree of voice deviation. It should be noted that the DRSP tracked the degree of dysphonia severity, which reinforces its applicability for patients with different laryngeal diagnoses. PMID:29538494

  18. Development of kinesthetic-motor and auditory-motor representations in school-aged children.

    Science.gov (United States)

    Kagerer, Florian A; Clark, Jane E

    2015-07-01

    In two experiments using a center-out task, we investigated kinesthetic-motor and auditory-motor integrations in 5- to 12-year-old children and young adults. In experiment 1, participants moved a pen on a digitizing tablet from a starting position to one of three targets (visuo-motor condition), and then to one of four targets without visual feedback of the movement. In both conditions, we found that with increasing age, the children moved faster and straighter, and became less variable in their feedforward control. Higher control demands for movements toward the contralateral side were reflected in longer movement times and decreased spatial accuracy across all age groups. When feedforward control relies predominantly on kinesthesia, 7- to 10-year-old children were more variable, indicating difficulties in switching between feedforward and feedback control efficiently during that age. An inverse age progression was found for directional endpoint error; larger errors increasing with age likely reflect stronger functional lateralization for the dominant hand. In experiment 2, the same visuo-motor condition was followed by an auditory-motor condition in which participants had to move to acoustic targets (either white band or one-third octave noise). Since in the latter directional cues come exclusively from transcallosally mediated interaural time differences, we hypothesized that auditory-motor representations would show age effects. The results did not show a clear age effect, suggesting that corpus callosum functionality is sufficient in children to allow them to form accurate auditory-motor maps already at a young age.

  19. Auditory cortical function during verbal episodic memory encoding in Alzheimer's disease.

    Science.gov (United States)

    Dhanjal, Novraj S; Warren, Jane E; Patel, Maneesh C; Wise, Richard J S

    2013-02-01

    Episodic memory encoding of a verbal message depends upon initial registration, which requires sustained auditory attention followed by deep semantic processing of the message. Motivated by previous data demonstrating modulation of auditory cortical activity during sustained attention to auditory stimuli, we investigated the response of the human auditory cortex during encoding of sentences to episodic memory. Subsequently, we investigated this response in patients with mild cognitive impairment (MCI) and probable Alzheimer's disease (pAD). Using functional magnetic resonance imaging, 31 healthy participants were studied. The response in 18 MCI and 18 pAD patients was then determined, and compared to 18 matched healthy controls. Subjects heard factual sentences, and subsequent retrieval performance indicated successful registration and episodic encoding. The healthy subjects demonstrated that suppression of auditory cortical responses was related to greater success in encoding heard sentences; and that this was also associated with greater activity in the semantic system. In contrast, there was reduced auditory cortical suppression in patients with MCI, and absence of suppression in pAD. Administration of a central cholinesterase inhibitor (ChI) partially restored the suppression in patients with pAD, and this was associated with an improvement in verbal memory. Verbal episodic memory impairment in AD is associated with altered auditory cortical function, reversible with a ChI. Although these results may indicate the direct influence of pathology in auditory cortex, they are also likely to indicate a partially reversible impairment of feedback from neocortical systems responsible for sustained attention and semantic processing.

  20. Haptic Feedback for Enhancing Realism of Walking Simulations

    DEFF Research Database (Denmark)

    Turchet, Luca; Burelli, Paolo; Serafin, Stefania

    2013-01-01

    system. While during the use of the interactive system subjects physically walked, during the use of the non-interactive system the locomotion was simulated while subjects were sitting on a chair. In both the configurations subjects were exposed to auditory and audio-visual stimuli presented...... with and without the haptic feedback. Results of the experiments provide a clear preference towards the simulations enhanced with haptic feedback showing that the haptic channel can lead to more realistic experiences in both interactive and non-interactive configurations. The majority of subjects clearly...... appreciated the added feedback. However, some subjects found the added feedback disturbing and annoying. This might be due on one hand to the limits of the haptic simulation and on the other hand to the different individual desire to be involved in the simulations. Our findings can be applied to the context...

  1. Can you hear me now? Musical training shapes functional brain networks for selective auditory attention and hearing speech in noise

    Directory of Open Access Journals (Sweden)

    Dana L Strait

    2011-06-01

    Full Text Available Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker’s voice amidst others. Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and nonmusicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not nonmusicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work from our laboratory documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development of language-related skills, musical training may aid in the prevention, habilitation and remediation of children with a wide range of attention-based language and learning impairments.

  2. The Voice of Anger: Oscillatory EEG Responses to Emotional Prosody.

    Directory of Open Access Journals (Sweden)

    Renata Del Giudice

    Full Text Available Emotionally relevant stimuli, and anger in particular, are, due to their evolutionary relevance, often processed automatically and able to modulate attention independent of conscious access. Here, we tested whether attention allocation is enhanced when auditory stimuli are uttered by an angry voice. We recorded EEG and presented healthy individuals with a passive condition in which unfamiliar names, as well as the subject's own name, were spoken with both an angry and a neutral prosody. The active condition, instead, required participants to actively count one of the presented (angry) names. Results revealed that in the passive condition the angry prosody elicited only slightly stronger delta synchronization compared with a neutral voice. In the active condition, the attended (angry) target was related to enhanced delta/theta synchronization as well as alpha desynchronization, suggesting enhanced allocation of attention and utilization of working memory resources. Altogether, the current results are in line with previous findings and highlight that attention orientation can be systematically related to specific oscillatory brain responses. Potential applications include the assessment of non-communicative clinical groups such as post-comatose patients.
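
Delta/theta synchronization and alpha desynchronization are conventionally quantified as power in fixed frequency bands. A minimal sketch of Welch band power for a single channel, assuming SciPy; band limits follow the common 1-4/4-8/8-12 Hz convention.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, sr, lo, hi):
    """Integrate the Welch power spectral density over [lo, hi) Hz."""
    freqs, psd = welch(eeg, fs=sr, nperseg=sr * 2)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

sr = 250
eeg = np.random.randn(sr * 10)  # stand-in for a 10 s single-channel EEG segment
for name, lo, hi in [("delta", 1, 4), ("theta", 4, 8), ("alpha", 8, 12)]:
    print(f"{name}: {band_power(eeg, sr, lo, hi):.3f}")
```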

  3. Acoustic markers to differentiate gender in prepubescent children's speaking and singing voice.

    Science.gov (United States)

    Guzman, Marco; Muñoz, Daniel; Vivero, Martin; Marín, Natalia; Ramírez, Mirta; Rivera, María Trinidad; Vidal, Carla; Gerhard, Julia; González, Catalina

    2014-10-01

    This investigation sought to determine whether any acoustic variable can objectively differentiate gender in children with normal voices. A total of 30 children, 15 boys and 15 girls, with perceptually normal voices were examined. They were between 7 and 10 years old (mean: 8.1, SD: 0.7 years). Subjects were required to perform the following phonatory tasks: (1) phonate the sustained vowels [a:], [i:], and [u:]; (2) read a phonetically balanced text; and (3) sing a song. Acoustic analysis included long-term average spectrum (LTAS), fundamental frequency (F0), speaking fundamental frequency (SFF), equivalent continuous sound level (Leq), linear predictive coding (LPC) to obtain formant frequencies, perturbation measures, harmonics-to-noise ratio (HNR), and cepstral peak prominence (CPP). Auditory-perceptual analysis was performed by four blinded judges to determine gender. No significant gender-related differences were found for most acoustic variables. Perceptual assessment showed good intra- and inter-rater reliability for gender. Cepstrum for [a:], alpha ratio in text, shimmer for [i:], F3 in [a:], and F3 in [i:] were the parameters that composed the multivariate logistic regression model that best differentiated male and female children's voices. Since perceptual assessment reliably detected gender, it is likely that other acoustic markers (not evaluated in the present study) are able to mark gender differences more clearly. For example, gender-specific patterns of intonation may be a more accurate feature for differentiating gender in children's voices.
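
LTAS-based measures such as the alpha ratio compare spectral energy above and below 1 kHz. A minimal sketch, assuming the common 50 Hz-1 kHz versus 1-5 kHz band definition; exact band limits vary across studies.

```python
import numpy as np
from scipy.signal import welch

def alpha_ratio(x, sr):
    """Energy ratio (1-5 kHz) / (50 Hz-1 kHz), in dB, from a long-term average spectrum."""
    freqs, psd = welch(x, fs=sr, nperseg=4096)
    df = freqs[1] - freqs[0]
    low = psd[(freqs >= 50) & (freqs < 1000)].sum() * df
    high = psd[(freqs >= 1000) & (freqs < 5000)].sum() * df
    return 10 * np.log10(high / low)

sr = 44100
t = np.arange(sr * 2) / sr
# Synthetic harmonic signal standing in for a recorded vowel
x = sum(np.sin(2 * np.pi * f * t) / k for k, f in enumerate([220, 440, 880, 1760, 3520], start=1))
print(f"alpha ratio = {alpha_ratio(x, sr):.1f} dB")
```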

  4. Brain 'talks over' boring quotes: top-down activation of voice-selective areas while listening to monotonous direct speech quotations.

    Science.gov (United States)

    Yao, Bo; Belin, Pascal; Scheepers, Christoph

    2012-04-15

    In human communication, direct speech (e.g., Mary said, "I'm hungry") is perceived as more vivid than indirect speech (e.g., Mary said that she was hungry). This vividness distinction has previously been found to underlie silent reading of quotations: Using functional magnetic resonance imaging (fMRI), we found that direct speech elicited higher brain activity in the temporal voice areas (TVA) of the auditory cortex than indirect speech, consistent with an "inner voice" experience in reading direct speech. Here we show that listening to monotonously spoken direct versus indirect speech quotations also engenders differential TVA activity. This suggests that individuals engage in top-down simulations or imagery of enriched supra-segmental acoustic representations while listening to monotonous direct speech. The findings shed new light on the acoustic nature of the "inner voice" in understanding direct speech.

  5. I feel your voice. Cultural differences in the multisensory perception of emotion.

    Science.gov (United States)

    Tanaka, Akihiro; Koizumi, Ai; Imai, Hisato; Hiramatsu, Saori; Hiramoto, Eriko; de Gelder, Beatrice

    2010-09-01

    Cultural differences in emotion perception have been reported mainly for facial expressions and to a lesser extent for vocal expressions. However, the way in which the perceiver combines auditory and visual cues may itself be subject to cultural variability. Our study investigated cultural differences between Japanese and Dutch participants in the multisensory perception of emotion. A face and a voice, expressing either congruent or incongruent emotions, were presented on each trial. Participants were instructed to judge the emotion expressed in one of the two sources. The effect of to-be-ignored voice information on facial judgments was larger in Japanese than in Dutch participants, whereas the effect of to-be-ignored face information on vocal judgments was smaller in Japanese than in Dutch participants. This result indicates that Japanese people are more attuned than Dutch people to vocal processing in the multisensory perception of emotion. Our findings provide the first evidence that multisensory integration of affective information is modulated by perceivers' cultural background.

  6. Auditory evoked potentials to abrupt pitch and timbre change of complex tones: electrophysiological evidence of 'streaming'?

    Science.gov (United States)

    Jones, S J; Longe, O; Vaz Pato, M

    1998-03-01

    Examination of the cortical auditory evoked potentials to complex tones changing in pitch and timbre suggests a useful new method for investigating higher auditory processes, in particular those concerned with 'streaming' and auditory object formation. The main conclusions were: (i) the N1 evoked by a sudden change in pitch or timbre was more posteriorly distributed than the N1 at the onset of the tone, indicating at least partial segregation of the neuronal populations responsive to sound onset and spectral change; (ii) the T-complex was consistently larger over the right hemisphere, consistent with clinical and PET evidence for particular involvement of the right temporal lobe in the processing of timbral and musical material; (iii) responses to timbral change were relatively unaffected by increasing the rate of interspersed changes in pitch, suggesting a mechanism for detecting the onset of a new voice in a constantly modulated sound stream; (iv) responses to onset, offset and pitch change of complex tones were relatively unaffected by interfering tones when the latter were of a different timbre, suggesting these responses must be generated subsequent to auditory stream segregation.

  7. Exploring the Impact of Role-Playing on Peer Feedback in an Online Case-Based Learning Activity

    Science.gov (United States)

    Ching, Yu-Hui

    2014-01-01

    This study explored the impact of role-playing on the quality of peer feedback and learners' perception of this strategy in a case-based learning activity with VoiceThread in an online course. The findings revealed a potential positive impact of role-playing on learners' generation of constructive feedback, as role-playing was associated with higher…

  8. Comparison between auditory-perceptual and acoustic analyses in dysarthrias.

    Directory of Open Access Journals (Sweden)

    Karin Zazo Ortiz

    2008-01-01

    Full Text Available PURPOSE: To compare data from auditory-perceptual (subjective) analysis with data from acoustic (objective) analysis. METHODS: Forty-two dysarthric patients with well-defined neurological diagnoses, 21 male and 21 female, underwent auditory-perceptual and acoustic analysis. All patients had their voices recorded. The auditory-perceptual analysis evaluated type of voice, resonance (balanced, hypernasal, or laryngopharyngeal), loudness (adequate, decreased, or increased), pitch (adequate, low, or high), vocal attack (isochronic, hard, or breathy), and stability (stable or unstable). For the acoustic analysis, the GRAM 5.1.7 program was used to examine voice quality and the behavior of the harmonics in the spectrogram, and the Vox Metria program was used to obtain the objective measures. RESULTS: Most comparisons between the auditory-perceptual and acoustic findings were not significant, that is, there was no direct relationship between the subjective findings and the objective data. Statistically significant differences were found only between breathy voice and altered shimmer (p=0.048) and between the definition of the harmonics and breathy voice (p=0.040); thus, a correlation was observed between the presence of noise in the emission and breathiness. CONCLUSIONS: The auditory-perceptual and acoustic analyses provided different but complementary data, jointly assisting the clinical diagnosis of the dysarthrias.

  9. Syntactic processing in music and language: Effects of interrupting auditory streams with alternating timbres.

    Science.gov (United States)

    Fiveash, Anna; Thompson, William Forde; Badcock, Nicholas A; McArthur, Genevieve

    2018-07-01

    Music and language both rely on the processing of spectral (pitch, timbre) and temporal (rhythm) information to create structure and meaning from incoming auditory streams. Behavioral results have shown that interrupting a melodic stream with unexpected changes in timbre leads to reduced syntactic processing. Such findings suggest that syntactic processing is conditional on successful streaming of incoming sequential information. The current study used event-related potentials (ERPs) to investigate whether (1) the effect of alternating timbres on syntactic processing is reflected in a reduced brain response to syntactic violations, and (2) the phenomenon is similar for music and language. Participants listened to melodies and sentences with either one timbre (piano or one voice) or three timbres (piano, guitar, and vibraphone, or three different voices). Half the stimuli contained syntactic violations: an out-of-key note in the melodies, and a phrase-structure violation in the sentences. We found smaller ERPs to syntactic violations in music in the three-timbre compared to the one-timbre condition, reflected in a reduced early right anterior negativity (ERAN). A similar but non-significant pattern was observed for language stimuli in both the early left anterior negativity (ELAN) and the left anterior negativity (LAN) ERPs. The results suggest that disruptions to auditory streaming may interfere with syntactic processing, especially for melodic sequences. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Deep transcranial magnetic stimulation for the treatment of auditory hallucinations: a preliminary open-label study.

    Science.gov (United States)

    Rosenberg, Oded; Roth, Yiftach; Kotler, Moshe; Zangen, Abraham; Dannon, Pinhas

    2011-02-09

    Schizophrenia is a chronic and disabling disease that presents with delusions and hallucinations. Auditory hallucinations are usually expressed as voices speaking to or about the patient. Previous studies have examined the effect of repetitive transcranial magnetic stimulation (TMS) over the temporoparietal cortex on auditory hallucinations in schizophrenic patients. Our aim was to explore the potential effect of deep TMS, using the H coil over the same brain region, on auditory hallucinations. Eight schizophrenic patients with refractory auditory hallucinations were recruited, mainly from Beer Ya'akov Mental Health Institution (Tel Aviv University, Israel) ambulatory clinics, as well as from other hospitals' outpatient populations. Low-frequency deep TMS was applied for 10 min (600 pulses per session) to the left temporoparietal cortex for either 10 or 20 sessions. Deep TMS was applied using Brainsway's H1 coil apparatus. Patients were evaluated using the Auditory Hallucinations Rating Scale (AHRS) as well as the Scale for the Assessment of Positive Symptoms (SAPS), the Clinical Global Impressions (CGI) scale, and the Scale for the Assessment of Negative Symptoms (SANS). This preliminary study demonstrated a significant improvement in AHRS score (an average reduction of 31.7% ± 32.2%) and, to a lesser extent, improvement in SAPS results (an average reduction of 16.5% ± 20.3%). In this study, we have demonstrated the potential of deep TMS over the temporoparietal cortex as an add-on treatment for chronic auditory hallucinations in schizophrenic patients. Studies with larger samples in a double-blind sham-controlled design are now being performed to evaluate the effectiveness of deep TMS treatment for auditory hallucinations. This trial is registered with clinicaltrials.gov (identifier: NCT00564096).

  11. Listen, you are writing! Speeding up online spelling with a dynamic auditory BCI

    Directory of Open Access Journals (Sweden)

    Martijn eSchreuder

    2011-10-01

    Full Text Available Representing an intuitive spelling interface for Brain-Computer Interfaces (BCI) in the auditory domain is not straightforward. In consequence, all existing approaches based on event-related potentials (ERP) rely at least partially on a visual representation of the interface. This online study introduces an auditory spelling interface that eliminates the necessity for such a visualization. In up to two sessions, a group of healthy subjects (N=21) was asked to use a text entry application, utilizing the spatial cues of the AMUSE paradigm (Auditory Multiclass Spatial ERP). The speller relies on the auditory sense both for stimulation and the core feedback. Without prior BCI experience, 76% of the participants were able to write a full sentence during the first session. By exploiting the advantages of a newly introduced dynamic stopping method, a maximum writing speed of 1.41 characters/minute (7.55 bits/minute) could be reached during the second session (average: 0.94 char/min, 5.26 bits/min). For the first time, the presented work shows that an auditory BCI can reach performances similar to state-of-the-art visual BCIs based on covert attention. These results represent an important step towards a purely auditory BCI.
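
    The characters/minute and bits/minute figures quoted above are related through an information-transfer-rate calculation; the sketch below implements Wolpaw's widely used formula. The class count, accuracy, and selection rate in the example call are illustrative assumptions, not values reported in the study.

      # Wolpaw's information transfer rate: bits conveyed per selection for an
      # N-class speller at a given selection accuracy.
      import math

      def itr_bits_per_selection(n_classes: int, accuracy: float) -> float:
          if accuracy <= 1.0 / n_classes:
              return 0.0  # at or below chance, no information is transferred
          bits = math.log2(n_classes) + accuracy * math.log2(accuracy)
          if accuracy < 1.0:
              bits += (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1))
          return bits

      # e.g., a 6-class auditory paradigm at 85% accuracy, 4 selections/minute
      print(itr_bits_per_selection(6, 0.85) * 4)  # bits per minute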

  12. Familiarity and Voice Representation: From Acoustic-Based Representation to Voice Averages

    Directory of Open Access Journals (Sweden)

    Maureen Fontaine

    2017-07-01

    Full Text Available The ability to recognize an individual from their voice is a widespread ability with a long evolutionary history. Yet, the perceptual representation of familiar voices is ill-defined. In two experiments, we explored the neuropsychological processes involved in the perception of voice identity. We specifically explored the hypothesis that familiar voices (trained-to-familiar voices, Experiment 1; famous voices, Experiment 2) are represented as a whole complex pattern, well approximated by the average of multiple utterances produced by a single speaker. In Experiment 1, participants learned three voices over several sessions, and performed a three-alternative forced-choice identification task on original voice samples and several "speaker averages," created by morphing across varying numbers of different vowels (e.g., [a] and [i]) produced by the same speaker. In Experiment 2, the same participants performed the same task on voice samples produced by famous speakers. The two experiments showed that for famous voices, but not for trained-to-familiar voices, identification performance increased and response times decreased as a function of the number of utterances in the averages. This study sheds light on the perceptual representation of familiar voices, and demonstrates the power of averaging in recognizing familiar voices. The speaker average captures the unique characteristics of a speaker, and thus retains the information essential for recognition; it acts as a prototype of the speaker.
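
    One crude way to picture a "speaker average" is to average spectral representations of several utterances from the same speaker, as in the numpy sketch below. The actual stimuli were created by voice morphing, which preserves far more structure; this sketch, with a hypothetical file loader, only conveys the averaging idea.

      # Minimal sketch: average magnitude spectrum across utterances by one
      # speaker, as a stand-in "prototype". Not the morphing method itself.
      import numpy as np

      def average_spectrum(signals, n_fft=2048):
          spectra = []
          for x in signals:
              x = x[:n_fft] if len(x) >= n_fft else np.pad(x, (0, n_fft - len(x)))
              spectra.append(np.abs(np.fft.rfft(x * np.hanning(n_fft))))
          return np.mean(spectra, axis=0)

      # utterances = [load("spk1_a.wav"), load("spk1_i.wav")]  # hypothetical loader
      # prototype = average_spectrum(utterances)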

  13. Improving Higher Education Practice through Student Evaluation Systems: Is the Student Voice Being Heard?

    Science.gov (United States)

    Blair, Erik; Valdez Noel, Keisha

    2014-01-01

    Many higher education institutions use student evaluation systems as a way of highlighting course and lecturer strengths and areas for improvement. Globally, the student voice has been increasing in volume, and capitalising on student feedback has been proposed as a means to benefit teacher professional development. This paper examines the student…

  14. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  15. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    Science.gov (United States)

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common to all three tasks within each modality, or to interactions between the processing of task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of the processing of irrelevant speech that would presumably distract from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. [Voice disorders in female teachers assessed by Voice Handicap Index].

    Science.gov (United States)

    Niebudek-Bogusz, Ewa; Kuzańska, Anna; Woźnicka, Ewelina; Sliwińska-Kowalska, Mariola

    2007-01-01

    The aim of this study was to assess the application of the Voice Handicap Index (VHI) in the diagnosis of occupational voice disorders in female teachers. The subjective assessment of voice by VHI was performed in fifty subjects with dysphonia diagnosed in laryngovideostroboscopic examination. The control group comprised 30 women whose jobs did not involve vocal effort. The results of the total VHI score and each of its subscales (functional, emotional and physical) were significantly worse in the study group than in controls. The teachers estimated their own voice problems as a moderate disability, while 12% of them reported severe voice disability. However, all non-teachers assessed their voice problems as slight; their results ranged at the lowest level of the VHI score. This study confirmed that the VHI, as a tool for self-assessment of voice, can be a significant contribution to the diagnosis of occupational dysphonia.

  17. A Positive Generation Effect on Memory for Auditory Context.

    Science.gov (United States)

    Overman, Amy A; Richard, Alison G; Stephens, Joseph D W

    2017-06-01

    Self-generation of information during memory encoding has large positive effects on subsequent memory for items, but mixed effects on memory for contextual information associated with items. A processing account of generation effects on context memory (Mulligan in Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(4), 838-855, 2004; Mulligan, Lozito, & Rosner in Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(4), 836-846, 2006) proposes that these effects depend on whether the generation task causes any shift in processing of the type of context features for which memory is being tested. Mulligan and colleagues have used this account to predict various negative effects of generation on context memory, but the account also predicts positive generation effects under certain circumstances. The present experiment provided a critical test of the processing account by examining how generation affected memory for auditory rather than visual context. Based on the processing account, we predicted that generation of rhyme words should enhance processing of auditory information associated with the words (i.e., voice gender), whereas generation of antonym words should have no effect. These predictions were confirmed, providing support to the processing account.

  18. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    Directory of Open Access Journals (Sweden)

    Alexandre Lehmann

    Full Text Available Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.

  19. Singing voice outcomes following singing voice therapy.

    Science.gov (United States)

    Dastolfo-Hromack, Christina; Thomas, Tracey L; Rosen, Clark A; Gartner-Schmidt, Jackie

    2016-11-01

    The objectives of this study were to describe singing voice therapy (SVT), describe referred patient characteristics, and document the outcomes of SVT. Retrospective. Records of patients receiving SVT between June 2008 and June 2013 were reviewed (n = 51). All diagnoses were included. Demographic information, number of SVT sessions, and symptom severity were retrieved from the medical record. Symptom severity was measured via the 10-item Singing Voice Handicap Index (SVHI-10). Treatment outcome was analyzed by diagnosis, history of previous training, and SVHI-10. SVHI-10 scores decreased significantly following SVT (mean change = 11, a 40% decrease). Patients who also took singing lessons (n = 10) completed an average of three SVT sessions. Primary muscle tension dysphonia (MTD1) and benign vocal fold lesion (lesion) were the most common diagnoses. Most patients (60%) had previous vocal training. The SVHI-10 decrease was not significantly different between MTD and lesion. This is the first outcome-based study of SVT in a disordered population. Diagnosis of MTD or lesion did not influence treatment outcomes. Duration of SVT was short (approximately three sessions). Voice care providers are encouraged to partner with a singing voice therapist to provide optimal care for the singing voice. This study supports the use of SVT as a tool for the treatment of singing voice disorders. Level of evidence: 4. Laryngoscope, 126:2546-2551, 2016. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  20. Abnormalities in auditory efferent activities in children with selective mutism.

    Science.gov (United States)

    Muchnik, Chava; Ari-Even Roth, Daphne; Hildesheimer, Minka; Arie, Miri; Bar-Haim, Yair; Henkin, Yael

    2013-01-01

    Two efferent feedback pathways to the auditory periphery may play a role in monitoring self-vocalization: the middle-ear acoustic reflex (MEAR) and the medial olivocochlear bundle (MOCB) reflex. Since most studies regarding the role of auditory efferent activity during self-vocalization were conducted in animals, human data are scarce. The working premise of the current study was that selective mutism (SM), a rare psychiatric disorder characterized by consistent failure to speak in specific social situations despite the ability to speak normally in other situations, may serve as a human model for studying the potential involvement of auditory efferent activity during self-vocalization. For this purpose, auditory efferent function was assessed in a group of 31 children with SM and compared to that of a group of 31 normally developing control children (mean age 8.9 and 8.8 years, respectively). All children exhibited normal hearing thresholds and type A tympanograms. MEAR and MOCB functions were evaluated by means of acoustic reflex thresholds and decay functions and the suppression of transient-evoked otoacoustic emissions, respectively. Auditory afferent function was tested by means of auditory brainstem responses (ABR). Results indicated a significantly higher proportion of children with abnormal MEAR and MOCB function in the SM group (58.6 and 38%, respectively) compared to controls (9.7 and 8%, respectively). The prevalence of abnormal MEAR and/or MOCB function was significantly higher in the SM group (71%) compared to controls (16%). Intact afferent function manifested in normal absolute and interpeak latencies of ABR components in all children. The finding of aberrant efferent auditory function in a large proportion of children with SM provides further support for the notion that MEAR and MOCB may play a significant role in the process of self-vocalization. © 2013 S. Karger AG, Basel.

  1. Real-time system for studies of the effects of acoustic feedback on animal vocalizations.

    Directory of Open Access Journals (Sweden)

    Mike eSkocik

    2013-01-01

    Full Text Available Studies of behavioral and neural responses to distorted auditory feedback can help shed light on the neural mechanisms of animal vocalizations. We describe an apparatus for generating real-time acoustic feedback. The system can very rapidly detect acoustic features in a song and output acoustic signals if the detected features match the desired acoustic template. The system uses spectrogram-based detection of acoustic elements. It is low-cost and can be programmed for a variety of behavioral experiments requiring acoustic feedback or neural stimulation. We use the system to study the effects of acoustic feedback on birds' vocalizations and demonstrate that such acoustic feedback can cause both immediate and long-term changes to birds' songs.
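
    The spectrogram-based detection the authors describe can be pictured as matching each incoming short-time spectrum against a stored template and firing when the match clears a threshold. The sketch below, with illustrative windowing and threshold values, shows that core comparison; the published apparatus is of course more elaborate.

      # Sketch: normalized correlation between one audio frame's spectrum and
      # a stored spectral template; a real-time loop would call this per buffer.
      import numpy as np

      def matches_template(frame, template, threshold=0.9):
          spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
          score = np.dot(spec, template) / (
              np.linalg.norm(spec) * np.linalg.norm(template) + 1e-12)
          return score > threshold

      # if matches_template(audio_buffer, stored_template):
      #     trigger_feedback_sound()  # hypothetical output routine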

  2. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors.

    Science.gov (United States)

    Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun

    2015-08-17

    Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. As compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can produce transfer effects that facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.

  3. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture.

    Science.gov (United States)

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-12-13

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on acoustic frequency, the primary dimension of auditory representation, might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation both in myeloarchitectonically estimated auditory core and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when the best frequency is attended versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe.

  4. An overview of neural function and feedback control in human communication.

    Science.gov (United States)

    Hood, L J

    1998-01-01

    The speech and hearing mechanisms depend on accurate sensory information and intact feedback mechanisms to facilitate communication. This article provides a brief overview of some components of the nervous system important for human communication and some electrophysiological methods used to measure cortical function in humans. An overview of automatic control and feedback mechanisms in general and as they pertain to the speech motor system and control of the hearing periphery is also presented, along with a discussion of how the speech and auditory systems interact.

  5. The Role of Age and Executive Function in Auditory Category Learning

    Science.gov (United States)

    Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath

    2015-01-01

    Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987

  6. Deep transcranial magnetic stimulation for the treatment of auditory hallucinations: a preliminary open-label study

    Directory of Open Access Journals (Sweden)

    Zangen Abraham

    2011-02-01

    Full Text Available Abstract Background Schizophrenia is a chronic and disabling disease that presents with delusions and hallucinations. Auditory hallucinations are usually expressed as voices speaking to or about the patient. Previous studies have examined the effect of repetitive transcranial magnetic stimulation (TMS) over the temporoparietal cortex on auditory hallucinations in schizophrenic patients. Our aim was to explore the potential effect of deep TMS, using the H coil over the same brain region, on auditory hallucinations. Patients and methods Eight schizophrenic patients with refractory auditory hallucinations were recruited, mainly from Beer Ya'akov Mental Health Institution (Tel Aviv University, Israel) ambulatory clinics, as well as from other hospitals' outpatient populations. Low-frequency deep TMS was applied for 10 min (600 pulses per session) to the left temporoparietal cortex for either 10 or 20 sessions. Deep TMS was applied using Brainsway's H1 coil apparatus. Patients were evaluated using the Auditory Hallucinations Rating Scale (AHRS) as well as the Scale for the Assessment of Positive Symptoms (SAPS), the Clinical Global Impressions (CGI) scale, and the Scale for the Assessment of Negative Symptoms (SANS). Results This preliminary study demonstrated a significant improvement in AHRS score (an average reduction of 31.7% ± 32.2%) and, to a lesser extent, improvement in SAPS results (an average reduction of 16.5% ± 20.3%). Conclusions In this study, we have demonstrated the potential of deep TMS treatment over the temporoparietal cortex as an add-on treatment for chronic auditory hallucinations in schizophrenic patients. Studies with larger samples in a double-blind sham-controlled design are now being performed to evaluate the effectiveness of deep TMS treatment for auditory hallucinations. Trial registration This trial is registered with clinicaltrials.gov (identifier: NCT00564096).

  7. How far away is plug 'n' play? Assessing the near-term potential of sonification and auditory display

    Science.gov (United States)

    Bargar, Robin

    1995-01-01

    The commercial music industry offers a broad range of plug 'n' play hardware and software scaled to music professionals and scaled to a broad consumer market. The principles of sound synthesis utilized in these products are relevant to application in virtual environments (VE). However, the closed architectures used in commercial music synthesizers are prohibitive to low-level control during real-time rendering, and the algorithms and sounds themselves are not standardized from product to product. To bring sound into VE requires a new generation of open architectures designed for human-controlled performance from interfaces embedded in immersive environments. This presentation addresses the state of the sonic arts in scientific computing and VE, analyzes research challenges facing sound computation, and offers suggestions regarding tools we might expect to become available during the next few years. A list of classes of audio functionality in VE includes sonification -- the use of sound to represent data from numerical models; 3D auditory display (spatialization and localization, also called externalization); navigation cues for positional orientation and for finding items or regions inside large spaces; voice recognition for controlling the computer; external communications between users in different spaces; and feedback to the user concerning his own actions or the state of the application interface. To effectively convey this considerable variety of signals, we apply principles of acoustic design to ensure the messages are neither confusing nor competing. We approach the design of auditory experience through a comprehensive structure for messages, and message interplay we refer to as an Automated Sound Environment. Our research addresses real-time sound synthesis, real-time signal processing and localization, interactive control of high-dimensional systems, and synchronization of sound and graphics.
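
    As a concrete taste of sonification in the sense used above, the sketch below maps a one-dimensional data series to a sequence of sine-tone pitches. The frequency range, tone duration, and logarithmic pitch mapping are arbitrary design choices for illustration, not conventions taken from the text.

      # Toy sonification: one short sine tone per data point, with the value
      # mapped logarithmically onto pitch between f_lo and f_hi.
      import numpy as np

      def sonify(values, sr=22050, tone_dur=0.15, f_lo=220.0, f_hi=880.0):
          v = np.asarray(values, dtype=float)
          v = (v - v.min()) / (v.ptp() + 1e-12)   # normalize to 0..1
          freqs = f_lo * (f_hi / f_lo) ** v       # log-spaced pitch mapping
          t = np.arange(int(sr * tone_dur)) / sr
          return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

      audio = sonify([3.0, 1.4, 4.1, 5.9, 2.6])  # write out with any audio library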

  8. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  9. How well do you see what you hear? The acuity of visual-to-auditory sensory substitution

    Directory of Open Access Journals (Sweden)

    Alastair eHaigh

    2013-06-01

    Full Text Available Sensory substitution devices (SSDs) aim to compensate for the loss of a sensory modality, typically vision, by converting information from the lost modality into stimuli in a remaining modality. The vOICe is a visual-to-auditory SSD which encodes images taken by a camera worn by the user into soundscapes such that an experienced user can extract information about their surroundings. Here we investigated how much detail was resolvable during the early induction stages by testing the acuity of blindfolded sighted, naïve vOICe users. Initial performance was well above chance. Participants who took the test twice, as a form of minimal training, showed a marked improvement on the second test. Acuity was slightly but not significantly impaired when participants wore a camera and judged letter orientations live. A positive correlation was found between participants' musical training and their acuity. The relationship between auditory expertise via musical training, and the lack of a relationship with visual imagery, suggests that early use of a sensory substitution device draws primarily on the mechanisms of the sensory modality being used rather than the one being substituted. If vision is lost, audition represents the sensory channel of highest bandwidth of those remaining. The level of acuity found here, and the fact that it was achieved with very little experience in sensory substitution by naïve users, is promising.
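
    The vOICe's encoding is commonly described as a left-to-right column scan in which pixel row maps to sine frequency and pixel brightness to amplitude. The sketch below follows that published description with illustrative constants; it is a rough approximation, not the device's actual implementation.

      # Rough vOICe-style image-to-soundscape encoding: columns become time,
      # rows become frequencies (top = high), brightness becomes amplitude.
      import numpy as np

      def encode_image(img, sr=22050, col_dur=0.05, f_lo=500.0, f_hi=5000.0):
          n_rows, n_cols = img.shape                 # img values in 0..1
          freqs = np.geomspace(f_hi, f_lo, n_rows)   # top row = highest pitch
          t = np.arange(int(sr * col_dur)) / sr
          cols = [(img[:, c:c + 1] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
                  for c in range(n_cols)]
          return np.concatenate(cols)

      soundscape = encode_image(np.random.rand(64, 64))  # stand-in image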

  10. Active auditory experience in infancy promotes brain plasticity in Theta and Gamma oscillations

    Directory of Open Access Journals (Sweden)

    Gabriella Musacchia

    2017-08-01

    Full Text Available Language acquisition in infants is driven by on-going neural plasticity that is acutely sensitive to environmental acoustic cues. Recent studies showed that attention-based experience with non-linguistic, temporally-modulated auditory stimuli sharpens cortical responses. A previous ERP study from this laboratory showed that interactive auditory experience via behavior-based feedback (AEx), over a 6-week period from 4 to 7 months of age, confers a processing advantage compared to passive auditory exposure (PEx) or maturation alone (Naïve Control, NC). Here, we provide a follow-up investigation of the underlying neural oscillatory patterns in these three groups. In AEx infants, Standard stimuli with invariant frequency (STD) elicited greater Theta-band (4–6 Hz) activity in the Right Auditory Cortex (RAC), as compared to NC infants, and Deviant stimuli with rapid frequency change (DEV) elicited larger responses in the Left Auditory Cortex (LAC). PEx and NC counterparts showed less-mature bilateral patterns. AEx infants also displayed stronger Gamma (33–37 Hz) activity in the LAC during DEV discrimination, compared to NCs, while the NC and PEx groups demonstrated bilateral activity in this band, if at all. This suggests that interactive acoustic experience with non-linguistic stimuli can promote a distinct, robust and precise cortical pattern during rapid auditory processing, perhaps reflecting mechanisms that support fine-tuning of early acoustic mapping.

  11. Auditory Reserve and the Legacy of Auditory Experience

    Directory of Open Access Journals (Sweden)

    Erika Skoe

    2014-11-01

    Full Text Available Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function.

  12. I like my voice better: self-enhancement bias in perceptions of voice attractiveness.

    Science.gov (United States)

    Hughes, Susan M; Harrison, Marissa A

    2013-01-01

    Previous research shows that the human voice can communicate a wealth of nonsemantic information; preferences for voices can predict health, fertility, and genetic quality of the speaker, and people often use voice attractiveness, in particular, to make these assessments of others. But it is not known what we think of the attractiveness of our own voices as others hear them. In this study eighty men and women rated the attractiveness of an array of voice recordings of different individuals and were not told that their own recorded voices were included in the presentation. Results showed that participants rated their own voices as sounding more attractive than others had rated their voices, and participants also rated their own voices as sounding more attractive than they had rated the voices of others. These findings suggest that people may engage in vocal implicit egotism, a form of self-enhancement.

  13. Associations between the Transsexual Voice Questionnaire (TVQMtF) and self-report of voice femininity and acoustic voice measures.

    Science.gov (United States)

    Dacakis, Georgia; Oates, Jennifer; Douglas, Jacinta

    2017-11-01

    The Transsexual Voice Questionnaire (TVQMtF) was designed to capture the voice-related perceptions of individuals whose gender identity as female is the opposite of their birth-assigned gender (MtF women). Evaluation of the psychometric properties of the TVQMtF is ongoing. To investigate associations between TVQMtF scores and (1) self-perceptions of voice femininity and (2) acoustic parameters of voice pitch and voice quality, in order to further evaluate the validity of the TVQMtF. A strong correlation between TVQMtF scores and self-ratings of voice femininity was predicted, but no association between TVQMtF scores and acoustic measures of voice pitch and quality was expected. Participants were 148 MtF women (mean age 48.14 years) recruited from the La Trobe Communication Clinic and the clinics of three doctors specializing in transgender health. All participants completed the TVQMtF, and 34 of these participants also provided a voice sample for acoustic analysis. Pearson product-moment correlation analysis was conducted to examine the associations between TVQMtF scores and (1) self-perceptions of voice femininity and (2) acoustic measures of F0, jitter (%), shimmer (dB) and harmonics-to-noise ratio (HNR). Strong negative correlations between the participants' perceptions of their voice femininity and the TVQMtF scores demonstrated that, for this group of MtF women, a low self-rating of voice femininity was associated with more frequent negative voice-related experiences. This association was strongest with the vocal-functioning component of the TVQMtF. These strong correlations and high levels of shared variance between the TVQMtF and a measure of a related construct provide evidence for the convergent validity of the TVQMtF. The absence of significant correlations between the TVQMtF and the acoustic data is consistent with the equivocal findings of earlier research. This finding indicates that these two measures assess different aspects of the voice.

  14. Phonomicrosurgery in Vocal Fold Nodules: Quantification of Outcomes in Professional and Non-Professional Voice Users.

    Science.gov (United States)

    Caffier, Philipp P; Salmen, Tatjana; Ermakova, Tatiana; Forbes, Eleanor; Ko, Seo-Rin; Song, Wen; Gross, Manfred; Nawka, Tadeus

    2017-12-01

    There are few data demonstrating the specific extent to which surgical intervention for vocal fold nodules (VFN) improves vocal function in professional (PVU) and non-professional voice users (NVU). The objective of this study was to compare and quantify results after phonomicrosurgery for VFN in these patient groups. In a prospective clinical study, surgery was performed via microlaryngoscopy in 37 female patients with chronic VFN manifestations (38±12 yrs, mean±SD). Pre- and postoperative evaluations of treatment efficacy comprised videolaryngostroboscopy, auditory-perceptual voice assessment, voice range profile (VRP), acoustic-aerodynamic analysis, and voice handicap index (VHI-9i). The dysphonia severity index (DSI) was compared with the vocal extent measure (VEM). PVU (n=24) and NVU (n=13) showed comparable laryngeal findings and levels of suffering (VHI-9i 16±7 vs 17±8), but PVU had a better pretherapeutic vocal range (26.8±7.4 vs 17.7±5.1 semitones, p<0.001) and vocal capacity (VEM 106±18 vs 74±29, p<0.01). Three months postoperatively, all patients had straight vocal fold edges, complete glottal closure, and recovered mucosal wave propagation. The mean VHI-9i score decreased by 8±6 points. DSI increased from 4.0±2.4 to 5.5±2.4, and VEM from 95±27 to 108±23 (p<0.001). Both parameters correlated significantly (rs=0.82). The average vocal range increased by 4.1±5.3 semitones, and the mean speaking pitch lowered by 0.5±1.4 semitones. These results confirm that phonomicrosurgery for VFN is a safe therapy for voice improvement in both PVU and NVU who do not respond to voice therapy alone. Top-level artistic capabilities in PVU were restored, but numeric changes of most vocal parameters were considerably larger in NVU.

  15. Instantaneous and Frequency-Warped Signal Processing Techniques for Auditory Source Separation.

    Science.gov (United States)

    Wang, Avery Li-Chun

    This thesis summarizes several contributions to the areas of signal processing and auditory source separation. The philosophy of Frequency-Warped Signal Processing is introduced as a means for separating the AM and FM contributions to the bandwidth of a complex-valued, frequency-varying sinusoid p(n), transforming it into a signal with slowly-varying parameters. This transformation facilitates the removal of p(n) from an additive mixture while minimizing the amount of damage done to other signal components. The average winding rate of a complex-valued phasor is explored as an estimate of the instantaneous frequency. Theorems are provided showing the robustness of this measure. To implement frequency tracking, a Frequency-Locked Loop algorithm is introduced which uses the complex winding error to update its frequency estimate. The input signal is dynamically demodulated and filtered to extract the envelope. This envelope may then be remodulated to reconstruct the target partial, which may be subtracted from the original signal mixture to yield a new, quickly-adapting form of notch filtering. Enhancements to the basic tracker are made which, under certain conditions, attain the Cramér-Rao bound for the instantaneous frequency estimate. To improve tracking, the novel idea of Harmonic-Locked Loop tracking, using N harmonically constrained trackers, is introduced for tracking signals, such as voices and certain musical instruments. The estimated fundamental frequency is computed from a maximum-likelihood weighting of the N tracking estimates, making it highly robust. The result is that harmonic signals, such as voices, can be isolated from complex mixtures in the presence of other spectrally overlapping signals. Additionally, since phase information is preserved, the resynthesized harmonic signals may be removed from the original mixtures with relatively little damage to the residual signal. Finally, a new methodology is given for designing linear-phase FIR filters.
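
    The "average winding rate" estimator mentioned above has a compact numerical form: the instantaneous frequency follows from the phase advance between successive samples of the complex phasor. A minimal sketch, assuming an analytic (complex-valued) input and tested on a synthetic phasor:

      # Instantaneous frequency from the winding rate of a complex phasor p(n).
      import numpy as np

      def instantaneous_frequency(p, fs):
          dphi = np.angle(p[1:] * np.conj(p[:-1]))  # phase step in (-pi, pi]
          return dphi * fs / (2 * np.pi)            # per-sample frequency in Hz

      fs = 8000.0
      n = np.arange(1600)
      p = np.exp(2j * np.pi * 440.0 * n / fs)       # synthetic 440 Hz phasor
      print(instantaneous_frequency(p, fs).mean())  # ~440.0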

  16. Contribution of auditory working memory to speech understanding in mandarin-speaking cochlear implant users.

    Science.gov (United States)

    Tao, Duoduo; Deng, Rui; Jiang, Ye; Galvin, John J; Fu, Qian-Jie; Chen, Bing

    2014-01-01

    To investigate how auditory working memory relates to speech perception performance by Mandarin-speaking cochlear implant (CI) users. Auditory working memory and speech perception was measured in Mandarin-speaking CI and normal-hearing (NH) participants. Working memory capacity was measured using forward digit span and backward digit span; working memory efficiency was measured using articulation rate. Speech perception was assessed with: (a) word-in-sentence recognition in quiet, (b) word-in-sentence recognition in speech-shaped steady noise at +5 dB signal-to-noise ratio, (c) Chinese disyllable recognition in quiet, (d) Chinese lexical tone recognition in quiet. Self-reported school rank was also collected regarding performance in schoolwork. There was large inter-subject variability in auditory working memory and speech performance for CI participants. Working memory and speech performance were significantly poorer for CI than for NH participants. All three working memory measures were strongly correlated with each other for both CI and NH participants. Partial correlation analyses were performed on the CI data while controlling for demographic variables. Working memory efficiency was significantly correlated only with sentence recognition in quiet when working memory capacity was partialled out. Working memory capacity was correlated with disyllable recognition and school rank when efficiency was partialled out. There was no correlation between working memory and lexical tone recognition in the present CI participants. Mandarin-speaking CI users experience significant deficits in auditory working memory and speech performance compared with NH listeners. The present data suggest that auditory working memory may contribute to CI users' difficulties in speech understanding. The present pattern of results with Mandarin-speaking CI users is consistent with previous auditory working memory studies with English-speaking CI users, suggesting that the lexical importance

  17. Specialization of the auditory system for the processing of bio-sonar information in the frequency domain: Mustached bats.

    Science.gov (United States)

    Suga, Nobuo

    2018-04-01

    For echolocation, mustached bats emit velocity-sensitive orientation sounds (pulses) containing a constant-frequency component consisting of four harmonics (CF1-4). They show unique behavior called Doppler-shift compensation for Doppler-shifted echoes and hunting behavior for frequency- and amplitude-modulated echoes from fluttering insects. Their peripheral auditory system is highly specialized for fine frequency analysis of CF2 (∼61.0 kHz) and for detecting echo CF2 from fluttering insects. In their central auditory system, lateral inhibition occurring at multiple levels sharpens the V-shaped frequency-tuning curves found at the periphery and creates sharp spindle-shaped tuning curves and amplitude tuning. The large CF2-tuned area of the auditory cortex systematically represents the frequency and amplitude of CF2 in a frequency-versus-amplitude map. "CF/CF" neurons are tuned to a specific combination of pulse CF1 and Doppler-shifted echo CF2 or CF3. They are tuned to specific velocities. CF/CF neurons cluster in the CC ("C" stands for CF) and DIF (dorsal intrafossa) areas of the auditory cortex. The CC area has the velocity map for Doppler imaging. The DIF area serves particularly for Doppler imaging of other bats approaching in cruising flight. To optimize the processing of behaviorally relevant sounds, cortico-cortical interactions and corticofugal feedback modulate the frequency tuning of cortical and sub-cortical auditory neurons and cochlear hair cells through a neural net consisting of positive feedback associated with lateral inhibition. Copyright © 2018 Elsevier B.V. All rights reserved.
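
    The arithmetic behind Doppler-shift compensation can be made concrete with the standard two-way Doppler formula for a sound source moving toward a stationary reflector. The flight speed below is an illustrative assumption; only the ∼61 kHz reference CF2 comes from the abstract.

      # Two-way Doppler shift, and the emission frequency a bat would need so
      # that the echo returns at its preferred reference frequency.
      C = 343.0        # speed of sound (m/s)
      V = 4.0          # assumed flight speed toward the target (m/s)
      F_REF = 61000.0  # preferred echo CF2 (Hz), per the abstract

      def echo_frequency(f_emit, v, c=C):
          return f_emit * (c + v) / (c - v)  # echo heard by the moving bat

      f_emit = F_REF * (C - V) / (C + V)        # compensated emission, ~59.6 kHz
      print(f_emit, echo_frequency(f_emit, V))  # echo lands back near 61 kHz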

  18. Connections between voice ergonomic risk factors in classrooms and teachers' voice production.

    Science.gov (United States)

    Rantala, Leena M; Hakala, Suvi; Holmqvist, Sofia; Sala, Eeva

    2012-01-01

    The aim of the study was to investigate whether voice ergonomic risk factors in classrooms correlated with acoustic parameters of teachers' voice production. The voice ergonomic risk factors in the fields of working culture, working postures and indoor air quality were assessed in 40 classrooms using the Voice Ergonomic Assessment in Work Environment - Handbook and Checklist. Teachers (32 females, 8 males) from the above-mentioned classrooms recorded text readings before and after a working day. Fundamental frequency, sound pressure level (SPL) and the slope of the spectrum (alpha ratio) were analyzed. The higher the number of risk factors in the classrooms, the higher the SPL the teachers used and the more strained the males' voices were (increased alpha ratio). The SPL was already higher before the working day in the teachers with higher risk than in those with lower risk. In working environments with many voice ergonomic risk factors, speakers increase voice loudness and, in the case of males, use a more strained voice quality. A practical implication of the results is that voice ergonomic assessments are needed in schools. Copyright © 2013 S. Karger AG, Basel.
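
    The alpha ratio reported above summarizes spectral slope as a level difference between high- and low-frequency energy. Band edges vary between studies; the sketch below uses a 1 kHz split with a 1-5 kHz upper band, one common convention, and random noise as a stand-in signal.

      # Alpha ratio (dB): energy above 1 kHz relative to energy below it.
      import numpy as np

      def alpha_ratio(x, fs):
          spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
          f = np.fft.rfftfreq(len(x), 1.0 / fs)
          low = spec[(f >= 50) & (f < 1000)].sum()
          high = spec[(f >= 1000) & (f < 5000)].sum()
          return 10 * np.log10(high / (low + 1e-12))

      fs = 16000
      x = np.random.randn(fs)  # stand-in for a recorded text reading
      print(alpha_ratio(x, fs))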

  19. A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.

    Science.gov (United States)

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2018-06-01

    The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception, and it is of interest to explore this further. The aim was to obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, and Tinnitus AND Prediction were extensively searched in Article Title, Abstract, and Keywords on four databases: PubMed, Scopus, SpringerLink, and PsycINFO. All study types published from 2000 to the end of 2016 were selected, with exclusion criteria applied for minimum participant age and for articles not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were shortlisted based on title relevance. After reading the abstracts and reaching consensus between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model, based on the Bayesian brain hypothesis, attentional modulation, and top-down feedback, serves as the fundamental framework in the current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as to sequential processing and the identification of auditory objects during auditory streaming. Although deviant responses are observable from middle-latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomena. Cortical oscillatory band activity may act as

  20. Voice Habits and Behaviors: Voice Care Among Flamenco Singers.

    Science.gov (United States)

    Garzón García, Marina; Muñoz López, Juana; Y Mendoza Lara, Elvira

    2017-03-01

    The purpose of this study is to analyze the vocal behavior of flamenco singers, as compared with classical music singers, to establish a differential vocal profile of voice habits and behaviors in flamenco music. A bibliographic review was conducted, and the Singer's Vocal Habits Questionnaire, an experimental tool designed by the authors to gather data regarding hygiene behavior, drinking and smoking habits, type of practice, voice care, and symptomatology perceived in both the singing and the speaking voice, was administered. We interviewed 94 singers, divided into two groups: the flamenco experimental group (FEG, n = 48) and the classical control group (CCG, n = 46). Frequency analysis, a Likert scale, and discriminant and exploratory factor analysis were used to obtain a differential profile for each group. The FEG scored higher than the CCG in speaking-voice symptomatology, and significantly higher than the CCG in use of "inadequate vocal technique" when singing. Regarding voice habits, the FEG scored higher in "lack of practice and warm-up" and "environmental habits." A total of 92.6% of the subjects were correctly classified into their respective groups. The Singer's Vocal Habits Questionnaire has proven effective in differentiating flamenco and classical singers. Flamenco singers are exposed to numerous vocal risk factors that make them more prone to vocal fatigue, mucosa dehydration, phonotrauma, and muscle stiffness than classical singers. Further research is needed in voice training in flamenco music, as a means to strengthen the voice and enable it to meet the requirements of this musical genre. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  1. Am I ready for it? Students’ perceptions of meaningful feedback on entrustable professional activities

    NARCIS (Netherlands)

    Duijn, Chantal C. M. A.; Welink, Lisanne S; Mandoki, Mira; Ten Cate, Olle Th J; Kremer, Wim D. J.; Bok, Harold G. J.

    2017-01-01

    Background Receiving feedback while in the clinical workplace is probably the most frequently voiced desire of students. In clinical learning environments, providing and seeking performance-relevant information is often difficult for both supervisors and students. The use of entrustable professional

  2. Sound induced activity in voice sensitive cortex predicts voice memory ability

    Directory of Open Access Journals (Sweden)

    Rebecca Watson

    2012-04-01

    Full Text Available The 'temporal voice areas' (TVAs; Belin et al., 2000) of the human brain show greater neuronal activity in response to human voices than to other categories of nonvocal sounds. However, a direct link between TVA activity and voice perception behaviour has not yet been established. Here we show that a functional magnetic resonance imaging (fMRI) measure of activity in the TVAs predicts individual performance on a separately administered voice memory test. This relation holds when general sound memory ability is taken into account. These findings provide the first evidence that the TVAs are specifically involved in voice cognition.

  3. Integrating cues of social interest and voice pitch in men's preferences for women's voices.

    Science.gov (United States)

    Jones, Benedict C; Feinberg, David R; Debruine, Lisa M; Little, Anthony C; Vukovic, Jovana

    2008-04-23

    Most previous studies of vocal attractiveness have focused on preferences for physical characteristics of voices such as pitch. Here we examine the content of vocalizations in interaction with such physical traits, finding that vocal cues of social interest modulate the strength of men's preferences for raised pitch in women's voices. Men showed stronger preferences for raised pitch when judging the voices of women who appeared interested in the listener than when judging the voices of women who appeared relatively disinterested in the listener. These findings show that voice preferences are not determined solely by physical properties of voices and that men integrate information about voice pitch and the degree of social interest expressed by women when forming voice preferences. Women's preferences for raised pitch in women's voices were not modulated by cues of social interest, suggesting that the integration of cues of social interest and voice pitch when men judge the attractiveness of women's voices may reflect adaptations that promote efficient allocation of men's mating effort.

  4. Voice Therapy Practices and Techniques: A Survey of Voice Clinicians.

    Science.gov (United States)

    Mueller, Peter B.; Larson, George W.

    1992-01-01

    Eighty-three voice disorder therapists' ratings of statements regarding voice therapy practices indicated that vocal nodules are the most frequent disorder treated; vocal abuse and hard glottal attack elimination, counseling, and relaxation were preferred treatment approaches; and voice therapy is more effective with adults than with children.…

  5. Variations in voice level and fundamental frequency with changing background noise level and talker-to-listener distance while wearing hearing protectors: A pilot study.

    Science.gov (United States)

    Bouserhal, Rachel E; Macdonald, Ewen N; Falk, Tiago H; Voix, Jérémie

    2016-01-01

    Speech production in noise with varying talker-to-listener distance has been well studied for the open-ear condition. However, occluding the ear canal affects auditory feedback and causes deviations from the models presented for the open-ear condition. Communication is a main concern for people wearing hearing protection devices (HPDs). Although practical, radio communication is cumbersome, as it does not distinguish designated receivers; a smarter radio communication protocol must be developed to alleviate this problem. It is therefore necessary to model speech production in noise while wearing HPDs. Such a model opens the door to radio communication systems that distinguish receivers and offer more efficient communication between persons wearing HPDs. This paper presents the results of a pilot study that investigated the effects of occluding the ear on changes in voice level and fundamental frequency in noise and with varying talker-to-listener distance. Twelve participants with a mean age of 28 years took part in the study. Compared with existing data, the results show a trend similar to the open-ear condition, with the exception of the occluded quiet condition. This implies that a model can be developed to better understand speech production with the occluded ear.

  6. Learning effects of dynamic postural control by auditory biofeedback versus visual biofeedback training.

    Science.gov (United States)

    Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-10-01

    Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability, but the most effective sensory information for motor learning of postural control in BF systems is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training on dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by swaying the body in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and of a sound, respectively, according to the distance between the COP and the target, in order to reach the target. The perceptual magnitudes of the visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory, but not the visual, BF group demonstrated decreased postural performance errors in both spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases dependence on visual information to control postural performance, whereas auditory BF may enhance integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
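
    The perceptual equalization named above (matching visual and auditory feedback magnitudes via Stevens' power law) can be illustrated with a short sketch. The exponents and the mapping from COP error to a target percept below are textbook-style placeholders, not the study's calibrated values.

    ```python
    # Hedged sketch: equalize perceived feedback magnitude across modalities
    # using Stevens' power law, percept = k * intensity**a.
    def stimulus_for_percept(psi, k=1.0, a=0.67):
        """Invert Stevens' law: physical intensity giving perceived magnitude psi."""
        return (psi / k) ** (1.0 / a)

    def feedback_drive(cop_error_mm, gain=0.5):
        """Map COP-target distance to one percept, then derive each modality's
        physical drive so both channels feel equally strong (exponents assumed:
        ~0.67 for loudness, ~0.7 for visual area)."""
        psi = gain * cop_error_mm                        # target percept (a.u.)
        sound_amp = stimulus_for_percept(psi, a=0.67)    # auditory BF channel
        circle_area = stimulus_for_percept(psi, a=0.7)   # visual BF channel
        return sound_amp, circle_area
    ```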

  7. Auditory conflict resolution correlates with medial-lateral frontal theta/alpha phase synchrony.

    Science.gov (United States)

    Huang, Samantha; Rossi, Stephanie; Hämäläinen, Matti; Ahveninen, Jyrki

    2014-01-01

    When multiple persons speak simultaneously, it may be difficult for the listener to direct attention to correct sound objects among conflicting ones. This could occur, for example, in an emergency situation in which one hears conflicting instructions and the loudest, instead of the wisest, voice prevails. Here, we used cortically-constrained oscillatory MEG/EEG estimates to examine how different brain regions, including caudal anterior cingulate (cACC) and dorsolateral prefrontal cortices (DLPFC), work together to resolve these kinds of auditory conflicts. During an auditory flanker interference task, subjects were presented with sound patterns consisting of three different voices, from three different directions (45° left, straight ahead, 45° right), sounding out either the letters "A" or "O". They were asked to discriminate which sound was presented centrally and ignore the flanking distracters that were phonetically either congruent (50%) or incongruent (50%) with the target. Our cortical MEG/EEG oscillatory estimates demonstrated a direct relationship between performance and brain activity, showing that efficient conflict resolution, as measured with reduced conflict-induced RT lags, is predicted by theta/alpha phase coupling between cACC and right lateral frontal cortex regions intersecting the right frontal eye fields (FEF) and DLPFC, as well as by increased pre-stimulus gamma (60-110 Hz) power in the left inferior fontal cortex. Notably, cACC connectivity patterns that correlated with behavioral conflict-resolution measures were found during both the pre-stimulus and the pre-response periods. Our data provide evidence that, instead of being only transiently activated upon conflict detection, cACC is involved in sustained engagement of attentional resources required for effective sound object selection performance.
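
    For readers unfamiliar with the synchrony measure behind such findings, the sketch below shows one standard way to quantify theta/alpha phase coupling between two signals: the phase-locking value (PLV). The band edges, filter order, and demo signals are illustrative assumptions; the study's actual MEG/EEG pipeline (source estimation, trial structure, statistics) is far more involved.

    ```python
    # Hedged sketch: phase-locking value between two signals in a theta/alpha band.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def plv(x, y, fs, band=(4.0, 12.0)):
        """Phase-locking value (0 = no locking, 1 = perfect locking)."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        phx = np.angle(hilbert(filtfilt(b, a, x)))
        phy = np.angle(hilbert(filtfilt(b, a, y)))
        return np.abs(np.mean(np.exp(1j * (phx - phy))))

    # Demo: two 8 Hz oscillations with a fixed phase lag stay strongly locked.
    fs = 250.0
    t = np.arange(int(5 * fs)) / fs
    x = np.sin(2 * np.pi * 8 * t) + 0.5 * np.random.randn(t.size)
    y = np.sin(2 * np.pi * 8 * t + 0.7) + 0.5 * np.random.randn(t.size)
    print(plv(x, y, fs))   # close to 1
    ```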

  10. The role of the medial temporal limbic system in processing emotions in voice and music.

    Science.gov (United States)

    Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier

    2014-12-01

    Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Voice Disorders in Teachers: Clinical, Videolaryngoscopical, and Vocal Aspects.

    Science.gov (United States)

    Pereira, Eny Regina Bóia Neves; Tavares, Elaine Lara Mendes; Martins, Regina Helena Garcia

    2015-09-01

    Dysphonia is more prevalent in teachers than in the general population. The objective of this study was to analyze clinical, vocal, and videolaryngoscopical aspects in dysphonic teachers. Ninety dysphonic teachers were asked about their voice, comorbidities, and work conditions. They underwent vocal auditory-perceptual evaluation (maximum phonation time and the GRBASI scale), acoustic voice analysis, and videolaryngoscopy. The results were compared with a control group consisting of 90 dysphonic nonteachers of similar gender and age, whose professional activities excluded teaching and singing. In both groups, there were 85 women and five men (age range 31-50 years). Among the controls, the majority of subjects worked in domestic activities, whereas the majority of teachers worked in primary (42.8%) and secondary school (37.7%). Teachers and controls reported, respectively: vocal abuse (76.7%; 37.8%), weekly working hours between 21 and 40 (72.2%; 80%), under 10 years of practice (36%; 23%), absenteeism (23%; 0%), sinonasal (66%; 20%) and gastroesophageal symptoms (44%; 22%), hoarseness (82%; 78%), throat clearing (70%; 62%), and phonatory effort (72%; 52%). In both groups, there were decreased values of maximum phonation time, impairment of the G parameter of the GRBASI scale (82%), a decrease in F0, and an increase in the remaining acoustic parameters. Vocal symptoms, comorbidities, and absenteeism were predominant among teachers, while the vocal analyses were similar in both groups. Nodules and laryngopharyngeal reflux were predominant among teachers, whereas polyps, laryngopharyngeal reflux, and sulcus vocalis were predominant among controls. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  12. Haptic feedback for enhancing realism of walking simulations.

    Science.gov (United States)

    Turchet, Luca; Burelli, Paolo; Serafin, Stefania

    2013-01-01

    In this paper, we describe several experiments whose goal is to evaluate the role of plantar vibrotactile feedback in enhancing the realism of walking experiences in multimodal virtual environments. To achieve this goal we built an interactive and a noninteractive multimodal feedback system. While using the interactive system subjects physically walked, whereas with the noninteractive system locomotion was simulated while subjects sat on a chair. In both configurations subjects were exposed to auditory and audio-visual stimuli presented with and without haptic feedback. The results show a clear preference for the simulations enhanced with haptic feedback, indicating that the haptic channel can lead to more realistic experiences in both interactive and noninteractive configurations. The majority of subjects clearly appreciated the added feedback, although some found it unpleasant. This might be due, on the one hand, to the limits of the haptic simulation and, on the other, to individual differences in the desire to be involved in the simulations. Our findings can be applied to physical navigation in multimodal virtual environments as well as to enhancing the user experience of watching a movie or playing a video game.

  13. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    Science.gov (United States)

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with the auditory skill areas most commonly addressed in…

  14. Synchrony of auditory brain responses predicts behavioral ability to keep still in children with autism spectrum disorder

    Directory of Open Access Journals (Sweden)

    Yuko Yoshimura

    2016-01-01

    Full Text Available The auditory-evoked P1m, recorded by magnetoencephalography, reflects a central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, based on a previous report of an asynchrony of P1m in children with ADHD, to clarify whether P1m right-left hemispheric synchronization is related to the symptom of hyperactivity in children with ASD, we investigated the relationship between voice-evoked P1m right-left hemispheric synchronization and hyperactivity in children with ASD. In addition to synchronization, we investigated right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observation Schedule, Generic (ADOS-G) subscale. However, there was a significant correlation between the degree of hemispheric synchronization and the ability to keep still during the 12-minute MEG recording periods. Our results also suggested that asynchrony in the bilateral auditory processing system of the brain is associated with ADHD-like symptoms in children with ASD.

  15. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement of perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks in which musicians had been found superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in the native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task, the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant…

  16. Protective Strategies Against Dysphonia in Teachers: Preliminary Results Comparing Voice Amplification and 0.9% NaCl Nebulization.

    Science.gov (United States)

    Masson, Maria Lúcia Vaz; de Araújo, Tânia Maria

    2018-03-01

    This study aimed to compare the effects of two protective strategies, voice amplification (VA) and 0.9% NaCl nebulization (NEB), on teachers' voices in the work setting. An interventional evaluator-blind study was conducted, assigning 53 teachers from two public high schools to one of the two protective strategy groups (VA or NEB). Vocal function was assessed in a sound-treated booth before and after a 4-week period. Assessment included the severity of voice impairment (Consensus Auditory-Perceptual Evaluation of Voice [CAPE-V]), acoustic analysis of fundamental frequency (f0), sound pressure level (SPL), jitter, shimmer, glottal-to-noise excitation ratio (GNE), and noise (VoxMetria), as well as the self-rated Screening Index for Voice Disorder (SIVD). Data were statistically analyzed using SPSS Statistics (version 22) with a significance level of P ≤ 0.05; effect size was calculated using Cohen's d coefficient. There were no statistical differences between groups at baseline in age, sex, time of teaching, teaching workload, or voice outcomes, except for SPL. In postintervention comparisons between groups, NEB displayed lower SIVD scores (VA = 3; NEB = 0; P = 0.018) and VA showed lower acoustic irregularity (VA = 3.19; NEB = 3.69; P = 0.027), with moderate to large effect sizes. Within groups, CAPE-V decreased after intervention for VA (pretest = 31.97; posttest = 28.24; P = 0.021) and SIVD decreased for NEB (pretest = 3; posttest = 0; P = 0.001). SPL decreased in both groups: only in men for NEB, and in both men and women for VA. NEB increased f0 for female participants (P ≤ 0.001). Both VA and NEB may help mitigate dysphonia through different pathways, making them potential interventions for protecting teachers' voices in the work setting. An ongoing study with a control group will further support these preliminary results. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
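
    Effect size above is reported as Cohen's d. For reference, here is a minimal sketch of the pooled-standard-deviation form; the example scores are invented, not the study's data.

    ```python
    # Sketch: Cohen's d with a pooled standard deviation (illustrative numbers).
    import numpy as np

    def cohens_d(group_a, group_b):
        a, b = np.asarray(group_a, float), np.asarray(group_b, float)
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
        return (a.mean() - b.mean()) / np.sqrt(pooled_var)

    # e.g., hypothetical pre- vs post-intervention CAPE-V scores:
    print(cohens_d([32, 30, 34, 31], [28, 27, 29, 28]))
    ```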

  17. Voice following radiotherapy

    International Nuclear Information System (INIS)

    Stoicheff, M.L.

    1975-01-01

    This study was undertaken to provide information on the voice of patients following radiotherapy for glottic cancer. Part I presents findings from questionnaires returned by 227 of 235 patients successfully irradiated for glottic cancer from 1960 through 1971. Part II presents preliminary findings on the speaking fundamental frequencies of 22 irradiated patients. Normal to near-normal voice was reported by 83 percent of the 227 patients; however, 80 percent did indicate persisting vocal difficulties, such as fatiguing of the voice with heavy use, inability to sing, reduced loudness, hoarse voice quality, and inability to shout. The amount of talking during treatment appeared to affect the length of time for the voice to recover following treatment in those cases where recovery took from nine to 26 weeks; also, with increasing years since treatment, patients rated their voices more favorably. Smoking habits following treatment improved significantly, with only 27 percent smoking heavily as compared with 65 percent prior to radiation therapy. No correlation was found between smoking (during or after treatment) and vocal ratings, or between smoking and the length of time for the voice to recover. There was no relationship found between reported vocal ratings and stage of the disease.

  18. Voices Not Heard: Voice-Use Profiles of Elementary Music Teachers, the Effects of Voice Amplification on Vocal Load, and Perceptions of Issues Surrounding Voice Use

    Science.gov (United States)

    Morrow, Sharon L.

    2009-01-01

    Teachers represent the largest group of occupational voice users and have voice-related problems at a rate of over twice that found in the general population. Among teachers, music teachers are roughly four times more likely than classroom teachers to develop voice-related problems. Although it has been established that music teachers use their…

  19. Dimensionality in voice quality.

    Science.gov (United States)

    Bele, Irene Velsvik

    2007-05-01

    This study concerns speaking voice quality in a group of male teachers (n = 35) and male actors (n = 36); the purpose was to investigate normal and supranormal voices. The goal was the development of a method of valid perceptual evaluation for normal to supranormal and resonant voices. The voices (text reading at two loudness levels) had been evaluated by 10 listeners for 15 vocal characteristics using visual analogue (VA) scales. In this investigation, the results of an exploratory factor analysis of the vocal characteristics used in this method are presented, reflecting four dimensions of major importance for normal and supranormal voices. Special emphasis is placed on the effects on voice quality of a change in the loudness variable, as two loudness levels are studied. Furthermore, special attention is paid to the vocal characteristics Sonority and Ringing voice quality, as the essence of the term "resonant voice" was a basic issue throughout the doctoral dissertation in which this study was included.

  20. [Assessment of voice acoustic parameters in female teachers with diagnosed occupational voice disorders].

    Science.gov (United States)

    Niebudek-Bogusz, Ewa; Fiszer, Marta; Sliwińska-Kowalska, Mariola

    2005-01-01

    Laryngovideostroboscopy is the method most frequently used in the assessment of voice disorders. However, the employment of quantitative methods, such as acoustic voice analysis, is essential for evaluating the effectiveness of prophylactic and therapeutic activities, as well as for objective medical certification of larynx pathologies. The aim of this study was to examine voice acoustic parameters in female teachers with occupational voice diseases. Acoustic analysis (IRIS software) was performed in 66 female teachers, including 35 teachers with occupational voice diseases and 31 with functional dysphonia. The teachers with occupational voice diseases presented a lower average fundamental frequency (193 Hz) compared with the group with functional dysphonia (209 Hz) and with the normative value (236 Hz), whereas the other acoustic parameters did not differ significantly between the groups. Voice acoustic analysis, when applied separately from vocal loading, cannot be used as a testing method to verify the diagnosis of occupational voice disorders.
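
    Acoustic analyses like the one above rest on fundamental-frequency estimation. The sketch below is a bare-bones autocorrelation F0 estimator; dedicated packages (such as the IRIS software named in the abstract) are considerably more robust, and the frame length and 75-500 Hz search range here are assumptions.

    ```python
    # Hedged sketch: frame-wise F0 by autocorrelation peak picking.
    import numpy as np

    def estimate_f0(frame, fs, fmin=75.0, fmax=500.0):
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = int(fs / fmax), int(fs / fmin)
        lag = lo + np.argmax(ac[lo:hi])    # best period in the search range
        return fs / lag

    # Demo: a synthetic 200 Hz pulse train is recovered as ~200 Hz.
    fs = 16000
    t = np.arange(int(0.04 * fs)) / fs
    print(estimate_f0(np.sign(np.sin(2 * np.pi * 200 * t)), fs))
    ```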

  1. Muscular tension and body posture in relation to voice handicap and voice quality in teachers with persistent voice complaints.

    Science.gov (United States)

    Kooijman, P G C; de Jong, F I C R S; Oudes, M J; Huinck, W; van Acht, H; Graamans, K

    2005-01-01

    The aim of this study was to investigate the relationship between extrinsic laryngeal muscular hypertonicity and deviant body posture on the one hand, and voice handicap and voice quality on the other, in teachers with persistent voice complaints and a history of voice-related absenteeism. The study group consisted of 25 female teachers. A voice therapist assessed extrinsic laryngeal muscular tension and a physical therapist assessed body posture. The assessed parameters were clustered into categories, with the parameters in each category representing the same function. Furthermore, a tension/posture index was created as the summation of the different parameters. The individual parameters and the index were related to the Voice Handicap Index (VHI) and the Dysphonia Severity Index (DSI). The VHI scores were significantly related to the individual parameters, except for posterior weight bearing and tension of the sternocleidomastoid muscle. There was also a significant relationship between the individual parameters and the DSI, except for tension of the cricothyroid muscle and posterior weight bearing. The score of the tension/posture index correlated significantly with both the VHI and the DSI. In a linear regression analysis, the combination of hypertonicity of the sternocleidomastoid and geniohyoid muscles and posterior weight bearing was the most important predictor of a high voice handicap. The combination of hypertonicity of the geniohyoid muscle, posterior weight bearing, a high position of the hyoid bone, hypertonicity of the cricothyroid muscle, and anteroposition of the head was the most important predictor of a low DSI score. The results of this study show that the higher the score of the index, the higher the voice handicap and the worse the voice quality. Moreover, the results indicate the importance of assessing muscular tension and body posture in the diagnosis of voice disorders.

  2. Mindfulness of voices, self-compassion, and secure attachment in relation to the experience of hearing voices.

    Science.gov (United States)

    Dudley, James; Eames, Catrin; Mulligan, John; Fisher, Naomi

    2018-03-01

    Developing compassion towards oneself has been linked to improvement in many areas of psychological well-being, including psychosis. Furthermore, developing a non-judgemental, accepting way of relating to voices is associated with lower levels of distress for people who hear voices. These factors have also been associated with secure attachment. This study explores associations between the constructs of mindfulness of voices, self-compassion, and distress from hearing voices, and how secure attachment style relates to each of these variables. A cross-sectional online design was used. One hundred and twenty-eight people (73% female; M age = 37.5; 87.5% Caucasian) who currently hear voices completed the Self-Compassion Scale, Southampton Mindfulness of Voices Questionnaire, Relationships Questionnaire, and Hamilton Programme for Schizophrenia Voices Questionnaire. Results showed that mindfulness of voices mediated the relationship between self-compassion and severity of voices, and self-compassion mediated the relationship between mindfulness of voices and severity of voices. Self-compassion and mindfulness of voices were significantly positively correlated with each other and negatively correlated with distress and severity of voices. Mindful relating to voices and self-compassion are associated with reduced distress and severity of voices, which supports the proposed potential benefits of mindful relating to voices and self-compassion as therapeutic skills for people distressed by voice hearing. Greater self-compassion and mindfulness of voices were significantly associated with less distress from voices. These findings support the theory underlying compassionate mind training. Mindfulness of voices mediated the relationship between self-compassion and distress from voices, indicating a synergistic relationship between the constructs. Although the current findings do not give a direction of causation, consideration is given to the potential impact of mindful and…

  3. The Effect of Multimodal Feedback on Perceived Exertion on a VR Exercise Setting

    DEFF Research Database (Denmark)

    Bruun-Pedersen, Jon Ram; Andersen, Morten G.; Clemmesen, Mathias M.

    2018-01-01

    This paper seeks to determine if multimodal feedback, from auditory and haptic stimuli, can affect a user’s perceived exertion in a virtual reality setting. A simple virtual environment was created in the style of a desert to minimize the amount of visual distractions; a head mounted display was ...

  4. Skill learning from kinesthetic feedback.

    Science.gov (United States)

    Pinzon, David; Vega, Roberto; Sanchez, Yerly Paola; Zheng, Bin

    2017-10-01

    It is important for a surgeon to perform surgical tasks under appropriate guidance from visual and kinesthetic feedback. However, our knowledge of kinesthetic (muscle) memory and its role in learning motor skills remains elementary. The aim was to discover the effect of exclusive kinesthetic training on kinesthetic memory, in both performance and learning. In Phase 1, a total of twenty participants duplicated five 2-dimensional movements of increasing complexity via passive kinesthetic guidance, without visual or auditory stimuli. Five participants were asked to repeat the task in Phase 2 over a period of three weeks, for a total of nine sessions. Subjects accurately recalled movement direction using kinesthetic memory, but recall of movement length was less precise. Over the nine training sessions, error occurrence dropped after the sixth session. Muscle memory constructs the foundation for kinesthetic training. The knowledge gained helps surgeons learn skills from kinesthetic information in conditions where visual feedback is limited. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Auditory word recognition: extrinsic and intrinsic effects of word frequency.

    Science.gov (United States)

    Connine, C M; Titone, D; Wang, J

    1993-01-01

    Two experiments investigated the influence of word frequency in a phoneme identification task. Speech voicing continua were constructed so that one endpoint was a high-frequency word and the other endpoint was a low-frequency word (e.g., best-pest). Experiment 1 demonstrated that ambiguous tokens were labeled such that a high-frequency word was formed (intrinsic frequency effect). Experiment 2 manipulated the frequency composition of the list (extrinsic frequency effect). A high-frequency list bias produced an exaggerated influence of frequency; a low-frequency list bias showed a reverse frequency effect. Reaction time effects were discussed in terms of activation and postaccess decision models of frequency coding. The results support a late use of frequency in auditory word recognition.

  6. "Voice Forum" The Human Voice as Primary Instrument in Music Therapy

    DEFF Research Database (Denmark)

    Pedersen, Inge Nygaard; Storm, Sanne

    2009-01-01

    Aspects will be drawn on the human voice as a tool for embodying our psychological and physiological state and for attempting the integration of feelings. Presentations and dialogues will address different methods and techniques in "therapy-related body- and voice work", as well as the human voice as a tool for non…

  7. Voice Use Among Music Theory Teachers: A Voice Dosimetry and Self-Assessment Study.

    Science.gov (United States)

    Schiller, Isabel S; Morsomme, Dominique; Remacle, Angélique

    2017-07-25

    This study aimed (1) to investigate music theory teachers' professional and extra-professional vocal loading and background noise exposure, (2) to determine the correlation between vocal loading and background noise, and (3) to determine the correlation between vocal loading and self-evaluation data. Using voice dosimetry, 13 music theory teachers were monitored for one workweek. The parameters analyzed were voice sound pressure level (SPL), fundamental frequency (F0), phonation time, vocal loading index (VLI), and noise SPL. Spearman correlation was used to correlate vocal loading parameters (voice SPL, F0, and phonation time) and noise SPL. Each day, the subjects self-assessed their voice using visual analog scales. VLI and self-evaluation data were correlated using Spearman correlation. Vocal loading parameters and noise SPL were significantly higher in the professional than in the extra-professional environment. Voice SPL, phonation time, and female subjects' F0 correlated positively with noise SPL. VLI correlated with self-assessed voice quality, vocal fatigue, and amount of singing and speaking voice produced. Teaching music theory is a profession with high vocal demands. More background noise is associated with increased vocal loading and may indirectly increase the risk for voice disorders. Correlations between VLI and self-assessments suggest that these teachers are well aware of their vocal demands and feel their effect on voice quality and vocal fatigue. Visual analog scales seem to represent a useful tool for subjective vocal loading assessment and associated symptoms in these professional voice users. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
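
    The abstract does not spell out how the vocal loading index (VLI) was computed, but dosimetry summaries of this kind are typically derived from frame-wise voicing, SPL, and F0 tracks. The sketch below computes two common ingredients, phonation percentage and cycle dose (total vocal-fold vibration cycles); the field names and frame length are hypothetical, not the study's definitions.

    ```python
    # Hedged sketch: summary metrics from dosimeter-style frame data.
    import numpy as np

    def dosimetry_summary(f0_hz, spl_db, voiced, frame_s=0.05):
        f0 = np.asarray(f0_hz, float)
        spl = np.asarray(spl_db, float)
        v = np.asarray(voiced, bool)               # frame-wise voicing flags
        return {
            "phonation_time_s": v.sum() * frame_s,         # seconds voiced
            "phonation_pct": 100.0 * v.mean(),             # % of monitored time
            "cycle_dose_cycles": (f0[v] * frame_s).sum(),  # vocal-fold cycles
            "mean_voiced_spl_db": float(spl[v].mean()),    # needs >= 1 voiced frame
        }
    ```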

  8. Writing with Voice

    Science.gov (United States)

    Kesler, Ted

    2012-01-01

    In this Teaching Tips article, the author argues for a dialogic conception of voice, based in the work of Mikhail Bakhtin. He demonstrates a dialogic view of voice in action, using two writing examples about the same topic from his daughter, a fifth-grade student. He then provides five practical tips for teaching a dialogic conception of voice in…

  9. Marshall’s Voice

    Directory of Open Access Journals (Sweden)

    Halper Thomas

    2017-12-01

    Full Text Available Most judicial opinions, for a variety of reasons, do not speak with the voice of identifiable judges, but an analysis of several of John Marshall’s best known opinions reveals a distinctive voice, with its characteristic language and style of argumentation. The power of this voice helps to account for the influence of his views.

  10. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  11. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.

  12. A pneumatic Bionic Voice prosthesis-Pre-clinical trials of controlling the voice onset and offset.

    Science.gov (United States)

    Ahmadi, Farzaneh; Noorian, Farzad; Novakovic, Daniel; van Schaik, André

    2018-01-01

    Despite emergent progress in many fields of bionics, a functional Bionic Voice prosthesis for laryngectomy patients (larynx amputees) has not yet been achieved, leading to a lifetime of vocal disability for these patients. This study introduces a novel framework of Pneumatic Bionic Voice Prostheses as an electronic adaptation of the Pneumatic Artificial Larynx (PAL) device. The PAL is a non-invasive mechanical voice source, driven exclusively by respiration, with an exceptionally high voice quality, comparable to the existing gold standard of Tracheoesophageal (TE) voice prosthesis. Following the PAL design closely as the reference, Pneumatic Bionic Voice Prostheses have strong potential to substitute for the existing gold standard by generating a similar voice quality while remaining non-invasive and non-surgical. This paper presents the design of the first Pneumatic Bionic Voice prosthesis and evaluates its onset and offset control against the PAL device through pre-clinical trials on one laryngectomy patient. Evaluation on a database of more than five hours of continuous/isolated speech recordings shows a close match between the onset/offset control of the Pneumatic Bionic Voice and the PAL, with an accuracy of 98.45 ± 0.54%. When implemented in real time, the Pneumatic Bionic Voice prosthesis controller has an average onset/offset delay of 10 milliseconds compared with the PAL. Hence it addresses a major disadvantage of previous electronic voice prostheses, including the myoelectric Bionic Voice, in meeting the short time-frames of controlling the onset/offset of the voice in continuous speech.

  13. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    Science.gov (United States)

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  14. Voice similarity in identical twins.

    Science.gov (United States)

    Van Gysel, W D; Vercammen, J; Debruyne, F

    2001-01-01

    If people are asked to visually discriminate the two individuals of a monozygotic twin (MT) pair, they mostly run into trouble. Does this problem also exist when listening to twin voices? Twenty female and 10 male MT voice pairs were each randomly assembled with one "strange" voice to create voice trios. The listeners (10 female students in Speech and Language Pathology) were asked to label the twins (voices 1-2, 1-3, or 2-3) in two conditions: two standard sentences read aloud, and a 2.5-second midsection of a sustained /a/. The proportion of correctly labelled twins was 82% and 63% for female voices, and 74% and 52% for male voices, for the sentences and the sustained /a/ respectively, all being significantly greater than chance (33%). The acoustic analysis revealed a high intra-twin correlation for the speaking fundamental frequency (SFF) of the sentences and the fundamental frequency (F0) of the sustained /a/, so voice pitch could have been a useful characteristic in the perceptual identification of the twins. We conclude that there is a greater perceptual resemblance between the voices of identical twins than between voices without a genetic relationship. The identification, however, is not perfect. Voice pitch possibly contributes to the correct twin identifications.
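
    The "significantly greater than chance (33%)" comparisons above are the kind of claim a simple binomial test covers. A sketch follows; the trial count is invented for illustration, since the abstract reports only the proportions.

    ```python
    # Hedged sketch: labelling accuracy vs. 1-in-3 chance (trial count assumed).
    from scipy.stats import binomtest

    n_trials = 200   # made up; not reported in the abstract
    for label, prop in [("female/sentences", 0.82), ("female/sustained a", 0.63),
                        ("male/sentences", 0.74), ("male/sustained a", 0.52)]:
        k = round(prop * n_trials)
        res = binomtest(k, n_trials, p=1/3, alternative="greater")
        print(f"{label}: {prop:.0%} correct, p = {res.pvalue:.2g}")
    ```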

  15. Neural Segregation of Concurrent Speech: Effects of Background Noise and Reverberation on Auditory Scene Analysis in the Ventral Cochlear Nucleus.

    Science.gov (United States)

    Sayles, Mark; Stasiak, Arkadiusz; Winter, Ian M

    2016-01-01

    Concurrent complex sounds (e.g., two voices speaking at once) are perceptually disentangled into separate "auditory objects". This neural processing often occurs in the presence of acoustic-signal distortions from noise and reverberation (e.g., in a busy restaurant). A difference in periodicity between sounds is a strong segregation cue under quiet, anechoic conditions. However, noise and reverberation exert differential effects on speech intelligibility under "cocktail-party" listening conditions. Previous neurophysiological studies have concentrated on understanding auditory scene analysis under ideal listening conditions. Here, we examine the effects of noise and reverberation on periodicity-based neural segregation of concurrent vowels /a/ and /i/, in the responses of single units in the guinea-pig ventral cochlear nucleus (VCN): the first processing station of the auditory brain stem. In line with human psychoacoustic data, we find reverberation significantly impairs segregation when vowels have an intonated pitch contour, but not when they are spoken on a monotone. In contrast, noise impairs segregation independent of intonation pattern. These results are informative for models of speech processing under ecologically valid listening conditions, where noise and reverberation abound.

  16. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory, with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  17. Sex differences in the representation of call stimuli in a songbird secondary auditory area.

    Science.gov (United States)

    Giret, Nicolas; Menardy, Fabien; Del Negro, Catherine

    2015-01-01

    Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered as playing a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differ between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmit information about the self-generated calls in males and to storage of information about the

  19. Visual and auditory socio-cognitive perception in unilateral temporal lobe epilepsy in children and adolescents: a prospective controlled study.

    Science.gov (United States)

    Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania

    2014-12-01

    A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children and adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level than the control group with regard to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re

  20. Tips for Healthy Voices

    Science.gov (United States)

    ... prevent voice problems and maintain a healthy voice: Drink water (stay well hydrated): Keeping your body well hydrated by drinking plenty of water each day (6-8 glasses) is essential to maintaining a healthy voice. The ...

  1. Your Cheatin' Voice Will Tell on You: Detection of Past Infidelity from Voice.

    Science.gov (United States)

    Hughes, Susan M; Harrison, Marissa A

    2017-01-01

    Evidence suggests that many physical, behavioral, and trait qualities can be detected solely from the sound of a person's voice, irrespective of the semantic information conveyed through speech. This study examined whether raters could accurately assess the likelihood that a person has cheated on committed romantic partners simply by hearing the speaker's voice. Independent raters heard voice samples of individuals who self-reported that they either cheated or had never cheated on their romantic partners. To control for aspects that may clue a listener to the speaker's mate value, we used voice samples that did not differ between these groups for voice attractiveness, age, voice pitch, and other acoustic measures. We found that participants indeed rated the voices of those who had a history of cheating as more likely to cheat. Male speakers were given higher ratings for cheating, while female raters were more likely to ascribe the likelihood to cheat to speakers. Additionally, we manipulated the pitch of the voice samples, and for both sexes, the lower-pitched versions were consistently rated to be from those who were more likely to have cheated. Regardless of the pitch manipulation, raters were able to assess speakers' actual history of infidelity; the one exception was that men's accuracy decreased when judging women whose voices were lowered. These findings expand upon the idea that the human voice may be of value as a cheater detection tool and that very thin slices of vocal information are all that is needed to make certain assessments about others.

  2. Unfamiliar voice identification: Effect of post-event information on accuracy and voice ratings

    Directory of Open Access Journals (Sweden)

    Harriet Mary Jessica Smith

    2014-04-01

    Full Text Available This study addressed the effect of misleading post-event information (PEI on voice ratings, identification accuracy, and confidence, as well as the link between verbal recall and accuracy. Participants listened to a dialogue between male and female targets, then read misleading information about voice pitch. Participants engaged in verbal recall, rated voices on a feature checklist, and made a lineup decision. Accuracy rates were low, especially on target-absent lineups. Confidence and accuracy were unrelated, but the number of facts recalled about the voice predicted later lineup accuracy. There was a main effect of misinformation on ratings of target voice pitch, but there was no effect on identification accuracy or confidence ratings. As voice lineup evidence from earwitnesses is used in courts, the findings have potential applied relevance.

  3. A pneumatic Bionic Voice prosthesis-Pre-clinical trials of controlling the voice onset and offset.

    Directory of Open Access Journals (Sweden)

    Farzaneh Ahmadi

    Full Text Available Despite emergent progress in many fields of bionics, a functional Bionic Voice prosthesis for laryngectomy patients (larynx amputees) has not yet been achieved, leading to a lifetime of vocal disability for these patients. This study introduces a novel framework of Pneumatic Bionic Voice Prostheses as an electronic adaptation of the Pneumatic Artificial Larynx (PAL) device. The PAL is a non-invasive mechanical voice source, driven exclusively by respiration with an exceptionally high voice quality, comparable to the existing gold standard of Tracheoesophageal (TE) voice prosthesis. Following PAL design closely as the reference, Pneumatic Bionic Voice Prostheses seem to have a strong potential to substitute the existing gold standard by generating a similar voice quality while remaining non-invasive and non-surgical. This paper designs the first Pneumatic Bionic Voice prosthesis and evaluates its onset and offset control against the PAL device through pre-clinical trials on one laryngectomy patient. The evaluation on a database of more than five hours of continuous/isolated speech recordings shows a close match between the onset/offset control of the Pneumatic Bionic Voice and the PAL with an accuracy of 98.45 ±0.54%. When implemented in real-time, the Pneumatic Bionic Voice prosthesis controller has an average onset/offset delay of 10 milliseconds compared to the PAL. Hence it addresses a major disadvantage of previous electronic voice prostheses, including myoelectric Bionic Voice, in meeting the short time-frames of controlling the onset/offset of the voice in continuous speech.

  4. A pneumatic Bionic Voice prosthesis—Pre-clinical trials of controlling the voice onset and offset

    Science.gov (United States)

    Noorian, Farzad; Novakovic, Daniel; van Schaik, André

    2018-01-01

    Despite emergent progress in many fields of bionics, a functional Bionic Voice prosthesis for laryngectomy patients (larynx amputees) has not yet been achieved, leading to a lifetime of vocal disability for these patients. This study introduces a novel framework of Pneumatic Bionic Voice Prostheses as an electronic adaptation of the Pneumatic Artificial Larynx (PAL) device. The PAL is a non-invasive mechanical voice source, driven exclusively by respiration with an exceptionally high voice quality, comparable to the existing gold standard of Tracheoesophageal (TE) voice prosthesis. Following PAL design closely as the reference, Pneumatic Bionic Voice Prostheses seem to have a strong potential to substitute the existing gold standard by generating a similar voice quality while remaining non-invasive and non-surgical. This paper designs the first Pneumatic Bionic Voice prosthesis and evaluates its onset and offset control against the PAL device through pre-clinical trials on one laryngectomy patient. The evaluation on a database of more than five hours of continuous/isolated speech recordings shows a close match between the onset/offset control of the Pneumatic Bionic Voice and the PAL with an accuracy of 98.45 ±0.54%. When implemented in real-time, the Pneumatic Bionic Voice prosthesis controller has an average onset/offset delay of 10 milliseconds compared to the PAL. Hence it addresses a major disadvantage of previous electronic voice prostheses, including myoelectric Bionic Voice, in meeting the short time-frames of controlling the onset/offset of the voice in continuous speech. PMID:29466455
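
    The onset/offset control evaluated above has to operate within very short time frames (the reported average delay is 10 milliseconds). As a purely illustrative sketch of that kind of framing, and not the authors' controller, the following Python fragment marks voicing on a 10-millisecond grid using a short-time energy threshold; the threshold and the synthetic test signal are invented.

    import numpy as np

    # Illustrative energy-based voice onset/offset detector framed at 10 ms to
    # match the delay budget reported above. The threshold is an invented
    # value; this is not the prosthesis controller described in the record.
    def voiced_frames(signal: np.ndarray, fs: int, frame_ms: float = 10.0,
                      energy_thresh: float = 1e-4) -> np.ndarray:
        hop = int(fs * frame_ms / 1000)          # samples per 10 ms frame
        n = (len(signal) // hop) * hop
        frames = signal[:n].reshape(-1, hop)
        energy = (frames ** 2).mean(axis=1)      # mean power per frame
        return energy > energy_thresh            # True where voicing is "on"

    # Example: 0.5 s of noise (a stand-in for voicing) between silent stretches.
    fs = 16_000
    sig = np.concatenate([np.zeros(fs), 0.1 * np.random.randn(fs // 2), np.zeros(fs)])
    v = voiced_frames(sig, fs)
    print(f"{v.sum()} voiced frames out of {v.size}")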

  5. Differential Effectiveness of Electromyograph Feedback, Verbal Relaxation Instructions, and Medication Placebo with Tension Headaches

    Science.gov (United States)

    Cox, Daniel J.; And Others

    1975-01-01

    Adults with chronic tension headaches were assigned to auditory electromyograph (EMG) feedback (N=9), to progressive relaxation (N=9), and to placebo treatment (N=9). Data indicated that biofeedback and verbal relaxation instructions were equally superior to the medicine placebo on all measured variables in the direction of clinical improvement,…

  6. The predictability of frequency-altered auditory feedback changes the weighting of feedback and feedforward input for speech motor control.

    Science.gov (United States)

    Scheerer, Nichole E; Jones, Jeffery A

    2014-12-01

    Speech production requires the combined effort of a feedback control system driven by sensory feedback, and a feedforward control system driven by internal models. However, the factors that dictate the relative weighting of these feedback and feedforward control systems are unclear. In this event-related potential (ERP) study, participants produced vocalisations while being exposed to blocks of frequency-altered feedback (FAF) perturbations that were either predictable in magnitude (consistently either 50 or 100 cents) or unpredictable in magnitude (50- and 100-cent perturbations varying randomly within each vocalisation). Vocal and P1-N1-P2 ERP responses revealed decreases in the magnitude and trial-to-trial variability of vocal responses, smaller N1 amplitudes, and shorter vocal, P1 and N1 response latencies following predictable FAF perturbation magnitudes. In addition, vocal response magnitudes correlated with N1 amplitudes, vocal response latencies, and P2 latencies. This pattern of results suggests that after repeated exposure to predictable FAF perturbations, the contribution of the feedforward control system increases. Examination of the presentation order of the FAF perturbations revealed smaller compensatory responses, smaller P1 and P2 amplitudes, and shorter N1 latencies when the block of predictable 100-cent perturbations occurred prior to the block of predictable 50-cent perturbations. These results suggest that exposure to large perturbations modulates responses to subsequent perturbations of equal or smaller size. Similarly, exposure to a 100-cent perturbation prior to a 50-cent perturbation within a vocalisation decreased the magnitude of vocal and N1 responses, but increased P1 and P2 latencies. Thus, exposure to a single perturbation can affect responses to subsequent perturbations. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
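
    For readers unfamiliar with the cent scale in which these perturbation magnitudes are specified: a cent is 1/100 of a semitone, so a shift of c cents multiplies frequency by 2^(c/1200). A minimal sketch (the 220 Hz example F0 is an arbitrary choice, not a value from the study):

    # Convert a pitch shift in cents to a multiplicative frequency ratio.
    # 100 cents = one semitone; 1200 cents = one octave.
    def cents_to_ratio(cents: float) -> float:
        return 2.0 ** (cents / 1200.0)

    def shifted_f0(f0_hz: float, cents: float) -> float:
        """Fundamental frequency heard after a feedback shift of `cents`."""
        return f0_hz * cents_to_ratio(cents)

    for c in (50, 100):  # the perturbation magnitudes used in the study
        print(f"{c:>3} cents: ratio {cents_to_ratio(c):.4f}, "
              f"220 Hz -> {shifted_f0(220.0, c):.1f} Hz")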

  7. Sentence Comprehension in Adolescents with down Syndrome and Typically Developing Children: Role of Sentence Voice, Visual Context, and Auditory-Verbal Short-Term Memory.

    Science.gov (United States)

    Miolo, Giuliana; Chapman, Robins S.; Sindberg, Heidi A.

    2005-01-01

    The authors evaluated the roles of auditory-verbal short-term memory, visual short-term memory, and group membership in predicting language comprehension, as measured by an experimental sentence comprehension task (SCT) and the Test for Auditory Comprehension of Language--Third Edition (TACL-3; E. Carrow-Woolfolk, 1999) in 38 participants: 19 with…

  8. Singer's preferred acoustic condition in performance in an opera house and self-perception of the singer's voice

    Science.gov (United States)

    Noson, Dennis; Kato, Kosuke; Ando, Yoichi

    2004-05-01

    Solo singers have been shown to overestimate the relative sound pressure level of a delayed, external reproduction of their own voice when singing single syllables, which, in turn, appears to influence the preferred delay of simulated stage reflections [Noson, Ph.D. thesis, Kobe University, 2003]. Bone conduction is thought to be one factor separating singer versus instrumental performer judgments of stage acoustics. Using a parameter derived from the vocal signal autocorrelation function (ACF envelope), the changes in singer preference for delayed reflections are primarily explained by the ACF parameter, rather than internal bone conduction. An auditory model of a singer's preferred reflection delay is proposed, combining the effects of acoustical environment (reflection amplitude), bone conduction, and performer vocal overestimate, which may be applied to the acoustic design of reflecting elements in both upstage and forestage environments of opera stages. For example, soloists who characteristically underestimate external voice levels (or overestimate their own voice) should be provided shorter distances to reflective panels, irrespective of their singing style. Adjustable elements can be deployed to adapt opera houses intended for bel canto style performances to other styles. Additional examples will also be discussed.
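
    In Ando's framework, the ACF-envelope parameter mentioned above is commonly the effective duration, the delay at which the normalized autocorrelation envelope decays to 0.1 (-10 dB). A rough sketch of that computation; taking the first sample-wise threshold crossing, as done here, is a simplification of the envelope fit used in the literature:

    import numpy as np

    # Rough sketch of an ACF effective-duration estimate. Using the first
    # threshold crossing of |ACF| simplifies Ando's envelope-decay fit.
    def normalized_acf(x: np.ndarray) -> np.ndarray:
        x = x - x.mean()
        acf = np.correlate(x, x, mode="full")[x.size - 1:]
        return acf / acf[0]

    def effective_duration(x: np.ndarray, fs: int, thresh: float = 0.1) -> float:
        """First delay (seconds) at which |ACF| falls below thresh (-10 dB)."""
        acf = np.abs(normalized_acf(x))
        below = np.flatnonzero(acf < thresh)
        return below[0] / fs if below.size else x.size / fs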

  9. Promoting smoke-free homes: a novel behavioral intervention using real-time audio-visual feedback on airborne particle levels.

    Directory of Open Access Journals (Sweden)

    Neil E Klepeis

    Full Text Available Interventions are needed to protect the health of children who live with smokers. We pilot-tested a real-time intervention for promoting behavior change in homes that reduces secondhand tobacco smoke (SHS) levels. The intervention uses a monitor and feedback system to provide immediate auditory and visual signals triggered at defined thresholds of fine particle concentration. Dynamic graphs of real-time particle levels are also shown on a computer screen. We experimentally evaluated the system, field-tested it in homes with smokers, and conducted focus groups to obtain general opinions. Laboratory tests of the monitor demonstrated SHS sensitivity, stability, precision equivalent to at least 1 µg/m³, and low noise. A linear relationship (R² = 0.98) was observed between the monitor and average SHS mass concentrations up to 150 µg/m³. Focus groups and interviews with intervention participants showed in-home use to be acceptable and feasible. The intervention was evaluated in 3 homes with combined baseline and intervention periods lasting 9 to 15 full days. Two families modified their behavior by opening windows or doors, smoking outdoors, or smoking less. We observed evidence of lower SHS levels in these homes. The remaining household voiced reluctance to changing their smoking activity and did not exhibit lower SHS levels in main smoking areas or clear behavior change; however, family members expressed receptivity to smoking outdoors. This study established the feasibility of the real-time intervention, laying the groundwork for controlled trials with larger sample sizes. Visual and auditory cues may prompt family members to take immediate action to reduce SHS levels. Dynamic graphs of SHS levels may help families make decisions about specific mitigation approaches.
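
    The core of the intervention described above is a feedback loop that fires auditory and visual signals when the particle concentration crosses defined thresholds. A hypothetical sketch of such a loop; the thresholds, the simulated sensor, and the alert text are invented, not the study's system:

    import random
    import time

    # Hypothetical sketch of threshold-triggered feedback; the thresholds, the
    # simulated sensor and the alert text are invented, not the study's system.
    WARN_UG_M3 = 15.0     # assumed "warning" fine-particle threshold, µg/m³
    ALARM_UG_M3 = 35.0    # assumed "alarm" threshold, µg/m³

    def read_particle_level() -> float:
        """Stand-in for the real-time monitor; returns a simulated reading."""
        return random.uniform(0.0, 60.0)

    for _ in range(6):                      # one reading per poll interval
        level = read_particle_level()
        if level >= ALARM_UG_M3:
            print(f"\a[ALARM] {level:5.1f} µg/m³")   # audible bell + strong visual cue
        elif level >= WARN_UG_M3:
            print(f"[WARN]  {level:5.1f} µg/m³")     # visual warning only
        time.sleep(0.5)                     # shortened poll interval for the demo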

  10. Virtual sensory feedback for gait improvement in neurological patients

    Directory of Open Access Journals (Sweden)

    Yoram eBaram

    2013-10-01

    Full Text Available We review a treatment modality for movement disorders by sensory feedback. The natural closed-loop sensory-motor feedback system is imitated by a wearable virtual reality apparatus, employing body-mounted inertial sensors and responding dynamically to the patient’s own motion. Clinical trials have shown a significant gait improvement in patients with Parkinson's disease using the apparatus. In contrast to open-loop devices, which impose constant-velocity visual cues in a treadmill fashion, or rhythmic auditory cues in a metronome fashion, requiring constant vigilance and attention strategies, and in some cases, instigating freezing in Parkinson’s patients, the closed-loop device improved gait parameters and eliminated freezing in most patients, without side effects. Patients with multiple sclerosis, previous stroke, senile gait and cerebral palsy using the device also improved their balance and gait substantially. Training with the device has produced a residual improvement, suggesting virtual sensory feedback for the treatment of neurological movement disorders.

  11. METHODS FOR QUALITY ENHANCEMENT OF USER VOICE SIGNAL IN VOICE AUTHENTICATION SYSTEMS

    Directory of Open Access Journals (Sweden)

    O. N. Faizulaieva

    2014-03-01

    Full Text Available The rationale for using the computer system user's voice in the authentication process is established. The scientific task of improving the signal-to-noise ratio of the user's voice signal in the authentication system is considered. The object of study is the process of input and output of the voice signal of the authentication system user in computer systems and networks. Methods and means for the input and extraction of the voice signal against external interference signals are researched. Methods for quality enhancement of the user's voice signal in voice authentication systems are suggested. As modern computer facilities, including mobile ones, have two-channel audio cards, the usage of two microphones is proposed for the voice signal input system of the authentication system. Meanwhile, the task of forming the microphone array's main lobe over the desired voice signal registration range (100 Hz to 8 kHz) is solved. The directional properties of the proposed microphone array reduce the influence of external interference signals by a factor of two to three in the frequency range from 4 to 8 kHz. The possibilities for implementation of space-time processing of the recorded signals using constant and adaptive weighting factors are investigated. The simulation results of the proposed system for input and extraction of signals during digital processing of narrowband signals are presented. The proposed solutions make it possible to improve the signal-to-noise ratio of the recorded useful signals by 10 to 20 dB under the influence of external interference signals in the frequency range from 4 to 8 kHz. The results may be useful to specialists working in the fields of voice recognition and speaker discrimination.
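
    One standard way to obtain the directional pick-up described above from a two-microphone array is delay-and-sum beamforming. A minimal sketch under assumed geometry (the spacing, sample rate, and steering angle are illustrative choices, not values from the record):

    import numpy as np

    # Minimal two-microphone delay-and-sum beamformer sketch. The spacing,
    # sample rate and steering angle are illustrative assumptions; np.roll's
    # wrap-around at the array edges is ignored here for brevity.
    FS = 16_000          # sample rate, Hz
    SPACING = 0.05       # distance between the two microphones, m
    C = 343.0            # speed of sound, m/s

    def delay_and_sum(mic0: np.ndarray, mic1: np.ndarray, angle_deg: float) -> np.ndarray:
        """Steer the pair toward angle_deg (0 = broadside) and sum coherently."""
        tau = SPACING * np.sin(np.deg2rad(angle_deg)) / C   # inter-mic delay, s
        shift = int(round(tau * FS))                        # delay in whole samples
        return 0.5 * (mic0 + np.roll(mic1, -shift))         # align mic1, then average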

  12. Effects of tailoring ingredients in auditory persuasive health messages on fruit and vegetable intake

    OpenAIRE

    Elbert, Sarah P.; Dijkstra, Arie; Rozema, Andrea

    2017-01-01

    Objective: Health messages can be tailored by applying different tailoring ingredients, among them personalisation, feedback and adaptation. This experiment investigated the separate effects of these tailoring ingredients on behaviour in auditory health persuasion. Furthermore, the moderating effect of self-efficacy was assessed. Design: The between-participants design consisted of four conditions. A generic health message served as a control condition; personalisation was applied using the r...

  13. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
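
    The agreement between modality conditions reported above was measured with Bland-Altman plots. A minimal sketch of the underlying computation, the bias and 95% limits of agreement between paired scores; the score vectors below are made up for illustration:

    import numpy as np

    def bland_altman(a: np.ndarray, b: np.ndarray):
        """Return the bias and 95% limits of agreement between paired scores."""
        diff = a - b
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Example with made-up memory scores for two modality conditions.
    auditory = np.array([8, 7, 9, 6, 8, 7], dtype=float)
    visual   = np.array([6, 6, 8, 5, 7, 6], dtype=float)
    bias, (lo, hi) = bland_altman(auditory, visual)
    print(f"bias={bias:.2f}, 95% LoA=({lo:.2f}, {hi:.2f})")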

  14. The right hemisphere supports but does not replace left hemisphere auditory function in patients with persisting aphasia.

    Science.gov (United States)

    Teki, Sundeep; Barnes, Gareth R; Penny, William D; Iverson, Paul; Woodhead, Zoe V J; Griffiths, Timothy D; Leff, Alexander P

    2013-06-01

    In this study, we used magnetoencephalography and a mismatch paradigm to investigate speech processing in stroke patients with auditory comprehension deficits and age-matched control subjects. We probed connectivity within and between the two temporal lobes in response to phonemic (different word) and acoustic (same word) oddballs using dynamic causal modelling. We found stronger modulation of self-connections as a function of phonemic differences for control subjects versus aphasics in left primary auditory cortex and bilateral superior temporal gyrus. The patients showed stronger modulation of connections from right primary auditory cortex to right superior temporal gyrus (feed-forward) and from left primary auditory cortex to right primary auditory cortex (interhemispheric). This differential connectivity can be explained on the basis of a predictive coding theory which suggests increased prediction error and decreased sensitivity to phonemic boundaries in the aphasics' speech network in both hemispheres. Within the aphasics, we also found behavioural correlates with connection strengths: a negative correlation between phonemic perception and an inter-hemispheric connection (left superior temporal gyrus to right superior temporal gyrus), and positive correlation between semantic performance and a feedback connection (right superior temporal gyrus to right primary auditory cortex). Our results suggest that aphasics with impaired speech comprehension have less veridical speech representations in both temporal lobes, and rely more on the right hemisphere auditory regions, particularly right superior temporal gyrus, for processing speech. Despite this presumed compensatory shift in network connectivity, the patients remain significantly impaired.

  15. Domestic dogs and puppies can use human voice direction referentially.

    Science.gov (United States)

    Rossano, Federico; Nitzschner, Marie; Tomasello, Michael

    2014-06-22

    Domestic dogs are particularly skilled at using human visual signals to locate hidden food. This is, to our knowledge, the first series of studies that investigates the ability of dogs to use only auditory communicative acts to locate hidden food. In a first study, from behind a barrier, a human expressed excitement towards a baited box on either the right or left side, while sitting closer to the unbaited box. Dogs were successful in following the human's voice direction and locating the food. In the two following control studies, we excluded the possibility that dogs could locate the box containing food just by relying on smell, and we showed that they would interpret a human's voice direction in a referential manner only when they could locate a possible referent (i.e. one of the boxes) in the environment. Finally, in a fourth study, we tested 8-14-week-old puppies in the main experimental test and found that those with a reasonable amount of human experience performed overall even better than the adult dogs. These results suggest that domestic dogs' skills in comprehending human communication are not based on visual cues alone, but are instead multi-modal and highly flexible. Moreover, the similarity between young and adult dogs' performances has important implications for the domestication hypothesis.

  16. The Role of Occupational Voice Demand and Patient-Rated Impairment in Predicting Voice Therapy Adherence.

    Science.gov (United States)

    Ebersole, Barbara; Soni, Resha S; Moran, Kathleen; Lango, Miriam; Devarajan, Karthik; Jamal, Nausheen

    2018-05-01

    Examine the relationship among the severity of patient-perceived voice impairment, perceptual dysphonia severity, occupational voice demand, and voice therapy adherence. Identify clinical predictors of increased risk for therapy nonadherence. A retrospective cohort study of patients presenting with a chief complaint of persistent dysphonia at an interdisciplinary voice center was done. The Voice Handicap Index-10 (VHI-10) and the Voice-Related Quality of Life (V-RQOL) survey scores, clinician rating of dysphonia severity using the Grade score from the Grade, Roughness, Breathiness, Asthenia, and Strain scale, occupational voice demand, and patient demographics were tested for associations with therapy adherence, defined as completion of the treatment plan. Classification and Regression Tree (CART) analysis was performed to establish thresholds for nonadherence risk. Of 166 patients evaluated, 111 were recommended for voice therapy. The therapy nonadherence rate was 56%. Occupational voice demand category, VHI-10, and V-RQOL scores were the only factors significantly correlated with therapy adherence. Patients with low occupational voice demand are significantly more likely to be nonadherent with therapy than those with high occupational voice demand. Occupational voice demand and patient perception of impairment are significantly and independently correlated with therapy adherence. A VHI-10 score of ≤9 or a V-RQOL score of >40 is a significant cutoff point for predicting nonadherence risk. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
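
    The CART analysis reported above finds score cutoffs that split patients into risk groups. A hypothetical scikit-learn sketch of how a depth-one tree recovers such a cutoff; the data are synthetic and constructed so that low perceived handicap predicts nonadherence, mimicking (not reproducing) the VHI-10 ≤ 9 threshold reported above:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical CART sketch; the data are invented, not the study's.
    rng = np.random.default_rng(0)
    vhi10 = rng.integers(0, 40, size=200).reshape(-1, 1)   # VHI-10 scores
    # Assume (for illustration) that low perceived handicap -> nonadherence.
    nonadherent = (vhi10.ravel() <= 9).astype(int)

    tree = DecisionTreeClassifier(max_depth=1).fit(vhi10, nonadherent)
    print(export_text(tree, feature_names=["VHI-10"]))  # recovers the <=9 split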

  17. Diagnostic value of voice acoustic analysis in assessment of occupational voice pathologies in teachers.

    Science.gov (United States)

    Niebudek-Bogusz, Ewa; Fiszer, Marta; Kotylo, Piotr; Sliwinska-Kowalska, Mariola

    2006-01-01

    It has been shown that teachers are at risk of developing occupational dysphonia, which accounts for over 25% of all occupational diseases diagnosed in Poland. The most frequently used method of diagnosing voice diseases is videostroboscopy. However, to facilitate objective evaluation of voice efficiency as well as medical certification of occupational voice disorders, it is crucial to implement quantitative methods of voice assessment, particularly voice acoustic analysis. The aim of the study was to assess the results of acoustic analysis in 66 female teachers (aged 40-64 years), including 35 subjects with occupational voice pathologies (e.g., vocal nodules) and 31 subjects with functional dysphonia. The acoustic analysis was performed using the IRIS software, before and after a 30-minute vocal loading test. All participants also underwent laryngological and videostroboscopic examinations. After the vocal effort, the acoustic parameters displayed statistically significant abnormalities, mostly a lowered fundamental frequency (F0) and abnormal values of shimmer and noise-to-harmonic ratio. To conclude, quantitative voice acoustic analysis using the IRIS software seems to be an effective complement to voice examinations, which is particularly helpful in diagnosing occupational dysphonia.
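
    The perturbation measures named above have simple cycle-level definitions: local jitter is the mean absolute difference of consecutive periods over the mean period, and local shimmer is the analogous quantity for cycle amplitudes. A minimal sketch with made-up cycle measurements (this is not the IRIS implementation):

    import numpy as np

    def local_jitter(periods_s: np.ndarray) -> float:
        """Mean absolute difference of consecutive periods over the mean period (%)."""
        return 100 * np.mean(np.abs(np.diff(periods_s))) / np.mean(periods_s)

    def local_shimmer(amps: np.ndarray) -> float:
        """Mean absolute difference of consecutive cycle amplitudes over the mean (%)."""
        return 100 * np.mean(np.abs(np.diff(amps))) / np.mean(amps)

    # Made-up cycle-by-cycle measurements for illustration.
    periods = np.array([5.0, 5.1, 4.9, 5.05, 5.0]) / 1000  # seconds per cycle
    amps = np.array([0.80, 0.78, 0.82, 0.79, 0.81])
    print(f"F0 ~ {1 / periods.mean():.1f} Hz, "
          f"jitter {local_jitter(periods):.2f}%, shimmer {local_shimmer(amps):.2f}%")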

  18. Objective voice parameters in Colombian school workers with healthy voices

    NARCIS (Netherlands)

    L.C. Cantor Cutiva (Lady Catherine); A. Burdorf (Alex)

    2015-01-01

    Objectives: To characterize the objective voice parameters among school workers, and to identify associated factors of three objective voice parameters, namely fundamental frequency, sound pressure level and maximum phonation time. Materials and methods: We conducted a cross-sectional

  19. Pedagogic Voice: Student Voice in Teaching and Engagement Pedagogies

    Science.gov (United States)

    Baroutsis, Aspa; McGregor, Glenda; Mills, Martin

    2016-01-01

    In this paper, we are concerned with the notion of "pedagogic voice" as it relates to the presence of student "voice" in teaching, learning and curriculum matters at an alternative, or second chance, school in Australia. This school draws upon many of the principles of democratic schooling via its utilisation of student voice…

  20. Effect of rhythmic auditory cueing on parkinsonian gait: A systematic review and meta-analysis.

    Science.gov (United States)

    Ghai, Shashank; Ghai, Ishan; Schmitz, Gerd; Effenberg, Alfred O

    2018-01-11

    The use of rhythmic auditory cueing to enhance gait performance in parkinsonian patients is an emerging area of interest. Different theories and underlying neurophysiological mechanisms have been suggested to account for the enhancement in motor performance. However, a consensus as to its effects based on the characteristics of effective stimuli and training dosage has still not been reached. A systematic review and meta-analysis was carried out to analyze the effects of different types of auditory feedback on gait and postural performance in patients affected by Parkinson's disease. Systematic identification of published literature was performed adhering to PRISMA guidelines, from inception until May 2017, on the online databases Web of Science, PEDro, EBSCO, MEDLINE, Cochrane, EMBASE and PROQUEST. Of 4204 records, 50 studies involving 1892 participants met our inclusion criteria. The analysis revealed an overall positive effect on gait velocity and stride length, and a negative effect on cadence, with application of auditory cueing. Neurophysiological mechanisms, training dosage, effects of higher information processing constraints, and use of cueing as an adjunct to medications are thoroughly discussed. The present review bridges gaps in the literature by suggesting the application of rhythmic auditory cueing in conventional rehabilitation approaches to enhance motor performance and quality of life in the parkinsonian community.

  1. Voice Savers for Music Teachers

    Science.gov (United States)

    Cookman, Starr

    2012-01-01

    Music teachers are in a class all their own when it comes to voice use. These elite vocal athletes require stamina, strength, and flexibility from their voices day in, day out for hours at a time. Voice rehabilitation clinics and research show that music education ranks high among the professionals most commonly affected by voice problems.…

  2. The Impact of Wireless Technology Feedback on Inventory Management at a Dairy Manufacturing Plant

    Science.gov (United States)

    Goomas, David T.

    2012-01-01

    Replacing the method of counting inventory from paper count sheets to that of wireless reliably reduced the elapsed time to complete a daily inventory of the storage cooler in a dairy manufacturing plant. The handheld computers delivered immediate prompts as well as auditory and visual feedback. Reducing the time to complete the daily inventory…

  3. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  4. Mechanics of human voice production and control.

    Science.gov (United States)

    Zhang, Zhaoyan

    2016-10-01

    As the primary means of communication, voice plays an important role in daily life. Voice also conveys personal information such as social status, personal traits, and the emotional state of the speaker. Mechanically, voice production involves complex fluid-structure interaction within the glottis and its control by laryngeal muscle activation. An important goal of voice research is to establish a causal theory linking voice physiology and biomechanics to how speakers use and control voice to communicate meaning and personal information. Establishing such a causal theory has important implications for clinical voice management, voice training, and many speech technology applications. This paper provides a review of voice physiology and biomechanics, the physics of vocal fold vibration and sound production, and laryngeal muscular control of the fundamental frequency of voice, vocal intensity, and voice quality. Current efforts to develop mechanical and computational models of voice production are also critically reviewed. Finally, issues and future challenges in developing a causal theory of voice production and perception are discussed.

  5. Multichannel auditory search: toward understanding control processes in polychotic auditory listening.

    Science.gov (United States)

    Lee, M D

    2001-01-01

    Two experiments are presented that serve as a framework for exploring auditory information processing. The framework is referred to as polychotic listening or auditory search, and it requires a listener to scan multiple simultaneous auditory streams for the appearance of a target word (the name of a letter such as A or M). Participants' ability to scan between two and six simultaneous auditory streams of letter and digit names for the name of a target letter was examined using six loudspeakers. The main independent variable was auditory load, or the number of active audio streams on a given trial. The primary dependent variables were target localization accuracy and reaction time. Results showed that as load increased, performance decreased. The performance decrease was evident in reaction time, accuracy, and sensitivity measures. The second study required participants to practice the same task for 10 sessions, for a total of 1800 trials. Results indicated that even with extensive practice, performance was still affected by auditory load. The present results are compared with findings in the visual search literature. The implications for the use of multiple auditory displays are discussed. Potential applications include cockpit and automobile warning displays, virtual reality systems, and training systems.
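
    The sensitivity measures mentioned above are conventionally computed in signal-detection terms as d' = z(hit rate) - z(false-alarm rate). A minimal sketch with made-up rates for a low-load and a high-load condition:

    from statistics import NormalDist

    def d_prime(hit_rate: float, fa_rate: float) -> float:
        """Signal-detection sensitivity: z(hit) - z(false alarm)."""
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    # Made-up rates illustrating a drop in sensitivity as load increases.
    print(f"2 streams: d' = {d_prime(0.95, 0.05):.2f}")
    print(f"6 streams: d' = {d_prime(0.70, 0.20):.2f}")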

  6. Focal Suppression of Distractor Sounds by Selective Attention in Auditory Cortex.

    Science.gov (United States)

    Schwartz, Zachary P; David, Stephen V

    2018-01-01

    Auditory selective attention is required for parsing crowded acoustic environments, but cortical systems mediating the influence of behavioral state on auditory perception are not well characterized. Previous neurophysiological studies suggest that attention produces a general enhancement of neural responses to important target sounds versus irrelevant distractors. However, behavioral studies suggest that in the presence of masking noise, attention provides a focal suppression of distractors that compete with targets. Here, we compared effects of attention on cortical responses to masking versus non-masking distractors, controlling for effects of listening effort and general task engagement. We recorded single-unit activity from primary auditory cortex (A1) of ferrets during behavior and found that selective attention decreased responses to distractors masking targets in the same spectral band, compared with spectrally distinct distractors. This suppression enhanced neural target detection thresholds, suggesting that limited attention resources serve to focally suppress responses to distractors that interfere with target detection. Changing effort by manipulating target salience consistently modulated spontaneous but not evoked activity. Task engagement and changing effort tended to affect the same neurons, while attention affected an independent population, suggesting that distinct feedback circuits mediate effects of attention and effort in A1. © The Author 2017. Published by Oxford University Press.

  7. You're a What? Voice Actor

    Science.gov (United States)

    Liming, Drew

    2009-01-01

    This article talks about voice actors and features Tony Oliver, a professional voice actor. Voice actors help to bring one's favorite cartoon and video game characters to life. They also do voice-overs for radio and television commercials and movie trailers. These actors use the sound of their voice to sell a character's emotions--or an advertised…

  8. Voice disorders in the workplace: productivity in spasmodic dysphonia and the impact of botulinum toxin.

    Science.gov (United States)

    Meyer, Tanya K; Hu, Amanda; Hillel, Allen D

    2013-11-01

    The impact of the disordered voice on standard work productivity measures and employment trends is difficult to quantify; this is in large part due to the heterogeneity of the disease processes. Spasmodic dysphonia (SD), a chronic voice disorder, may be a useful model to study this impact. Self-reported work measures (work missed, work impairment, overall work productivity, and activity impairment) were studied among patients receiving botulinum toxin (BTX) treatments for SD. It was hypothesized that there would be a substantial difference in work-related measures between the best and worst voicing periods. In addition, job types, employment shifts, and vocal requirements during the course of vocal disability from SD were investigated for each individual, and the impact of SD on these patterns was studied. A total of 145 patients with SD, either adductor or abductor, who were established in routine therapeutic BTX injections agreed to participate in a self-administered questionnaire study. Seventy-two participants were currently working and provided highly detailed information on work-related measures. Their answers characterized the effect of SD on their employment status, productivity at work, activity impairment outside of work, employment retention or change, and whether the individual perceived that BTX therapy affected these measures. Patients were asked to complete the Work Productivity and Activity Impairment (WPAI) instrument to determine these measures for their best and worst voicing weeks over the duration since their previous BTX injection. Voice-specific quality of life instruments (Voice Handicap Index-10) and perceptual assessments (Consensus Auditory Perceptual Evaluation of Voice) were elicited to provide correlations of work measures with patient-perceived voice handicap and clinician-perceived voice quality. Cross-sectional analysis using a self-administered questionnaire. A total of 108 patients reported ever working during their diagnosis and

  9. Voice - How humans communicate?

    Science.gov (United States)

    Tiwari, Manjul; Tiwari, Maneesha

    2012-01-01

    Voices are important things for humans. They are the medium through which we do a lot of communicating with the outside world: our ideas, of course, and also our emotions and our personality. The voice is the very emblem of the speaker, indelibly woven into the fabric of speech. In this sense, each of our utterances of spoken language carries not only its own message but also, through accent, tone of voice and habitual voice quality, an audible declaration of our membership of particular social and regional groups, of our individual physical and psychological identity, and of our momentary mood. Voices are also one of the media through which we (successfully, most of the time) recognize other humans who are important to us: members of our family, media personalities, our friends, and enemies. Although evidence from DNA analysis is potentially vastly more eloquent in its power than evidence from voices, DNA cannot talk. It cannot be recorded planning, carrying out or confessing to a crime. It cannot be so apparently directly incriminating. As will quickly become evident, voices are extremely complex things, and some of the inherent limitations of the forensic-phonetic method are in part a consequence of the interaction between their complexity and the real world in which they are used. It is one of the aims of this article to explain how this comes about. This subject has unsolved questions, but there is no direct way to present the information that is necessary to understand how voices can be related, or not, to their owners.

  10. Diminished auditory sensory gating during active auditory verbal hallucinations.

    Science.gov (United States)

    Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia

    2017-10-01

    Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the ratio of the peak amplitude of the event related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream; resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). The results of a mixed model ANOVA revealed an overall effect for AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically-defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
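
    The gating ratio defined above (the S2 peak amplitude divided by the S1 peak amplitude) is straightforward to compute once component peaks are located. A minimal sketch; the P50-like 40-80 ms latency window and the synthetic ERP are illustrative assumptions, not the study's pipeline:

    import numpy as np

    # Minimal sketch of a paired-click gating ratio. Real pipelines pick peaks
    # inside component-specific latency windows; the 40-80 ms window and the
    # synthetic ERPs below are illustrative assumptions only.
    def peak_amp(erp: np.ndarray, fs: int, t0: float, t1: float) -> float:
        """Absolute peak amplitude inside the latency window [t0, t1] seconds."""
        return float(np.max(np.abs(erp[int(t0 * fs):int(t1 * fs)])))

    def gating_ratio(erp_s1: np.ndarray, erp_s2: np.ndarray, fs: int = 1000,
                     window: tuple = (0.04, 0.08)) -> float:
        """S2 peak / S1 peak; higher ratios mean weaker suppression of S2."""
        return peak_amp(erp_s2, fs, *window) / peak_amp(erp_s1, fs, *window)

    # Synthetic example: the response to the second click is half-suppressed.
    t = np.arange(0, 0.2, 0.001)
    s1 = np.sin(2 * np.pi * 10 * t) * np.exp(-((t - 0.06) / 0.01) ** 2)
    print(f"gating ratio = {gating_ratio(s1, 0.5 * s1):.2f}")   # -> 0.50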

  11. Effectiveness of auditory and tactile crossmodal cues in a dual-task visual and auditory scenario.

    Science.gov (United States)

    Hopkins, Kevin; Kass, Steven J; Blalock, Lisa Durrance; Brill, J Christopher

    2017-05-01

    In this study, we examined how spatially informative auditory and tactile cues affected participants' performance on a visual search task while they simultaneously performed a secondary auditory task. Visual search task performance was assessed via reaction time and accuracy. Tactile and auditory cues provided the approximate location of the visual target within the search display. The inclusion of tactile and auditory cues improved performance in comparison to the no-cue baseline conditions. In comparison to the no-cue conditions, both tactile and auditory cues resulted in faster response times in the visual search only (single task) and visual-auditory (dual-task) conditions. However, the effectiveness of auditory and tactile cueing for visual task accuracy was shown to be dependent on task-type condition. Crossmodal cueing remains a viable strategy for improving task performance without increasing attentional load within a singular sensory modality. Practitioner Summary: Crossmodal cueing with dual-task performance has not been widely explored, yet has practical applications. We examined the effects of auditory and tactile crossmodal cues on visual search performance, with and without a secondary auditory task. Tactile cues aided visual search accuracy when also engaged in a secondary auditory task, whereas auditory cues did not.

  12. Integrating cues of social interest and voice pitch in men's preferences for women's voices

    OpenAIRE

    Jones, Benedict C; Feinberg, David R; DeBruine, Lisa M; Little, Anthony C; Vukovic, Jovana

    2008-01-01

    Most previous studies of vocal attractiveness have focused on preferences for physical characteristics of voices such as pitch. Here we examine the content of vocalizations in interaction with such physical traits, finding that vocal cues of social interest modulate the strength of men's preferences for raised pitch in women's voices. Men showed stronger preferences for raised pitch when judging the voices of women who appeared interested in the listener than when judging the voices of women ...

  13. Exploring the Impact of Role-Playing on Peer Feedback in an Online Case-Based Learning Activity

    Directory of Open Access Journals (Sweden)

    Yu-Hui Ching

    2014-07-01

    Full Text Available This study explored the impact of role-playing on the quality of peer feedback and learners’ perception of this strategy in a case-based learning activity with VoiceThread in an online course. The findings revealed a potential positive impact of role-playing on learners’ generation of constructive feedback, as role-playing was associated with a higher frequency of problem identification in the peer comments. Sixty percent of learners perceived the role-play strategy as useful in assisting them to compose and provide meaningful feedback. Multiple motivations drove learners' decisions on role choice when responding to their peers, mostly for the benefit of peers. Finally, 90% of learners reported the peer feedback to be useful or somewhat useful. Based on the findings of this study, we discuss educational and instructional design implications and future directions for research using the role-play strategy to enhance peer feedback activities.

  14. Bihippocampal damage with emotional dysfunction: impaired auditory recognition of fear.

    Science.gov (United States)

    Ghika-Schmid, F; Ghika, J; Vuilleumier, P; Assal, G; Vuadens, P; Scherer, K; Maeder, P; Uske, A; Bogousslavsky, J

    1997-01-01

    A right-handed man developed a sudden, transient amnestic syndrome associated with bilateral hemorrhage of the hippocampi, probably due to Urbach-Wiethe disease. In the 3rd month, despite significant hippocampal structural damage on imaging, only a milder degree of retrograde and anterograde amnesia persisted on detailed neuropsychological examination. On systematic testing of recognition of facial and vocal expression of emotion, we found an impairment of the vocal perception of fear, but not that of other emotions, such as joy, sadness and anger. Such selective impairment of fear perception was not present in the recognition of facial expression of emotion. Thus emotional perception varies according to the different aspects of emotions and the different modality of presentation (faces versus voices). This is consistent with the idea that there may be multiple emotion systems. The study of emotional perception in this unique case of bilateral involvement of the hippocampus suggests that this structure may play a critical role in the recognition of fear in vocal expression, possibly dissociated from that of other emotions and from that of fear in facial expression. In view of recent data suggesting that the amygdala plays a role in the recognition of fear in the auditory as well as the visual modality, this could suggest that the hippocampus may be part of the auditory pathway of fear recognition.

  15. Influence of classroom acoustics on the voice levels of teachers with and without voice problems: a field study

    DEFF Research Database (Denmark)

    Pelegrin Garcia, David; Lyberg-Åhlander, Viveka; Rydell, Roland

    2010-01-01

    The assessment of the voice problems was made with a questionnaire and a laryngological examination. During teaching, the sound pressure level at the teacher’s position was monitored. The teacher’s voice level and the activity noise level were separated using mixed Gaussians. In addition, the objective acoustic parameters of Reverberation Time and Voice Support were measured in the 30 empty classrooms of the study. An empirical model shows that the measured voice levels depended on the activity noise levels and the voice support. Teachers with and without voice problems were differently affected by the voice support of the classroom. The results thus suggest that teachers with voice problems are more aware of classroom acoustic conditions than their healthy colleagues and make use of the more supportive rooms to lower their voice levels. This behavior may result from an adaptation process of the teachers with voice problems.
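
    The separation of teacher voice levels from activity noise levels with mixed Gaussians, as described above, can be illustrated by fitting a two-component Gaussian mixture to sound-level samples. A sketch with synthetic data (the level distributions are invented, not the study's measurements):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Sketch of separating teacher-voice and activity-noise levels by fitting a
    # two-component Gaussian mixture to sound-level samples (synthetic data).
    rng = np.random.default_rng(1)
    noise = rng.normal(60, 3, 2000)   # assumed activity-noise levels, dB
    voice = rng.normal(75, 4, 1000)   # assumed teacher-voice levels, dB
    spl = np.concatenate([noise, voice]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(spl)
    means = sorted(gmm.means_.ravel())
    print(f"noise ~ {means[0]:.1f} dB, voice ~ {means[1]:.1f} dB")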

  16. The Sense of Agency Is More Sensitive to Manipulations of Outcome than Movement-Related Feedback Irrespective of Sensory Modality.

    Directory of Open Access Journals (Sweden)

    Nicole David

    Full Text Available The sense of agency describes the ability to experience oneself as the agent of one's own actions. Previous studies of the sense of agency manipulated the predicted sensory feedback related either to movement execution or to the movement's outcome, for example by delaying the movement of a virtual hand or the onset of a tone that resulted from a button press. Such temporal sensorimotor discrepancies reduce the sense of agency. It remains unclear whether movement-related feedback is processed differently than outcome-related feedback in terms of agency experience, especially if these types of feedback differ with respect to sensory modality. We employed a mixed-reality setup, in which participants tracked their finger movements by means of a virtual hand. They performed a single tap, which elicited a sound. The temporal contingency between the participants' finger movements and (i) the movement of the virtual hand or (ii) the expected auditory outcome was systematically varied. In a visual control experiment, the tap elicited a visual outcome. For each feedback type and participant, changes in the sense of agency were quantified using a forced-choice paradigm and the Method of Constant Stimuli. Participants were more sensitive to delays of outcome than to delays of movement execution. This effect was very similar for visual or auditory outcome delays. Our results indicate different contributions of movement- versus outcome-related sensory feedback to the sense of agency, irrespective of the modality of the outcome. We propose that this differential sensitivity reflects the behavioral importance of assessing authorship of the outcome of an action.
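
    The forced-choice paradigm with the Method of Constant Stimuli used above is typically analyzed by fitting a psychometric function to the proportion of "delayed" judgements across delay levels. A minimal sketch with invented responses, fitting a cumulative Gaussian whose mean gives the detection threshold:

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Sketch of a Method of Constant Stimuli fit: proportion of "delayed"
    # judgements as a cumulative Gaussian of feedback delay. The delay levels
    # and response proportions are invented for illustration.
    delays_ms = np.array([0, 100, 200, 300, 400, 500], dtype=float)
    p_detected = np.array([0.05, 0.10, 0.35, 0.70, 0.90, 0.97])

    def cum_gauss(x, mu, sigma):
        return norm.cdf(x, loc=mu, scale=sigma)

    (mu, sigma), _ = curve_fit(cum_gauss, delays_ms, p_detected, p0=[250, 100])
    print(f"detection threshold (PSE) ~ {mu:.0f} ms, slope sigma ~ {sigma:.0f} ms")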

  17. Effects of tailoring ingredients in auditory persuasive health messages on fruit and vegetable intake.

    Science.gov (United States)

    Elbert, Sarah P; Dijkstra, Arie; Rozema, Andrea D

    2017-07-01

    Health messages can be tailored by applying different tailoring ingredients, among them personalisation, feedback and adaptation. This experiment investigated the separate effects of these tailoring ingredients on behaviour in auditory health persuasion. Furthermore, the moderating effect of self-efficacy was assessed. The between-participants design consisted of four conditions. A generic health message served as a control condition; personalisation was applied using the recipient's first name, feedback was given on the personal state, or the message was adapted to the recipient's value. The study consisted of a pre-test questionnaire (measuring fruit and vegetable intake and perceived difficulty of performing these behaviours, indicating self-efficacy), exposure to the auditory message and a follow-up questionnaire measuring fruit and vegetable intake two weeks after message exposure (n = 112). ANCOVAs showed no main effect of condition on either fruit or vegetable intake, but a moderation was found on vegetable intake: When self-efficacy was low, vegetable intake was higher after listening to the personalisation message. No significant differences between the conditions were found when self-efficacy was high. Individuals with low self-efficacy seemed to benefit from incorporating personalisation, but only regarding vegetable consumption. This finding warrants further investigation in tailoring research.

  18. Attending to auditory memory.

    Science.gov (United States)

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli with only few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Understanding the 'Anorexic Voice' in Anorexia Nervosa.

    Science.gov (United States)

    Pugh, Matthew; Waller, Glenn

    2017-05-01

    In common with individuals experiencing a number of disorders, people with anorexia nervosa report experiencing an internal 'voice'. The anorexic voice comments on the individual's eating, weight and shape and instructs the individual to restrict or compensate. However, the core characteristics of the anorexic voice are not known. This study aimed to develop a parsimonious model of the voice characteristics that are related to key features of eating disorder pathology and to determine whether patients with anorexia nervosa fall into groups with different voice experiences. The participants were 49 women with full diagnoses of anorexia nervosa. Each completed validated measures of the power and nature of their voice experience and of their responses to the voice. Different voice characteristics were associated with current body mass index, duration of disorder and eating cognitions. Two subgroups emerged, with 'weaker' and 'stronger' voice experiences. Those with stronger voices were characterized by having more negative eating attitudes, more severe compensatory behaviours, a longer duration of illness and a greater likelihood of having the binge-purge subtype of anorexia nervosa. The findings indicate that the anorexic voice is an important element of the psychopathology of anorexia nervosa. Addressing the anorexic voice might be helpful in enhancing outcomes of treatments for anorexia nervosa, but that conclusion might apply only to patients with more severe eating psychopathology. Copyright © 2016 John Wiley & Sons, Ltd. Experiences of an internal 'anorexic voice' are common in anorexia nervosa. Clinicians should consider the role of the voice when formulating eating pathology in anorexia nervosa, including how individuals perceive and relate to that voice. Addressing the voice may be beneficial, particularly in more severe and enduring forms of anorexia nervosa. When working with the voice, clinicians should aim to address both the content of the voice and how

  20. The ironies of vehicle feedback in car design.

    Science.gov (United States)

    Walker, Guy H; Stanton, Neville A; Young, Mark S

    2006-02-10

    Car drivers show an acute sensitivity towards vehicle feedback, with most normal drivers able to detect 'the difference in vehicle feel of a medium-size saloon car with and without a fairly heavy passenger in the rear seat' (Joy and Hartley 1953-54). The irony is that this level of sensitivity stands in contrast to the significant changes in vehicle 'feel' accompanying modern trends in automotive design, such as drive-by-wire and increased automation. The aim of this paper is to move the debate from the anecdotal to the scientific level. This is achieved by using the Brunel University driving simulator to replicate some of these trends and changes by presenting (or removing) different forms of non-visual vehicle feedback, and measuring resultant driver situational awareness (SA) using a probe-recall method. The findings confirm that vehicle feedback plays a key role in coupling the driver to the dynamics of their environment (Moray 2004), with the role of auditory feedback particularly prominent. As a contrast, drivers in the study also rated their self-perceived levels of SA and a concerning dissociation occurred between the two sets of results. Despite the large changes in vehicle feedback presented in the simulator, and the measured changes in SA, drivers appeared to have little self-awareness of these changes. Most worryingly, drivers demonstrated little awareness of diminished SA. The issues surrounding vehicle feedback are therefore similar to the classic problems and ironies studied in aviation and automation, and highlight the role that ergonomics can also play within the domain of contemporary vehicle design.

  1. Voice application development for Android

    CERN Document Server

    McTear, Michael

    2013-01-01

    This book will give beginners an introduction to building voice-based applications on Android. It will begin by covering the basic concepts and will build up to creating a voice-based personal assistant. By the end of this book, you should be in a position to create your own voice-based applications on Android from scratch in next to no time.Voice Application Development for Android is for all those who are interested in speech technology and for those who, as owners of Android devices, are keen to experiment with developing voice apps for their devices. It will also be useful as a starting po

  2. DolphinAttack: Inaudible Voice Commands

    OpenAIRE

    Zhang, Guoming; Yan, Chen; Ji, Xiaoyu; Zhang, Taimin; Zhang, Tianchen; Xu, Wenyuan

    2017-01-01

    Speech recognition (SR) systems such as Siri or Google Now have become an increasingly popular human-computer interaction method, and have turned various systems into voice controllable systems(VCS). Prior work on attacking VCS shows that the hidden voice commands that are incomprehensible to people can control the systems. Hidden voice commands, though hidden, are nonetheless audible. In this work, we design a completely inaudible attack, DolphinAttack, that modulates voice commands on ultra...
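
    The core signal operation described here, riding a voice command on an ultrasonic carrier so that nonlinearity in the receiving microphone circuitry demodulates it, is ordinary amplitude modulation. A minimal sketch follows; the 25 kHz carrier, the 192 kHz sampling rate, and the synthetic stand-in "command" are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def am_modulate(voice, fs, carrier_hz=25_000.0, depth=1.0):
    """Amplitude-modulate a baseband voice command onto an ultrasonic
    carrier (the core signal step of an inaudible-command attack).
    `voice` is a float array in [-1, 1]; carrier and depth are illustrative."""
    t = np.arange(len(voice)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Standard double-sideband AM: (1 + m(t)) * cos(2*pi*fc*t)
    return (1.0 + depth * voice) * carrier

# Toy usage: a 1 kHz tone standing in for a recorded voice command,
# sampled fast enough (192 kHz) to represent the ultrasonic carrier.
fs = 192_000
t = np.arange(int(0.1 * fs)) / fs
command = 0.5 * np.sin(2 * np.pi * 1000 * t)
inaudible = am_modulate(command, fs)
```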

  3. The effectiveness of familiar auditory stimulus on hospitalized neonates' physiologic responses to procedural pain.

    Science.gov (United States)

    Azarmnejad, Elham; Sarhangi, Forogh; Javadi, Mahrooz; Rejeh, Nahid; Amirsalari, Susan; Tadrisi, Seyed Davood

    2017-06-01

    Hospitalized neonates usually undergo different painful procedures. This study sought to test the effects of a familiar auditory stimulus on the physiologic responses to pain of venipuncture among neonates in intensive care unit. The study design is quasi-experimental. The randomized clinical trial study was done on 60 full-term neonates admitted to the neonatal intensive care unit between March 20 and June 20, 2014. The neonates were conveniently selected and randomly allocated to the control and the experimental groups. Recorded maternal voice was played for the neonates in the experimental group from 10 minutes before to 10 minutes after venipuncture while the neonates in the control group received no sound therapy intervention. The participants' physiologic parameters were assessed 10 minutes before, during, and after venipuncture. At baseline, the study groups did not differ significantly regarding the intended physiologic parameters (P > .05). During venipuncture, maternal voice was effective in reducing the neonates' heart rate, respiratory rate, and diastolic blood pressure (P < .05). Nurses can therefore use familiar sounds to effectively manage neonates' physiologic responses to the procedural pain of venipuncture. © 2017 John Wiley & Sons Australia, Ltd.

  4. Predictive coding of visual-auditory and motor-auditory events: An electrophysiological study.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2015-11-11

    The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials are also attenuated when a sound is accompanied by a video of anticipatory visual motion that reliably predicts the sound. Here, we investigated whether the neural underpinnings of prediction of upcoming auditory stimuli are similar for motor-auditory (MA) and visual-auditory (VA) events using a stimulus omission paradigm. In the MA condition, a finger tap triggered the sound of a handclap whereas in the VA condition the same sound was accompanied by a video showing the handclap. In both conditions, the auditory stimulus was omitted in either 50% or 12% of the trials. These auditory omissions induced early and mid-latency ERP components (oN1 and oN2, presumably reflecting prediction and prediction error), and subsequent higher-order error evaluation processes. The oN1 and oN2 of MA and VA were alike in amplitude, topography, and neural sources despite that the origin of the prediction stems from different brain areas (motor versus visual cortex). This suggests that MA and VA predictions activate a sensory template of the sound in auditory cortex. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Voice-to-Phoneme Conversion Algorithms for Voice-Tag Applications in Embedded Platforms

    Directory of Open Access Journals (Sweden)

    Yan Ming Cheng

    2008-08-01

    Full Text Available We describe two voice-to-phoneme conversion algorithms for speaker-independent voice-tag creation specifically targeted at applications on embedded platforms. These algorithms (batch mode and sequential are compared in speech recognition experiments where they are first applied in a same-language context in which both acoustic model training and voice-tag creation and application are performed on the same language. Then, their performance is tested in a cross-language setting where the acoustic models are trained on a particular source language while the voice-tags are created and applied on a different target language. In the same-language environment, both algorithms either perform comparably to or significantly better than the baseline where utterances are manually transcribed by a phonetician. In the cross-language context, the voice-tag performances vary depending on the source-target language pair, with the variation reflecting predicted phonological similarity between the source and target languages. Among the most similar languages, performance nears that of the native-trained models and surpasses the native reference baseline.

  6. Development of the auditory system

    Science.gov (United States)

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  7. Animal models for auditory streaming

    Science.gov (United States)

    Itatani, Naoya

    2017-01-01

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044022

  8. Risk factors for voice problems in teachers.

    NARCIS (Netherlands)

    Kooijman, P.G.C.; Jong, F.I.C.R.S. de; Thomas, G.; Huinck, W.J.; Donders, A.R.T.; Graamans, K.; Schutte, H.K.

    2006-01-01

    In order to identify factors that are associated with voice problems and voice-related absenteeism in teachers, 1,878 questionnaires were analysed. The questionnaires inquired about personal data, voice complaints, voice-related absenteeism from work and conditions that may lead to voice complaints

  10. Gender by assertiveness interaction in delayed auditory feedback.

    Science.gov (United States)

    Elias, J W; Rosenzweig, C M; Dippel, R L

    1981-04-01

    The College Self-Expression and the Marlowe-Crowne Social Desirability Scales were given to 144 undergraduates. High (n = 20; 10 M, 10 F) and Low (n = 20; 10 M, 10 F) Assertiveness Ss were given a DAF test with a 'Phonic Mirror' and the Stroop test (naming the color of a word printed in a different color). DAF performance did not differ among the 4 subgroups (M and F, High and Low Assertiveness), except that Low Assertiveness women showed significantly greater DAF interference than the other subgroups. There was no significant correlation between the continuous interference of the DAF and the discontinuous interference of the Stroop test. The difference may reside in the time available, and the consequent reduction in anxiety, before the next stimulus in the Stroop test. These data show that, under certain circumstances, personality factors such as assertiveness can interact with gender to affect speech fluency and production. The ability to overcome feedback-related disfluencies in speech may be partially aided by improvement in self-concept or specific training in such behaviors as assertiveness, and this may be more important for females than males.

  11. Voice quality in relation to voice complaints and vocal fold condition during the screening of female student teachers.

    Science.gov (United States)

    Meulenbroek, Leo F P; de Jong, Felix I C R S

    2011-07-01

    The purpose of this study was to compare the perceptual examination of voice quality with the condition of the vocal folds and voice complaints during voice screening in female student teachers. This research was a cross-sectional study in 214 starting student teachers using the four-point grade scale of the GRBAS and laryngostroboscopic assessment of the vocal folds. The voice quality was assessed by speech pathologists using the ordinal 4-point G-scale (overall dysphonia) of the GRBAS method in a running speech sample. Glottal closure and vocal fold lesions were recorded. A questionnaire was used for assessing voice complaints. More students with an insufficient glottal closure (89%) were rated dysphonic compared with students with sufficient glottal closure (80%). Students with sufficient glottal closure had a significantly lower mean G-score (1.21) compared with the group with insufficient glottal closure (1.52) (P = 0.038). This study also showed that a larger percentage of students with vocal fold lesions (96%) were labeled as having a dysphonic voice compared with students with no vocal fold lesions (81%). Students with no vocal fold lesions had a significantly lower mean G-score (1.20) compared with the group with vocal fold lesions (2.05) (P = 0.002). A dysphonic voice (G≥1) was rated in 76% of the students without voice complaints compared with 86% of the students with voice complaints. Students with no voice complaints had a lower mean G-score (1.07) compared with the group with voice complaints (1.41) (P = 0.090). The present study showed that perceptual assessment of the voice and voice complaints is not sufficient to check whether the future professional is at risk. Therefore, preventive measures are needed to detect students at risk early in their education and this depends on broader assessment: on the one hand, assessing voice quality and voice complaints and on the other hand, examination of the vocal folds of all starting students. Copyright © 2011 The Voice Foundation.

  12. Moderate evidence for a Lombard effect in a phylogenetically basal primate

    Directory of Open Access Journals (Sweden)

    Christian Schopf

    2016-08-01

    Full Text Available When exposed to enhanced background noise, humans avoid signal masking by increasing the amplitude of the voice, a phenomenon termed the Lombard effect. This auditory feedback-mediated voice control has also been found in monkeys, bats, cetaceans, fish and some frogs and birds. We studied the Lombard effect for the first time in a phylogenetically basal primate, the grey mouse lemur, Microcebus murinus. When background noise was increased, mouse lemurs were able to raise the amplitude of the voice, comparable to monkeys, but they did not show this effect consistently across contexts and individuals. The Lombard effect, even if representing a generic vocal communication system property of mammals, may thus be affected by more complex mechanisms. The present findings emphasize effects of context and individual, and the need for further standardized approaches to disentangle the multiple system properties of mammalian vocal communication, important for understanding the evolution of the unique human faculty of speech and language.

  13. Improvement of auditory hallucinations and reduction of primary auditory area's activation following TMS

    International Nuclear Information System (INIS)

    Giesel, Frederik L.; Mehndiratta, Amit; Hempel, Albrecht; Hempel, Eckhard; Kress, Kai R.; Essig, Marco; Schröder, Johannes

    2012-01-01

    Background: In the present case study, improvement of auditory hallucinations following transcranial magnetic stimulation (TMS) therapy was investigated with respect to activation changes of the auditory cortices. Methods: Using functional magnetic resonance imaging (fMRI), activation of the auditory cortices was assessed prior to and after a 4-week TMS series of the left superior temporal gyrus in a schizophrenic patient with medication-resistant auditory hallucinations. Results: Hallucinations decreased slightly after the third and profoundly after the fourth week of TMS. Activation in the primary auditory area decreased, whereas activation in the operculum and insula remained stable. Conclusions: Combination of TMS and repetitive fMRI is promising to elucidate the physiological changes induced by TMS.

  14. VOICE QUALITY BEFORE AND AFTER THYROIDECTOMY

    Directory of Open Access Journals (Sweden)

    Dora CVELBAR

    2016-04-01

    Full Text Available Introduction: Voice disorders are a well-known complication often associated with thyroid gland diseases, and because voice is still the basic means of communication, it is very important to maintain a healthy voice quality. Objectives: The aim of this study referred to questions whether there is a statistically significant difference between results of voice self-assessment, perceptual voice assessment and acoustic voice analysis before and after thyroidectomy and whether there are statistically significant correlations between variables of voice self-assessment, perceptual assessment and acoustic analysis before and after thyroidectomy. Methods: This scientific research included 12 participants aged between 41 and 76. Voice self-assessment was conducted with the help of the Croatian version of the Voice Handicap Index (VHI. Recorded reading samples were used for perceptual assessment and later evaluated by two clinical speech and language therapists. Recorded samples of phonation were used for acoustic analysis which was conducted with the help of the acoustic program Praat. All of the data was processed through descriptive statistics and nonparametric statistical methods. Results: Results showed that there are statistically significant differences between results of voice self-assessments and results of acoustic analysis before and after thyroidectomy. Statistically significant correlations were found between variables of perceptual assessment and acoustic analysis. Conclusion: Obtained results indicate the importance of multidimensional, preoperative and postoperative assessment. This kind of assessment allows the clinician to describe all of the voice features and provides an appropriate recommendation for further rehabilitation to the patient in order to optimize voice outcomes.

  15. A qualitative study on feedback provided by students in nurse education.

    Science.gov (United States)

    Chan, Zenobia C Y; Stanley, David John; Meadus, Robert J; Chien, Wai Tong

    2017-08-01

    This study aims to help nurse educators/academics understand the perspectives and expectations of students providing their feedback to educators about teaching performance and subject quality. The aim of this study is to reveal students' voices regarding their feedback in nurse education in order to shed light on how the current student feedback practice may be modified. A qualitative study using focus group inquiry. Convenience sampling was adopted and participants recruited from one school of nursing in Hong Kong. A total of 66 nursing students from two pre-registration programs were recruited for seven focus group interviews: one group of Year 1 students (n=21), two groups of Year 3 students (n=27), and four groups of Final Year students (n=18). The interviews were guided by a semi-structured interview guideline and the interview narratives were processed through content analysis. The trustworthiness of this study was guaranteed through peer checking, research meetings, and an audit trail. The participants' privacy was protected throughout the study. Four core themes were discerned based on the narratives of the focus group interviews: (1) "timing of collecting feedback at more than one time point"; (2) "modify the questions being asked in collecting student feedback"; (3) "are electronic means of collecting feedback good enough?"; and (4) "what will be next for student feedback?". This study is significant in the following three domains: 1) it contributed to student feedback because it examined the issue from a student's perspective; 2) it explored the timing and channels for collecting feedback from the students' point of view; and 3) it showed the preferred uses of student feedback. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  16. Aerodynamic and sound intensity measurements in tracheoesophageal voice

    NARCIS (Netherlands)

    Grolman, Wilko; Eerenstein, Simone E. J.; Tan, Frédérique M. L.; Tange, Rinze A.; Schouwenburg, Paul F.

    2007-01-01

    BACKGROUND: In laryngectomized patients, tracheoesophageal voice generally provides a better voice quality than esophageal voice. Understanding the aerodynamics of voice production in patients with a voice prosthesis is important for optimizing prosthetic designs and successful voice rehabilitation.

  17. Crossing Cultures with Multi-Voiced Journals

    Science.gov (United States)

    Styslinger, Mary E.; Whisenant, Alison

    2004-01-01

    In this article, the authors discuss the benefits of using multi-voiced journals as a teaching strategy in reading instruction. Multi-voiced journals, an adaptation of dual-voiced journals, encourage responses to reading in varied, cultured voices of characters. It is similar to reading journals in that they prod students to connect to the lives…

  18. [Applicability of Voice Handicap Index to the evaluation of voice therapy effectiveness in teachers].

    Science.gov (United States)

    Niebudek-Bogusz, Ewa; Kuzańska, Anna; Błoch, Piotr; Domańska, Maja; Woźnicka, Ewelina; Politański, Piotr; Sliwińska-Kowalska, Mariola

    2007-01-01

    The aim of this study was to assess the applicability of the Voice Handicap Index (VHI) to the evaluation of effectiveness of functional voice disorders treatment in teachers. The subjects were 45 female teachers with functional dysphonia who evaluated their voice problems according to the subjective VHI scale before and after phoniatric management. Group I (29 patients) were subjected to vocal training, whereas group II (16 patients) received only voice hygiene instructions. The results demonstrated that differences in the mean VHI score before and after phoniatric treatment were significantly higher in group I than in group II (p < 0.05), suggesting that the VHI is a useful tool for evaluating the effectiveness of voice therapy in teachers' dysphonia.

  19. Voice parameters and videonasolaryngoscopy in children with vocal nodules: a longitudinal study, before and after voice therapy.

    Science.gov (United States)

    Valadez, Victor; Ysunza, Antonio; Ocharan-Hernandez, Esther; Garrido-Bustamante, Norma; Sanchez-Valerio, Araceli; Pamplona, Ma C

    2012-09-01

    Vocal Nodules (VN) are a functional voice disorder associated with voice misuse and abuse in children. There are few reports addressing vocal parameters in children with VN, especially after a period of vocal rehabilitation. The purpose of this study is to describe measurements of vocal parameters including Fundamental Frequency (FF), Shimmer (S), and Jitter (J), videonasolaryngoscopy examination and clinical perceptual assessment, before and after voice therapy in children with VN. Voice therapy was provided using visual support through Speech-Viewer software. Twenty patients with VN were studied. An acoustical analysis of voice was performed and compared with data from subjects from a control group matched by age and gender. Also, clinical perceptual assessment of voice and videonasolaryngoscopy were performed on all patients with VN. After a period of voice therapy, provided with visual support using Speech Viewer-III (SV-III-IBM) software, new acoustical analyses, perceptual assessments and videonasolaryngoscopies were performed. Before the onset of voice therapy, there was a significant difference (p < 0.05) in vocal parameters between patients and controls. After the voice therapy period, a significant improvement (p < 0.05) was observed, and vocal nodules were no longer discernible on the vocal folds in any of the cases. SV-III software seems to be a safe and reliable method for providing voice therapy in children with VN. Acoustic voice parameters, perceptual data and videonasolaryngoscopy were significantly improved after the speech therapy period was completed. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
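
    For readers unfamiliar with the parameters named above: local jitter and shimmer are conventionally computed as the mean absolute cycle-to-cycle variation normalized by the mean, over estimated glottal periods and per-cycle peak amplitudes respectively, and FF is the reciprocal of the mean period. A minimal sketch of those formulas follows; the per-cycle measurements are hypothetical, and the study presumably obtained its values with dedicated analysis software rather than code like this.

```python
import numpy as np

def jitter_local(periods):
    """Local jitter (%): mean absolute difference of consecutive glottal
    periods, normalized by the mean period."""
    p = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(p))) / np.mean(p)

def shimmer_local(amplitudes):
    """Local shimmer (%): the same computation applied to per-cycle peak amplitudes."""
    a = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(a))) / np.mean(a)

# Toy usage with hypothetical per-cycle measurements:
periods = [0.0040, 0.0041, 0.0039, 0.0040, 0.0042]   # seconds, ~250 Hz phonation
amps    = [0.80, 0.78, 0.82, 0.79, 0.81]             # arbitrary units
print(f"FF      = {1.0 / np.mean(periods):.1f} Hz")
print(f"jitter  = {jitter_local(periods):.2f} %")
print(f"shimmer = {shimmer_local(amps):.2f} %")
```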

  20. Interactive Augmentation of Voice Quality and Reduction of Breath Airflow in the Soprano Voice.

    Science.gov (United States)

    Rothenberg, Martin; Schutte, Harm K

    2016-11-01

    In 1985, at a conference sponsored by the National Institutes of Health, Martin Rothenberg first described a nonlinear source-tract acoustic interaction mechanism that some sopranos, singing in their high range, can use to reduce the total airflow, allowing them to hold a note longer while simultaneously enriching the quality of the voice, without strain. (M. Rothenberg, "Source-Tract Acoustic Interaction in the Soprano Voice and Implications for Vocal Efficiency," Fourth International Conference on Vocal Fold Physiology, New Haven, Connecticut, June 3-6, 1985.) In this paper, we describe additional evidence for this type of nonlinear source-tract interaction in some soprano singing and describe an analogous interaction phenomenon in communication engineering. We also present some implications for voice research and pedagogy. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  1. Force control tasks with pure haptic feedback promote short-term focused attention.

    Science.gov (United States)

    Wang, Dangxiao; Zhang, Yuru; Yang, Xiaoxiao; Yang, Gaofeng; Yang, Yi

    2014-01-01

    Focused attention has great impact on our quality of life. Our learning, social skills and even happiness are closely intertwined with our capacity for focused attention. Attention promotion is replete with examples of training-induced increases in attention capability, most of which rely on visual and auditory stimulation. Pure haptic stimulation to increase attention capability is rarely found. We show that accurate force control tasks with pure haptic feedback enhance short-term focused attention. Participants were trained by a force control task in which information from visual and auditory channels was blocked, and only haptic feedback was provided. The trainees were asked to exert a target force within a pre-defined force tolerance for a specific duration. The tolerance was adaptively modified to different levels of difficulty to elicit full participant engagement. Three attention tests showed significant changes in different aspects of focused attention in participants who had been trained as compared with those who had not, thereby illustrating the role of haptic-based sensory-motor tasks in the promotion of short-term focused attention. The findings highlight the potential value of haptic stimuli in brain plasticity and serve as a new tool to extend existing computer games for cognitive enhancement.

  2. Electronic monitoring and voice prompts improve hand hygiene and decrease nosocomial infections in an intermediate care unit.

    Science.gov (United States)

    Swoboda, Sandra M; Earsing, Karen; Strauss, Kevin; Lane, Stephen; Lipsett, Pamela A

    2004-02-01

    To determine whether electronic monitoring of hand hygiene and voice prompts can improve hand hygiene and decrease nosocomial infection rates in a surgical intermediate care unit. Three-phase quasi-experimental design. Phase I was electronic monitoring and direct observation; phase II was electronic monitoring and computerized voice prompts for failure to perform hand hygiene on room exit; and phase III was electronic monitoring only. Nine-room, 14-bed intermediate care unit in a university, tertiary-care institution. All patient rooms, utility room, and staff lavatory were monitored electronically. All healthcare personnel including physicians, nurses, nursing support personnel, ancillary staff, all visitors and family members, and any other personnel interacting with patients on the intermediate care unit. All patients with an intermediate care unit length of stay >48 hrs were followed for nosocomial infection. Electronic monitoring during all phases, computerized voice prompts during phase II only. We evaluated a total of 283,488 electronically monitored entries into a patient room with 251,526 exits for 420 days (10,080 hrs and 3,549 patient days). Compared with phase I, hand hygiene compliance in patient rooms improved 37% during phase II (odds ratio, 1.38; 95% confidence interval, 1.04-1.83) and 41% in phase III (odds ratio, 1.41; 95% confidence interval, 1.07-1.84). When adjusting for patient admissions during each phase, point estimates of nosocomial infections decreased by 22% during phase II and 48% during phase III; when adjusting for patient days, the number of infections decreased by 10% during phase II and 40% during phase III. Although the overall rate of nosocomial infections significantly decreased when combining phases II and III, the association between nosocomial infection and individual phase was not significant. Electronic monitoring provided effective ongoing feedback about hand hygiene compliance. During both the voice prompt phase and post
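
    The compliance results above are reported as odds ratios with 95% confidence intervals, which follow from a standard 2x2 table and a Wald interval on the log odds ratio. A small sketch with hypothetical cell counts (the abstract does not give the raw counts):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
         a = exposed & event,     b = exposed & no event,
         c = unexposed & event,   d = unexposed & no event."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts: hygiene events vs missed opportunities,
# prompted phase vs baseline phase.
print(odds_ratio_ci(a=5200, b=9800, c=4100, d=10700))
```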

  3. Interventions for preventing voice disorders in adults.

    Science.gov (United States)

    Ruotsalainen, J H; Sellman, J; Lehto, L; Jauhiainen, M; Verbeek, J H

    2007-10-17

    Poor voice quality due to a voice disorder can lead to a reduced quality of life. In occupations where voice use is substantial it can lead to periods of absence from work. To evaluate the effectiveness of interventions to prevent voice disorders in adults. We searched MEDLINE (PubMed, 1950 to 2006), EMBASE (1974 to 2006), CENTRAL (The Cochrane Library, Issue 2 2006), CINAHL (1983 to 2006), PsychINFO (1967 to 2006), Science Citation Index (1986 to 2006) and the Occupational Health databases OSH-ROM (to 2006). The date of the last search was 05/04/06. Randomised controlled clinical trials (RCTs) of interventions evaluating the effectiveness of treatments to prevent voice disorders in adults. For work-directed interventions interrupted time series and prospective cohort studies were also eligible. Two authors independently extracted data and assessed trial quality. Meta-analysis was performed where appropriate. We identified two randomised controlled trials including a total of 53 participants in intervention groups and 43 controls. One study was conducted with teachers and the other with student teachers. Both trials were poor quality. Interventions were grouped into 1) direct voice training, 2) indirect voice training and 3) direct and indirect voice training combined.1) Direct voice training: One study did not find a significant decrease of the Voice Handicap Index for direct voice training compared to no intervention.2) Indirect voice training: One study did not find a significant decrease of the Voice Handicap Index for indirect voice training when compared to no intervention.3) Direct and indirect voice training combined: One study did not find a decrease of the Voice Handicap Index for direct and indirect voice training combined when compared to no intervention. The same study did however find an improvement in maximum phonation time (Mean Difference -3.18 sec; 95 % CI -4.43 to -1.93) for direct and indirect voice training combined when compared to no

  4. Designing a Voice Controlled Interface For Radio : Guidelines for The First Generation of Voice Controlled Public Radio

    OpenAIRE

    Päärni, Anna

    2017-01-01

    From being a fictional element in sci-fi, voice control has become a reality, with inventions such as Apple's Siri, and interactive voice response (IVR) when calling your doctor's office. The combination of radio’s strength as a hands-free medium, public radio’s mission to reach across all platforms and the rise of voice makes up a relevant intersection; voice controlled public radio in Sweden. This thesis has aimed to investigate how radio listeners wish to interact using voice control to li...

  5. Application of computer voice input/output

    International Nuclear Information System (INIS)

    Ford, W.; Shirk, D.G.

    1981-01-01

    The advent of microprocessors and other large-scale integration (LSI) circuits is making voice input and output for computers and instruments practical; specialized LSI chips for speech processing are appearing on the market. Voice can be used to input data or to issue instrument commands; this allows the operator to engage in other tasks, move about, and use standard data entry systems. Voice synthesizers can generate audible, easily understood instructions. Using voice characteristics, a control system can verify speaker identity for security purposes. Two simple voice-controlled systems have been designed at Los Alamos for nuclear safeguards applications. Each can easily be expanded as time allows. The first is an instrument-control system that accepts voice commands and issues audible operator prompts. The second system is for access control. The speaker's voice is used to verify his identity and to actuate external devices.

  6. A neural network model of normal and abnormal auditory information processing.

    Science.gov (United States)

    Du, X; Jansen, B H

    2011-08-01

    The ability of the brain to attenuate the response to irrelevant sensory stimulation is referred to as sensory gating. A gating deficiency has been reported in schizophrenia. To study the neural mechanisms underlying sensory gating, a neuroanatomically inspired model of auditory information processing has been developed. The mathematical model consists of lumped parameter modules representing the thalamus (TH), the thalamic reticular nucleus (TRN), auditory cortex (AC), and prefrontal cortex (PC). It was found that the membrane potential of the pyramidal cells in the PC module replicated auditory evoked potentials, recorded from the scalp of healthy individuals, in response to pure tones. Also, the model produced substantial attenuation of the response to the second of a pair of identical stimuli, just as seen in actual human experiments. We also tested the viewpoint that schizophrenia is associated with a deficit in prefrontal dopamine (DA) activity, which would lower the excitatory and inhibitory feedback gains in the AC and PC modules. Lowering these gains by less than 10% resulted in model behavior resembling the brain activity seen in schizophrenia patients, and replicated the reported gating deficits. The model suggests that the TRN plays a critical role in sensory gating, with the smaller response to a second tone arising from a reduction in inhibition of TH by the TRN. Copyright © 2011 Elsevier Ltd. All rights reserved.
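
    A lumped-parameter module of the kind described, second-order synaptic dynamics plus a sigmoidal potential-to-rate conversion with tunable excitatory and inhibitory feedback gains, can be sketched with a classic Jansen-Rit single-column model. This is a plausible stand-in for the paper's four-module TH/TRN/AC/PC network, not its actual implementation, and the parameter values are the standard published ones rather than the authors':

```python
import numpy as np

# Jansen-Rit-style lumped-parameter cortical module (a stand-in for the
# paper's modules; all parameter values are the standard textbook ones).
A, B = 3.25, 22.0           # excitatory / inhibitory synaptic gains (mV)
a, b = 100.0, 50.0          # inverse synaptic time constants (1/s)
C = 135.0                   # connectivity constant
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
e0, v0, r = 2.5, 6.0, 0.56  # sigmoid: half max rate, threshold, slope

def S(v):                   # potential-to-rate sigmoid
    return 2 * e0 / (1 + np.exp(r * (v0 - v)))

def simulate(gain_scale=1.0, T=2.0, dt=1e-4, seed=0):
    """Euler-integrate the module; `gain_scale` < 1 mimics the reported
    reduction of excitatory and inhibitory feedback gains (e.g., 0.9
    for a <10% deficit)."""
    rng = np.random.default_rng(seed)
    y = np.zeros(6)
    out = []
    for _ in range(int(T / dt)):
        p = rng.uniform(120, 320)                    # background input (pulses/s)
        y0, y1, y2, y3, y4, y5 = y
        dy = np.array([
            y3,
            y4,
            y5,
            A * a * S(y1 - y2) - 2 * a * y3 - a * a * y0,
            A * a * (p + gain_scale * C2 * S(C1 * y0)) - 2 * a * y4 - a * a * y1,
            B * b * gain_scale * C4 * S(C3 * y0) - 2 * b * y5 - b * b * y2,
        ])
        y = y + dt * dy
        out.append(y[1] - y[2])                      # EEG-like pyramidal potential
    return np.array(out)

healthy  = simulate(gain_scale=1.0)
impaired = simulate(gain_scale=0.9)   # slightly lowered feedback gains
```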

  7. Auditory Perspective Taking

    National Research Council Canada - National Science Library

    Martinson, Eric; Brock, Derek

    2006-01-01

    .... From this knowledge of another's auditory perspective, a conversational partner can then adapt his or her auditory output to overcome a variety of environmental challenges and insure that what is said is intelligible...

  8. Voice and silence in organizations

    Directory of Open Access Journals (Sweden)

    Moaşa, H.

    2011-01-01

    Full Text Available Unlike previous research on voice and silence, this article breaks the distance between the two and declines to treat them as opposites. Voice and silence are interrelated and intertwined strategic forms of communication which presuppose each other in such a way that the absence of one would minimize completely the other's presence. Social actors are not voice or silence. Social actors can have voice or silence; they can do both because they operate at multiple levels and deal with multiple issues at different moments in time.

  9. Distinguishing between forensic science and forensic pseudoscience: testing of validity and reliability, and approaches to forensic voice comparison.

    Science.gov (United States)

    Morrison, Geoffrey Stewart

    2014-05-01

    In this paper it is argued that one should not attempt to directly assess whether a forensic analysis technique is scientifically acceptable. Rather one should first specify what one considers to be appropriate principles governing acceptable practice, then consider any particular approach in light of those principles. This paper focuses on one principle: the validity and reliability of an approach should be empirically tested under conditions reflecting those of the case under investigation using test data drawn from the relevant population. Versions of this principle have been key elements in several reports on forensic science, including forensic voice comparison, published over the last four-and-a-half decades. The aural-spectrographic approach to forensic voice comparison (also known as "voiceprint" or "voicegram" examination) and the currently widely practiced auditory-acoustic-phonetic approach are considered in light of this principle (these two approaches do not appear to be mutually exclusive). Approaches based on data, quantitative measurements, and statistical models are also considered in light of this principle. © 2013.
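
    One common way to test the validity of a forensic-comparison system that outputs likelihood ratios is the log-likelihood-ratio cost (Cllr). The abstract does not say which metric the author applies, so the sketch below is an illustration of the testing principle rather than the paper's procedure:

```python
import numpy as np

def cllr(lr_same, lr_diff):
    """Log-likelihood-ratio cost (Cllr), a standard validity metric for
    forensic-comparison systems. `lr_same`: LRs from same-speaker trials,
    `lr_diff`: LRs from different-speaker trials. Lower is better; a
    system that always outputs LR = 1 scores exactly 1.0."""
    lr_same = np.asarray(lr_same, dtype=float)
    lr_diff = np.asarray(lr_diff, dtype=float)
    return 0.5 * (np.mean(np.log2(1 + 1 / lr_same)) +
                  np.mean(np.log2(1 + lr_diff)))

# Toy check with hypothetical test-data LRs: a system whose LRs point
# the right way scores well below 1.
print(cllr(lr_same=[8, 20, 3, 15], lr_diff=[0.1, 0.4, 0.05, 0.2]))
```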

  10. Classification of voice disorder in children with cochlear implantation and hearing aid using multiple classifier fusion

    Directory of Open Access Journals (Sweden)

    Tayarani Hamid

    2011-01-01

    Full Text Available Abstract Background Speech production and speech phonetic features gradually improve in children by obtaining audio feedback after cochlear implantation or using hearing aids. The aim of this study was to develop and evaluate automated classification of voice disorder in children with cochlear implantation and hearing aids. Methods We considered 4 disorder categories in children's voice using the following definitions: Level_1: Children who produce spontaneous phonation and use words spontaneously and imitatively. Level_2: Children who produce spontaneous phonation, use words spontaneously and make short sentences imitatively. Level_3: Children who produce spontaneous phonations, use words and arbitrary sentences spontaneously. Level_4: Normal children without any hearing loss background. Thirty Persian children participated in the study, including six children in each level from one to three and 12 children in level four. Voice samples of five isolated Persian words "mashin", "mar", "moosh", "gav" and "mouz" were analyzed. Four levels of the voice quality were considered, the higher the level the less significant the speech disorder. "Frame-based" and "word-based" features were extracted from voice signals. The frame-based features include intensity, fundamental frequency, formants, nasality and approximate entropy, and the word-based features include phase space features and wavelet coefficients. For frame-based features, hidden Markov models were used as classifiers, and for word-based features, a neural network was used. Results After classifier fusion with three methods (Majority Voting Rule, Linear Combination and Stacked fusion), the best classification rates were obtained using frame-based and word-based features with the MVR rule (level 1: 100%, level 2: 93.75%, level 3: 100%, level 4: 94%). Conclusions Results of this study may help speech pathologists follow up voice disorder recovery in children with cochlear implantation or hearing aid who are
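
    Of the three fusion methods named in the results, the Majority Voting Rule is the simplest to make concrete: each classifier casts one label per sample and the most frequent label wins. A minimal sketch with hypothetical classifier outputs (how the study broke ties is not stated; here a tie falls to the first label counted):

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-classifier labels by the Majority Voting Rule.
    `predictions` is a list of label sequences, one per classifier."""
    fused = []
    for votes in zip(*predictions):              # votes for one sample
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Hypothetical outputs of three classifiers (e.g., HMMs on frame features,
# a neural net on word features) for five voice samples, levels 1-4:
hmm_frame = [1, 2, 4, 3, 4]
nn_word   = [1, 3, 4, 3, 2]
hmm_word  = [2, 3, 4, 3, 4]
print(majority_vote([hmm_frame, nn_word, hmm_word]))  # -> [1, 3, 4, 3, 4]
```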

  11. Competition and convergence between auditory and cross-modal visual inputs to primary auditory cortical areas

    Science.gov (United States)

    Mao, Yu-Ting; Hua, Tian-Miao

    2011-01-01

    Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into

  12. Human-Avatar Symbiosis for the Treatment of Auditory Verbal Hallucinations in Schizophrenia through Virtual/Augmented Reality and Brain-Computer Interfaces.

    Science.gov (United States)

    Fernández-Caballero, Antonio; Navarro, Elena; Fernández-Sotos, Patricia; González, Pascual; Ricarte, Jorge J; Latorre, José M; Rodriguez-Jimenez, Roberto

    2017-01-01

    This perspective paper addresses the future of alternative treatments that take advantage of a social and cognitive approach with respect to pharmacological therapy of auditory verbal hallucinations (AVH) in patients with schizophrenia. AVH are the perception of voices in the absence of auditory stimulation and represent a severe mental health symptom. Virtual/augmented reality (VR/AR) and brain-computer interfaces (BCI) are technologies that are increasingly used in different medical and psychological applications. Our position is that their combined use in computer-based therapies offers still unforeseen possibilities for the treatment of physical and mental disabilities. For this reason, the paper expects researchers and clinicians to move along a pathway toward human-avatar symbiosis for AVH by taking full advantage of new technologies. This outlook requires addressing challenging issues in the understanding of non-pharmacological treatment of schizophrenia-related disorders and the exploitation of VR/AR and BCI to achieve a real human-avatar symbiosis.

  13. Differential Recruitment of Auditory Cortices in the Consolidation of Recent Auditory Fearful Memories.

    Science.gov (United States)

    Cambiaghi, Marco; Grosso, Anna; Renna, Annamaria; Sacchetti, Benedetto

    2016-08-17

    Memories of frightening events require a protracted consolidation process. Sensory cortex, such as the auditory cortex, is involved in the formation of fearful memories with a more complex sensory stimulus pattern. It remains controversial, however, whether the auditory cortex is also required for fearful memories related to simple sensory stimuli. In the present study, we found that, 1 d after training, the temporary inactivation of either the most anterior region of the auditory cortex, including the primary (Te1) cortex, or the most posterior region, which included the secondary (Te2) component, did not affect the retention of recent memories, which is consistent with the current literature. However, at this time point, the inactivation of the entire auditory cortices completely prevented the formation of new memories. Amnesia was site specific and was not due to auditory stimuli perception or processing and strictly related to the interference with memory consolidation processes. Strikingly, at a late time interval 4 d after training, blocking the posterior part (encompassing the Te2) alone impaired memory retention, whereas the inactivation of the anterior part (encompassing the Te1) left memory unaffected. Together, these data show that the auditory cortex is necessary for the consolidation of auditory fearful memories related to simple tones in rats. Moreover, these results suggest that, at early time intervals, memory information is processed in a distributed network composed of both the anterior and the posterior auditory cortical regions, whereas, at late time intervals, memory processing is concentrated in the most posterior part containing the Te2 region. Memories of threatening experiences undergo a prolonged process of "consolidation" to be maintained for a long time. The dynamic of fearful memory consolidation is poorly understood. Here, we show that 1 d after learning, memory is processed in a distributed network composed of both primary Te1 and

  14. Clinical voice analysis of Carnatic singers.

    Science.gov (United States)

    Arunachalam, Ravikumar; Boominathan, Prakash; Mahalingam, Shenbagavalli

    2014-01-01

    Carnatic singing is a classical South Indian style of music that involves rigorous training to produce an "open throated" loud, predominantly low-pitched singing, embedded with vocal nuances in higher pitches. Voice problems in singers are not uncommon. The objective was to report the nature of voice problems and apply a routine protocol to assess the voice. Forty-five trained performing singers (females: 36 and males: 9) who reported to a tertiary care hospital with voice problems underwent voice assessment. The study analyzed their problems and the clinical findings. Voice change, difficulty in singing higher pitches, and voice fatigue were major complaints. Most of the singers suffered laryngopharyngeal reflux that coexisted with muscle tension dysphonia and chronic laryngitis. Speaking voices were rated predominantly as "moderate deviation" on GRBAS (Grade, Rough, Breathy, Asthenia, and Strain). Maximum phonation time ranged from 4 to 29 seconds (females: 10.2, standard deviation [SD]: 5.28 and males: 15.7, SD: 5.79). Singing frequency range was reduced (females: 21.3 Semitones and males: 23.99 Semitones). Dysphonia severity index (DSI) scores ranged from -3.5 to 4.91 (females: 0.075 and males: 0.64). Singing frequency range and DSI did not show significant difference between sex and across clinical diagnosis. Self-perception using voice disorder outcome profile revealed overall severity score of 5.1 (SD: 2.7). Findings are discussed from a clinical intervention perspective. Study highlighted the nature of voice problems (hyperfunctional) and required modifications in assessment protocol for Carnatic singers. Need for regular assessments and vocal hygiene education to maintain good vocal health are emphasized as outcomes. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  15. Voice Biometrics for Information Assurance Applications

    National Research Council Canada - National Science Library

    Kang, George

    2002-01-01

    .... The ultimate goal of voice biometrics is to enable the use of voice as a password. Voice biometrics are "man-in-the-loop" systems in which system performance is significantly dependent on human performance...

  16. Manipulation of Auditory Inputs as Rehabilitation Therapy for Maladaptive Auditory Cortical Reorganization

    Directory of Open Access Journals (Sweden)

    Hidehiko Okamoto

    2018-01-01

    Full Text Available Neurophysiological and neuroimaging data suggest that the brains of not only children but also adults are reorganized based on sensory inputs and behaviors. Plastic changes in the brain are generally beneficial; however, maladaptive cortical reorganization in the auditory cortex may lead to hearing disorders such as tinnitus and hyperacusis. Recent studies attempted to noninvasively visualize pathological neural activity in the living human brain and reverse maladaptive cortical reorganization by the suitable manipulation of auditory inputs in order to alleviate detrimental auditory symptoms. The effects of the manipulation of auditory inputs on maladaptively reorganized brain were reviewed herein. The findings obtained indicate that rehabilitation therapy based on the manipulation of auditory inputs is an effective and safe approach for hearing disorders. The appropriate manipulation of sensory inputs guided by the visualization of pathological brain activities using recent neuroimaging techniques may contribute to the establishment of new clinical applications for affected individuals.

  17. Modification of computational auditory scene analysis (CASA) for noise-robust acoustic feature

    Science.gov (United States)

    Kwon, Minseok

    While there have been many attempts to mitigate interference from background noise, the performance of automatic speech recognition (ASR) can still easily be degraded by various factors. However, normal hearing listeners can accurately perceive sounds of their interest, which is believed to be a result of Auditory Scene Analysis (ASA). As a first attempt, the simulation of human auditory processing, called computational auditory scene analysis (CASA), was fulfilled through physiological and psychological investigations of ASA. CASA comprised the Zilany-Bruce auditory model, followed by tracking fundamental frequency for voice segmentation and detecting pairs of onset/offset at each characteristic frequency (CF) for unvoiced segmentation. The resulting Time-Frequency (T-F) representation of acoustic stimulation was converted into an acoustic feature, gammachirp-tone frequency cepstral coefficients (GFCC). Eleven keywords under various environmental conditions were used, and the robustness of GFCC was evaluated by spectral distance (SD) and dynamic time warping distance (DTW). In "clean" and "noisy" conditions, the application of CASA generally improved noise robustness of the acoustic feature compared to a conventional method with or without noise suppression using an MMSE estimator. The initial study, however, not only showed the noise-type dependency at low SNR, but also called the evaluation methods into question. Some modifications were made to capture better spectral continuity from an acoustic feature matrix, to obtain faster processing speed, and to describe the human auditory system more precisely. The proposed framework includes: 1) multi-scale integration to capture more accurate continuity in feature extraction, 2) contrast enhancement (CE) of each CF by competition with neighboring frequency bands, and 3) auditory model modifications. The model modifications contain the introduction of a higher Q factor, a middle ear filter more analogous to the human auditory system
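
    The feature pipeline named here, gammachirp-tone frequency cepstral coefficients, can be sketched as filterbank, framed band energies, cube-root compression, then DCT. The sketch below substitutes a simple linear gammatone filterbank for the Zilany-Bruce auditory model used in the work, so it illustrates the general GFCC idea rather than the author's modified front end; the band count, frame sizes, and test signal are assumptions.

```python
import numpy as np
from scipy.fft import dct
from scipy.signal import fftconvolve

def erb(f):                                    # equivalent rectangular bandwidth
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_ir(fc, fs, dur=0.05, order=4):
    """Impulse response of a 4th-order gammatone filter centred at fc."""
    t = np.arange(int(dur * fs)) / fs
    return t**(order - 1) * np.exp(-2 * np.pi * 1.019 * erb(fc) * t) \
           * np.cos(2 * np.pi * fc * t)

def gfcc(signal, fs, n_bands=32, n_ceps=13, frame=0.025, hop=0.010):
    """Sketch of gammatone-frequency cepstral coefficients:
    filterbank -> framed band energies -> cube-root compression -> DCT."""
    fcs = np.geomspace(100, fs / 2 * 0.9, n_bands)      # centre frequencies
    band = np.stack([fftconvolve(signal, gammatone_ir(fc, fs), mode='same')
                     for fc in fcs])
    flen, fhop = int(frame * fs), int(hop * fs)
    feats = []
    for start in range(0, band.shape[1] - flen, fhop):
        e = np.mean(band[:, start:start + flen]**2, axis=1)  # band energies
        feats.append(dct(np.cbrt(e), type=2, norm='ortho')[:n_ceps])
    return np.array(feats)

# Toy usage on a synthetic vowel-like signal:
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 450 * t)
print(gfcc(x, fs).shape)    # (n_frames, 13)
```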

  18. Analysis of failure of voice production by a sound-producing voice prosthesis

    NARCIS (Netherlands)

    van der Torn, M.; van Gogh, C.D.L.; Verdonck-de Leeuw, I M; Festen, J.M.; Mahieu, H.F.

    OBJECTIVE: To analyse the cause of failing voice production by a sound-producing voice prosthesis (SPVP). METHODS: The functioning of a prototype SPVP is described in a female laryngectomee before and after its sound-producing mechanism was impeded by tracheal phlegm. This assessment included:

  19. The relation of vocal fold lesions and voice quality to voice handicap and psychosomatic well-being

    NARCIS (Netherlands)

    Smits, R.; Marres, H.A.; de Jong, F.

    2012-01-01

    BACKGROUND: Voice disorders have a multifactorial genesis and may be present in various ways. They can cause a significant communication handicap and impaired quality of life. OBJECTIVE: To assess the effect of vocal fold lesions and voice quality on voice handicap and psychosomatic well-being.

  20. Auditory temporal preparation induced by rhythmic cues during concurrent auditory working memory tasks.

    Science.gov (United States)

    Cutanda, Diana; Correa, Ángel; Sanabria, Daniel

    2015-06-01

    The present study investigated whether participants can develop temporal preparation driven by auditory isochronous rhythms when concurrently performing an auditory working memory (WM) task. In Experiment 1, participants had to respond to an auditory target presented after a regular or an irregular sequence of auditory stimuli while concurrently performing a Sternberg-type WM task. Results showed that participants responded faster after regular compared with irregular rhythms and that this effect was not affected by WM load; however, the lack of a significant main effect of WM load made it difficult to draw any conclusion regarding the influence of the dual-task manipulation in Experiment 1. In order to enhance dual-task interference, Experiment 2 combined the auditory rhythm procedure with an auditory N-Back task, which required WM updating (monitoring and coding of the information) and was presumably more demanding than the mere rehearsal of the WM task used in Experiment 1. Results now clearly showed dual-task interference effects (slower reaction times [RTs] in the high- vs. the low-load condition). However, such interference did not affect temporal preparation induced by rhythms, with faster RTs after regular than after irregular sequences in the high-load and low-load conditions. These results revealed that secondary tasks demanding memory updating, relative to tasks just demanding rehearsal, produced larger interference effects on overall RTs in the auditory rhythm task. Nevertheless, rhythm regularity exerted a strong temporal preparation effect that survived the interference of the WM task even when both tasks competed for processing resources within the auditory modality. (c) 2015 APA, all rights reserved.

  1. EXPERIMENTAL STUDY OF FIRMWARE FOR INPUT AND EXTRACTION OF USER’S VOICE SIGNAL IN VOICE AUTHENTICATION SYSTEMS

    Directory of Open Access Journals (Sweden)

    O. N. Faizulaieva

    2014-09-01

    Full Text Available The scientific task of improving the signal-to-noise ratio for the user's voice signal in computer systems and networks during user voice authentication is considered. The object of study is the process of input and extraction of the voice signal of the authentication system user in computer systems and networks. Methods and means for input and extraction of the voice signal against background external interference signals are investigated. Ways of improving the quality of the user's voice signal in voice authentication systems are investigated experimentally. Firmware for an experimental unit for input and extraction of the user's voice signal under external interference is considered. As modern computing devices, including mobile ones, have a two-channel audio card, two microphones are used for voice signal input. The distance between the sonic-wave sensors is 20 mm, which provides a single directional lobe of the microphone array pattern in the desired band of voice-signal registration (from 100 Hz to 8 kHz). According to the results of the experimental studies, the use of the directional properties of the proposed microphone array and space-time processing of the recorded signals, implemented with constant and adaptive weighting factors, has made it possible to reduce considerably the influence of interference signals. The results of the firmware experimental studies for input and extraction of the user's voice signal under external interference are shown. The proposed solutions make it possible to improve the signal-to-noise ratio of the recorded useful signals by up to 20 dB under the influence of external interference signals in the frequency range from 4 to 8 kHz. The results may be useful to specialists working in the field of voice recognition and speaker discrimination.
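
    In its simplest fixed-weight form, the space-time processing described, combining two microphones 20 mm apart so that the array pattern favours the talker's direction, reduces to a delay-and-sum beamformer. A minimal sketch assuming far-field geometry and constant weights (the adaptive weighting reported in the study is more involved):

```python
import numpy as np

def delay_and_sum(mic1, mic2, fs, angle_deg, spacing_m=0.02, c=343.0):
    """Steer a two-microphone array (20 mm spacing, as in the record)
    toward `angle_deg` (0 = broadside) by delaying one channel and
    summing. The fractional delay is applied in the frequency domain."""
    tau = spacing_m * np.sin(np.radians(angle_deg)) / c   # inter-mic delay (s)
    n = len(mic2)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    shifted = np.fft.irfft(np.fft.rfft(mic2) *
                           np.exp(-2j * np.pi * freqs * tau), n)
    return 0.5 * (mic1 + shifted)

# Toy usage: steer toward a talker 30 degrees off broadside.
fs = 48000
rng = np.random.default_rng(0)
mic1 = rng.standard_normal(fs // 10)
mic2 = rng.standard_normal(fs // 10)
enhanced = delay_and_sum(mic1, mic2, fs, angle_deg=30.0)
```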

  2. Short-term plasticity in auditory cognition.

    Science.gov (United States)

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  3. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    … Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. …

  4. The Effect of Working Memory Training on Auditory Stream Segregation in Auditory Processing Disorders Children

    OpenAIRE

    Abdollah Moossavi; Saeideh Mehrkian; Yones Lotfi; Soghrat Faghih zadeh; Hamed Adjedi

    2015-01-01

    Objectives: This study investigated the efficacy of working memory training for improving working memory capacity and related auditory stream segregation in children with auditory processing disorders. Methods: Fifteen subjects (9-11 years), clinically diagnosed with auditory processing disorder, participated in this non-randomized case-controlled trial. Working memory abilities and auditory stream segregation were evaluated prior to beginning and six weeks after completing the training program...

  5. Haptic force-feedback devices for the office computer: performance and musculoskeletal loading issues.

    Science.gov (United States)

    Dennerlein, J T; Yang, M C

    2001-01-01

    Pointing devices, essential input tools for the graphical user interface (GUI) of desktop computers, require precise motor control and dexterity to use. Haptic force-feedback devices provide the human operator with tactile cues, adding the sense of touch to existing visual and auditory interfaces. However, the performance enhancements, comfort, and possible musculoskeletal loading of using a force-feedback device in an office environment are unknown. Hypothesizing that the time to perform a task and the self-reported pain and discomfort of the task improve with the addition of force feedback, 26 people ranging in age from 22 to 44 years performed a point-and-click task 540 times with and without an attractive force field surrounding the desired target. The point-and-click movements were approximately 25% faster with the addition of force feedback (paired t-tests). User discomfort and pain, as measured through a questionnaire, were also smaller with the addition of force feedback. Hence, the force-feedback device improves performance, and potentially reduces musculoskeletal loading during mouse use. Actual or potential applications of this research include human-computer interface design, specifically that of the pointing device extensively used for the graphical user interface.

  6. Heart rate regulation during cycle-ergometer exercise via bio-feedback.

    Science.gov (United States)

    Argha, Ahmadreza; Su, Steven W; Hung Nguyen; Celler, Branko G

    2015-08-01

    This paper explains our control system, which regulates heart rate (HR) to track a desired trajectory. The controller is a non-conventional, non-model-based proportional-integral-derivative (PID) controller whose commands are issued as auditory biofeedback; the exercising subject hears and implements these commands as part of the control loop. However, transmitting a feedback signal while the pedals are not in an appropriate position to efficiently exert force may lead to a cognitive disengagement of the user from the feedback controller. This note therefore presents a novel form of control system, referred to as an "actuator-based event-driven control system", designed specifically for the purpose of this project. We conclude that the developed event-driven controller makes it possible to precisely regulate HR to a predetermined HR profile.
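
    As a rough illustration of the control law named above, the sketch below implements a plain discrete PID update mapping the heart-rate tracking error to a signed command (positive meaning "pedal faster"). The gains, the one-second update interval and the class interface are hypothetical; the study's own controller is a non-conventional, event-driven variant whose exact form the abstract does not give.

        # Discrete PID update for heart-rate tracking (illustrative sketch).
        class HeartRatePID:
            def __init__(self, kp=0.5, ki=0.05, kd=0.1, dt=1.0):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, hr_reference, hr_measured):
                error = hr_reference - hr_measured          # bpm
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                # Positive output -> auditory cue to pedal faster
                return (self.kp * error
                        + self.ki * self.integral
                        + self.kd * derivative)

    In the actuator-based event-driven scheme, such a command would be withheld until the pedals reach a crank angle where force can actually be exerted, which is precisely the disengagement problem the paper addresses.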

  7. Voice Onset Time in Azerbaijani Consonants

    Directory of Open Access Journals (Sweden)

    Ali Jahan

    2009-10-01

    Full Text Available Objective: Voice onset time (VOT) is known to be a cue for the distinction between voiced and voiceless stops, and it can be used to describe or categorize a range of developmental, neuromotor and linguistic disorders. The aim of this study is the determination of standard values of voice onset time for the Azerbaijani language (Tabriz dialect). Materials & Methods: In this descriptive-analytical study, 30 Azeri speakers, recruited by convenience sampling, uttered 46 monosyllabic words beginning with the 6 Azerbaijani stops, each word twice. Using Praat software, the voice onset time values were measured in milliseconds from the waveform and wideband spectrogram. The effect of vowel context, sex differences and the effect of place of articulation on VOT were evaluated, and data were analyzed with a one-way ANOVA test. Results: There was no significant difference in voice onset time between male and female Azeri speakers (P>0.05). Vowel and place of articulation had a significant correlation with voice onset time (P<0.001). Voice onset time values for /b/, /p/, /d/, /t/, /g/, /k/, and the [c], [ɟ] allophones were 10.64, 86.88, 13.35, 87.09, 26.25, 100.62, 131.19, and 63.18 milliseconds, respectively. Conclusion: Voice onset time values are the same for Azerbaijani men and women. However, as in many other languages, back and high vowels and a back place of articulation lengthen VOT. Also, voiceless stops are aspirated in this language and voiced stops have positive VOT values.

  8. Does CPAP treatment affect the voice?

    Science.gov (United States)

    Saylam, Güleser; Şahin, Mustafa; Demiral, Dilek; Bayır, Ömer; Yüceege, Melike Bağnu; Çadallı Tatar, Emel; Korkmaz, Mehmet Hakan

    2016-12-20

    The aim of this study was to investigate alterations in voice parameters among patients using continuous positive airway pressure (CPAP) for the treatment of obstructive sleep apnea syndrome. Patients with an indication for CPAP treatment without any voice problems and with normal laryngeal findings were included, and voice parameters were evaluated before and 1 and 6 months after CPAP. Videolaryngostroboscopic findings, a self-rated scale (Voice Handicap Index-10, VHI-10), perceptual voice quality assessment (GRBAS: grade, roughness, breathiness, asthenia, strain), and acoustic parameters were compared. Data from 70 subjects (48 men and 22 women) with a mean age of 44.2 ± 6.0 years were evaluated. When compared with the pre-CPAP treatment period, there was a significant increase in the VHI-10 score after 1 month of treatment, and in the VHI-10 and total GRBAS scores, jitter percent (P = 0.01), shimmer percent, noise-to-harmonic ratio, and voice turbulence index after 6 months of treatment. Mild negative effects on voice parameters after the first month of CPAP treatment became more evident after 6 months. We demonstrated nonsevere alterations in the voice quality of patients under CPAP treatment. Given that CPAP is a long-term treatment, it is important to keep these alterations in mind.

  9. Managing dysphonia in occupational voice users.

    Science.gov (United States)

    Behlau, Mara; Zambon, Fabiana; Madazio, Glaucya

    2014-06-01

    Recent advances with regard to occupational voice disorders are highlighted, with emphasis on issues warranting consideration when assessing, training, and treating professional voice users. Findings include the many particularities between the various categories of professional voice users, the concept that the environment plays a major role in occupational voice disorders, and the need to analyze biopsychosocial influences on an individual basis. Assessment via self-evaluation protocols to quantify the impact of these disorders is mandatory, both as a component of an evaluation and to document treatment outcomes. Discomfort, or odynophonia, has emerged as a critical symptom in this population. Clinical trials are limited, and the complexity of the environment may be a limitation in experiment design. This review reinforced the need for large population studies of professional voice users; new data highlighted important factors specific to each group of voice users. Interventions directed at student teachers are necessary not only to improve the quality of future professionals, but also to avoid the frustration and limitations associated with chronic voice problems. The causative relationship between the work environment and voice disorders has not yet been established. Randomized controlled trials are lacking and must be a focus to enhance treatment paradigms for this population.

  10. Atypical brain lateralisation in the auditory cortex and language performance in 3- to 7-year-old children with high-functioning autism spectrum disorder: a child-customised magnetoencephalography (MEG) study.

    Science.gov (United States)

    Yoshimura, Yuko; Kikuchi, Mitsuru; Shitamichi, Kiyomi; Ueno, Sanae; Munesue, Toshio; Ono, Yasuki; Tsubokawa, Tsunehisa; Haruta, Yasuhiro; Oi, Manabu; Niida, Yo; Remijn, Gerard B; Takahashi, Tsutomu; Suzuki, Michio; Higashida, Haruhiro; Minabe, Yoshio

    2013-10-08

    Magnetoencephalography (MEG) is used to measure the auditory evoked magnetic field (AEF), which reflects language-related performance. In young children, however, the simultaneous quantification of the bilateral auditory-evoked response during binaural hearing is difficult using conventional adult-sized MEG systems. Recently, a child-customised MEG device has facilitated the acquisition of bi-hemispheric recordings, even in young children. Using the child-customised MEG device, we previously reported that language-related performance was reflected in the strength of the early component (P50m) of the auditory evoked magnetic field in typically developing (TD) young children (2 to 5 years old) [Eur J Neurosci 2012, 35:644-650]. The aim of this study was to investigate how this neurophysiological index in each hemisphere is correlated with language performance in autism spectrum disorder (ASD) and TD children. We investigated the P50m that is evoked by voice stimuli (/ne/) bilaterally in 33 young children (3 to 7 years old) with ASD and in 30 young children who were typically developing. The children were matched according to their age (in months) and gender. Most of the children with ASD were high-functioning subjects. The results showed that the children with ASD exhibited significantly less leftward lateralisation in their P50m intensity compared with the TD children. Furthermore, the results of a multiple regression analysis indicated that a shorter P50m latency in both hemispheres was specifically correlated with higher language-related performance in the TD children, whereas this latency was not correlated with non-verbal cognitive performance or chronological age. The children with ASD did not show any correlation between P50m latency and language-related performance; instead, increasing chronological age was a…

  11. Epidemiology of Voice Disorders in Latvian School Teachers.

    Science.gov (United States)

    Trinite, Baiba

    2017-07-01

    The prevalence of voice disorders in the teacher population in Latvia has not been studied so far, and this is the first epidemiological study whose goal is to investigate the prevalence of voice disorders and their risk factors in this professional group. A wide cross-sectional study using stratified sampling methodology was implemented in the general education schools of Latvia. The self-administered voice risk factor questionnaire and the Voice Handicap Index were completed by 522 teachers. Two teacher groups were formed: the voice disorders group, which included 235 teachers with current voice problems or problems during the last 9 months, and the control group, which included 174 teachers without voice disorders. Sixty-six percent of teachers gave a positive answer to the following question: Have you ever had problems with your voice? Voice problems are more often found in female than male teachers (68.2% vs 48.8%). Music teachers suffer from voice disorders more often than teachers of other subjects. Eighty-two percent of teachers first faced voice problems in their professional career. The odds of voice disorders increase if the following risk factors exist: extra vocal load, shouting, throat clearing, neglect of personal health, background noise, chronic illnesses of the upper respiratory tract, allergy, job dissatisfaction, and regular stress in the workplace. The study findings indicated a high risk of voice disorders among Latvian teachers. The study confirmed data concerning the multifactorial etiology of voice disorders. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  12. Modularity in Sensory Auditory Memory

    OpenAIRE

    Clement, Sylvain; Moroni, Christine; Samson, Séverine

    2004-01-01

    The goal of this paper was to review various experimental and neuropsychological studies that support the modular conception of auditory sensory memory or auditory short-term memory. Based on initial findings demonstrating that the verbal sensory memory system can be dissociated from a general auditory memory store at the functional and anatomical levels, we reported a series of studies that provided evidence in favor of multiple auditory sensory stores specialized in retaining eit...

  13. Beneficial auditory and cognitive effects of auditory brainstem implantation in children.

    Science.gov (United States)

    Colletti, Liliana

    2007-09-01

    This preliminary study demonstrates the development of hearing ability and shows that there is a significant improvement in some cognitive parameters related to selective visual/spatial attention and to fluid or multisensory reasoning in children fitted with an auditory brainstem implant (ABI). The improvement in cognitive parameters is due to several factors, among which there is certainly, as demonstrated in the literature on cochlear implants (CIs), the activation of the auditory sensory canal, which was previously absent. The findings of the present study indicate that children with cochlear or cochlear nerve abnormalities with associated cognitive deficits should not be excluded from ABI implantation. The indications for ABI have been extended over the last 10 years to adults with non-tumoral (NT) cochlear or cochlear nerve abnormalities who cannot benefit from a CI. We demonstrated that the ABI with surface electrodes may provide sufficient stimulation of the central auditory system in adults for open-set speech recognition. These favourable results motivated us to extend ABI indications to children with profound hearing loss who were not candidates for a CI. This study investigated the performance of young deaf children undergoing ABI, in terms of their auditory perceptual development and their non-verbal cognitive abilities. In our department from 2000 to 2006, 24 children aged 14 months to 16 years received an ABI for different tumour and non-tumour diseases. Two children had NF2 tumours. Eighteen children had bilateral cochlear nerve aplasia. In this group, nine children had associated cochlear malformations, two had unilateral facial nerve agenesia and two had combined microtia, aural atresia and middle ear malformations. Four of these children had previously been fitted elsewhere with a CI with no auditory results. One child had bilateral incomplete cochlear partition (type II); one child, who had previously been fitted unsuccessfully elsewhere…

  14. Listening instead of reading : The influence of voice intonation in auditory health persuasion aimed at increasing fruit and vegetable intake

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    2013-01-01

    Purpose. In auditory health persuasion, the speaker’s speech becomes salient, as there is no visual information available. Intonation of speech is one important aspect that may influence persuasion. It was experimentally tested to what extent different levels of intonation are related to persuasion.

  15. What determines auditory distraction? On the roles of local auditory changes and expectation violations.

    Directory of Open Access Journals (Sweden)

    Jan P Röer

    Full Text Available Both the acoustic variability of a distractor sequence and the degree to which it violates expectations are important determinants of auditory distraction. In four experiments we examined the relative contribution of local auditory changes on the one hand and expectation violations on the other hand in the disruption of serial recall by irrelevant sound. We present evidence for a greater disruption by auditory sequences ending in unexpected steady state distractor repetitions compared to auditory sequences with expected changing state endings even though the former contained fewer local changes. This effect was demonstrated with piano melodies (Experiment 1) and speech distractors (Experiment 2). Furthermore, it was replicated when the expectation violation occurred after the encoding of the target items (Experiment 3), indicating that the items' maintenance in short-term memory was disrupted by attentional capture and not their encoding. This seems to be primarily due to the violation of a model of the specific auditory distractor sequences because the effect vanishes and even reverses when the experiment provides no opportunity to build up a specific neural model about the distractor sequence (Experiment 4). Nevertheless, the violation of abstract long-term knowledge about auditory regularities seems to cause a small and transient capture effect: Disruption decreased markedly over the course of the experiments indicating that participants habituated to the unexpected distractor repetitions across trials. The overall pattern of results adds to the growing literature that the degree to which auditory distractors violate situation-specific expectations is a more important determinant of auditory distraction than the degree to which a distractor sequence contains local auditory changes.

  16. Simulation model for transcervical laryngeal injection providing real-time feedback.

    Science.gov (United States)

    Ainsworth, Tiffiny A; Kobler, James B; Loan, Gregory J; Burns, James A

    2014-12-01

    This study aimed to develop and evaluate a model for teaching transcervical laryngeal injections. A 3-dimensional printer was used to create a laryngotracheal framework based on de-identified computed tomography images of a human larynx. The arytenoid cartilages and intrinsic laryngeal musculature were created in silicone from clay casts and thermoplastic molds. The thyroarytenoid (TA) muscle was created from electrically conductive silicone with embedded metallic filaments. Wires connected the TA muscles to an electrical circuit incorporating a cell phone and speaker. A needle electrode completed the circuit when inserted in the TA during simulated injection, providing real-time feedback of successful needle placement by producing an audible sound. Face validation by the senior author confirmed appropriate tactile feedback and anatomical realism. Otolaryngologists pilot tested the model and completed presimulation and postsimulation questionnaires. The high-fidelity simulation model provided tactile and audio feedback during needle placement, simulating transcervical vocal fold injections. Otolaryngology residents demonstrated higher comfort levels with transcervical thyroarytenoid injection on postsimulation questionnaires. This is the first study to describe a simulator for developing transcervical vocal fold injection skills. The model provides real-time tactile and auditory feedback that aids in skill acquisition. Otolaryngologists reported increased confidence with transcervical injection after using the simulator. © The Author(s) 2014.

  17. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimuli. Correspondingly, differences between responses to auditory stimuli alone and combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields after approximately 110 ms after stimuli. These data indicate that visual influences are most salient in fields P and DCB, which manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Designing feedback to mitigate teen distracted driving: A social norms approach.

    Science.gov (United States)

    Merrikhpour, Maryam; Donmez, Birsen

    2017-07-01

    The purpose of this research is to investigate teens' perceived social norms and whether providing normative information can reduce distracted driving behaviors among them. Parents are among the most important social referents for teens; they have significant influences on teens' driving behaviors, including distracted driving, which significantly contributes to teens' crash risks. Social norms interventions have been successfully applied in various domains including driving; however, this approach is yet to be explored for mitigating driver distraction among teens. Forty teens completed a driving simulator experiment while performing a self-paced visual-manual secondary task in four between-subject conditions: a) social norms feedback that provided a report at the end of each drive on teens' distracted driving behavior, comparing their distraction engagement to their parent's, b) post-drive feedback that provided just the report on teens' distracted driving behavior without information on their parents, c) real-time feedback in the form of auditory warnings based on eyes-off-road time, and d) no feedback as control. Questionnaires were administered to collect data on these teens' and their parents' self-reported engagement in driver distractions and the associated social norms. Social norms and real-time feedback conditions resulted in significantly smaller average off-road glance duration, rate of long (>2 s) off-road glances, and standard deviation of lane position compared to no feedback. Further, social norms feedback decreased brake response time and percentage of time not looking at the road compared to no feedback. No major effect was observed for post-drive feedback. Questionnaire results suggest that teens appeared to overestimate parental norms, but no effect of feedback was found on their perceptions. Feedback systems that leverage social norms can help mitigate driver distraction among teens. Overall, both social norms and real-time feedback induced…

  19. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    Directory of Open Access Journals (Sweden)

    Emmanuele eTidoni

    2014-06-01

    Full Text Available Advancements in brain-computer interface (BCI) technology allow people to actively interact with the world through surrogates. Controlling real humanoid robots using a BCI as intuitively as we control our own bodies represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between footstep sounds and the actual humanoid's walk reduced the time required for steering the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the motor decisions of the BCI user and support the feeling of control over the robot. Our results shed light on the possibility of increasing control over a robot through the combination of multisensory feedback provided to a BCI user.

  20. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2014-01-01

    Conclusion: Based on the obtained results, a significant reduction in auditory memory was seen in the aged group, and the Persian version of the dichotic auditory-verbal memory test, like many other auditory-verbal memory tests, showed the effects of aging on auditory-verbal memory performance.

  1. [Assessment of the efficiency of the auditory training in children with dyslalia and auditory processing disorders].

    Science.gov (United States)

    Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam

    2011-01-01

    To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. The material consisted of 50 children aged 7-9 years. Children with articulation disorders stayed under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and speech therapist's and psychologist's consultations. Additionally, a set of electrophysiological examinations was performed - registration of the N2, P2 and P300 waves and a psychoacoustic test of central auditory function: FPT - the frequency pattern test. Next, the children took part in regular auditory training and attended speech therapy. Speech assessment followed treatment and therapy; again psychoacoustic tests were performed and P300 cortical potentials were recorded. After that, statistical analyses were performed. Analyses revealed that the application of auditory training in patients with dyslalia and other central auditory disorders is very efficient. Auditory training may be a very efficient therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders, and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.

  2. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  3. Identifying hidden voice and video streams

    Science.gov (United States)

    Fan, Jieyan; Wu, Dapeng; Nucci, Antonio; Keralapura, Ram; Gao, Lixin

    2009-04-01

    Given the rising popularity of voice and video services over the Internet, accurately identifying voice and video traffic that traverse their networks has become a critical task for Internet service providers (ISPs). As the number of proprietary applications that deliver voice and video services to end users increases over time, the search for the one methodology that can accurately detect such services while being application independent still remains open. This problem becomes even more complicated when voice and video service providers like Skype, Microsoft, and Google bundle their voice and video services with other services like file transfer and chat. For example, a bundled Skype session can contain both voice stream and file transfer stream in the same layer-3/layer-4 flow. In this context, traditional techniques to identify voice and video streams do not work. In this paper, we propose a novel self-learning classifier, called VVS-I, that detects the presence of voice and video streams in flows with minimum manual intervention. Our classifier works in two phases: training phase and detection phase. In the training phase, VVS-I first extracts the relevant features, and subsequently constructs a fingerprint of a flow using the power spectral density (PSD) analysis. In the detection phase, it compares the fingerprint of a flow to the existing fingerprints learned during the training phase, and subsequently classifies the flow. Our classifier is not only capable of detecting voice and video streams that are hidden in different flows, but is also capable of detecting different applications (like Skype, MSN, etc.) that generate these voice/video streams. We show that our classifier can achieve close to 100% detection rate while keeping the false positive rate to less than 1%.
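
    The abstract names the key ingredients of VVS-I (a per-flow fingerprint built from a power spectral density, matched against fingerprints learned in training) without giving the exact features or matching rule. The sketch below is therefore only one plausible reading: a flow is summarized by the Welch PSD of its byte-rate time series and matched to the nearest stored fingerprint by cosine similarity. The 10 ms binning, the similarity measure and all names are assumptions.

        # PSD-based flow fingerprinting in the spirit of VVS-I (illustrative).
        import numpy as np
        from scipy.signal import welch

        def flow_fingerprint(bytes_per_10ms):
            """Unit-norm PSD of a flow's byte-rate time series."""
            _, psd = welch(bytes_per_10ms, fs=100.0, nperseg=256)
            return psd / np.linalg.norm(psd)

        def classify_flow(bytes_per_10ms, labeled_fingerprints):
            """Nearest stored fingerprint by cosine similarity."""
            fp = flow_fingerprint(bytes_per_10ms)
            return max(labeled_fingerprints.items(),
                       key=lambda kv: float(np.dot(fp, kv[1])))[0]

    Here labeled_fingerprints would map labels such as "skype-voice" or "msn-video" to fingerprints averaged over training flows, mirroring the paper's training/detection split.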

  4. Optical voice encryption based on digital holography.

    Science.gov (United States)

    Rajput, Sudheesh K; Matoba, Osamu

    2017-11-15

    We propose an optical voice encryption scheme based on digital holography (DH). An off-axis DH is employed to acquire voice information by obtaining the phase retardation occurring in the object wave due to sound wave propagation. The acquired hologram, including voice information, is encrypted using optical image encryption. DH reconstruction and decryption with all the correct parameters can retrieve the original voice. The scheme has the capability to record the human voice in holograms and encrypt it directly. These aspects make the scheme suitable for other security applications and help establish the voice as a potential security tool. We present experimental results and a partial set of simulation results.

  5. Self-Generated Auditory Feedback as a Cue to Support Rhythmic Motor Stability

    Directory of Open Access Journals (Sweden)

    Gopher Daniel

    2011-12-01

    Full Text Available A goal of the SKILLS project is to develop Virtual Reality (VR)-based training simulators for different application domains, one of which is juggling. Within this context the value of multimodal VR environments for skill acquisition is investigated. In this study, we investigated whether it was necessary to render the sounds of virtual balls hitting virtual hands within the juggling training simulator. First, we recorded sounds at the jugglers' ears and found the sound of balls hitting hands to be audible. Second, we asked 24 jugglers to juggle under normal conditions (Audible) or while listening to pink noise intended to mask the juggling sounds (Inaudible). We found that although the jugglers themselves reported no difference in their juggling across these two conditions, external juggling experts rated rhythmic stability worse in the Inaudible condition than in the Audible condition. This result suggests that auditory information should be rendered in the VR juggling training simulator.

  6. Data-Driven User Feedback: An Improved Neurofeedback Strategy considering the Interindividual Variability of EEG Features.

    Science.gov (United States)

    Han, Chang-Hee; Lim, Jeong-Hwan; Lee, Jun-Hak; Kim, Kangsan; Im, Chang-Hwan

    2016-01-01

    It has frequently been reported that some users of conventional neurofeedback systems can experience only a small portion of the total feedback range due to the large interindividual variability of EEG features. In this study, we proposed a data-driven neurofeedback strategy considering the individual variability of electroencephalography (EEG) features to permit users of the neurofeedback system to experience a wider range of auditory or visual feedback without a customization process. The main idea of the proposed strategy is to adjust the ranges of each feedback level using the density in the offline EEG database acquired from a group of individuals. Twenty-two healthy subjects participated in offline experiments to construct an EEG database, and five subjects participated in online experiments to validate the performance of the proposed data-driven user feedback strategy. Using the optimized bin sizes, the number of feedback levels that each individual experienced was significantly increased to 139% and 144% of the original results with uniform bin sizes in the offline and online experiments, respectively. Our results demonstrated that the use of our data-driven neurofeedback strategy could effectively increase the overall range of feedback levels that each individual experienced during neurofeedback training.
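
    The strategy's main idea, adjusting the ranges of the feedback levels by the density of a group EEG-feature database, reads naturally as quantile (equal-density) binning: each of the n levels covers an equal share of the group distribution rather than an equal share of the raw feature range. The sketch below implements that reading with NumPy; the ten-level default and the function names are assumptions, not the paper's code.

        # Density-based feedback levels via group-database quantiles (sketch).
        import numpy as np

        def make_level_edges(group_feature_values, n_levels=10):
            """Bin edges at equally spaced quantiles of the offline database."""
            qs = np.linspace(0.0, 1.0, n_levels + 1)
            return np.quantile(group_feature_values, qs)

        def feedback_level(feature_value, edges):
            """Map an online feature sample to its feedback level (0-based)."""
            idx = np.searchsorted(edges, feature_value) - 1
            return int(np.clip(idx, 0, len(edges) - 2))

    With uniform bins, a user whose feature values cluster in a narrow band would experience only a few levels; quantile edges spread that band across many levels, which is the widening effect the study reports.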

  7. Immediate Feedback on Accuracy and Performance: The Effects of Wireless Technology on Food Safety Tracking at a Distribution Center

    Science.gov (United States)

    Goomas, David T.

    2012-01-01

    The effects of wireless ring scanners, which provided immediate auditory and visual feedback, were evaluated to increase the performance and accuracy of order selectors at a meat distribution center. The scanners not only increased performance and accuracy compared to paper pick sheets, but were also instrumental in immediate and accurate data…

  8. Hemispheric lateralization for early auditory processing of lexical tones: dependence on pitch level and pitch contour.

    Science.gov (United States)

    Wang, Xiao-Dong; Wang, Ming; Chen, Lin

    2013-09-01

    In Mandarin Chinese, a tonal language, pitch level and pitch contour are two dimensions of lexical tones according to their acoustic features (i.e., pitch patterns). A change in pitch level features a step change whereas that in pitch contour features a continuous variation in voice pitch. Currently, relatively little is known about the hemispheric lateralization for the processing of each dimension. To address this issue, we made whole-head electrical recordings of mismatch negativity in native Chinese speakers in response to the contrast of Chinese lexical tones in each dimension. We found that pre-attentive auditory processing of pitch level was obviously lateralized to the right hemisphere whereas there is a tendency for that of pitch contour to be lateralized to the left. We also found that the brain responded faster to pitch level than to pitch contour at a pre-attentive stage. These results indicate that the hemispheric lateralization for early auditory processing of lexical tones depends on the pitch level and pitch contour, and suggest an underlying inter-hemispheric interactive mechanism for the processing. © 2013 Elsevier Ltd. All rights reserved.

  9. Occupational risk factors and voice disorders.

    Science.gov (United States)

    Vilkman, E

    1996-01-01

    From the point of view of occupational health, the field of voice disorders is very poorly developed as compared, for instance, to the prevention and diagnostics of occupational hearing disorders. In fact, voice disorders have not even been recognized in the field of occupational medicine. Hence, it is obviously very rare in most countries that the voice disorder of a professional voice user, e.g. a teacher, a singer or an actor, is accepted as an occupational disease by insurance companies. However, occupational voice problems do not lack significance from the point of view of the patient. We also know from questionnaires and clinical studies that voice complaints are very common. Another example of job-related health problems, which has proved more successful in terms of its occupational health status, is the repetition strain injury of the elbow, i.e. the "tennis elbow". Its textbook definition could be used as such to describe an occupational voice disorder ("dysphonia professional is"). In the present paper the effects of such risk factors as vocal loading itself, background noise and room acoustics and low relative humidity of the air are discussed. Due to individual factors underlying the development of professional voice disorders, recommendations rather than regulations are called for. There are many simple and even relatively low-cost methods available for the prevention of vocal problems as well as for supporting rehabilitation.

  10. Development of comprehensive unattended child warning and feedback system in vehicle

    Directory of Open Access Journals (Sweden)

    Sulaiman Norizam

    2017-01-01

    Full Text Available Cases of children being trapped and suffocated in unattended vehicles keep increasing, even though awareness campaigns on the safety of children in parked vehicles have been carried out by the government. Various methods have been introduced by researchers to overcome this issue but have yet to prove effective. Among them are capacitive sensors, microwave sensors, pressure sensors and image sensors, where most of these techniques are applied to the child's seat to detect the presence of a baby or infant. This research therefore provides a comprehensive and effective system to detect the presence of children, including infants, in an unattended vehicle by combining human physiological signal detectors (voice and body odor) with temperature and motion sensors. Once the proposed system recognizes any signal generated by the voice, odor, motion or temperature detectors in the vehicle's cabin, it provides an escalating feedback sequence: it first sends a short message to the parents; if no response is received within the specified time, the system activates the vehicle's horn; finally, it lowers the vehicle's windows to release toxic gases and reduce the cabin temperature. The system is at the prototyping stage, where every design component was evaluated individually, and the overall system was successfully tested, with detection and feedback following the instructions given by the microcontroller.
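
    The escalating feedback sequence described above (SMS first, horn on timeout, then windows) is a simple policy that can be stated compactly. The sketch below is an illustration only; the hardware hooks (send_sms, parent_acknowledged, sound_horn, lower_windows), the presence rule and the two-minute response window are hypothetical placeholders for the prototype's microcontroller interfaces.

        # Escalating alert policy for an unattended-child detector (sketch).
        import time

        RESPONSE_WINDOW_S = 120  # assumed parent-response allocation time

        def child_present(voice, odor, motion, temperature_c):
            # Any physiological cue, or motion in a hot cabin, counts
            return voice or odor or (motion and temperature_c > 35.0)

        def run_feedback(send_sms, parent_acknowledged,
                         sound_horn, lower_windows):
            send_sms("Child detected in unattended vehicle!")
            deadline = time.time() + RESPONSE_WINDOW_S
            while time.time() < deadline:
                if parent_acknowledged():
                    return              # parent responded; stop escalating
                time.sleep(1.0)
            sound_horn()                # no response: audible alarm
            lower_windows()             # ventilate and cool the cabin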

  11. Voice Response Systems Technology.

    Science.gov (United States)

    Gerald, Jeanette

    1984-01-01

    Examines two methods of generating synthetic speech in voice response systems, which allow computers to communicate in human terms (speech), using human interface devices (ears): phoneme and reconstructed voice systems. Considerations prior to implementation, current and potential applications, glossary, directory, and introduction to Input Output…

  12. Clinical Voices - an update

    DEFF Research Database (Denmark)

    Fusaroli, Riccardo; Weed, Ethan

    Anomalous aspects of speech and voice, including pitch, fluency, and voice quality, are reported to characterise many mental disorders. However, it has proven difficult to quantify and explain this oddness of speech by employing traditional statistical methods. In this talk we will show how...

  13. Changes after voice therapy in objective and subjective voice measurements of pediatric patients with vocal nodules.

    Science.gov (United States)

    Tezcaner, Ciler Zahide; Karatayli Ozgursoy, Selmin; Sati, Isil; Dursun, Gursel

    2009-12-01

    The aim of this study was to analyze the efficiency of voice therapy in children with vocal nodules by using acoustic analysis and subjective assessment. Thirty-nine patients with vocal fold nodules, aged between 7 and 14, were included in the study. Each subject had voice therapy led by an experienced voice therapist once a week. All diagnostic and follow-up work-ups were performed before voice therapy and after the third or the sixth month. Transoral and/or transnasal videostroboscopic examination was performed, acoustic analysis was conducted using the Multi-Dimensional Voice Program (MDVP), and subjective analysis used the GRBAS scale. As for the perceptual assessment, the difference was significant for four parameters out of five. A significant improvement was found in the acoustic analysis parameters of jitter, shimmer, and noise-to-harmonic ratio. Voice therapy that was planned according to patients' needs, age, compliance and response to therapy had positive effects on pediatric patients with vocal nodules. Acoustic analysis and GRBAS may be used successfully in the follow-up of pediatric vocal nodule treatment.
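
    The perturbation measures tracked in this study (jitter, shimmer, noise-to-harmonic ratio) were obtained with MDVP, a commercial package. For readers without MDVP access, the sketch below computes closely related measures with Praat via the parselmouth Python package; the filename is hypothetical, and MDVP and Praat values are not numerically interchangeable.

        # Jitter, shimmer and HNR from a sustained vowel recording (sketch).
        import parselmouth
        from parselmouth.praat import call

        snd = parselmouth.Sound("sustained_a.wav")   # hypothetical file
        pp = call(snd, "To PointProcess (periodic, cc)", 75, 500)
        jitter_local = call(pp, "Get jitter (local)",
                            0, 0, 0.0001, 0.02, 1.3)
        shimmer_local = call([snd, pp], "Get shimmer (local)",
                             0, 0, 0.0001, 0.02, 1.3, 1.6)
        harmonicity = call(snd, "To Harmonicity (cc)", 0.01, 75, 0.1, 1.0)
        hnr_db = call(harmonicity, "Get mean", 0, 0)
        print(f"jitter={jitter_local:.4f}  "
              f"shimmer={shimmer_local:.4f}  HNR={hnr_db:.1f} dB")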

  14. Playful Interaction with Voice Sensing Modular Robots

    DEFF Research Database (Denmark)

    Heesche, Bjarke; MacDonald, Ewen; Fogh, Rune

    2013-01-01

    This paper describes a voice sensor, suitable for modular robotic systems, which estimates the energy and fundamental frequency, F0, of the user's voice. Through a number of example applications and tests with children, we observe how the voice sensor facilitates playful interaction between children and two different robot configurations. In future work, we will investigate if such a system can motivate children to improve voice control and explore how to extend the sensor to detect emotions in the user's voice.
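
    A sensor of the kind described needs little more than frame-wise energy and F0 estimates. The sketch below shows one standard way to obtain both (RMS energy plus autocorrelation pitch); it is not the sensor's published algorithm, and the 16 kHz rate, the search range and the voicing threshold are assumptions.

        # Frame-based RMS energy and autocorrelation F0 estimate (sketch).
        import numpy as np

        FS = 16_000                     # assumed sample rate
        FMIN, FMAX = 80, 400            # plausible F0 range for children

        def analyze_frame(frame):
            """Return (rms_energy, f0_hz); f0 = 0.0 marks unvoiced frames."""
            energy = float(np.sqrt(np.mean(frame ** 2)))
            frame = frame - frame.mean()
            ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
            lo, hi = FS // FMAX, FS // FMIN
            lag = lo + int(np.argmax(ac[lo:hi]))
            f0 = FS / lag if ac[lag] > 0.3 * ac[0] else 0.0
            return energy, f0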

  15. Objective Voice Parameters in Colombian School Workers with Healthy Voices

    Directory of Open Access Journals (Sweden)

    Lady Catherine Cantor Cutiva

    2015-09-01

    Full Text Available Objectives: To characterize the objective voice parameters among school workers, and to identify factors associated with three objective voice parameters, namely fundamental frequency (F0), sound pressure level (SPL) and maximum phonation time (MPT). Materials and methods: We conducted a cross-sectional study among 116 Colombian teachers and 20 Colombian non-teachers. After signing the informed consent form, participants filled out a questionnaire. Then, a voice sample was recorded and evaluated perceptually by a speech therapist and by objective voice analysis with Praat software. Short-term environmental measurements of sound level, temperature, humidity, and reverberation time were conducted during visits at the workplaces, such as classrooms and offices. Linear regression analysis was used to determine associations between individual and work-related factors and objective voice parameters. Results: Compared with men, women had higher fundamental frequency (201 Hz for teachers and 209 Hz for non-teachers vs. 120 Hz for teachers and 127 Hz for non-teachers) and sound pressure level (82 dB vs. 80 dB), and shorter maximum phonation time (around 14 seconds vs. around 16 seconds). Female teachers younger than 50 years of age showed a significant tendency to speak with lower fundamental frequency and shorter MPT compared with female teachers older than 50 years of age. Female teachers had significantly higher fundamental frequency (by 66 Hz), higher sound pressure level (by 2 dB) and shorter maximum phonation time (by 2 seconds) than male teachers. Conclusion: Female teachers younger than 50 years of age had significantly lower F0 and shorter MPT compared with those older than 50 years of age. The multivariate analysis showed that gender was a much more important determinant of variations in F0, SPL and MPT than age and teaching occupation. Objectively measured temperature also contributed to the changes in SPL among school workers.

  16. Voice Quality Estimation in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Petr Zach

    2015-01-01

    Full Text Available This article deals with the impact of wireless (Wi-Fi) networks on the perceived quality of voice services. Quality of Service (QoS) metrics must be monitored in the computer network during voice data transmission to ensure the voice service quality the end-user has paid for, especially in wireless networks. In addition to QoS, the research area called Quality of Experience (QoE) provides metrics and methods for quality evaluation from the end-user's perspective. This article focuses on QoE estimation of Voice over IP (VoIP) calls in wireless networks using a network simulator. The results contribute to voice quality estimation based on the characteristics of the wireless network and the location of a wireless client.
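
    Although the article estimates QoE with a network simulator, a common reference point for mapping network conditions to a MOS is the ITU-T G.107 E-model. The sketch below applies a heavily simplified version of it (default R of 93.2, linearized delay impairment, a flat per-percent loss penalty); the coefficients are rough approximations rather than the article's method.

        # Simplified E-model: network measurements -> estimated MOS (sketch).
        def r_to_mos(r):
            """ITU-T G.107 mapping from rating factor R to MOS."""
            r = max(0.0, min(100.0, r))
            return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

        def estimate_mos(one_way_delay_ms, packet_loss_pct):
            r = 93.2                        # default R, no impairments
            r -= 0.024 * one_way_delay_ms   # delay impairment (simplified)
            if one_way_delay_ms > 177.3:
                r -= 0.11 * (one_way_delay_ms - 177.3)
            r -= 2.5 * packet_loss_pct      # rough codec-dependent penalty
            return r_to_mos(r)

        # e.g. estimate_mos(150, 1.0) -> about 4.3; heavy loss drives it down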

  17. The Influence of Sleep Disorders on Voice Quality.

    Science.gov (United States)

    Rocha, Bruna Rainho; Behlau, Mara

    2017-09-19

    To verify the influence of sleep quality on the voice. Descriptive and analytical cross-sectional study. Data were collected by an online or printed survey divided into three parts: (1) demographic data and vocal health aspects; (2) self-assessment of sleep and vocal quality, and the influence that sleep has on voice; and (3) sleep and voice self-assessment inventories: the Epworth Sleepiness Scale (ESS), the Pittsburgh Sleep Quality Index (PSQI), and the Voice Handicap Index reduced version (VHI-10). A total of 862 people were included (493 women, 369 men), with a mean age of 32 years old (maximum age of 79 and minimum age of 18 years old). The perception of the influence that sleep has on voice differed between groups. The factors that influence a voice handicap are vocal self-assessment, ESS total score, and self-assessment of the influence that sleep has on voice. The absence of daytime sleepiness is a protective factor against perceived voice handicap; the presence of daytime sleepiness is a damaging factor. Sleep influences voice. Perceived poor sleep quality is related to perceived poor vocal quality. Individuals with a voice handicap observe a greater influence of sleep on voice than those without. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  18. The musical environment and auditory plasticity: Hearing the pitch of percussion

    Directory of Open Access Journals (Sweden)

    Neil M Mclachlan

    2013-10-01

    Full Text Available Although musical skills clearly improve with training, pitch processing has generally been believed to be biologically determined by the behavior of brain stem neural mechanisms. Two main classes of pitch models have emerged over the last 50 years. Harmonic template models have been used to explain cross-channel integration of frequency information, and waveform periodicity models have been used to explain pitch discrimination that is much finer than the resolution of the auditory nerve. It has been proposed that harmonic templates are learnt from repeated exposure to voice, and so it may also be possible to learn inharmonic templates from repeated exposure to inharmonic music instruments. This study investigated whether pitch-matching accuracy for inharmonic percussion instruments was better in people who have trained on these instruments and could reliably recognize their timbre. We found that adults who had trained with Indonesian gamelan instruments were better at recognizing and pitch-matching gamelan instruments than people with similar levels of music training, but no prior exposure to these instruments. These findings suggest that gamelan musicians were able to use inharmonic templates to support accurate pitch processing for these instruments. We suggest that recognition mechanisms based on spectrotemporal patterns of afferent auditory excitation in the early stages of pitch processing allow rapid priming of the lowest frequency partial of inharmonic timbres, explaining how music training can adapt pitch processing to different musical genres and instruments.

  19. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    Science.gov (United States)

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  20. Reverberation impairs brainstem temporal representations of voiced vowel sounds: challenging periodicity-tagged segregation of competing speech in rooms

    Directory of Open Access Journals (Sweden)

    Mark eSayles

    2015-01-01

    Full Text Available The auditory system typically processes information from concurrently active sound sources (e.g., two voices speaking at once) in the presence of multiple delayed, attenuated and distorted sound-wave reflections (reverberation). Brainstem circuits help segregate these complex acoustic mixtures into auditory objects. Psychophysical studies demonstrate a strong interaction between reverberation and fundamental-frequency (F0) modulation, leading to impaired segregation of competing vowels when segregation is on the basis of F0 differences. Neurophysiological studies of complex-sound segregation have concentrated on sounds with steady F0s, in anechoic environments. However, F0 modulation and reverberation are quasi-ubiquitous. We examine the ability of 129 single units in the ventral cochlear nucleus of the anesthetized guinea pig to segregate the concurrent synthetic vowel sounds /a/ and /i/, based on temporal discharge patterns under closed-field conditions. We address the effects of added real-room reverberation, F0 modulation, and the interaction of these two factors on brainstem neural segregation of voiced speech sounds. A firing-rate representation of single vowels' spectral envelopes is robust to the combination of F0 modulation and reverberation: local firing-rate maxima and minima across the tonotopic array code vowel-formant structure. However, single-vowel F0-related periodicity information in shuffled inter-spike interval distributions is significantly degraded in the combined presence of reverberation and F0 modulation. Hence, segregation of double vowels' spectral energy into two streams (corresponding to the two vowels) on the basis of temporal discharge patterns is impaired by reverberation, specifically when F0 is modulated. All unit types (primary-like, chopper, onset) are similarly affected. These results offer neurophysiological insights into the perceptual organization of complex acoustic scenes under realistically challenging…

  1. The Sound of Voice: Voice-Based Categorization of Speakers' Sexual Orientation within and across Languages.

    Directory of Open Access Journals (Sweden)

    Simone Sulpizio

    Full Text Available Empirical research had initially shown that English listeners are able to identify the speakers' sexual orientation based on voice cues alone. However, the accuracy of this voice-based categorization, as well as its generalizability to other languages (language-dependency and to non-native speakers (language-specificity, has been questioned recently. Consequently, we address these open issues in 5 experiments: First, we tested whether Italian and German listeners are able to correctly identify sexual orientation of same-language male speakers. Then, participants of both nationalities listened to voice samples and rated the sexual orientation of both Italian and German male speakers. We found that listeners were unable to identify the speakers' sexual orientation correctly. However, speakers were consistently categorized as either heterosexual or gay on the basis of how they sounded. Moreover, a similar pattern of results emerged when listeners judged the sexual orientation of speakers of their own and of the foreign language. Overall, this research suggests that voice-based categorization of sexual orientation reflects the listeners' expectations of how gay voices sound rather than being an accurate detector of the speakers' actual sexual identity. Results are discussed with regard to accuracy, acoustic features of voices, language dependency and language specificity.

  2. Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina; Parbery-Clark, Alexandra; Ashley, Richard

    2010-03-01

    A growing body of research suggests that cognitive functions, such as attention and memory, drive perception by tuning sensory mechanisms to relevant acoustic features. Long-term musical experience also modulates lower-level auditory function, although the mechanisms by which this occurs remain uncertain. In order to tease apart the mechanisms that drive perceptual enhancements in musicians, we posed the question: do well-developed cognitive abilities fine-tune auditory perception in a top-down fashion? We administered a standardized battery of perceptual and cognitive tests to adult musicians and non-musicians, including tasks either more or less susceptible to cognitive control (e.g., backward versus simultaneous masking) and more or less dependent on auditory or visual processing (e.g., auditory versus visual attention). Outcomes indicate lower perceptual thresholds in musicians specifically for auditory tasks that relate with cognitive abilities, such as backward masking and auditory attention. These enhancements were observed in the absence of group differences for the simultaneous masking and visual attention tasks. Our results suggest that long-term musical practice strengthens cognitive functions and that these functions benefit auditory skills. Musical training bolsters higher-level mechanisms that, when impaired, relate to language and literacy deficits. Thus, musical training may serve to lessen the impact of these deficits by strengthening the corticofugal system for hearing. 2009 Elsevier B.V. All rights reserved.

  3. [Diagnostics and therapy in professional voice-users].

    Science.gov (United States)

    Richter, B; Echternach, M

    2010-04-01

    Voice is one of the most important instruments for expression and communication in humans, and dysphonia is very frequent. It generally affects people in voice-intensive professions, such as teachers, call-center employees, singers and actors. In recent years, methods have been developed that facilitate appropriate diagnosis and therapy, based on the criteria of evidence-based medicine, for voice patients according to their degree of disease. The basic protocol of the European Laryngological Society offers a standardized, multidimensional evaluation of voice parameters. In our own patient cohort, pre/post mean-value comparisons showed statistically significant improvements in voice quality, as measured by RBH, DSI and VHI, in both phonomicrosurgery (n=45) and voice therapy (n=30) patients.

  4. Voice pedagogy-what do we need?

    Science.gov (United States)

    Gill, Brian P; Herbst, Christian T

    2016-12-01

    The final keynote panel of the 10th Pan-European Voice Conference (PEVOC) was concerned with the topic 'Voice pedagogy-what do we need?' In this communication the panel discussion is summarized, and the authors provide a deeper discussion of one of the key questions, addressing the roles and tasks of people working with voice students. In particular, a distinction is made between (1) voice building (derived from the German term 'Stimmbildung'), primarily comprising the functional and physiological aspects of singing; (2) coaching, mostly concerned with performance skills; and (3) singing voice rehabilitation. Both public and private educators are encouraged to apply this distinction to their curricula, in order to arrive at more efficient singing teaching and to reduce the risk of vocal injury to the singers concerned.

  5. Analyzing the mediated voice - a datasession

    DEFF Research Database (Denmark)

    Lawaetz, Anna

    Broadcast voices are technologically manipulated. Paradoxically, in order to achieve a certain authenticity or sound of "reality", the voices are filtered and trained so as to reach the listeners. This "mise-en-scène" is important knowledge when it comes to the development of a consistent method of analysis of the mediated voice...

  6. Voice Quality in Mobile Telecommunication System

    Directory of Open Access Journals (Sweden)

    Evaldas Stankevičius

    2013-05-01

    Full Text Available The article deals with methods for measuring the quality of voice transmitted over the mobile network, as well as related problems, algorithms and options. It presents the voice quality measurement system that was created and discusses its adequacy and efficiency. The author also presents the results of applying the system under the optimal hardware configuration. Under almost ideal conditions, the system evaluates voice quality with an average MOS estimate of 3.85, while the standardized TEMS Investigation 9.0 yields an average MOS estimate of 4.05. Next, the article discusses the implementation of a voice quality predictor and investigates nonlinear and linear methods for predicting voice quality from the mobile network settings. Nonlinear prediction using an artificial neural network resulted in a correlation coefficient of 0.62, while linear prediction using the least mean squares method resulted in a correlation coefficient of 0.57. The analytical expression of voice quality as a function of the three network parameters (BER, C/I, RSSI) is given as well. Article in Lithuanian.
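    As a rough illustration of the linear approach described above, the sketch below fits a least-squares predictor of MOS from the three network parameters named in the abstract (BER, C/I, RSSI). All numbers are invented for illustration; the paper's actual data, preprocessing and coefficients are not reproduced here:

```python
import numpy as np

# Hypothetical training data: one row per call with BER (%), C/I (dB) and
# RSSI (dBm), plus a measured MOS target per call. Values are illustrative.
X = np.array([[0.2, 18.0,  -71.0],
              [1.5, 10.0,  -95.0],
              [0.4, 15.0,  -80.0],
              [3.1,  6.0, -102.0],
              [0.9, 12.0,  -85.0]])
y = np.array([4.1, 2.8, 3.7, 2.1, 3.3])  # measured MOS per call

# Linear model MOS ~ w0 + w1*BER + w2*C/I + w3*RSSI, fitted by least squares
A = np.hstack([np.ones((len(X), 1)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

new_call = np.array([1.0, 0.8, 12.0, -88.0])  # leading 1.0 is the intercept term
print("predicted MOS:", new_call @ w)
```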

  7. Voice recognition through phonetic features with Punjabi utterances

    Science.gov (United States)

    Kaur, Jasdeep; Juglan, K. C.; Sharma, Vishal; Upadhyay, R. K.

    2017-07-01

    This paper deals with perception and disorders of speech with regard to the Punjabi language. Given the importance of voice identification, various parameters of speaker identification have been studied. The speech material was recorded with a tape recorder in normal and disguised modes of utterance. From the recorded speech materials, utterances free from noise were selected for auditory and acoustic spectrographic analysis. The comparison of normal and disguised speech of seven subjects is reported. The fundamental frequency (F0) at similar places, plosive duration at certain phonemes, amplitude ratio (A1:A2), etc. were compared in normal and disguised speech. It was found that the formant frequencies of normal and disguised speech remain almost the same only if they are compared at positions of the same vowel quality and quantity. If the vowel is more closed or more open in the disguised utterance, the formant frequency changes in comparison to the normal utterance. The amplitude ratio (A1:A2) is found to be speaker dependent and remains unchanged in the disguised utterance; however, this value may shift in the disguised utterance if cross-sectioning is not done at the same location.
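    Comparisons like those above start from per-frame estimates of F0 at matched vowel positions. Below is a minimal autocorrelation-based F0 estimator, sketched on a synthetic frame; this is a generic textbook method, not the spectrographic procedure used in the paper:

```python
import numpy as np

def estimate_f0(frame, sr, fmin=75.0, fmax=300.0):
    """Crude autocorrelation F0 estimate for one voiced frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # plausible pitch-period lags
    lag = lo + np.argmax(ac[lo:hi])           # strongest periodicity
    return sr / lag

# Toy check on a synthetic 180 Hz "vowel" with one harmonic
sr = 16000
t = np.arange(0, 0.04, 1 / sr)
frame = np.sin(2 * np.pi * 180 * t) + 0.3 * np.sin(2 * np.pi * 360 * t)
print(round(estimate_f0(frame, sr), 1))       # ~ 180.0
```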

  8. A Dual-Stream Neuroanatomy of Singing.

    Science.gov (United States)

    Loui, Psyche

    2015-02-01

    Singing requires effortless and efficient use of auditory and motor systems that center around the perception and production of the human voice. Although perception and production are usually tightly coupled functions, occasional mismatches between the two systems inform us of dissociable pathways in the brain systems that enable singing. Here I review the literature on perception and production in the auditory modality, and propose a dual-stream neuroanatomical model that subserves singing. I will discuss studies surrounding the neural functions of feedforward, feedback, and efference systems that control vocal monitoring, as well as the white matter pathways that connect frontal and temporal regions that are involved in perception and production. I will also consider disruptions of the perception-production network that are evident in tone-deaf individuals and poor pitch singers. Finally, by comparing expert singers against other musicians and nonmusicians, I will evaluate the possibility that singing training might offer rehabilitation from these disruptions through neuroplasticity of the perception-production network. Taken together, the best available evidence supports a model of dorsal and ventral pathways in auditory-motor integration that enables singing and is shared with language, music, speech, and human interactions in the auditory environment.

  9. Auditory short-term memory in the primate auditory cortex

    OpenAIRE

    Scott, Brian H.; Mishkin, Mortimer

    2015-01-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active "working memory" bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive sho...

  10. Smartphone App for Voice Disorders

    Science.gov (United States)

    ... developed a mobile monitoring device that relies on smartphone technology to gather a week's worth of talking ...

  11. Hearing Voices and Seeing Things

    Science.gov (United States)

    No. 102; Updated October ... delusions (a fixed, false, and often bizarre belief). Hearing voices or seeing things that are not there ...

  12. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual...

  13. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Full Text Available Auditory dysfunction is a common clinical symptom that can have profound effects on the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD, including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology and anatomy of these conditions, and in strategies to diagnose and treat them, are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  14. Predicting compliance with command hallucinations: anger, impulsivity and appraisals of voices' power and intent.

    Science.gov (United States)

    Bucci, Sandra; Birchwood, Max; Twist, Laura; Tarrier, Nicholas; Emsley, Richard; Haddock, Gillian

    2013-06-01

    Command hallucinations are experienced by 33-74% of people who hear voices, with varying levels of compliance reported. Compliance with command hallucinations can result in acts of aggression, violence, suicide and self-harm; the typical response, however, is non-compliance or appeasement. Two factors associated with such dangerous behaviours are anger and impulsivity; however, few studies have examined their relationship with compliance with command hallucinations. The current study aimed to examine the roles of anger and impulsivity in compliance with command hallucinations in people diagnosed with a psychotic disorder. The study used a cross-sectional design and included individuals who reported auditory hallucinations in the past month. Subjects completed a variety of self-report questionnaire measures. Thirty-two people experiencing command hallucinations, from both in-patient and community settings, were included. The tendency to appraise the voice as powerful, to be impulsive, to experience anger and to regulate anger were significantly associated with compliance with command hallucinations to do harm. Two factors emerged as significant independent predictors of compliance with command hallucinations: omnipotence and impulsivity. An interaction between omnipotence and compliance with commands, via a link with impulsivity, is considered, and important clinical factors in the assessment of risk when working with clients experiencing command hallucinations are recommended. The data are highly suggestive and warrant further investigation with a larger sample. Copyright © 2013 Elsevier B.V. All rights reserved.
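    "Significant independent predictors" in a study like this typically come from a multiple (e.g., logistic) regression of compliance on the candidate variables. A hedged sketch of that analysis pattern with fabricated scores, purely to show the mechanics; the study's actual measures, coding and model are not reproduced here:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical questionnaire totals for omnipotence and impulsivity, and a
# binary compliance outcome (1 = complied with a harmful command).
rng = np.random.default_rng(1)
n = 32
omnipotence = rng.normal(20, 5, n)
impulsivity = rng.normal(15, 4, n)
logit = 0.25 * (omnipotence - 20) + 0.30 * (impulsivity - 15)
complied = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # simulated outcome

# Logistic regression: compliance ~ omnipotence + impulsivity
X = sm.add_constant(np.column_stack([omnipotence, impulsivity]))
fit = sm.Logit(complied, X).fit(disp=False)
print(fit.summary(xname=["const", "omnipotence", "impulsivity"]))
```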

  15. Increased discriminability of authenticity from multimodal laughter is driven by auditory information.

    Science.gov (United States)

    Lavan, Nadine; McGettigan, Carolyn

    2017-10-01

    We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, visual-only) and multimodal contexts (audiovisual). In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signalling through voices and faces, in the context of spontaneous and volitional behaviour, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.

  16. Self-perception, complaints and vocal quality among undergraduate students enrolled in a Pedagogy course.

    Science.gov (United States)

    Fabron, Eliana Maria Gradim; Regaçone, Simone Fiuza; Marino, Viviane Cristina de Castro; Mastria, Marina Ludovico; Motonaga, Suely Mayumi; Sebastião, Luciana Tavares

    2015-01-01

    To compare the vocal self-perception and vocal complaints reported by two groups of students of a pedagogy course (freshmen and graduates); to relate vocal self-perception to vocal complaints in these groups; and to compare the voice quality of students from these groups through auditory-perceptual assessment and acoustic analysis. Initially, 89 students from the pedagogy course answered a questionnaire about self-perceived voice quality and vocal complaints. In a second phase, auditory-perceptual evaluation and acoustic analyses of 48 participants were performed using voice recordings of sustained vowel emission and poem reading. The most reported vocal complaints were fatigue while using the voice, sore throat, effort to speak, irritation or burning in the throat, hoarseness, tightness in the neck, and variations of voice throughout the day. There was a higher occurrence of complaints among graduates than among freshmen, with significant differences for four of the nine complaints. It was also possible to observe a relationship between vocal self-perception and the complaints reported by these students. No significant differences were observed in the results of the auditory-perceptual evaluation; however, some graduates had their voices rated as deviating more severely from normality. In the acoustic analysis, no difference was observed between groups. The increase in vocal demand on the graduates may have caused the greater number and diversity of vocal complaints, several of which are related to the self-assessment of voice quality. The auditory-perceptual evaluation and acoustic analysis showed no deviations in their voices.

  17. Clinical Features of Psychogenic Voice Disorder and the Efficiency of Voice Therapy and Psychological Evaluation.

    Science.gov (United States)

    Tezcaner, Zahide Çiler; Gökmen, Muhammed Fatih; Yıldırım, Sibel; Dursun, Gürsel

    2017-11-06

    The aim of this study was to define the clinical features of psychogenic voice disorder (PVD) and explore the treatment efficiency of voice therapy and psychological evaluation. Fifty-eight patients who received treatment following a PVD diagnosis and had no organic or other functional voice disorders were assessed retrospectively based on laryngoscopic examinations and subjective and objective assessments. Epidemiological characteristics, accompanying organic and psychological disorders, preferred methods of treatment, and previous treatment outcomes were examined for each patient. A comparison of voice disorders and responses to treatment was made between patients who received psychotherapy and patients who did not. Participants comprised 58 patients, 10 male and 48 female. Voice therapy was applied in all patients, 54 (93.1%) of whom had improvement in their voice. Although all patients were advised to undergo psychological assessment, only 60.3% (35/58) of them did so. No statistically significant difference was found in treatment responses between patients who received psychological support and patients who did not. Relapse occurred in 14.7% (5/34) of the patients who applied for psychological assessment and in 50% (10/20) of those who did not. There was a statistically significant difference in relapse rates, which were higher among patients who did not receive psychological support (P ...). Voice therapy is an efficient treatment method for PVD. However, in long-term follow-up, relapse of the disease is observed to be higher among patients who failed to follow up on the recommendation for psychological assessment. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  18. Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding.

    Science.gov (United States)

    Atilgan, Huriye; Town, Stephen M; Wood, Katherine C; Jones, Gareth P; Maddox, Ross K; Lee, Adrian K C; Bizley, Jennifer K

    2018-02-07

    How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  19. Interaction of language, auditory and memory brain networks in auditory verbal hallucinations

    NARCIS (Netherlands)

    Curcic-Blake, Branislava; Ford, Judith M.; Hubl, Daniela; Orlov, Natasza D.; Sommer, Iris E.; Waters, Flavie; Allen, Paul; Jardri, Renaud; Woodruff, Peter W.; David, Olivier; Mulert, Christoph; Woodward, Todd S.; Aleman, Andre

    Auditory verbal hallucinations (AVH) occur in psychotic disorders, but also as a symptom of other conditions and even in healthy people. Several current theories on the origin of AVH converge, with neuroimaging studies suggesting that the language, auditory and memory/limbic networks are of

  20. Reliability in perceptual analysis of voice quality.

    Science.gov (United States)

    Bele, Irene Velsvik

    2005-12-01

    This study focuses on speaking voice quality in male teachers (n = 35) and male actors (n = 36), who represent untrained and trained voice users, because we wanted to investigate normal and supranormal voices. Both substantive and methodological aspects were considered. The study includes a method for perceptual voice evaluation, and a basic issue was rater reliability. A listening group of 10 listeners, 7 experienced speech-language therapists and 3 speech-language therapy students, evaluated the voices on 15 vocal characteristics using visual analogue (VA) scales. Two sets of voice signals were investigated: text reading (at 2 loudness levels) and sustained vowel (at 3 levels). The results indicated high interrater reliability for most perceptual characteristics. Both types of voice signals were evaluated reliably, although reliability was somewhat higher for connected speech than for vowels, especially at the normal loudness level. Experienced listeners tended to be more consistent in their ratings than the student raters. Some vocal characteristics achieved acceptable reliability even with a smaller panel of listeners. The perceptual characteristics grouped into 4 factors reflecting perceptual dimensions.
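    One common way to quantify the kind of interrater reliability reported above is an internal-consistency coefficient across raters, such as Cronbach's alpha. The paper itself may have used a different statistic, so the sketch below, with made-up VA-scale scores, only illustrates the computation:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha over raters; ratings is (n_voices, n_raters)."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_var = ratings.var(axis=0, ddof=1).sum()   # per-rater variances
    total_var = ratings.sum(axis=1).var(ddof=1)    # variance of summed scores
    return k / (k - 1) * (1 - item_var / total_var)

# Toy example: 6 voices rated on one characteristic by 4 raters (0-100 VA scale)
r = np.array([[70, 65, 72, 68],
              [30, 35, 28, 33],
              [55, 60, 52, 58],
              [80, 78, 85, 82],
              [20, 25, 18, 22],
              [45, 50, 48, 44]])
print(round(cronbach_alpha(r), 3))  # close to 1 -> high interrater consistency
```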

  1. Permanent Quadriplegia Following Replacement of Voice Prosthesis.

    Science.gov (United States)

    Ozturk, Kayhan; Erdur, Omer; Kibar, Ertugrul

    2016-11-01

    The authors present a patient with quadriplegia caused by a cervical spine abscess following voice prosthesis replacement, the first reported case of permanent quadriplegia resulting from this procedure. The case emphasizes that life-threatening complications may be encountered during replacement of a voice prosthesis. Care should be taken during the replacement, and if problems are encountered during the procedure, patients must be followed closely.

  2. Procedures for central auditory processing screening in schoolchildren.

    Science.gov (United States)

    Carvalho, Nádia Giulian de; Ubiali, Thalita; Amaral, Maria Isabel Ramos do; Santos, Maria Francisca Colella

    2018-03-22

    Central auditory processing screening in schoolchildren has led to debates in the literature, regarding both the protocol to be used and the importance of actions aimed at prevention and promotion of auditory health. Defining effective screening procedures for central auditory processing is a challenge in audiology. This study aimed to analyze the scientific research on central auditory processing screening and discuss the effectiveness of the procedures utilized. A search was performed in the SciELO and PubMed databases by two researchers. The descriptors used in Portuguese and English were: auditory processing, screening, hearing, auditory perception, children, auditory tests, and their respective terms in Portuguese. Inclusion criteria: original articles involving schoolchildren, auditory screening of central auditory skills, and articles in Portuguese or English. Exclusion criteria: studies with adult and/or neonatal populations, peripheral auditory screening only, and duplicate articles. After applying the described criteria, 11 articles were included. At the international level, the central auditory processing screening methods used were: the screening test for auditory processing disorder and its revised version, the screening test for auditory processing, the scale of auditory behaviors, the children's auditory performance scale, and Feather Squadron. In the Brazilian scenario, the procedures used were the simplified auditory processing assessment and Zaidan's battery of tests. At the international level, the screening test for auditory processing and Feather Squadron batteries stand out as the most comprehensive evaluations of hearing skills. At the national level, there is a paucity of studies that use methods evaluating more than four skills and that are normalized by age group. The use of the simplified auditory processing assessment and questionnaires can be complementary in the search for an easy-access and low-cost alternative in the auditory screening of Brazilian schoolchildren. Interactive tools should be proposed, that...

  3. Updating signal typing in voice: addition of type 4 signals.

    Science.gov (United States)

    Sprecher, Alicia; Olszewski, Aleksandra; Jiang, Jack J; Zhang, Yu

    2010-06-01

    The addition of a fourth type of voice to Titze's voice classification scheme is proposed. This fourth voice type is characterized by primarily stochastic noise behavior and is therefore unsuitable for both perturbation and correlation dimension analysis. Forty voice samples were classified into the proposed four types using narrowband spectrograms. Acoustic, perceptual, and correlation dimension analyses were completed for all voice samples. Perturbation measures tended to increase with voice type. Based on reliability cutoffs, the type 1 and type 2 voices were considered suitable for perturbation analysis. Measures of unreliability were higher for type 3 and 4 voices. Correlation dimension analyses increased significantly with signal type as indicated by a one-way analysis of variance. Notably, correlation dimension analysis could not quantify the type 4 voices. The proposed fourth voice type represents a subset of voices dominated by noise behavior. Current measures capable of evaluating type 4 voices provide only qualitative data (spectrograms, perceptual analysis, and an infinite correlation dimension). Type 4 voices are highly complex and the development of objective measures capable of analyzing these voices remains a topic of future investigation.
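    The perturbation measures that break down for type 3 and 4 signals presuppose a well-defined cycle-to-cycle period track. A minimal sketch of one classic perturbation measure, local jitter, makes that assumption explicit; this is a generic formulation, not the exact measure set used in the study:

```python
import numpy as np

def jitter_percent(periods):
    """Mean absolute difference between consecutive glottal periods, as a
    percentage of the mean period. Only meaningful when successive cycles
    can be identified at all, i.e., for nearly periodic (type 1/2) voices."""
    periods = np.asarray(periods, dtype=float)
    return 100 * np.mean(np.abs(np.diff(periods))) / periods.mean()

# Toy example: a period track (seconds) from a sustained vowel near 150 Hz
p = 1 / 150 + np.random.default_rng(2).normal(0, 2e-5, 100)
print(round(jitter_percent(p), 3))  # small value -> nearly periodic signal
```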

  4. Voice amplification for primary school teachers with voice disorders: a randomized clinical trial.

    Science.gov (United States)

    Bovo, Roberto; Trevisi, Patrizia; Emanuelli, Enzo; Martini, Alessandro

    2013-06-01

    Several studies have demonstrated a high prevalence of voice disorders in teachers, together with the personal, professional and economic consequences of the problem. Good primary prevention should be based on 3 aspects: 1) amelioration of classroom acoustics, 2) voice care programs for future professional voice users, including teachers, and 3) classroom or portable amplification systems. The aim of the study was to assess the benefit obtained from the use of portable amplification systems by female primary school teachers in their occupational setting. Forty female primary school teachers attended a course about professional voice care, which comprised two theoretical lectures, each 60 min long. Thereafter, they were randomized into 2 groups: the teachers of the first group were asked to use a portable vocal amplifier for 3 months, until the end of the school year. The other 20 teachers formed the control group, matched for age and years of employment. All subjects had grade 1 dysphonia with no significant organic lesion of the vocal folds. Most teachers in the experimental group used the amplifier consistently for the whole duration of the experiment and found it very useful in reducing the symptoms of vocal fatigue. In fact, after 3 months, Voice Handicap Index (VHI) scores in the "course + amplifier" group demonstrated a significant improvement (p = 0.003). The perceptual grade of dysphonia also improved significantly (p = 0.0005). The same parameters also changed favourably in the "course only" group, but the results were not statistically significant (p = 0.4 for VHI and p = 0.03 for perceptual grade). In teachers, and particularly in those with a constitutionally weak voice and/or those who are prone to vocal fold pathology, vocal amplifiers may be an effective and low-cost intervention to decrease potentially damaging vocal loads and may represent a necessary form of prevention.

  5. Investigating the role of auditory and tactile modalities in violin quality evaluation.

    Science.gov (United States)

    Wollman, Indiana; Fritz, Claudia; Poitevineau, Jacques; McAdams, Stephen

    2014-01-01

    The role of auditory and tactile modalities involved in violin playing and evaluation was investigated in an experiment employing a blind violin evaluation task under different conditions: i) normal playing conditions, ii) playing with auditory masking, and iii) playing with vibrotactile masking. Under each condition, 20 violinists evaluated five violins according to criteria related to violin playing and sound characteristics and rated their overall quality and relative preference. Results show that both auditory and vibrotactile feedback are important in the violinists' evaluations but that their relative importance depends on the violinist, the violin and the type of evaluation (different criteria ratings or preference). In this way, the overall quality ratings were found to be accurately predicted by the rating criteria, which also proved to be perceptually relevant to violinists, but were poorly correlated with the preference ratings; this suggests that the two types of ratings (overall quality vs preference) may stem from different decision-making strategies. Furthermore, the experimental design confirmed that violinists agree more on the importance of criteria in their overall evaluation than on their actual ratings for different violins. In particular, greater agreement was found on the importance of criteria related to the sound of the violin. Nevertheless, this study reveals that there are fundamental differences in the way players interpret and evaluate each criterion, which may explain why correlating physical properties with perceptual properties has been challenging so far in the field of musical acoustics.

  6. Data-Driven User Feedback: An Improved Neurofeedback Strategy considering the Interindividual Variability of EEG Features

    Directory of Open Access Journals (Sweden)

    Chang-Hee Han

    2016-01-01

    Full Text Available It has frequently been reported that some users of conventional neurofeedback systems can experience only a small portion of the total feedback range due to the large interindividual variability of EEG features. In this study, we proposed a data-driven neurofeedback strategy considering the individual variability of electroencephalography (EEG features to permit users of the neurofeedback system to experience a wider range of auditory or visual feedback without a customization process. The main idea of the proposed strategy is to adjust the ranges of each feedback level using the density in the offline EEG database acquired from a group of individuals. Twenty-two healthy subjects participated in offline experiments to construct an EEG database, and five subjects participated in online experiments to validate the performance of the proposed data-driven user feedback strategy. Using the optimized bin sizes, the number of feedback levels that each individual experienced was significantly increased to 139% and 144% of the original results with uniform bin sizes in the offline and online experiments, respectively. Our results demonstrated that the use of our data-driven neurofeedback strategy could effectively increase the overall range of feedback levels that each individual experienced during neurofeedback training.
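    The density-based idea described above can be approximated by placing feedback-level boundaries at equal-probability quantiles of the offline group database, so that each level is reachable about equally often for a typical user. A sketch under that assumption (the function names and the lognormal toy data are invented; the paper's exact binning procedure may differ):

```python
import numpy as np

def feedback_bin_edges(offline_values, n_levels):
    """Place level boundaries at equal-probability quantiles of the offline
    group database, so each feedback level covers the same probability mass."""
    qs = np.linspace(0, 1, n_levels + 1)
    return np.quantile(offline_values, qs)

def feedback_level(value, edges):
    """Map a live EEG feature value to a feedback level 0..n_levels-1."""
    return int(np.clip(np.searchsorted(edges, value) - 1, 0, len(edges) - 2))

# Offline database of a band-power feature pooled over subjects (made up)
rng = np.random.default_rng(3)
db = rng.lognormal(mean=0.0, sigma=0.6, size=5000)
edges = feedback_bin_edges(db, n_levels=10)
print(feedback_level(db[0], edges))  # level shown to the user for this value
```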

  7. Measurement of voice onset time in maxillectomy patients.

    Science.gov (United States)

    Hattori, Mariko; Sumita, Yuka I; Taniguchi, Hisashi

    2014-01-01

    Objective speech evaluation using acoustic measurement is needed for the proper rehabilitation of maxillectomy patients. For digital evaluation of consonants, measurement of voice onset time is one option. However, voice onset time has not been measured in maxillectomy patients, as their consonant sound spectra exhibit unique characteristics that make the measurement of voice onset time challenging. In this study, we established criteria for measuring voice onset time in maxillectomy patients for objective speech evaluation. We examined voice onset time for /ka/ and /ta/ in 13 maxillectomy patients by calculating the number of valid measurements of voice onset time out of three trials for each syllable. Wilcoxon's signed rank test showed that voice onset time measurements were more successful for /ka/ and /ta/ when a prosthesis was used (Z = -2.232, P = 0.026 and Z = -2.401, P = 0.016, respectively) than when a prosthesis was not used. These results indicate that a prosthesis affected voice onset time measurement in these patients. Although more research in this area is needed, measurement of voice onset time has the potential to be used to evaluate consonant production in maxillectomy patients wearing a prosthesis.
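    The paired comparison reported above (a Wilcoxon signed-rank test on counts of valid measurements with versus without the prosthesis) is straightforward to reproduce in outline; the per-patient counts below are invented stand-ins, not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical counts of valid VOT measurements (0-3 per syllable) for the
# same 13 patients with and without the prosthesis; numbers are illustrative.
with_prosthesis    = np.array([3, 3, 2, 3, 1, 3, 2, 3, 3, 2, 3, 1, 3])
without_prosthesis = np.array([1, 2, 0, 2, 1, 2, 1, 2, 2, 0, 2, 1, 2])

# Paired, non-parametric comparison of the two conditions
stat, p = wilcoxon(with_prosthesis, without_prosthesis)
print(f"W={stat}, p={p:.3f}")
```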

  9. Pyff - a pythonic framework for feedback applications and stimulus presentation in neuroscience.

    Science.gov (United States)

    Venthur, Bastian; Scholler, Simon; Williamson, John; Dähne, Sven; Treder, Matthias S; Kramarek, Maria T; Müller, Klaus-Robert; Blankertz, Benjamin

    2010-01-01

    This paper introduces Pyff, the Pythonic feedback framework for feedback applications and stimulus presentation. Pyff provides a platform-independent framework that allows users to develop and run neuroscientific experiments in the programming language Python. Existing solutions have mostly been implemented in C++, which makes for a rather tedious programming task for non-computer-scientists, or in Matlab, which is not well suited for more advanced visual or auditory applications. Pyff was designed to make experimental paradigms (i.e., feedback and stimulus applications) easily programmable. It includes base classes for various types of common feedbacks and stimuli, as well as useful libraries for external hardware such as eyetrackers. Pyff is also equipped with a steadily growing set of ready-to-use feedbacks and stimuli. It can be used as a standalone application, for instance providing stimulus presentation in psychophysics experiments, or within a closed loop such as in biofeedback or brain-computer interfacing experiments. Pyff communicates with other systems via a standardized communication protocol and is therefore suitable for use with any system that can be adapted to send its data in the specified format. Having such a general, open-source framework will help foster a fruitful exchange of experimental paradigms between research groups. In particular, it will decrease the need for reprogramming standard paradigms, ease the reproducibility of published results, and naturally entail some standardization of stimulus presentation.
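    In Pyff, a new paradigm is typically written as a subclass of a Feedback base class whose lifecycle hooks the framework calls at the right moments. The sketch below follows the hook names used in published Pyff examples (on_init, on_play, on_control_event, on_quit), but the exact module path, signatures and control-signal keys should be treated as assumptions rather than a verified API:

```python
# Minimal Pyff-style feedback sketch; module path and hook signatures are
# assumptions based on published Pyff examples, not a verified API.
from FeedbackBase.Feedback import Feedback

class BarFeedback(Feedback):
    def on_init(self):
        # Called once when the feedback is loaded; initialize tunable
        # variables here so the controlling GUI can inspect and set them.
        self.level = 0.0

    def on_play(self):
        # Called when the experiment starts; open windows, start loops, etc.
        print("feedback running")

    def on_control_event(self, data):
        # Called whenever the BCI system sends a new control signal
        # (e.g., a classifier output); update the display accordingly.
        # "cl_output" is a hypothetical key for the incoming value.
        self.level = data.get("cl_output", 0.0)
        print("new level:", self.level)

    def on_quit(self):
        # Called on shutdown; release resources here.
        print("feedback stopped")
```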

  10. Voice and choice by delegation.

    Science.gov (United States)

    van de Bovenkamp, Hester; Vollaard, Hans; Trappenburg, Margo; Grit, Kor

    2013-02-01

    In many Western countries, options for citizens to influence public services are increased to improve the quality of services and democratize decision making. Possibilities to influence are often cast into Albert Hirschman's taxonomy of exit (choice), voice, and loyalty. In this article we identify delegation as an important addition to this framework. Delegation gives individuals the chance to practice exit/choice or voice without all the hard work that is usually involved in these options. Empirical research shows that not many people use their individual options of exit and voice, which could lead to inequality between users and nonusers. We identify delegation as a possible solution to this problem, using Dutch health care as a case study to explore this option. Notwithstanding various advantages, we show that voice and choice by delegation also entail problems of inequality and representativeness.

  11. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already show adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of stimuli. However, it is at higher levels of the auditory hierarchy that more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene, and which are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  12. Singing Voice Analysis, Synthesis, and Modeling

    Science.gov (United States)

    Kim, Youngmoo E.

    The singing voice is the oldest musical instrument, but its versatility and emotional power are unmatched. Through the combination of music, lyrics, and expression, the voice is able to affect us in ways that no other instrument can. The fact that vocal music is prevalent in almost all cultures is indicative of its innate appeal to the human aesthetic. Singing also permeates most genres of music, attesting to the wide range of sounds the human voice is capable of producing. As listeners we are naturally drawn to the sound of the human voice, and, when present, it immediately becomes the focus of our attention.

  13. Voice stress analysis and evaluation

    Science.gov (United States)

    Haddad, Darren M.; Ratley, Roy J.

    2001-02-01

    Voice Stress Analysis (VSA) systems are marketed as computer-based systems capable of measuring stress in a person's voice as an indicator of deception. They are advertised as being less expensive, easier to use, less invasive, and less constrained in their operation than polygraph technology. The National Institute of Justice has asked the Air Force Research Laboratory for assistance in evaluating voice stress analysis technology. Law enforcement officials have also been asking questions about this technology. If VSA technology proves to be effective, its value for military and law enforcement applications is tremendous.

  14. Effects of Medications on Voice

    Science.gov (United States)

    ... replacement therapy post-menopause may have a variable effect. An inadequate level of thyroid replacement medication in ...

  15. The Voices of the Documentarist

    Science.gov (United States)

    Utterback, Ann S.

    1977-01-01

    Discusses T. S. Eliot's essay "The Three Voices of Poetry", which conceptualizes the position taken by the poet or creator. Suggests that an examination of documentary film within the three-voices concept expands the critical framework of the film genre. (MH)

  16. Depictions of auditory verbal hallucinations in news media.

    Science.gov (United States)

    Vilhauer, Ruvanee P

    2015-02-01

    The characterization of auditory verbal hallucinations (AVH) in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-V), diverges from recent research literature, which demonstrates the occurrence of AVH in individuals who are psychologically healthy. This discrepancy raises the question of how the public perceives AVH. Public perceptions are important because they could potentially affect how individuals with AVH interpret these experiences and how people view voice hearers. Because media portrayals can provide a window into how phenomena are viewed by the public, an archival study of newspaper articles was carried out to examine depictions of AVH. A sample of 181 newspaper articles originating in the United States was analyzed using a content analysis approach. The majority of articles examined contained no suggestion that AVH are possible in psychologically healthy individuals. Most articles suggested that AVH were a symptom of mental illness, and many suggested that AVH were associated with criminal behavior, violence and suicidality. The news media examined tended to present a misleading and largely pathologizing view of AVH. More research is needed to shed light on how, and to what extent, public perceptions may influence those who experience AVH. © The Author(s) 2014.

  17. Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas.

    Science.gov (United States)

    Gourévitch, Boris; Le Bouquin Jeannès, Régine; Faucon, Gérard; Liégeois-Chauvel, Catherine

    2008-03-01

    Temporal envelope processing in the human auditory cortex plays an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude-modulated white noise were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results in twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, Brodmann area (BA) 22 and the posterior part of the T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in the SAC and T1Post of the left hemisphere, independent of the modulation frequency (MF), and in the left BA22 for MFs of 8 and 16 Hz, compared to the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of the PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing of a stream originating from the primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds relative to the rate of amplitude modulation.
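    A response SNR at the modulation frequency, of the kind used above to compare activations across areas, can be defined as spectral power at the AM rate relative to power in neighbouring frequency bins. A sketch of one plausible definition on a toy signal (the authors' exact SNR computation is not specified here, so treat the band choices as assumptions):

```python
import numpy as np

def am_response_snr(lfp, sr, mf, bw=1.0):
    """Power near the AM frequency `mf` divided by the mean power in
    neighbouring bins: one way to quantify how well a local field
    potential follows the stimulus modulation."""
    spec = np.abs(np.fft.rfft(lfp - lfp.mean())) ** 2
    freqs = np.fft.rfftfreq(len(lfp), 1 / sr)
    sig = spec[np.abs(freqs - mf) <= bw].mean()
    noise = spec[(np.abs(freqs - mf) > bw) & (np.abs(freqs - mf) <= 5 * bw)].mean()
    return sig / noise

# Toy field potential following a 16 Hz AM stimulus, buried in noise
sr, dur, mf = 1000, 4.0, 16.0
t = np.arange(0, dur, 1 / sr)
lfp = np.sin(2 * np.pi * mf * t) + np.random.default_rng(4).normal(0, 1, t.size)
print(round(am_response_snr(lfp, sr, mf), 1))  # well above 1 -> AM following
```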

  18. Obligatory and facultative brain regions for voice-identity recognition

    Science.gov (United States)

    Roswandowitz, Claudia; Kappes, Claudia; Obrig, Hellmuth; von Kriegstein, Katharina

    2018-01-01

    Recognizing the identity of others by their voice is an important skill for social interactions. To date, it remains controversial which parts of the brain are critical structures for this skill. Based on neuroimaging findings, standard models of person-identity recognition suggest that the right temporal lobe is the hub for voice-identity recognition. Neuropsychological case studies, however, have reported selective deficits of voice-identity recognition in patients predominantly with right inferior parietal lobe lesions. Here, our aim was to work towards resolving the discrepancy between neuroimaging studies and neuropsychological case studies to find out which brain structures are critical for voice-identity recognition in humans. We performed a voxel-based lesion-behaviour mapping study in a cohort of patients (n = 58) with unilateral focal brain lesions. The study included a comprehensive behavioural test battery on voice-identity recognition of newly learned (voice-name, voice-face association learning) and familiar voices (famous voice recognition) as well as visual (face-identity recognition) and acoustic control tests (vocal-pitch and vocal-timbre discrimination). The study also comprised clinically established tests (neuropsychological assessment, audiometry) and high-resolution structural brain images. The three key findings were: (i) a strong association between voice-identity recognition performance and right posterior/mid temporal and right inferior parietal lobe lesions; (ii) a selective association between right posterior/mid temporal lobe lesions and voice-identity recognition performance when face-identity recognition performance was factored out; and (iii) an association of right inferior parietal lobe lesions with tasks requiring the association between voices and faces but not voices and names. The results imply that the right posterior/mid temporal lobe is an obligatory structure for voice-identity recognition, while the inferior parietal...

  19. The effect of voice quality on hiring decisions

    Directory of Open Access Journals (Sweden)

    Lea Tylečková

    2017-09-01

    Full Text Available This paper examines the effect of voice quality on hiring decisions. Voice quality is an important tool in an individual's self-presentation on the job market: it may well enhance his/her job prospects, while some voice qualities may affect employers' judgments negatively. Five men and five women were recorded reading four different utterances representing answers to job interviewers' questions in four different phonation guises: modal, breathy, creaky and pressed. 38 professional employment interviewers recorded the speakers' hireability and personality ratings (likeability, self-confidence and trustworthiness) on 7-point semantic differential scales based on the speakers' voice. The results revealed a significant effect of the phonation guises on the speakers' ratings, with the modal voice being superior to the cluster of non-modal voices. Interestingly, the non-modal guises were evaluated in a very similar way, except in the self-confidence category, where the breathy voice received the lowest scores while the pressed voice correlated with high self-confidence ratings.

  20. Can a voice disorder be an occupational disease?

    Directory of Open Access Journals (Sweden)

    Daša Gluvajić

    2012-11-01

    Full Text Available Voice disorders are all changes in voice quality that can be detected by hearing. Some etiological factors that contribute to the development of voice disorders are related to occupation, working environment and working conditions. In modern societies, one third of the labour force works in professions with vocal loading. In such professions, voice disorders affect work ability and quality of life. For an occupational disease, exposure to harmful factors in the workplace is essential and causes the development of a disorder in a previously healthy individual. In some European countries, voice disorders in teachers that do not improve after proper treatment are recognized as occupational diseases. In Slovenia, no organic or functional voice disorder appears on the current list of occupational diseases. Prevention and treatment of occupational voice disorders can contribute to better safety at the workplace and improve workers' health. Voice professionals must also know that they are responsible for their own health and must actively take care of it.