WorldWideScience

Sample records for voice auditory feedback

  1. Vocal responses to perturbations in voice auditory feedback in individuals with Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Hanjun Liu

    Full Text Available BACKGROUND: One of the most common symptoms of speech deficits in individuals with Parkinson's disease (PD) is significantly reduced vocal loudness and pitch range. The present study investigated whether abnormal vocalizations in individuals with PD are related to sensory processing of voice auditory feedback. Perturbations in loudness or pitch of voice auditory feedback are known to elicit short-latency, compensatory responses in voice amplitude or fundamental frequency. METHODOLOGY/PRINCIPAL FINDINGS: Twelve individuals with Parkinson's disease and 13 age- and sex-matched healthy control subjects sustained a vowel sound (/α/) and received unexpected, brief (200 ms) perturbations in voice loudness (±3 or 6 dB) or pitch (±100 cents) auditory feedback. Results showed that, while all subjects produced compensatory responses in their voice amplitude or fundamental frequency, individuals with PD exhibited larger response magnitudes than the control subjects. Furthermore, for loudness-shifted feedback, upward stimuli resulted in shorter response latencies than downward stimuli in the control subjects but not in individuals with PD. CONCLUSIONS/SIGNIFICANCE: The larger response magnitudes in individuals with PD compared with the control subjects suggest that processing of voice auditory feedback is abnormal in PD. Although the precise mechanisms of voice feedback processing are unknown, results of this study suggest that abnormal voice control in individuals with PD may be related to dysfunctional mechanisms of error detection or correction in sensory feedback processing.
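    The perturbation sizes in this paradigm are specified in cents (pitch) and decibels (loudness). As a minimal sketch, assuming only the standard definitions of these units (100 cents = 1 equal-tempered semitone; dB as 20·log10 of an amplitude ratio), the shifts map to frequency ratios and amplitude factors as follows:

```python
import math

def cents_to_ratio(cents: float) -> float:
    """Frequency ratio corresponding to a pitch shift in cents
    (100 cents = one equal-tempered semitone)."""
    return 2.0 ** (cents / 1200.0)

def db_to_amplitude_factor(db: float) -> float:
    """Linear amplitude factor corresponding to a loudness shift in dB."""
    return 10.0 ** (db / 20.0)

# A +100 cent shift raises a 200 Hz voice F0 by one semitone (~211.9 Hz).
shifted_f0 = 200.0 * cents_to_ratio(100)

# A +3 dB loudness shift multiplies voice amplitude by ~1.41.
gain = db_to_amplitude_factor(3)
```

    These conversions are generic unit arithmetic, not code from the study itself.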

  2. The effect of visual feedback and training in auditory-perceptual judgment of voice quality.

    Science.gov (United States)

    Barsties, Ben; Beers, Mieke; Ten Cate, Liesbeth; Van Ballegooijen, Karin; Braam, Lilian; De Groot, Merel; Van Der Kant, Marieke; Kruitwagen, Cas; Maryn, Youri

    2017-04-01

    The aim of the present investigation was to evaluate the effect of visual feedback on rating voice quality severity level and the reliability of voice quality judgment by inexperienced listeners. For this purpose two training programs were created, each lasting 2 hours. In total 37 undergraduate speech-language therapy students participated in the study and were divided into a visual plus auditory-perceptual feedback group (V + AF), an auditory-perceptual feedback group (AF), and a control group with no feedback (NF). All listeners completed two rating sessions judging overall severity labeled as grade (G), roughness (R), and breathiness (B). The judged voice samples contained the concatenation of continuous speech and sustained phonation. No significant changes in rater reliability were found between the pre- and posttest across the three groups for any GRB parameter (all p > 0.05). There was a training effect, seen in the significant improvement of rater reliability for roughness within the NF and AF groups (all p < 0.05). The use of visual and auditory anchors while rating, as well as longer training sessions, may be required to draw a firm conclusion.

  3. Effect of Training and Level of External Auditory Feedback on the Singing Voice: Pitch Inaccuracy.

    Science.gov (United States)

    Bottalico, Pasquale; Graetzer, Simone; Hunter, Eric J

    2017-01-01

    One of the most important aspects of singing is the control of fundamental frequency. The effects on pitch inaccuracy, defined as the distance in cents in equally tempered tuning between the reference note and the sung note, of the following conditions were evaluated: (1) level of external feedback, (2) tempo (slow or fast), (3) articulation (legato or staccato), (4) tessitura (low, medium, or high), and (5) semi-phrase direction (ascending or descending). The subjects were 10 nonprofessional singers and 10 classically trained professional or semi-professional singers (10 men and 10 women). Subjects sang one octave and a fifth arpeggi with three different levels of external auditory feedback, two tempi, and two articulations (legato or staccato). It was observed that inaccuracy was greatest in the descending semi-phrase arpeggi produced at a fast tempo and with a staccato articulation, especially for nonprofessional singers. The magnitude of inaccuracy was also relatively large in the high tessitura relative to the low and the medium tessitura for such singers. Contrary to predictions, when external auditory feedback was strongly attenuated by the hearing protectors, nonprofessional singers showed greater pitch accuracy than in the other external feedback conditions. This finding indicates the importance of internal auditory feedback in pitch control. With an increase in training, the singer's pitch inaccuracy decreases. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
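    The abstract above defines pitch inaccuracy as the distance in cents, in equal temperament, between the reference note and the sung note. That definition has a direct closed form; the sketch below assumes only this standard formula, with illustrative frequencies:

```python
import math

def pitch_inaccuracy_cents(f_sung: float, f_ref: float) -> float:
    """Absolute distance in cents (equal temperament) between the
    sung note and the reference note."""
    return abs(1200.0 * math.log2(f_sung / f_ref))

# Singing 446 Hz against an A4 = 440 Hz reference is ~23 cents sharp.
err = pitch_inaccuracy_cents(446.0, 440.0)
```

    Because the measure is logarithmic, an error of a fixed number of hertz corresponds to fewer cents in a high tessitura than in a low one.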

  4. Sensory Processing: Advances in Understanding Structure and Function of Pitch-Shifted Auditory Feedback in Voice Control

    Directory of Open Access Journals (Sweden)

    Charles R Larson

    2016-02-01

    Full Text Available The pitch-shift paradigm has become a widely used method for studying the role of voice pitch auditory feedback in voice control. This paradigm introduces small, brief pitch shifts in voice auditory feedback to vocalizing subjects. The perturbations trigger a reflexive mechanism that counteracts the change in pitch. The underlying mechanisms of the vocal responses are thought to reflect a negative feedback control system that is similar to constructs developed to explain other forms of motor control. Another use of this technique requires subjects to voluntarily change the pitch of their voice when they hear a pitch-shift stimulus. Under these conditions, short latency responses are produced that change voice pitch to match that of the stimulus. The pitch-shift technique has been used with magnetoencephalography (MEG) and electroencephalography (EEG) recordings, and has shown that at vocal onset there is normally a suppression of neural activity related to vocalization. However, if a pitch-shift is also presented at voice onset, there is a cancellation of this suppression, which has been interpreted to mean that one way in which a person distinguishes self-vocalization from vocalization of others is by a comparison of the intended voice and the actual voice. Studies of the pitch shift reflex in the fMRI environment show that the superior temporal gyrus (STG) plays an important role in the process of controlling voice F0 based on auditory feedback. Additional studies using fMRI for effective connectivity modeling show that the left and right STG play critical roles in correcting for an error in voice production. While both the left and right STG are involved in this process, a feedback loop develops between left and right STG during perturbations, in which the left to right connection becomes stronger, and a new negative right to left connection emerges along with the emergence of other feedback loops within the cortical network tested.
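    The negative-feedback interpretation described above can be illustrated with a toy simulation: at each step the controller hears its own (perturbed) pitch, computes the error in cents against the intended pitch, and applies an opposing correction. This is a hypothetical sketch, not the authors' model; the gain value and step structure are illustrative assumptions, and real vocal responses typically compensate only partially.

```python
import math

def simulate_pitch_reflex(f_intended: float, shift_cents: float,
                          gain: float = 0.3, steps: int = 10) -> list:
    """Toy negative-feedback loop for voice F0 control.
    'gain' is a hypothetical per-step compensation gain."""
    produced = f_intended
    trace = []
    for _ in range(steps):
        heard = produced * 2.0 ** (shift_cents / 1200.0)      # perturbed feedback
        error_cents = 1200.0 * math.log2(heard / f_intended)  # perceived error
        produced *= 2.0 ** (-gain * error_cents / 1200.0)     # opposing correction
        trace.append(produced)
    return trace

# With a sustained +100 cent shift, produced F0 drifts downward,
# opposing the perceived upward error.
trace = simulate_pitch_reflex(200.0, 100.0)
```

    In this idealized loop the produced F0 converges toward full compensation (100 cents below the intended pitch); the partial compensation observed empirically would correspond to a lower effective gain or added noise.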

  5. The effect of visual feedback and training in auditory-perceptual judgment of voice quality

    NARCIS (Netherlands)

    Barsties, Ben; Beers, Mieke; Ten Cate, Liesbeth; Van Ballegooijen, Karin; Braam, Lilian; De Groot, Merel; Van Der Kant, Marieke; Kruitwagen, Cas; Maryn, Youri

    2017-01-01

    The aim of the present investigation was to evaluate the effect of visual feedback on rating voice quality severity level and the reliability of voice quality judgment by inexperienced listeners. For this purpose two training programs were created, each lasting 2 hours. In total 37 undergraduate spe

  6. A temporal predictive code for voice motor control: Evidence from ERP and behavioral responses to pitch-shifted auditory feedback.

    Science.gov (United States)

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R

    2016-04-01

    The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about the timing of pitch perturbations in voice auditory feedback would modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started at 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses about 20 ms before stimulus onset and followed the direction of pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally-predictable stimuli are learned and reinforced by the internal feedforward system, and as indexed by the ERP suppression, the sensory feedback contribution is reduced for their processing. These findings provide new insights into the neural mechanisms of vocal production and motor control.
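    The block design described above (fixed vs. randomized voice-to-stimulus delays) can be sketched as a simple schedule generator. This is an illustrative reconstruction of the design, not the authors' code; the trial count and seed are assumptions.

```python
import random

def make_block(delays=(500, 750, 1000), trials=30,
               predictable=True, fixed_delay=500, seed=0):
    """Return a list of voice-to-stimulus onset delays (ms) for one block.
    Predictable blocks use one fixed delay; unpredictable blocks draw a
    delay at random per trial. Trial count and seed are illustrative."""
    if predictable:
        return [fixed_delay] * trials
    rng = random.Random(seed)
    return [rng.choice(delays) for _ in range(trials)]

predictable_block = make_block(predictable=True, fixed_delay=750)
unpredictable_block = make_block(predictable=False)
```

    Counterbalancing the three fixed delays across predictable blocks, as the study describes, would then be a matter of running `make_block` once per delay.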

  7. Neuronal mechanisms of voice control are affected by implicit expectancy of externally triggered perturbations in auditory feedback.

    Directory of Open Access Journals (Sweden)

    Oleg Korzyukov

    Full Text Available Accurate vocal production relies on several factors, including sensory feedback and the ability to predict future challenges to the control processes. Repetitive patterns of perturbations in sensory feedback by themselves elicit implicit expectations in the vocal control system regarding the timing, quality and direction of perturbations. In the present study, the predictability of voice pitch-shifted auditory feedback was experimentally manipulated. A block of trials in which all pitch-shift stimuli were upward, and therefore predictable, was contrasted against an unpredictable block of trials in which the stimulus direction was randomized between upward and downward pitch shifts. It was found that predictable perturbations in voice auditory feedback led to a reduction in the proportion of compensatory vocal responses, which might be indicative of a reduction in vocal control. The predictable perturbations also led to a reduction in the magnitude of the N1 component of cortical event-related potentials (ERPs) that was associated with the reflexive compensations to the perturbations. We hypothesize that the formation of expectancy in our study is accompanied by involuntary allocation of attentional resources occurring as a result of habituation or learning, which in turn triggers limited and controlled exploration-related motor variability in the vocal control system.

  8. Auditory feedback of one’s own voice is used for high-level semantic monitoring: the self-comprehension hypothesis

    Directory of Open Access Journals (Sweden)

    Andreas Lind

    2014-03-01

    Full Text Available What would it be like if we said one thing, and heard ourselves saying something else? Would we notice something was wrong? Or would we believe we said the thing we heard? Is feedback of our own speech only used to detect errors, or does it also help to specify the meaning of what we say? Comparator models of self-monitoring favor the first alternative, and hold that our sense of agency is given by the comparison between intentions and outcomes, while inferential models argue that agency is a more fluent construct, dependent on contextual inferences about the most likely cause of an action. In this paper, we present a theory about the use of feedback during speech. Specifically, we discuss inferential models of speech production that question the standard comparator assumption that the meaning of our utterances is fully specified before articulation. We then argue that auditory feedback provides speakers with a channel for high-level, semantic self-comprehension. In support of this, we discuss results using a method we recently developed called Real-time Speech Exchange (RSE). In our first study using RSE (Lind et al., submitted), participants were fitted with headsets and performed a computerized Stroop task. We surreptitiously recorded words they said, and later in the test we played them back at the exact same time that the participants uttered something else, while blocking the actual feedback of their voice. Thus, participants said one thing, but heard themselves saying something else. The results showed that when timing conditions were ideal, more than two thirds of the manipulations went undetected. Crucially, in a large proportion of the non-detected manipulated trials, the inserted words were experienced as self-produced by the participants. This indicates that our sense of agency for speech has a strong inferential component, and that auditory feedback of our own voice acts as a pathway for semantic monitoring.

  9. Different Auditory Feedback Control for Echolocation and Communication in Horseshoe Bats

    OpenAIRE

    Ying Liu; Jiang Feng; Walter Metzner

    2013-01-01

    Auditory feedback from the animal's own voice is essential during bat echolocation: to optimize signal detection, bats continuously adjust various call parameters in response to changing echo signals. Auditory feedback seems also necessary for controlling many bat communication calls, although it remains unclear how auditory feedback control differs in echolocation and communication. We tackled this question by analyzing echolocation and communication in greater horseshoe bats, whose echoloca...

  10. Rapid change in articulatory lip movement induced by preceding auditory feedback during production of bilabial plosives.

    Directory of Open Access Journals (Sweden)

    Takemi Mochida

    Full Text Available BACKGROUND: There has been plentiful evidence of kinesthetically induced rapid compensation for unanticipated perturbation in speech articulatory movements. However, the role of auditory information in stabilizing articulation has been little studied except for the control of voice fundamental frequency, voice amplitude and vowel formant frequencies. Although the influence of auditory information on the articulatory control process is evident in unintended speech errors caused by delayed auditory feedback, the direct and immediate effect of auditory alteration on the movements of articulators has not been clarified. METHODOLOGY/PRINCIPAL FINDINGS: This work examined whether temporal changes in the auditory feedback of bilabial plosives immediately affect the subsequent lip movement. We conducted experiments with an auditory feedback alteration system that enabled us to replace or block speech sounds in real time. Participants were asked to produce the syllable /pa/ repeatedly at a constant rate. During the repetition, normal auditory feedback was interrupted, and one of three pre-recorded syllables /pa/, /Φa/, or /pi/, spoken by the same participant, was presented once at a different timing from the anticipated production onset, while no feedback was presented for subsequent repetitions. Comparisons of the labial distance trajectories under altered and normal feedback conditions indicated that the movement quickened during the short period immediately after the alteration onset, when /pa/ was presented 50 ms before the expected timing. Such change was not significant under other feedback conditions we tested. CONCLUSIONS/SIGNIFICANCE: The earlier articulation rapidly induced by the progressive auditory input suggests that a compensatory mechanism helps to maintain a constant speech rate by detecting errors between the internally predicted and actually provided auditory information associated with self movement. The timing- and context…

  11. The Role of Listener Experience on Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) Ratings of Postthyroidectomy Voice

    Science.gov (United States)

    Helou, Leah B.; Solomon, Nancy Pearl; Henry, Leonard R.; Coppit, George L.; Howard, Robin S.; Stojadinovic, Alexander

    2010-01-01

    Purpose: To determine whether experienced and inexperienced listeners rate postthyroidectomy voice samples similarly using the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V). Method: Prospective observational study of voice quality ratings of randomized and blinded voice samples was performed. Twenty-one postthyroidectomy patients'…

  12. Neural mechanisms underlying auditory feedback control of speech.

    Science.gov (United States)

    Tourville, Jason A; Reilly, Kevin J; Guenther, Frank H

    2008-02-01

    The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 136 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech.

  13. Context, Contrast, and Tone of Voice in Auditory Sarcasm Perception

    Science.gov (United States)

    Voyer, Daniel; Thibodeau, Sophie-Hélène; Delong, Breanna J.

    2016-01-01

    Four experiments were conducted to investigate the interplay between context and tone of voice in the perception of sarcasm. These experiments emphasized the role of contrast effects in sarcasm perception exclusively by means of auditory stimuli whereas most past research has relied on written material. In all experiments, a positive or negative…

  14. Adaptation to Delayed Speech Feedback Induces Temporal Recalibration between Vocal Sensory and Auditory Modalities

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    2011-10-01

    Full Text Available We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory and voice sounds under DAF with an adaptation technique. Participants read sentences with specific DAF delay times (0, 30, 75, 120 ms) for three minutes to induce 'Lag Adaptation'. After the adaptation, they then judged the simultaneity between motor sensation and the vocal sound given as feedback while producing a simple voice sound, not speech. We found that speech production with lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  15. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    Full Text Available BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific DAF delay times (0, 66, 133 ms) for three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between motor sensation and vocal sound given feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by averaging delay times in the adaptation phase. CONCLUSIONS: These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  16. Context, Contrast, and Tone of Voice in Auditory Sarcasm Perception.

    Science.gov (United States)

    Voyer, Daniel; Thibodeau, Sophie-Hélène; Delong, Breanna J

    2016-02-01

    Four experiments were conducted to investigate the interplay between context and tone of voice in the perception of sarcasm. These experiments emphasized the role of contrast effects in sarcasm perception exclusively by means of auditory stimuli whereas most past research has relied on written material. In all experiments, a positive or negative computer-generated context spoken in a flat emotional tone was followed by a literally positive statement spoken in a sincere or sarcastic tone of voice. Participants indicated for each statement whether the intonation was sincere or sarcastic. In Experiment 1, a congruent context/tone of voice pairing (negative/sarcastic, positive/sincere) produced fast response times and proportions of sarcastic responses in the direction predicted by the tone of voice. Incongruent pairings produced mid-range proportions and slower response times. Experiment 2 introduced ambiguous contexts to determine whether a lower context/statements contrast would affect the proportion of sarcastic responses and response time. Results showed the expected findings for proportions (values between those obtained for congruent and incongruent pairings in the direction predicted by the tone of voice). However, response time failed to produce the predicted pattern, suggesting potential issues with the choice of stimuli. Experiments 3 and 4 extended the results of Experiments 1 and 2, respectively, to auditory stimuli based on written vignettes used in neuropsychological assessment. Results were exactly as predicted by contrast effects in both experiments. Taken together, the findings suggest that both context and tone influence how sarcasm is perceived while supporting the importance of contrast effects in sarcasm perception.

  17. Different auditory feedback control for echolocation and communication in horseshoe bats.

    Directory of Open Access Journals (Sweden)

    Ying Liu

    Full Text Available Auditory feedback from the animal's own voice is essential during bat echolocation: to optimize signal detection, bats continuously adjust various call parameters in response to changing echo signals. Auditory feedback seems also necessary for controlling many bat communication calls, although it remains unclear how auditory feedback control differs in echolocation and communication. We tackled this question by analyzing echolocation and communication in greater horseshoe bats, whose echolocation pulses are dominated by a constant frequency component that matches the frequency range they hear best. To maintain echoes within this "auditory fovea", horseshoe bats constantly adjust their echolocation call frequency depending on the frequency of the returning echo signal. This Doppler-shift compensation (DSC) behavior represents one of the most precise forms of sensory-motor feedback known. We examined the variability of echolocation pulses emitted at rest (resting frequencies, RFs) and of one type of communication signal that resembles an echolocation pulse but is much shorter (short constant frequency communication calls, SCFs), produced only during social interactions. We found that while RFs varied from day to day, corroborating earlier studies in other constant frequency bats, SCF-frequencies remained unchanged. In addition, RFs overlapped for some bats whereas SCF-frequencies were always distinctly different. This indicates that auditory feedback during echolocation changed with varying RFs but remained constant or may have been absent during emission of SCF calls for communication. This fundamentally different feedback mechanism for echolocation and communication may have enabled these bats to use SCF calls for individual recognition whereas they adjusted RF calls to accommodate the daily shifts of their auditory fovea.
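    The Doppler-shift compensation described above has a simple first-order form: the bat lowers its call frequency so that the Doppler-raised echo lands back on its auditory fovea. The sketch below is a hypothetical illustration assuming the standard two-way Doppler factor (c + v)/(c − v) for a bat flying at speed v toward a target, a speed of sound of 343 m/s, and a fovea near 83 kHz for the greater horseshoe bat; none of these numbers are taken from the study itself.

```python
def dsc_call_frequency(f_fovea_khz: float, v_bat: float, c: float = 343.0) -> float:
    """Call frequency (kHz) a bat should emit, flying at v_bat (m/s)
    toward a target, so the echo returns at its auditory fovea frequency.
    Echo frequency = call frequency * (c + v) / (c - v), so the bat
    emits at the inverse factor."""
    return f_fovea_khz * (c - v_bat) / (c + v_bat)

# At 5 m/s, a bat with an ~83 kHz fovea lowers its call by roughly 2.4 kHz.
f_call = dsc_call_frequency(83.0, 5.0)
```

    At rest (v = 0) the compensation vanishes and the call is emitted at the fovea frequency, which is consistent with the resting frequencies (RFs) discussed in the abstract.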

  18. Representation of Reward Feedback in Primate Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Michael Brosch

    2011-02-01

    Full Text Available It is well established that auditory cortex is plastic on different time scales and that this plasticity is driven by the reinforcement that is used to motivate subjects to learn or to perform an auditory task. Motivated by these findings, we study in detail properties of neuronal firing in auditory cortex that is related to reward feedback. We recorded from the auditory cortex of two monkeys while they were performing an auditory categorization task. Monkeys listened to a sequence of tones and had to signal when the frequency of adjacent tones stepped in downward direction, irrespective of the tone frequency and step size. Correct identifications were rewarded with either a large or a small amount of water. The size of reward depended on the monkeys' performance in the previous trial: it was large after a correct trial and small after an incorrect trial. The rewards served to maintain task performance. During task performance we found three successive periods of neuronal firing in auditory cortex that reflected (1) the reward expectancy for each trial, (2) the reward size received, and (3) the mismatch between the expected and delivered reward. These results, together with control experiments, suggest that auditory cortex receives reward feedback that could be used to adapt auditory cortex to task requirements. Additionally, the results presented here extend previous observations of non-auditory roles of auditory cortex and show that auditory cortex is even more cognitively influenced than previously recognized.

  19. Representation of reward feedback in primate auditory cortex.

    Science.gov (United States)

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2011-01-01

    It is well established that auditory cortex is plastic on different time scales and that this plasticity is driven by the reinforcement that is used to motivate subjects to learn or to perform an auditory task. Motivated by these findings, we study in detail properties of neuronal firing in auditory cortex that is related to reward feedback. We recorded from the auditory cortex of two monkeys while they were performing an auditory categorization task. Monkeys listened to a sequence of tones and had to signal when the frequency of adjacent tones stepped in downward direction, irrespective of the tone frequency and step size. Correct identifications were rewarded with either a large or a small amount of water. The size of reward depended on the monkeys' performance in the previous trial: it was large after a correct trial and small after an incorrect trial. The rewards served to maintain task performance. During task performance we found three successive periods of neuronal firing in auditory cortex that reflected (1) the reward expectancy for each trial, (2) the reward size received, and (3) the mismatch between the expected and delivered reward. These results, together with control experiments, suggest that auditory cortex receives reward feedback that could be used to adapt auditory cortex to task requirements. Additionally, the results presented here extend previous observations of non-auditory roles of auditory cortex and show that auditory cortex is even more cognitively influenced than previously recognized.

  20. Establishing Validity of the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V)

    Science.gov (United States)

    Zraick, Richard I.; Kempster, Gail B.; Connor, Nadine P.; Thibeault, Susan; Klaben, Bernice K.; Bursac, Zoran; Thrush, Carol R.; Glaze, Leslie E.

    2011-01-01

    Purpose: The Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) was developed to provide a protocol and form for clinicians to use when assessing the voice quality of adults with voice disorders (Kempster, Gerratt, Verdolini Abbott, Barkmeier-Kramer, & Hillman, 2009). This study examined the reliability and the empirical validity of the…

  2. Speech Compensation for Time-Scale-Modified Auditory Feedback

    Science.gov (United States)

    Ogane, Rintaro; Honda, Masaaki

    2014-01-01

    Purpose: The purpose of this study was to examine speech compensation in response to time-scale-modified auditory feedback during the transition of the semivowel for a target utterance of /ija/. Method: Each utterance session consisted of 10 control trials in the normal feedback condition followed by 20 perturbed trials in the modified auditory…

  3. Feedback valence affects auditory perceptual learning independently of feedback probability

    OpenAIRE

    Amitay, S.; Moore, D. R.; Molloy, K.; Halliday, L. F.

    2015-01-01

    Previous studies have suggested that negative feedback is more effective in driving learning than positive feedback. We investigated the effect on learning of providing varying amounts of negative and positive feedback while listeners attempted to discriminate between three identical tones; an impossible task that nevertheless produces robust learning. Four feedback conditions were compared during training: 90% positive feedback or 10% negative feedback informed the participants that they wer...

  4. Attentional demands modulate sensorimotor learning induced by persistent exposure to changes in auditory feedback.

    Science.gov (United States)

    Scheerer, Nichole E; Tumber, Anupreet K; Jones, Jeffery A

    2016-02-01

    Hearing one's own voice is important for regulating ongoing speech and for mapping speech sounds onto articulator movements. However, it is currently unknown whether attention mediates changes in the relationship between motor commands and their acoustic output, which are necessary as growth and aging inevitably cause changes to the vocal tract. In this study, participants produced vocalizations while they heard their vocal pitch persistently shifted downward one semitone in both single- and dual-task conditions. During the single-task condition, participants vocalized while passively viewing a visual stream. During the dual-task condition, participants vocalized while also monitoring a visual stream for target letters, forcing participants to divide their attention. Participants' vocal pitch was measured across each vocalization, to index the extent to which their ongoing vocalization was modified as a result of the deviant auditory feedback. Smaller compensatory responses were recorded during the dual-task condition, suggesting that divided attention interfered with the use of auditory feedback for the regulation of ongoing vocalizations. Participants' vocal pitch was also measured at the beginning of each vocalization, before auditory feedback was available, to assess the extent to which the deviant auditory feedback was used to modify subsequent speech motor commands. Smaller changes in vocal pitch at vocalization onset were recorded during the dual-task condition, suggesting that divided attention diminished sensorimotor learning. Together, the results of this study suggest that attention is required for the speech motor control system to make optimal use of auditory feedback for the regulation and planning of speech motor commands.

  5. Validity and rater reliability of Persian version of the Consensus Auditory Perceptual Evaluation of Voice

    Directory of Open Access Journals (Sweden)

    Nazila Salary Majd

    2014-08-01

    Full Text Available Background and Aim: Auditory-perceptual assessment of voice is a main approach in the diagnosis of voice disorders and in tracking therapy progress. Despite this, there are few Iranian studies on auditory-perceptual assessment of voice. The aim of the present study was the development and determination of the validity and rater reliability of a Persian version of the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V). Methods: Qualitative content validity was established by collecting 10 questionnaires from 9 experienced speech-language pathologists and a linguist. For reliability purposes, the voice samples of 40 dysphonic adults (neurogenic, functional with and without laryngeal lesions; 20-45 years of age) and 10 normal healthy speakers were recorded. The samples included sustained vowels and the reading of the 6 sentences of the Persian version of the Consensus Auditory-Perceptual Evaluation of Voice, called the ATSHA. Results: Qualitative content validity was confirmed for the developed Persian version. Cronbach's alpha was high (0.95). Intra-rater reliability coefficients ranged from 0.86 for overall severity to 0.42 for pitch; inter-rater reliability ranged from 0.85 for overall severity to 0.32 for pitch (p<0.05). Conclusion: The ATSHA can be used as a valid and reliable Persian scale for auditory-perceptual assessment of voice in adults.

  6. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

    We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.

  7. Voice F0 responses to pitch-shifted voice feedback during English speech.

    Science.gov (United States)

    Chen, Stephanie H; Liu, Hanjun; Xu, Yi; Larson, Charles R

    2007-02-01

    Previous studies have demonstrated that motor control of segmental features of speech relies to some extent on sensory feedback. Control of voice fundamental frequency (F0) has been shown to be modulated by perturbations in voice pitch feedback during various phonatory tasks and in Mandarin speech. The present study was designed to determine if voice F0 is modulated in a task-dependent manner during production of suprasegmental features of English speech. English speakers received pitch-modulated voice feedback (+/-50, 100, and 200 cents, 200 ms duration) during a sustained vowel task and a speech task. Response magnitudes during speech (mean 31.5 cents) were larger than during the vowels (mean 21.6 cents), response magnitudes increased as a function of stimulus magnitude during speech but not vowels, and responses to downward pitch-shift stimuli were larger than those to upward stimuli. Response latencies were shorter in speech (mean 122 ms) compared to vowels (mean 154 ms). These findings support previous research suggesting the audio-vocal system is involved in the control of suprasegmental features of English speech by correcting for errors between voice pitch feedback and the desired F0.
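    Many of the records here specify feedback perturbations in cents, a logarithmic unit of frequency ratio in which 100 cents equals one equal-tempered semitone and 1200 cents equals one octave. As a minimal illustrative sketch (the function name and example values are ours, not drawn from any cited study), the frequency a talker hears under a shift of c cents is f·2^(c/1200):

    ```python
    def shift_by_cents(f0_hz: float, cents: float) -> float:
        """Frequency heard after shifting f0_hz by `cents`
        (100 cents = one equal-tempered semitone, 1200 cents = one octave)."""
        return f0_hz * 2.0 ** (cents / 1200.0)

    # A +/-100-cent perturbation applied to a 220 Hz voice:
    print(round(shift_by_cents(220.0, +100), 2))  # 233.08 (one semitone up)
    print(round(shift_by_cents(220.0, -100), 2))  # 207.65 (one semitone down)
    ```

    On this scale, a compensatory response of 31.5 cents, as reported above, corresponds to a frequency correction of only about 1.8% relative to baseline.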

  8. Task-irrelevant auditory feedback facilitates motor performance in musicians

    Directory of Open Access Journals (Sweden)

    Virginia eConde

    2012-05-01

    Full Text Available An efficient and fast auditory–motor network is a basic resource for trained musicians, because motor anticipation of sound production is central to musical performance. When playing an instrument, motor performance always goes along with the production of sounds, and the integration between the two modalities plays an essential role over the course of musical training. The aim of the present study was to investigate the role of task-irrelevant auditory feedback during motor performance in musicians using a serial reaction time task (SRTT). Our hypothesis was that musicians, owing to their extensive auditory–motor practice routine during musical training, would show superior performance and learning when receiving auditory feedback during the SRTT relative to musicians performing the SRTT without any auditory feedback. Here we provide novel evidence that task-irrelevant auditory feedback can reinforce SRTT performance but not learning, a finding that may provide further insight into auditory–motor integration in musicians at the behavioral level.

  9. Formant compensation for auditory feedback with English vowels

    DEFF Research Database (Denmark)

    Mitsuya, Takashi; MacDonald, Ewen N; Munhall, Kevin G;

    2015-01-01

    Past studies have shown that speakers spontaneously adjust their speech acoustics in response to their auditory feedback perturbed in real time. In the case of formant perturbation, the majority of studies have examined speakers' compensatory production using the English vowel /ɛ/ as in the word...... to differences in the degree of lingual contact or jaw openness. This may in turn influence the ways in which speakers compensate for auditory feedback. The aim of the current study was to examine speakers' compensatory behavior with six English monophthongs. Specifically, the current study tested to see...

  10. Auditory feedback control of vocal pitch during sustained vocalization: a cross-sectional study of adult aging.

    Directory of Open Access Journals (Sweden)

    Peng Liu

    Full Text Available BACKGROUND: Auditory feedback has been demonstrated to play an important role in the control of voice fundamental frequency (F0), but the mechanisms underlying the processing of auditory feedback remain poorly understood. It has been well documented that young adults can use auditory feedback to stabilize their voice F0 by making compensatory responses to perturbations they hear in their vocal pitch feedback. However, little is known about the effects of aging on the processing of audio-vocal feedback during vocalization. METHODOLOGY/PRINCIPAL FINDINGS: In the present study, we recruited adults who were between 19 and 75 years of age and divided them into five age groups. Using a pitch-shift paradigm, the pitch of their vocal feedback was unexpectedly shifted ±50 or ±100 cents during sustained vocalization of the vowel sound /u/. Compensatory vocal F0 response magnitudes and latencies to pitch feedback perturbations were examined. A significant effect of age was found such that response magnitudes increased with increasing age until maximal values were reached for adults 51-60 years of age and then decreased for adults 61-75 years of age. Adults 51-60 years of age were also more sensitive to the direction and magnitude of the pitch feedback perturbations compared to younger adults. CONCLUSION: These findings demonstrate that the pitch-shift reflex systematically changes across the adult lifespan. Understanding aging-related changes to the role of auditory feedback is critically important for our theoretical understanding of speech production and the clinical applications of that knowledge.

  11. Auditory Masking Effects on Speech Fluency in Apraxia of Speech and Aphasia: Comparison to Altered Auditory Feedback

    Science.gov (United States)

    Jacks, Adam; Haley, Katarina L.

    2015-01-01

    Purpose: To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not…

  12. Feedback delays eliminate auditory-motor learning in speech production.

    Science.gov (United States)

    Max, Ludo; Maffett, Derek G

    2015-03-30

    Neurologically healthy individuals use sensory feedback to alter future movements by updating internal models of the effector system and environment. For example, when visual feedback about limb movements or auditory feedback about speech movements is experimentally perturbed, the planning of subsequent movements is adjusted - i.e., sensorimotor adaptation occurs. A separate line of studies has demonstrated that experimentally delaying the sensory consequences of limb movements causes the sensory input to be attributed to external sources rather than to one's own actions. Yet similar feedback delays have remarkably little effect on visuo-motor adaptation (although the rate of learning varies, the amount of adaptation is only moderately affected with delays of 100-200 ms, and adaptation still occurs even with a delay as long as 5000 ms). Thus, limb motor learning remains largely intact even in conditions where error assignment favors external factors. Here, we show a fundamentally different result for sensorimotor control of speech articulation: auditory-motor adaptation to formant-shifted feedback is completely eliminated with delays of 100 ms or more. Thus, for speech motor learning, real-time auditory feedback is critical. This novel finding informs theoretical models of human motor control in general and speech motor control in particular, and it has direct implications for the application of motor learning principles in the habilitation and rehabilitation of individuals with various sensorimotor speech disorders.

  13. Stuttering Inhibition via Altered Auditory Feedback during Scripted Telephone Conversations

    Science.gov (United States)

    Hudock, Daniel; Kalinowski, Joseph

    2014-01-01

    Background: Overt stuttering is inhibited by approximately 80% when people who stutter read aloud as they hear an altered form of their speech feedback to them. However, levels of stuttering inhibition vary from 60% to 100% depending on speaking situation and signal presentation. For example, binaural presentations of delayed auditory feedback…

  15. Altered Sensory Feedbacks in Pianist's Dystonia: the altered auditory feedback paradigm and the glove effect

    Directory of Open Access Journals (Sweden)

    Felicia Pei-Hsin Cheng

    2013-12-01

    Full Text Available Background: This study investigates the effect of altered auditory feedback (AAF) in musician's dystonia (MD) and discusses whether altered auditory feedback can be considered a sensory trick in MD. Furthermore, the effect of AAF is compared with altered tactile feedback, which can serve as a sensory trick in several other forms of focal dystonia. Methods: The method is based on scale analysis (Jabusch et al., 2004). Experiment 1 employed a synchronization paradigm: 12 MD patients and 25 healthy pianists had to repeatedly play C-major scales in synchrony with a metronome on a MIDI piano under 3 auditory feedback conditions: 1. normal feedback; 2. no feedback; 3. constantly delayed feedback. Experiment 2 employed a synchronization-continuation paradigm: 12 MD patients and 12 healthy pianists had to repeatedly play C-major scales in two phases: first in synchrony with a metronome, then continuing the established tempo without the metronome. There were 4 experimental conditions; 3 used the same altered auditory feedback as in Experiment 1 and 1 involved altered tactile sensory input. The coefficient of variation of the inter-onset intervals of the key depressions was calculated to evaluate fine motor control. Results: In both experiments, the healthy controls and the patients behaved very similarly. There was no difference in the regularity of playing between the two groups under any condition, and neither AAF nor altered tactile feedback had a beneficial effect on patients' fine motor control. Conclusions: The results of the two experiments suggest that, in the context of our experimental designs, AAF and altered tactile feedback play a minor role in motor coordination in patients with musician's dystonia. We propose that altered auditory and tactile feedback do not serve as effective sensory tricks and may not temporarily reduce the symptoms of patients suffering from MD in this experimental context.
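    The fine-motor measure in the record above, the coefficient of variation (CV) of inter-onset intervals, is the standard deviation of the intervals between successive key depressions divided by their mean; a lower CV indicates more regular playing. A minimal sketch with invented interval data (the function name and numbers are illustrative, not from the study):

    ```python
    from statistics import mean, stdev

    def coefficient_of_variation(inter_onset_intervals_ms):
        """CV = SD / mean of the intervals (ms) between successive
        key depressions; lower values mean more even timing."""
        return stdev(inter_onset_intervals_ms) / mean(inter_onset_intervals_ms)

    # Hypothetical inter-onset intervals for an 8-note scale run near 125 ms/note.
    steady  = [125, 126, 124, 125, 125, 126, 124, 125]
    uneven  = [110, 140, 118, 135, 122, 138, 112, 130]
    print(coefficient_of_variation(steady) < coefficient_of_variation(uneven))  # True
    ```

    Because the CV is normalized by the mean interval, it allows regularity to be compared across players and conditions that differ in overall tempo.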

  16. Partial Compensation for Altered Auditory Feedback: A Tradeoff with Somatosensory Feedback?

    Science.gov (United States)

    Katseff, Shira; Houde, John; Johnson, Keith

    2012-01-01

    Talkers are known to compensate only partially for experimentally-induced changes to their auditory feedback. In a typical experiment, talkers might hear their F1 feedback shifted higher (so that /[epsilon]/ sounds like /[ash]/, for example), and compensate by lowering F1 in their subsequent speech by about a quarter of that distance. Here, we…

  17. Functional role of delta and theta band oscillations for auditory feedback processing during vocal pitch motor control.

    Science.gov (United States)

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A; Larson, Charles R

    2015-01-01

    The answer to the question of how the brain incorporates sensory feedback and links it with motor function to achieve goal-directed movement during vocalization remains unclear. We investigated the mechanisms of voice pitch motor control by examining the spectro-temporal dynamics of EEG signals when non-musicians (NM), relative pitch (RP), and absolute pitch (AP) musicians maintained vocalizations of a vowel sound and received randomized ± 100 cents pitch-shift stimuli in their auditory feedback. We identified a phase-synchronized (evoked) fronto-central activation within the theta band (5-8 Hz) that temporally overlapped with compensatory vocal responses to pitch-shifted auditory feedback and was significantly stronger in RP and AP musicians compared with non-musicians. A second component involved a non-phase-synchronized (induced) frontal activation within the delta band (1-4 Hz) that emerged at approximately 1 s after the stimulus onset. The delta activation was significantly stronger in the NM compared with RP and AP groups and correlated with the pitch rebound error (PRE), indicating the degree to which subjects failed to re-adjust their voice pitch to baseline after the stimulus offset. We propose that the evoked theta is a neurophysiological marker of enhanced pitch processing in musicians and reflects mechanisms by which humans incorporate auditory feedback to control their voice pitch. We also suggest that the delta activation reflects adaptive neural processes by which vocal production errors are monitored and used to update the state of sensory-motor networks for driving subsequent vocal behaviors. This notion is corroborated by our findings showing that larger PREs were associated with greater delta band activity in the NM compared with RP and AP groups. These findings provide new insights into the neural mechanisms of auditory feedback processing for vocal pitch motor control.

  19. Intensity of guitar playing as a function of auditory feedback.

    Science.gov (United States)

    Johnson, C I; Pick, H L; Garber, S R; Siegel, G M

    1978-06-01

    Subjects played an electric guitar while auditory feedback was attenuated or amplified at seven sidetone levels varying in 10-dB steps around a comfortable listening level. The sidetone signal was presented in quiet (experiment I) and at several levels of white noise (experiment II). Subjects compensated for feedback changes, demonstrating a sidetone-amplification effect as well as a Lombard effect. The similarity of these results to those found previously for speech suggests that guitar playing can be a useful analog for the function of auditory feedback in speech production. Unlike previous findings for speech, the sidetone-amplification effect was not potentiated by masking, consistent with a hypothesis that potentiation in speech is attributable to interference with bone conduction caused by the masking noise.
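    The sidetone manipulation above steps feedback level in 10-dB increments. Decibels are a logarithmic amplitude scale, so each step multiplies the linear signal amplitude by a fixed ratio; a small illustrative sketch of the conversion (the function name and values are ours, not from the study):

    ```python
    def db_to_amplitude_ratio(db: float) -> float:
        """Convert a level change in dB to a linear amplitude ratio
        (for amplitude, dB = 20 * log10(ratio))."""
        return 10.0 ** (db / 20.0)

    # One 10-dB sidetone step scales amplitude by ~3.16x; +20 dB is exactly 10x.
    print(round(db_to_amplitude_ratio(10.0), 2))  # 3.16
    print(db_to_amplitude_ratio(20.0))            # 10.0
    ```

    Seven levels in 10-dB steps therefore span a 60-dB range, i.e. a factor of 1000 in amplitude between the softest and loudest sidetone conditions.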

  20. Hear today, not gone tomorrow? An exploratory longitudinal study of auditory verbal hallucinations (hearing voices).

    Science.gov (United States)

    Hartigan, Nicky; McCarthy-Jones, Simon; Hayward, Mark

    2014-01-01

    Despite an increasing volume of cross-sectional work on auditory verbal hallucinations (hearing voices), there remains a paucity of work on how the experience may change over time. The first aim of this study was to attempt replication of a previous finding that beliefs about voices are enduring and stable, irrespective of changes in the severity of voices, and do not change without a specific intervention. The second aim was to examine whether voice-hearers' interrelations with their voices change over time, without a specific intervention. A 12-month longitudinal examination of these aspects of voices was undertaken with hearers in routine clinical treatment (N = 18). We found beliefs about voices' omnipotence and malevolence were stable over a 12-month period, as were styles of interrelating between voice and hearer, despite trends towards reductions in voice-related distress and disruption. However, there was a trend for beliefs about the benevolence of voices to decrease over time. Styles of interrelating between voice and hearer appear relatively stable and enduring, as are beliefs about the voices' malevolent intent and power. Although there was some evidence that beliefs about benevolence may reduce over time, the reasons for this were not clear. Our exploratory study was limited by only being powered to detect large effect sizes. Implications for clinical practice and future research are discussed.

  1. Effect of auditory feedback on speech production after cochlear implantation

    Directory of Open Access Journals (Sweden)

    Sheikh Zadeh H

    2001-10-01

    Full Text Available The main goal of this study was to determine the effect of auditory feedback on the improvement of speech production in prelingually, totally deaf children who use a cochlear implant prosthesis. To this end, we recorded the speech of four prelingual cochlear implant children before and after the operation. We then extracted several static features of vowels, such as fundamental frequency, formant frequencies, vowel duration, and vowel energy, from their stable mid-sections and analyzed them using a longitudinal prosthesis-on/off analysis. These patients, who were in the range of 7-13 years old, were operated on in the cochlear implant clinic of Amiralam hospital. At each session, patients read the sentences once in the device-on condition and again after 30 minutes in the device-off condition. Quantitative results show that, at least for the features under study, the patients' reliance on auditory feedback decreased consistently over time (about 65%, averaged over the three vowels under study and all patients). We therefore conclude that, a sufficient time after the operation, the patients' speech motor patterns become trained for the correct production of static features of vowels, and their dependence on auditory feedback for the production of such features decreases considerably over time.

  2. Speaking modifies voice-evoked activity in the human auditory cortex.

    Science.gov (United States)

    Curio, G; Neuloh, G; Numminen, J; Jousmäki, V; Hari, R

    2000-04-01

    The voice we most often hear is our own, and proper interaction between speaking and hearing is essential for both acquisition and performance of spoken language. Disturbed audiovocal interactions have been implicated in aphasia, stuttering, and schizophrenic voice hallucinations, but paradigms for a noninvasive assessment of auditory self-monitoring of speaking and its possible dysfunctions are rare. Using magnetoencephalography, we show here that self-uttered syllables transiently activate the speaker's auditory cortex around 100 ms after voice onset. These phasic responses were delayed by 11 ms in the speech-dominant left hemisphere relative to the right, whereas during listening to a replay of the same utterances the response latencies were symmetric. Moreover, the auditory cortices did not react to rare vowel changes interspersed randomly within a series of repetitively spoken vowels, in contrast to regular change-related responses evoked 100-200 ms after replayed rare vowels. Thus, speaking primes the human auditory cortex at a millisecond time scale, dampening and delaying reactions to self-produced "expected" sounds, more prominently in the speech-dominant hemisphere. Such motor-to-sensory priming of early auditory cortex responses during voicing constitutes one element of speech self-monitoring that could be compromised in central speech disorders.

  3. A Comparison of Text, Voice, and Screencasting Feedback to Online Students

    Science.gov (United States)

    Orlando, John

    2016-01-01

    The emergence of simple video and voice recording software has allowed faculty to deliver online course content in a variety of rich formats. But most faculty are still using traditional text comments for feedback to students. The author launched a study comparing student and faculty perceptions of text, voice, and screencasting feedback. The…

  4. Not Just Because it is Fair - The Role of Feedback Quality and Voice in Performance Evaluation

    NARCIS (Netherlands)

    J. Noeverman (Jan)

    2010-01-01

    textabstractThis paper investigates the role of feedback quality and voice in performance evaluation. A model is developed and tested in which feedback quality and voice enhance procedural fairness perceptions (procedure effects), and procedural fairness perceptions in turn lead to different positiv

  5. Weighting of Auditory Feedback Across the English Vowel Space

    OpenAIRE

    Purcell, David; Munhall, Kevin

    2008-01-01

    Auditory feedback in the headphones of talkers was manipulated in the F1 dimension using a real-time vowel formant filtering system. Minimum formant shifts required to elicit a response and the amount of compensation were measured for vowels across the English vowel space. The largest response in production of F1 was observed for the vowel /ε/ and smaller or non-significant changes were found for point vowels. In general, changes in production were of a compensatory nature that reduced the er...

  6. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Full Text Available Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.

  7. The Effect of Gender on the N1-P2 Auditory Complex while Listening and Speaking with Altered Auditory Feedback

    Science.gov (United States)

    Swink, Shannon; Stuart, Andrew

    2012-01-01

    The effect of gender on the N1-P2 auditory complex was examined while listening and speaking with altered auditory feedback. Fifteen normal hearing adult males and 15 females participated. N1-P2 components were evoked while listening to self-produced nonaltered and frequency shifted /a/ tokens and during production of /a/ tokens during nonaltered…

  8. Using on-line altered auditory feedback treating Parkinsonian speech

    Science.gov (United States)

    Wang, Emily; Verhagen, Leo; de Vries, Meinou H.

    2005-09-01

    Patients with advanced Parkinson's disease tend to have dysarthric speech that is hesitant, accelerated, and repetitive, and that is often resistant to behavioral speech therapy. In this pilot study, the speech disturbances were treated using on-line altered feedback (AF) provided by SpeechEasy (SE), an in-the-ear device registered with the FDA for use in humans to treat chronic stuttering. Eight PD patients participated in the study. All had moderate to severe speech disturbances. In addition, two patients had moderate recurring stuttering at the onset of PD after long remission since adolescence, two had bilateral STN DBS, and two bilateral pallidal DBS. An effective combination of delayed auditory feedback and frequency-altered feedback was selected for each subject and provided via SE worn in one ear. All subjects produced speech samples (structured monologue and reading) under three conditions: baseline, with SE but without altered feedback, and with altered feedback. The speech samples were randomly presented and rated for speech intelligibility using UPDRS-III item 18, along with speaking rate. The results indicated that SpeechEasy is well tolerated and AF can improve speech intelligibility in spontaneous speech. Further investigational use of this device for treating speech disorders in PD is warranted [Work partially supported by Janus Dev. Group, Inc.].

  9. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    Science.gov (United States)

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS.SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. 
Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of

  10. Early psychological intervention for auditory hallucinations: an exploratory study of young people's voices groups.

    Science.gov (United States)

    Newton, Elizabeth; Landau, Sabine; Smith, Patrick; Monks, Paul; Shergill, Sukhi; Wykes, Til

    2005-01-01

    Twenty to fifty percent of people with a diagnosis of schizophrenia continue to hear voices despite taking neuroleptic medication. Trials of group cognitive behavioral therapy for adults with auditory hallucinations have shown promising results. Auditory hallucinations may be most amenable to psychological intervention during a 3-year critical period after symptom onset. This study evaluates the effectiveness of group cognitive behavioral therapy (CBT) for young people with recent-onset auditory hallucinations (N = 22), using a waiting list control. Outcome measures were administered at four separate time points. Significant reductions in auditory hallucinations occurred over the total treatment phase, but not over the waiting period. Further investigations in the form of randomized controlled trials are warranted.

  11. Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models.

    Science.gov (United States)

    Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M

    2014-02-01

Natural auditory environments contain multiple simultaneously-sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias in processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. New modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve firing patterns might account for the higher-voice superiority effect. Simulations show that both place and temporal AN coding schemes predict a high-voice superiority well across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the higher-voice superiority observed in human

  12. Auditory feedback and memory for music performance: sound evidence for an encoding effect.

    Science.gov (United States)

    Finney, Steven A; Palmer, Caroline

    2003-01-01

    Research on the effects of context and task on learning and memory has included approaches that emphasize processes during learning (e.g., Craik & Tulving, 1975) and approaches that emphasize a match of conditions during learning with conditions during a later test of memory (e.g., Morris, Bransford, & Franks, 1977; Proteau, 1992; Tulving & Thomson, 1973). We investigated the effects of auditory context on learning and retrieval in three experiments on memorized music performance (a form of serial recall). Auditory feedback (presence or absence) was manipulated while pianists learned musical pieces from notation and when they later played the pieces from memory. Auditory feedback during learning significantly improved later recall. However, auditory feedback at test did not significantly affect recall, nor was there an interaction between conditions at learning and test. Auditory feedback in music performance appears to be a contextual factor that affects learning but is relatively independent of retrieval conditions.

  13. Effects of visual and auditory feedback on sensorimotor circuits in the basal ganglia.

    Science.gov (United States)

    Prodoehl, Janey; Yu, Hong; Wasson, Pooja; Corcos, Daniel M; Vaillancourt, David E

    2008-06-01

    Previous work using visual feedback has identified two distinct sensorimotor circuits in the basal ganglia (BG): one that scaled with the duration of force and one that scaled with the rate of change of force. The present study compared functional MRI signal changes in the BG during a grip force task using either visual or auditory feedback to determine whether the BG nuclei process auditory and visual feedback similarly. We confirmed the same two sensorimotor circuits in the BG. Activation in the striatum and external globus pallidus (GPe) scaled linearly with the duration of force under visual and auditory feedback conditions, with similar slopes and intercepts across feedback type. The pattern of signal change for the internal globus pallidus (GPi) and subthalamic nucleus (STN) was nonlinear and parameters of the exponential function were altered by feedback type. Specifically, GPi and STN activation decreased exponentially with the rate of change of force. The rate constant and asymptote of the exponential functions for GPi and STN were greater during auditory than visual feedback. In a comparison of the BOLD signal between BG regions, GPe had the highest percentage of variance accounted for and this effect was preserved for both feedback types. These new findings suggest that neuronal activity of specific BG nuclei is affected by whether the feedback is derived from visual or auditory inputs. Also, the data are consistent with the hypothesis that the GPe has a high level of information convergence from other BG nuclei, which is preserved across different sensory feedback modalities.

  14. Temporal coordination in joint music performance: effects of endogenous rhythms and auditory feedback.

    Science.gov (United States)

    Zamm, Anna; Pfordresher, Peter Q; Palmer, Caroline

    2015-02-01

    Many behaviors require that individuals coordinate the timing of their actions with others. The current study investigated the role of two factors in temporal coordination of joint music performance: differences in partners' spontaneous (uncued) rate and auditory feedback generated by oneself and one's partner. Pianists performed melodies independently (in a Solo condition), and with a partner (in a duet condition), either at the same time as a partner (Unison), or at a temporal offset (Round), such that pianists heard their partner produce a serially shifted copy of their own sequence. Access to self-produced auditory information during duet performance was manipulated as well: Performers heard either full auditory feedback (Full), or only feedback from their partner (Other). Larger differences in partners' spontaneous rates of Solo performances were associated with larger asynchronies (less effective synchronization) during duet performance. Auditory feedback also influenced temporal coordination of duet performance: Pianists were more coordinated (smaller tone onset asynchronies and more mutual adaptation) during duet performances when self-generated auditory feedback aligned with partner-generated feedback (Unison) than when it did not (Round). Removal of self-feedback disrupted coordination (larger tone onset asynchronies) during Round performances only. Together, findings suggest that differences in partners' spontaneous rates of Solo performances, as well as differences in self- and partner-generated auditory feedback, influence temporal coordination of joint sensorimotor behaviors.

  15. Sensorimotor learning in children and adults: Exposure to frequency-altered auditory feedback during speech production.

    Science.gov (United States)

    Scheerer, N E; Jacobson, D S; Jones, J A

    2016-02-09

Auditory feedback plays an important role in the acquisition of fluent speech; however, this role may change once speech is acquired and individuals no longer experience persistent developmental changes to the brain and vocal tract. For this reason, we investigated whether the role of auditory feedback in sensorimotor learning differs across children and adult speakers. Participants produced vocalizations while they heard their vocal pitch predictably or unpredictably shifted downward one semitone. The participants' vocal pitches were measured at the beginning of each vocalization, before auditory feedback was available, to assess the extent to which the deviant auditory feedback modified subsequent speech motor commands. Sensorimotor learning was observed in both children and adults, with participants' initial vocal pitch increasing following trials where they were exposed to predictable, but not unpredictable, frequency-altered feedback. Participants' vocal pitch was also measured across each vocalization, to index the extent to which the deviant auditory feedback was used to modify ongoing vocalizations. While both children and adults were found to increase their vocal pitch following predictable and unpredictable changes to their auditory feedback, adults produced larger compensatory responses. The results of the current study demonstrate that both children and adults rapidly integrate information derived from their auditory feedback to modify subsequent speech motor commands. However, these results also demonstrate that children and adults differ in their ability to use auditory feedback to generate compensatory vocal responses during ongoing vocalization. Since vocal variability also differed across the children and adult groups, these results also suggest that compensatory vocal responses to frequency-altered feedback manipulations initiated at vocalization onset may be modulated by vocal variability. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  16. Effect of task-related continuous auditory feedback during learning of tracking motion exercises

    Directory of Open Access Journals (Sweden)

    Rosati Giulio

    2012-10-01

Background This paper presents the results of a set of experiments in which we used continuous auditory feedback to augment motor training exercises. This feedback modality is mostly underexploited in current robotic rehabilitation systems, which usually implement only very basic auditory interfaces. Our hypothesis is that properly designed continuous auditory feedback could be used to represent temporal and spatial information that could, in turn, improve performance and motor learning. Methods We implemented three different experiments on healthy subjects, who were asked to track a target on a screen by moving an input device (controller) with their hand. Different visual and auditory feedback modalities were envisaged. The first experiment investigated whether continuous task-related auditory feedback can help improve performance to a greater extent than error-related audio feedback, or visual feedback alone. In the second experiment we used sensory substitution to compare different types of auditory feedback with equivalent visual feedback, in order to find out whether mapping the same information on a different sensory channel (the visual channel) yielded comparable effects with those gained in the first experiment. The final experiment applied a continuously changing visuomotor transformation between the controller and the screen and mapped kinematic information, computed in either coordinate system (controller or video), to the audio channel, in order to investigate which information was more relevant to the user. Results Task-related audio feedback significantly improved performance with respect to visual feedback alone, whilst error-related feedback did not. Secondly, performance in audio tasks was significantly better with respect to the equivalent sensory-substituted visual tasks. Finally, with respect to visual feedback alone, video-task-related sound feedback decreased the tracking error during the learning of a novel
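The distinction above between task-related and error-related auditory feedback can be sketched as two simple sonification rules. This is an illustrative sketch only: the function names, pitch ranges, and normalization constants below are assumptions, not the mapping actually used in the study.

```python
def task_related_pitch(target_speed, base_hz=220.0, span_hz=440.0, max_speed=1.0):
    """Task-related feedback: pitch encodes the target's speed (task information)."""
    return base_hz + span_hz * min(target_speed / max_speed, 1.0)

def error_related_pitch(error, base_hz=220.0, span_hz=440.0, max_error=0.2):
    """Error-related feedback: pitch encodes the current tracking error."""
    return base_hz + span_hz * min(abs(error) / max_error, 1.0)
```

In a real implementation these pitches would drive a continuous synthesizer updated at the tracking loop rate; the design choice is simply which quantity (task state vs. error) is mapped onto the audio channel.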

  17. Augmented visual, auditory, haptic, and multimodal feedback in motor learning: A review

    National Research Council Canada - National Science Library

    Sigrist, Roland; Rauter, Georg; Riener, Robert; Wolf, Peter

    2013-01-01

    .... Recently, technical advances have made it possible also to investigate more complex, realistic motor tasks and to implement not only visual, but also auditory, haptic, or multimodal augmented feedback...

  18. Effects of auditory feedback during gait training on hemiplegic patients' weight bearing and dynamic balance ability.

    Science.gov (United States)

    Ki, Kyong-Il; Kim, Mi-Sun; Moon, Young; Choi, Jong-Duk

    2015-04-01

    [Purpose] This study examined the effects of auditory feedback during gait on the weight bearing of patients with hemiplegia resulting from a stroke. [Subjects] Thirty hemiplegic patients participated in this experiment and they were randomly allocated to an experimental group and a control group. [Methods] Both groups received neuro-developmental treatment for four weeks and the experimental group additionally received auditory feedback during gait training. In order to examine auditory feedback effects on weight bearing during gait, a motion analysis system GAITRite was used to measure the duration of the stance phase and single limb stance phase of the subjects. [Results] The experimental group showed statistically significant improvements in the duration of the stance phase and single limb stance phase of the paretic side and the results of the Timed Up and Go Test after the training. [Conclusion] Auditory feedback during gait training significantly improved the duration of the stance phase and single limb stance phase of hemiplegic stroke patients.

  19. Auditory verbal hallucinations and continuum models of psychosis: A systematic review of the healthy voice-hearer literature.

    Science.gov (United States)

    Baumeister, David; Sedgwick, Ottilie; Howes, Oliver; Peters, Emmanuelle

    2017-02-01

Recent decades have seen a surge of research interest in the phenomenon of healthy individuals who experience auditory verbal hallucinations, yet do not exhibit distress or need for care. The aims of the present systematic review are to provide a comprehensive overview of this research and examine how healthy voice-hearers may best be conceptualised in relation to the diagnostic versus 'quasi-' and 'fully-dimensional' continuum models of psychosis. A systematic literature search was conducted, resulting in a total of 398 article titles and abstracts that were scrutinised for appropriateness to the present objective. Seventy articles were identified for full-text analysis, of which 36 met criteria for inclusion. Subjective perceptual experience of voices, such as loudness or location (i.e., inside/outside head), is similar in clinical and non-clinical groups, although clinical voice-hearers have more frequent voices, more negative voice content, and an older age of onset. Groups differ significantly in beliefs about voices, control over voices, voice-related distress, and affective difficulties. Cognitive biases, reduced global functioning, and psychiatric symptoms such as delusions, appear more prevalent in healthy voice-hearers than in healthy controls, yet less than in clinical samples. Transition to mental health difficulties is increased in healthy voice-hearers, yet only occurs in a minority and is predicted by previous mood problems and voice distress. Whilst healthy voice-hearers show brain activity during hallucinatory experiences similar to that of clinical voice-hearers, other neuroimaging measures, such as mismatch negativity, have been inconclusive. Risk factors such as familial and childhood trauma appear similar between clinical and non-clinical voice-hearers. Overall the results of the present systematic review support a continuum view rather than a diagnostic model, but cannot distinguish between 'quasi' and 'fully' dimensional models. Healthy voice-hearers may be a key

  20. Selective and divided attention modulates auditory-vocal integration in the processing of pitch feedback errors.

    Science.gov (United States)

    Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun

    2015-08-01

    Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways.
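Pitch perturbations in these frequency-altered feedback paradigms are specified in semitones or cents (hundredths of a semitone). Converting such a shift to the frequency ratio applied to the voice signal is straightforward; a minimal sketch:

```python
def semitones_to_ratio(semitones: float) -> float:
    """Frequency ratio for a pitch shift in equal-tempered semitones."""
    return 2.0 ** (semitones / 12.0)

def cents_to_ratio(cents: float) -> float:
    """Frequency ratio for a pitch shift in cents (100 cents = 1 semitone)."""
    return 2.0 ** (cents / 1200.0)

# The +2 semitone shift used above, applied to a 220 Hz voice,
# raises the heard pitch to roughly 246.9 Hz.
shifted_f0 = 220.0 * semitones_to_ratio(2)
```

The same conversion covers the ±100 cent shifts used in the Parkinson's study above, since 100 cents is exactly one semitone.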

  1. Ring a bell? Adaptive Auditory Game Feedback to Sustain Performance in Stroke Rehabilitation

    DEFF Research Database (Denmark)

    Hald, Kasper; Knoche, Hendrik Ole

    2016-01-01

This paper investigates the effect of adaptive auditory feedback on continued player performance for stroke patients in a Whack-a-Mole style tablet game. The feedback consisted of accumulatively increasing the pitch of positive feedback sounds on tasks with fast reaction time and resetting it after slow reaction times. The analysis was based on data obtained in a field trial with lesion patients during their regular rehabilitation. The auditory feedback events were categorized by feedback type (positive/negative) and the associated pitch change of either high or low magnitude…

  2. Ring a bell? Adaptive Auditory Game Feedback to Sustain Performance in Stroke Rehabilitation

    DEFF Research Database (Denmark)

    Hald, Kasper; Knoche, Hendrik

    2016-01-01

This paper investigates the effect of adaptive auditory feedback on continued player performance for stroke patients in a Whack-a-Mole style tablet game. The feedback consisted of accumulatively increasing the pitch of positive feedback sounds on tasks with fast reaction time and resetting it after slow reaction times. The analysis was based on data obtained in a field trial with lesion patients during their regular rehabilitation. The auditory feedback events were categorized by feedback type (positive/negative) and the associated pitch change of either high or low magnitude. Both…
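The adaptive rule described above (raise the positive-feedback pitch cumulatively after fast reactions, reset it after slow ones) can be sketched as follows. The threshold, base pitch, and step size here are illustrative assumptions; the study's actual values may differ.

```python
FAST_RT = 0.8         # seconds: reaction times below this count as "fast" (assumed)
BASE_PITCH = 440.0    # Hz: starting pitch of the positive-feedback sound (assumed)
STEP = 2 ** (1 / 12)  # raise by one equal-tempered semitone per fast response (assumed)

def next_feedback_pitch(current_pitch: float, reaction_time: float) -> float:
    """Cumulatively raise the feedback pitch on fast responses; reset after slow ones."""
    if reaction_time < FAST_RT:
        return current_pitch * STEP
    return BASE_PITCH

pitch = BASE_PITCH
for rt in [0.5, 0.6, 1.2, 0.4]:  # two fast responses, one slow, one fast
    pitch = next_feedback_pitch(pitch, rt)
# The slow trial resets the pitch; the final fast trial raises it one step again.
```

The design intent is that a rising pitch contour rewards streaks of fast responses, giving patients an audible indication of sustained performance.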

  3. The Provision of Feedback Types to EFL Learners in Synchronous Voice Computer Mediated Communication

    Science.gov (United States)

    Ko, Chao-Jung

    2015-01-01

    This study examined the relationship between Synchronous Voice Computer Mediated Communication (SVCMC) interaction and the use of feedback types, especially pronunciation feedback types, in distance tutoring contexts. The participants, divided into two groups (explicit and recast), were twelve beginning/low-intermediate level English as a Foreign…

  4. Auditory-perceptual analysis of voice in abused children and adolescents.

    Science.gov (United States)

    Stivanin, Luciene; Santos, Fernanda Pontes dos; Oliveira, Christian César Cândido de; Santos, Bernardo dos; Ribeiro, Simone Tozzini; Scivoletto, Sandra

    2015-01-01

Abused children and adolescents are exposed to factors that can trigger vocal changes. This study aimed to analyze the prevalence of vocal changes in abused children and adolescents, through auditory-perceptual analysis of voice and the study of the association between vocal changes, communication disorders, psychiatric disorders, and global functioning. This was an observational, cross-sectional study of 136 children and adolescents (mean age 10.2 years, 78 male) who were assessed by a multidisciplinary team specializing in abused populations. Speech evaluation was performed (involving the aspects of oral and written communication, as well as auditory-perceptual analysis of voice, through the GRBASI scale). Psychiatric diagnosis was performed in accordance with the DSM-IV diagnostic criteria and by applying the K-SADS; global functioning was evaluated by means of the C-GAS scale. The prevalence of vocal change was 67.6%; of the patients with vocal changes, 92.3% had other communication disorders. Voice changes were associated with a loss of seven points in global functioning, and there was no association between vocal changes and psychiatric diagnosis. The prevalence of vocal change was greater than that observed in the general population, with significant associations with communication disorders and global functioning. The results demonstrate that the situations these children experience can intensify the triggering of abusive vocal behaviors and consequently, of vocal changes. Copyright © 2014 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  5. Silent reading of direct versus indirect speech activates voice-selective areas in the auditory cortex.

    Science.gov (United States)

    Yao, Bo; Belin, Pascal; Scheepers, Christoph

    2011-10-01

    In human communication, direct speech (e.g., Mary said: "I'm hungry") is perceived to be more vivid than indirect speech (e.g., Mary said [that] she was hungry). However, for silent reading, the representational consequences of this distinction are still unclear. Although many of us share the intuition of an "inner voice," particularly during silent reading of direct speech statements in text, there has been little direct empirical confirmation of this experience so far. Combining fMRI with eye tracking in human volunteers, we show that silent reading of direct versus indirect speech engenders differential brain activation in voice-selective areas of the auditory cortex. This suggests that readers are indeed more likely to engage in perceptual simulations (or spontaneous imagery) of the reported speaker's voice when reading direct speech as opposed to meaning-equivalent indirect speech statements as part of a more vivid representation of the former. Our results may be interpreted in line with embodied cognition and form a starting point for more sophisticated interdisciplinary research on the nature of auditory mental simulation during reading.

  6. Role of auditory feedback in speech produced by cochlear implanted adults and children

    Science.gov (United States)

    Bharadwaj, Sneha V.; Tobey, Emily A.; Assmann, Peter F.; Katz, William F.

    2002-05-01

A prominent theory of speech production proposes that speech segments are largely controlled by reference to an internal model, with minimal reliance on auditory feedback. This theory also maintains that suprasegmental aspects of speech are directly regulated by auditory feedback. Accordingly, if a talker is briefly deprived of auditory feedback, speech segments should not be affected, but suprasegmental properties should show significant change. To test this prediction, comparisons were made between speech samples obtained from cochlear implant users who repeated words under two conditions: (1) implant device turned ON, and (2) implant switched OFF immediately before the repetition of each word. To determine whether producing unfamiliar speech requires greater reliance on auditory feedback than producing familiar speech, English and French words were elicited from English-speaking subjects. Subjects were congenitally deaf children (n=4) and adventitiously deafened adults (n=4). Vowel fundamental frequency and formant frequencies, vowel and syllable durations, and fricative spectral moments were analyzed. Preliminary data only partially confirm the predictions, in that both segmental and suprasegmental aspects of speech were significantly modified in the absence of auditory feedback. Modifications were greater for French compared to English words, suggesting greater reliance on auditory feedback for unfamiliar words. [Work supported by NIDCD.]

  7. An Experimental Investigation of the Effect of Altered Auditory Feedback on the Conversational Speech of Adults Who Stutter

    Science.gov (United States)

    Lincoln, Michelle; Packman, Ann; Onslow, Mark; Jones, Mark

    2010-01-01

    Purpose: To investigate the impact on percentage of syllables stuttered of various durations of delayed auditory feedback (DAF), levels of frequency-altered feedback (FAF), and masking auditory feedback (MAF) during conversational speech. Method: Eleven adults who stuttered produced 10-min conversational speech samples during a control condition…

  8. Auditory Feedback in Music Performance: The Role of Melodic Structure and Musical Skill

    Science.gov (United States)

    Pfordresher, Peter Q.

    2005-01-01

    Five experiments explored whether fluency in musical sequence production relies on matches between the contents of auditory feedback and the planned outcomes of actions. Participants performed short melodies from memory on a keyboard while musical pitches that sounded in synchrony with each keypress (feedback contents) were altered. Results…

  9. Multivoxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    DEFF Research Database (Denmark)

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.

    2013-01-01

    within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while...

  10. Effect of auditory feedback differs according to side of hemiparesis: a comparative pilot study

    Directory of Open Access Journals (Sweden)

    Bensmail Djamel

    2009-12-01

Background Following stroke, patients frequently demonstrate loss of motor control and function and altered kinematic parameters of reaching movements. Feedback is an essential component of rehabilitation and auditory feedback of kinematic parameters may be a useful tool for rehabilitation of reaching movements at the impairment level. The aim of this study was to investigate the effect of 2 types of auditory feedback on the kinematics of reaching movements in hemiparetic stroke patients and to compare differences between patients with right (RHD) and left hemisphere damage (LHD). Methods 10 healthy controls, 8 stroke patients with LHD and 8 with RHD were included. Patient groups had similar levels of upper limb function. Two types of auditory feedback (spatial and simple) were developed and provided online during reaching movements to 9 targets in the workspace. Kinematics of the upper limb were recorded with an electromagnetic system. Kinematics were compared between groups (Mann-Whitney test) and the effect of auditory feedback on kinematics was tested within each patient group (Friedman test). Results In the patient groups, peak hand velocity was lower, the number of velocity peaks was higher and movements were more curved than in the healthy group. Despite having a similar clinical level, kinematics differed between LHD and RHD groups. Peak velocity was similar but LHD patients had fewer velocity peaks and less curved movements than RHD patients. The addition of auditory feedback improved the curvature index in patients with RHD and deteriorated peak velocity, the number of velocity peaks and curvature index in LHD patients. No difference between types of feedback was found in either patient group. Conclusion In stroke patients, side of lesion should be considered when examining arm reaching kinematics. Further studies are necessary to evaluate differences in responses to auditory feedback between patients with lesions in opposite
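The kinematic measures reported above (peak velocity, number of velocity peaks, curvature index) can be computed from a sampled hand trajectory. The sketch below uses common definitions: curvature index as path length over straight-line distance, and speed peaks counted as local maxima of the speed profile; the study's exact formulas may differ.

```python
import numpy as np

def reach_kinematics(xyz, dt):
    """Summarize a reaching trajectory given N x 3 positions sampled every dt seconds.

    Returns (peak speed, number of local speed peaks, curvature index).
    A curvature index of 1.0 means a perfectly straight path; more speed
    peaks indicate a less smooth, more segmented movement.
    """
    seg = np.linalg.norm(np.diff(xyz, axis=0), axis=1)  # per-sample step lengths
    v = seg / dt                                        # speed profile
    n_peaks = sum(1 for i in range(1, len(v) - 1)
                  if v[i] > v[i - 1] and v[i] > v[i + 1])
    curvature_index = seg.sum() / np.linalg.norm(xyz[-1] - xyz[0])
    return v.max(), n_peaks, curvature_index
```

For a straight, constant-velocity reach the function returns zero speed peaks and a curvature index of exactly 1.0, which makes it easy to sanity-check against recorded data.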

  11. Categorical vowel perception enhances the effectiveness and generalization of auditory feedback in human-machine-interfaces.

    Directory of Open Access Journals (Sweden)

    Eric Larson

    Full Text Available Human-machine interface (HMI) designs offer the possibility of improving quality of life for patient populations as well as augmenting normal user function. Despite its pragmatic benefits, auditory feedback remains underutilized for HMI control, in part because of observed limitations in effectiveness. The goal of this study was to determine the extent to which categorical speech perception could be used to improve an auditory HMI. Using surface electromyography, 24 healthy speakers of American English participated in 4 sessions to learn to control an HMI using auditory feedback (provided via vowel synthesis). Participants trained on 3 targets in sessions 1-3 and were tested on 3 novel targets in session 4. An "established categories with text cues" group of eight participants was trained and tested on auditory targets corresponding to standard American English vowels using auditory and text target cues. An "established categories without text cues" group of eight participants was trained and tested on the same targets using only auditory cuing of target vowel identity. A "new categories" group of eight participants was trained and tested on targets that corresponded to vowel-like sounds not part of American English. Analyses of user performance revealed significant effects of session and group (the established-categories groups vs. the new-categories group), and a trend for an interaction between session and group. Results suggest that auditory feedback can be effectively used for HMI operation when paired with established categorical (native) vowel targets with an unambiguous cue.
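
    The control loop summarized above (muscle-activity signals driving a vowel synthesizer, with the listener matching vowel categories) can be sketched roughly as follows. The formant values, signal ranges, and linear mapping are hypothetical illustrations for this record, not the study's actual synthesis parameters.

```python
import math

# Hypothetical vowel targets as (F1, F2) formant pairs in Hz; the values are
# rough textbook approximations of American English vowels, not study data.
VOWEL_TARGETS = {
    "i": (270, 2290),
    "a": (730, 1090),
    "u": (300, 870),
}

def emg_to_formants(ch1, ch2, f1_range=(200, 900), f2_range=(800, 2400)):
    """Map two normalized control amplitudes (0..1) linearly onto F1/F2."""
    f1 = f1_range[0] + ch1 * (f1_range[1] - f1_range[0])
    f2 = f2_range[0] + ch2 * (f2_range[1] - f2_range[0])
    return f1, f2

def nearest_vowel(f1, f2):
    """Classify a formant pair to the closest vowel target (log-frequency distance)."""
    def dist(v):
        tf1, tf2 = VOWEL_TARGETS[v]
        return math.hypot(math.log(f1 / tf1), math.log(f2 / tf2))
    return min(VOWEL_TARGETS, key=dist)
```

    In this toy version, a "target" is reached when the synthesized formants fall into the intended vowel's category, which is the kind of categorical criterion the study exploits.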

  12. Neuromagnetic correlates of voice pitch, vowel type, and speaker size in auditory cortex.

    Science.gov (United States)

    Andermann, Martin; Patterson, Roy D; Vogt, Carolin; Winterstetter, Lisa; Rupp, André

    2017-09-01

    Vowel recognition is largely immune to differences in speaker size despite the waveform differences associated with variation in speaker size. This has led to the suggestion that voice pitch and mean formant frequency (MFF) are extracted early in the hierarchy of hearing/speech processing and used to normalize the internal representation of vowel sounds. This paper presents a magnetoencephalographic (MEG) experiment designed to locate and compare neuromagnetic activity associated with voice pitch, MFF and vowel type in human auditory cortex. Sequences of six sustained vowels were used to contrast changes in the three components of vowel perception, and MEG responses to the changes were recorded from 25 participants. A staged procedure was employed to fit the MEG data with a source model having one bilateral pair of dipoles for each component of vowel perception. This dipole model showed that the activity associated with the three perceptual changes was functionally separable; the pitch source was located in Heschl's gyrus (bilaterally), while the vowel-type and formant-frequency sources were located (bilaterally) just behind Heschl's gyrus in planum temporale. The results confirm that vowel normalization begins in auditory cortex at an early point in the hierarchy of speech processing. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Comparisons of Stuttering Frequency during and after Speech Initiation in Unaltered Feedback, Altered Auditory Feedback and Choral Speech Conditions

    Science.gov (United States)

    Saltuklaroglu, Tim; Kalinowski, Joseph; Robbins, Mary; Crawcour, Stephen; Bowers, Andrew

    2009-01-01

    Background: Stuttering is prone to strike during speech initiation more so than at any other point in an utterance. The use of altered auditory feedback (AAF) has been found to produce robust decreases in stuttering frequency by creating an electronic rendition of choral speech (i.e., speaking in unison). However, AAF requires users to self-initiate…

  15. Tap Arduino: An Arduino microcontroller for low-latency auditory feedback in sensorimotor synchronization experiments.

    Science.gov (United States)

    Schultz, Benjamin G; van Vugt, Floris T

    2016-12-01

    Timing abilities are often measured by having participants tap their finger along with a metronome and presenting tap-triggered auditory feedback. These experiments predominantly use electronic percussion pads combined with software (e.g., FTAP or Max/MSP) that records responses and delivers auditory feedback. However, these setups involve unknown latencies between tap onset and auditory feedback and can sometimes miss responses or record multiple, superfluous responses for a single tap. These issues may distort measurements of tapping performance or affect the performance of the individual. We present an alternative setup using an Arduino microcontroller that addresses these issues and delivers low-latency auditory feedback. We validated our setup by having participants (N = 6) tap on a force-sensitive resistor pad connected to the Arduino and on an electronic percussion pad with various levels of force and tempi. The Arduino delivered auditory feedback through a pulse-width modulation (PWM) pin connected to a headphone jack or a wave shield component. The Arduino's PWM (M = 0.6 ms, SD = 0.3) and wave shield (M = 2.6 ms, SD = 0.3) demonstrated significantly lower auditory feedback latencies than the percussion pad (M = 9.1 ms, SD = 2.0), FTAP (M = 14.6 ms, SD = 2.8), and Max/MSP (M = 15.8 ms, SD = 3.4). The PWM and wave shield latencies were also significantly less variable than those from FTAP and Max/MSP. The Arduino missed significantly fewer taps, and recorded fewer superfluous responses, than the percussion pad. The Arduino captured all responses, whereas at lower tapping forces, the percussion pad missed more taps. Regardless of tapping force, the Arduino outperformed the percussion pad. Overall, the Arduino is a high-precision, low-latency, portable, and affordable tool for auditory experiments.
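
    The latency comparison reported above can be reproduced in spirit with a simple onset-threshold measurement on two synchronized recordings; this is a hypothetical sketch, not the authors' analysis code, and assumes the tap (force-sensor) and audio channels share one sampling rate.

```python
import numpy as np

def feedback_latency_ms(tap_signal, audio_signal, fs, threshold=0.1):
    """Estimate auditory feedback latency as the time between the first
    threshold crossing on the force-sensor channel (tap onset) and the
    first threshold crossing on the recorded audio (feedback onset)."""
    tap_onset = int(np.argmax(np.abs(tap_signal) > threshold))
    audio_onset = int(np.argmax(np.abs(audio_signal) > threshold))
    return (audio_onset - tap_onset) / fs * 1000.0
```

    Averaging this quantity over many taps would yield mean and SD latencies of the kind reported for the PWM, wave shield, and percussion-pad setups.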

  16. Speakers' acceptance of real-time speech exchange indicates that we use auditory feedback to specify the meaning of what we say.

    Science.gov (United States)

    Lind, Andreas; Hall, Lars; Breidegard, Björn; Balkenius, Christian; Johansson, Petter

    2014-06-01

    Speech is usually assumed to start with a clearly defined preverbal message, which provides a benchmark for self-monitoring and a robust sense of agency for one's utterances. However, an alternative hypothesis states that speakers often have no detailed preview of what they are about to say, and that they instead use auditory feedback to infer the meaning of their words. In the experiment reported here, participants performed a Stroop color-naming task while we covertly manipulated their auditory feedback in real time so that they said one thing but heard themselves saying something else. Under ideal timing conditions, two thirds of these semantic exchanges went undetected by the participants, and in 85% of all nondetected exchanges, the inserted words were experienced as self-produced. These findings indicate that the sense of agency for speech has a strong inferential component, and that auditory feedback of one's own voice acts as a pathway for semantic monitoring, potentially overriding other feedback loops. © The Author(s) 2014.

  17. Logarithmic temporal axis manipulation and its application for measuring auditory contributions in F0 control using a transformed auditory feedback procedure

    Science.gov (United States)

    Yanaga, Ryuichiro; Kawahara, Hideki

    2003-10-01

    A new parameter extraction procedure based on logarithmic transformation of the temporal axis was applied to investigate auditory effects on voice F0 control, in order to overcome artifacts due to natural fluctuations and nonlinearities in speech production mechanisms. The proposed method may add complementary information, in terms of the dynamic aspects of F0 control, to recent findings reported using the frequency-shift feedback method [Burnett and Larson, J. Acoust. Soc. Am. 112 (2002)]. In a series of experiments, the dependence of F0-control system parameters on subject, F0, and style (musical expression versus speaking) was tested with six participants: three male and three female students specializing in music education. They were asked to sustain a Japanese vowel /a/ for about 10 s repeatedly, up to 2 min in total, while hearing feedback speech whose F0 was modulated using an M-sequence. The results qualitatively replicated a previous finding [Kawahara and Williams, Vocal Fold Physiology (1995)] and provided more accurate estimates. Relations to the design of an artificial singer will also be discussed. [Work partly supported by Grant-in-Aid for Scientific Research (B) 14380165 and Wakayama University.]
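
    An M-sequence of the kind used here to modulate the feedback F0 is a pseudo-random ±1 signal generated by a maximal-length linear-feedback shift register. The degree-4 register and tap positions below are a small illustrative choice, not the paper's actual stimulus parameters.

```python
def m_sequence(taps, nbits):
    """Generate a maximal-length sequence (M-sequence) of length 2**nbits - 1
    from a Fibonacci linear-feedback shift register with the given feedback
    taps (1-indexed). Returns ±1 values suitable for modulating feedback F0."""
    state = [1] * nbits          # any nonzero seed works
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(1 if state[-1] else -1)   # output the last register bit
        fb = 0
        for t in taps:                       # XOR of the tapped bits
            fb ^= state[t - 1]
        state = [fb] + state[:-1]            # shift in the feedback bit
    return seq
```

    With taps (3, 4) (primitive polynomial x^4 + x^3 + 1), the register visits all 15 nonzero states before repeating, giving a flat, noise-like modulation spectrum that suits system identification of the F0 control loop.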

  18. Combined mirror visual and auditory feedback therapy for upper limb phantom pain: a case report

    Directory of Open Access Journals (Sweden)

    Yan Kun

    2011-01-01

    Full Text Available Abstract Introduction: Phantom limb sensation and phantom limb pain are very common issues after amputation. In recent years, accumulating data have implicated 'mirror visual feedback', or 'mirror therapy', as helpful in the treatment of phantom limb sensation and phantom limb pain. Case presentation: We present the case of a 24-year-old Caucasian man, a left upper limb amputee, treated with mirror visual feedback combined with auditory feedback, with improved pain relief. Conclusion: This case suggests that auditory feedback might enhance the effectiveness of mirror visual feedback and serve as a valuable addition to the complex multi-sensory processing of body perception in patients who are amputees.

  19. Ambulatory Voice Biofeedback: Relative Frequency and Summary Feedback Effects on Performance and Retention of Reduced Vocal Intensity in the Daily Lives of Participants with Normal Voices

    Science.gov (United States)

    Van Stan, Jarrad H.; Mehta, Daryush D.; Sternad, Dagmar; Petit, Robert; Hillman, Robert E.

    2017-01-01

    Purpose: Ambulatory voice biofeedback has the potential to significantly improve voice therapy effectiveness by targeting carryover of desired behaviors outside the therapy session (i.e., retention). This study applies motor learning concepts (reduced frequency and delayed, summary feedback) that demonstrate increased retention to ambulatory voice…

  20. Temporal control and compensation for perturbed voicing feedback

    DEFF Research Database (Denmark)

    Mitsuya, Takashi; MacDonald, Ewen; Munhall, Kevin G.

    2014-01-01

    Previous research employing a real-time auditory perturbation paradigm has shown that talkers monitor their own speech attributes such as fundamental frequency, vowel intensity, vowel formants, and fricative noise as part of speech motor control. In the case of vowel formants or fricative noise, ...

  1. Attentional demands influence vocal compensations to pitch errors heard in auditory feedback.

    Directory of Open Access Journals (Sweden)

    Anupreet K Tumber

    Full Text Available Auditory feedback is required to maintain fluent speech. At present, it is unclear how attention modulates auditory feedback processing during ongoing speech. In this event-related potential (ERP) study, participants vocalized /a/ while they heard their vocal pitch suddenly shifted downward by ½ semitone in both single- and dual-task conditions. During the single-task condition, participants passively viewed a visual stream for cues to start and stop vocalizing. In the dual-task condition, participants vocalized while they identified target stimuli in a visual stream of letters. The presentation rate of the visual stimuli was manipulated in the dual-task condition in order to produce low, intermediate, and high attentional loads. Visual target identification accuracy was lowest in the high-attentional-load condition, indicating that attentional load was successfully manipulated. Results further showed that participants who completed the single-task condition before the dual-task condition produced larger vocal compensations during the single-task condition. Thus, when participants' attention was divided, less attention was available for monitoring their auditory feedback, resulting in smaller compensatory vocal responses. However, P1-N1-P2 ERP responses were not affected by divided attention, suggesting that attentional load did not affect the auditory processing of pitch-altered feedback but instead interfered with the integration of auditory and motor information, or with motor control itself.
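
    For reference, a pitch shift expressed in cents maps to a frequency ratio of 2^(cents/1200), so the ½-semitone (50-cent) downward shift used in studies like this one multiplies F0 by 2^(-50/1200) ≈ 0.9715. A minimal helper, assuming nothing beyond that standard definition:

```python
def shift_ratio(cents):
    """Frequency ratio for a pitch shift in cents
    (100 cents = 1 semitone; 1200 cents = 1 octave)."""
    return 2.0 ** (cents / 1200.0)

def shifted_f0(f0_hz, cents):
    """Apply a pitch shift of the given size (in cents) to an F0 in Hz."""
    return f0_hz * shift_ratio(cents)
```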

  2. Restoration of voice function by using biological feedback in laryngeal and hypopharyngeal carcinoma patients

    Science.gov (United States)

    Choinzonov, E. L.; Balatskaya, L. N.; Chizhevskaya, S. Yu.; Meshcheryakov, R. V.; Kostyuchenko, E. Yu.; Ivanova, T. A.

    2016-08-01

    The aim of the research is to develop and introduce a new technique of post-laryngectomy voice rehabilitation for laryngeal and hypopharyngeal carcinoma patients. The study compares and analyzes 82 cases of voice function restoration using biological feedback based on mathematical modeling of voice production. The advantage of the modern technology-based method over the conventional one is demonstrated. Restoration of voice function using biofeedback makes it possible to take the patient's abilities into account, adjust the parameters of voice training, and monitor its efficiency in real time. The data obtained indicate that the new method contributes to rapid engagement of the body's self-regulation mechanisms: the overall success rate of voice rehabilitation in totally laryngectomized patients reached 92% and the rehabilitation period was reduced to 18 days, compared with 86% and 38 days, respectively, in the control group. Restoration of disturbed functions after successful treatment is an important task of rehabilitation and is crucial for the quality of cancer patients' lives. To assess the quality of life of laryngeal cancer patients, the EORTC Quality of Life Core Questionnaire (QLQ-C30) and head and neck module (QLQ-H&N35) were used. The results showed that the technique of biofeedback voice restoration significantly improves the quality of life of laryngectomized patients: it reduces the degree of disability, restores patients' capacity for work-related activities, and significantly improves their social adaptation.

  3. The relationship between vocal accuracy and variability to the level of compensation to altered auditory feedback.

    Science.gov (United States)

    Scheerer, Nichole E; Jones, Jeffery A

    2012-11-07

    Auditory feedback plays an important role in monitoring vocal output and determining when adjustments are necessary. In this study, a group of untrained singers participated in a frequency-altered feedback experiment to examine whether accuracy at matching a note could predict the degree of compensation to auditory feedback that was shifted in frequency. Participants were presented with a target note and instructed to match it in pitch and duration. Following the onset of the participants' vocalizations, their vocal pitch was shifted down one semitone at a random time during the utterance. This altered auditory feedback was instantaneously presented back to them through headphones. Results indicated that note-matching accuracy did not correlate with compensation magnitude; however, a significant correlation was found between baseline variability and compensation magnitude. These results suggest that individuals with a more stable baseline fundamental frequency rely more on feedforward control mechanisms than individuals with more variable vocal production. This increased weighting of feedforward control means they are less sensitive to mismatches between their intended vocal production and auditory feedback. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
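
    The reported relationship (baseline F0 variability predicting compensation magnitude) is an ordinary Pearson correlation across speakers. A plain-Python sketch with illustrative, made-up numbers (the variable names and values below are not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-speaker values: baseline F0 variability (SD, semitones)
# and compensation magnitude (cents). Purely illustrative.
baseline_sd = [0.2, 0.4, 0.5, 0.8, 1.1]
compensation = [12.0, 20.0, 27.0, 38.0, 55.0]
```

    A positive r on data of this shape corresponds to the study's finding that more variable speakers compensate more.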

  4. The role of vowel perceptual cues in compensatory responses to perturbations of speech auditory feedback.

    Science.gov (United States)

    Reilly, Kevin J; Dougherty, Kathleen E

    2013-08-01

    The perturbation of acoustic features in a speaker's auditory feedback elicits rapid compensatory responses that demonstrate the importance of auditory feedback for control of speech output. The current study investigated whether responses to a perturbation of speech auditory feedback vary depending on the importance of the perturbed feature to perception of the vowel being produced. Auditory feedback of speakers' first formant frequency (F1) was shifted upward by 130 mels in randomly selected trials during the speakers' production of consonant-vowel-consonant words containing either the vowel /Λ/ or the vowel /ɝ/. Although these vowels exhibit comparable F1 frequencies, the contribution of F1 to perception of /Λ/ is greater than its contribution to perception of /ɝ/. Compensation to the F1 perturbation was observed during production of both vowels, but compensatory responses during /Λ/ occurred at significantly shorter latencies and exhibited significantly larger magnitudes than compensatory responses during /ɝ/. The finding that perturbation of vowel F1 during /Λ/ and /ɝ/ yielded compensatory differences that mirrored the contributions of F1 to perception of these vowels indicates that some portion of feedback control is weighted toward monitoring and preservation of acoustic cues for speech perception.

  5. Auditory display as a prosthetic hand sensory feedback for reaching and grasping tasks.

    Science.gov (United States)

    Gonzalez, Jose; Suzuki, Hiroyuki; Natsumi, Nakayama; Sekine, Masashi; Yu, Wenwei

    2012-01-01

    Upper limb amputees have to rely extensively on visual feedback in order to monitor and manipulate their prosthetic device successfully. This imposes a high cognitive burden, which generates fatigue and frustration. Therefore, in order to enhance motor-sensory performance and awareness, an auditory display was used as a sensory feedback system for the prosthetic hand's spatio-temporal and force information in a complete reaching and grasping setting. The main objective of this study was to explore the effects of using the auditory display to monitor the prosthetic hand during a complete reaching and grasping motion. The results presented in this paper indicate that using an auditory display to monitor and control a robot hand greatly improves temporal and grasping performance, while reducing mental effort and improving user confidence.

  6. Shop 'til you hear it drop - Influence of Interactive Auditory Feedback in a Virtual Reality Supermarket

    DEFF Research Database (Denmark)

    Sikström, Erik; Høeg, Emil Rosenlund; Mangano, Luca

    2016-01-01

    In this paper we describe an experiment aiming to investigate the impact of auditory feedback in a virtual reality supermarket scenario. The participants were asked to read a shopping list and collect items one by one and place them into a shopping cart. Three conditions were presented randomly, ...

  7. Individual Variability in Delayed Auditory Feedback Effects on Speech Fluency and Rate in Normally Fluent Adults

    Science.gov (United States)

    Chon, HeeCheong; Kraft, Shelly Jo; Zhang, Jingfei; Loucks, Torrey; Ambrose, Nicoline G.

    2013-01-01

    Purpose: Delayed auditory feedback (DAF) is known to induce stuttering-like disfluencies (SLDs) and cause speech rate reductions in normally fluent adults, but the reason for speech disruptions is not fully known, and individual variation has not been well characterized. Studying individual variation in susceptibility to DAF may identify factors…

  8. Auditory feedback affects perception of effort when exercising with a Pulley machine

    DEFF Research Database (Denmark)

    Bordegoni, Monica; Ferrise, Francesco; Grani, Francesco

    2013-01-01

    In this paper we describe an experiment that investigates the role of auditory feedback in affecting the perception of effort when using a physical pulley machine. Specifically, we investigated whether variations in the amplitude and frequency content of the pulley sound affect the perception of effort. ... Results show that variations in frequency content affect the perception of effort.

  9. Brain responses to altered auditory feedback during musical keyboard production: an fMRI study.

    Science.gov (United States)

    Pfordresher, Peter Q; Mantell, James T; Brown, Steven; Zivadinov, Robert; Cox, Jennifer L

    2014-03-27

    Alterations of auditory feedback during piano performance can be profoundly disruptive. Furthermore, different alterations can yield different types of disruptive effects. Whereas alterations of feedback synchrony disrupt performed timing, alterations of feedback pitch contents can disrupt accuracy. The current research tested whether these behavioral dissociations correlate with differences in brain activity. Twenty pianists performed simple piano keyboard melodies while being scanned in a 3-T magnetic resonance imaging (MRI) scanner. In different conditions they experienced normal auditory feedback, altered auditory feedback (asynchronous delays or altered pitches), or control conditions that excluded movement or sound. Behavioral results replicated past findings. Neuroimaging data suggested that asynchronous delays led to increased activity in Broca's area and its right homologue, whereas disruptive alterations of pitch elevated activations in the cerebellum, area Spt, inferior parietal lobule, and the anterior cingulate cortex. Both disruptive conditions increased activations in the supplementary motor area. These results provide the first evidence of neural responses associated with perception/action mismatch during keyboard production.

  10. Effects of altered auditory feedback across effector systems: production of melodies by keyboard and singing.

    Science.gov (United States)

    Pfordresher, Peter Q; Mantell, James T

    2012-01-01

    We report an experiment that tested whether effects of altered auditory feedback (AAF) during piano performance differ from its effects during singing. These effector systems differ with respect to the mapping between motor gestures and pitch content of auditory feedback. Whereas this action-effect mapping is highly reliable during phonation in any vocal motor task (singing or speaking), mapping between finger movements and pitch occurs only in limited situations, such as piano playing. Effects of AAF in both tasks replicated results previously found for keyboard performance (Pfordresher, 2003), in that asynchronous (delayed) feedback slowed timing whereas alterations to feedback pitch increased error rates, and the effect of asynchronous feedback was similar in magnitude across tasks. However, manipulations of feedback pitch had larger effects on singing than on keyboard production, suggesting effector-specific differences in sensitivity to action-effect mapping with respect to feedback content. These results support the view that disruption from AAF is based on abstract, effector independent, response-effect associations but that the strength of associations differs across effector systems.

  11. The experience of agency in sequence production with altered auditory feedback.

    Science.gov (United States)

    Couchman, Justin J; Beasley, Robertson; Pfordresher, Peter Q

    2012-03-01

    When speaking or producing music, people rely in part on auditory feedback - the sounds associated with the performed action. Three experiments investigated the degree to which alterations of auditory feedback (AAF) during music performances influence the experience of agency (i.e., the sense that your actions led to auditory events) and the possible link between agency and the disruptive effect of AAF on production. Participants performed short novel melodies from memory on a keyboard. Auditory feedback during performances was manipulated with respect to its pitch contents and/or its synchrony with actions. Participants rated their experience of agency after each trial. In all experiments, AAF reduced judgments of agency across conditions. Performance was most disrupted (measured by error rates and slowing) when AAF led to an ambiguous experience of agency, suggesting that there may be some causal relationship between agency and disruption. However, analyses revealed that these two effects were probably independent. A control experiment verified that performers can make veridical judgments of agency. Published by Elsevier Inc.

  12. Active Vibration Isolation Using a Voice Coil Actuator with Absolute Velocity Feedback Control

    OpenAIRE

    Yun-Hui Liu; Wei-Hao Wu

    2013-01-01

    This paper describes active vibration isolation using a voice coil actuator with absolute velocity feedback control for highly sensitive instruments (e.g., atomic force microscopes) that suffer from building vibration. Compared with traditional isolators, the main advantage of the proposed isolation system is that it produces no isolator resonance. The absolute vibration velocity signal is acquired from an accelerometer and processed through an integrator, and is then input to the controller...

  13. Compensations to auditory feedback perturbations in congenitally blind and sighted speakers: Acoustic and articulatory data.

    Science.gov (United States)

    Trudeau-Fisette, Pamela; Tiede, Mark; Ménard, Lucie

    2017-01-01

    This study investigated the effects of visual deprivation on the relationship between speech perception and production by examining compensatory responses to real-time perturbations in auditory feedback. Specifically, acoustic and articulatory data were recorded while sighted and congenitally blind French speakers produced several repetitions of the vowel /ø/. At the acoustic level, blind speakers produced larger compensatory responses to altered vowels than their sighted peers. At the articulatory level, blind speakers also produced larger displacements of the upper lip, the tongue tip, and the tongue dorsum in compensatory responses. These findings suggest that blind speakers tolerate less discrepancy between actual and expected auditory feedback than sighted speakers. The study also suggests that sighted speakers have acquired more constrained somatosensory goals through the influence of visual cues perceived in face-to-face conversation, leading them to tolerate less discrepancy between expected and altered articulatory positions compared to blind speakers and thus resulting in smaller observed compensatory responses.

  14. Continuous Auditory Feedback of Eye Movements: An Exploratory Study toward Improving Oculomotor Control

    Directory of Open Access Journals (Sweden)

    Eric O. Boyer

    2017-04-01

    Full Text Available As eye movements are mostly automatic and overtly generated to attain visual goals, individuals have poor metacognitive knowledge of their own eye movements. We present an exploratory study on the effects of real-time continuous auditory feedback generated by eye movements. We considered both a tracking task and a production task in which smooth pursuit eye movements (SPEM) can be endogenously generated. In particular, we used a visual paradigm that makes it possible to generate and control SPEM in the absence of a moving visual target. We investigated whether real-time auditory feedback of eye movement dynamics might improve learning in both tasks, through a training protocol over 8 days. The results indicate that real-time sonification of eye movements can indeed modify oculomotor behavior and reinforce intrinsic oculomotor perception. Nevertheless, large inter-individual differences were observed, preventing us from reaching a strong conclusion about sensorimotor learning improvements.

  15. A Bayesian Account of Vocal Adaptation to Pitch-Shifted Auditory Feedback

    Science.gov (United States)

    Hahnloser, Richard H. R.

    2017-01-01

    Motor systems are highly adaptive. Both birds and humans compensate for synthetically induced shifts in the pitch (fundamental frequency) of auditory feedback stemming from their vocalizations. Pitch-shift compensation is partial in the sense that large shifts lead to smaller relative compensatory adjustments of vocal pitch than small shifts. Also, compensation is larger in subjects with high motor variability. To formulate a mechanistic description of these findings, we adapt a Bayesian model of error relevance. We assume that vocal-auditory feedback loops in the brain cope optimally with known sensory and motor variability. Based on measurements of motor variability, optimal compensatory responses in our model provide accurate fits to published experimental data. Optimal compensation correctly predicts sensory acuity, which has been estimated in psychophysical experiments as just-noticeable pitch differences. Our model extends the utility of Bayesian approaches to adaptive vocal behaviors. PMID:28135267
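
    A toy version of such an error-relevance model: the speaker attributes an observed pitch shift either to its own motor noise (a narrow Gaussian) or to an external cause (a broad Gaussian), and compensates only for the fraction judged self-generated. The parameterization below is a hypothetical simplification for illustration, not the paper's fitted model.

```python
import math

def gauss(x, sigma):
    """Zero-mean Gaussian density."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def compensation(shift, sigma_motor, sigma_external, prior_self=0.5):
    """Magnitude of the compensatory pitch adjustment (applied opposite in
    sign to the shift): the shift weighted by the posterior probability
    that it arose from the speaker's own motor variability rather than an
    external perturbation."""
    p_self = prior_self * gauss(shift, sigma_motor)
    p_ext = (1 - prior_self) * gauss(shift, sigma_external)
    w = p_self / (p_self + p_ext)
    return w * shift
```

    This reproduces the two qualitative findings in the abstract: relative compensation shrinks as the shift grows, and subjects with higher motor variability (larger sigma_motor) compensate more.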

  16. Auditory discrimination of voice-onset time and its relationship with reading ability.

    Science.gov (United States)

    Arciuli, Joanne; Rankine, Tracey; Monaghan, Padraic

    2010-05-01

    The perception of voice-onset time (VOT) during dichotic listening provides unique insight regarding auditory discrimination processes and, as such, an opportunity to learn more about individual differences in reading ability. We analysed the responses elicited by four VOT conditions: short-long pairs (SL), where a syllable with a short VOT was presented to the left ear and a syllable with a long VOT was presented to the right ear, as well as long-short (LS), short-short (SS), and long-long (LL) pairs. Stimuli were presented in three attention conditions, where participants were instructed to attend to either the left or right ear, or received no instruction. By around 9.5 years of age children perform similarly to adults in terms of the size and relative magnitude of the right ear advantage (REA) elicited by each of the four VOT conditions. Overall, SL pairs elicited the largest REA and LS pairs elicited a left ear advantage (LEA), reflecting stimulus-driven bottom-up processes. However, children were less able to modulate their responses according to attention condition, reflecting a lack of top-down control. Effective direction of attention to one ear or the other was related to measures of reading accuracy and comprehension, indicating that reading skill is associated with top-down control of bottom-up perceptual processes.

  17. Listening to voices: the use of phenomenology to differentiate malingered from genuine auditory verbal hallucinations.

    Science.gov (United States)

    McCarthy-Jones, Simon; Resnick, Phillip J

    2014-01-01

    The experience of hearing a voice in the absence of an appropriate external stimulus, formally termed an auditory verbal hallucination (AVH), may be malingered for reasons such as personal financial gain, or, in criminal cases, to attempt a plea of not guilty by reason of insanity. An accurate knowledge of the phenomenology of AVHs is central to assessing the veracity of claims to such experiences. We begin by demonstrating that some contemporary criminal cases still employ inaccurate conceptions of the phenomenology of AVHs to assess defendants' claims. The phenomenology of genuine, malingered, and atypical AVHs is then examined. We argue that, due to the heterogeneity of AVHs, the use of typical properties of AVHs as a yardstick against which to evaluate the veracity of a defendant's claims is likely to be less effective than the accumulation of instances of defendants endorsing statements of atypical features of AVHs. We identify steps towards the development of a formal tool for this purpose, and examine other conceptual issues pertinent to criminal cases arising from the phenomenology of AVHs.

  18. On the Role of Auditory Feedback in Robot-Assisted Movement Training after Stroke: Review of the Literature

    Directory of Open Access Journals (Sweden)

    Giulio Rosati

    2013-01-01

    reviewed. In particular, a comparative quantitative analysis over a large corpus of the recent literature suggests that the potential of auditory feedback in rehabilitation systems is currently and largely underexploited. Finally, several scenarios are proposed in which the use of auditory feedback may contribute to overcome some of the main limitations of current rehabilitation systems, in terms of user engagement, development of acute-phase and home rehabilitation devices, learning of more complex motor tasks, and improving activities of daily living.

  19. Using voice input and audio feedback to enhance the reality of a virtual experience

    Energy Technology Data Exchange (ETDEWEB)

    Miner, N.E.

    1994-04-01

    Virtual Reality (VR) is a rapidly emerging technology which allows participants to experience a virtual environment through stimulation of the participant's senses. Intuitive and natural interactions with the virtual world help to create a realistic experience. Typically, a participant is immersed in a virtual environment through the use of a 3-D viewer. Realistic, computer-generated environment models and accurate tracking of a participant's view are important factors for adding realism to a virtual experience. Stimulating a participant's sense of sound and providing a natural form of communication for interacting with the virtual world are equally important. This paper discusses the advantages and importance of incorporating voice recognition and audio feedback capabilities into a virtual world experience. Various approaches and levels of complexity are discussed. Examples of the use of voice and sound are presented through the description of a research application developed in the VR laboratory at Sandia National Laboratories.

  20. Effect of the loss of auditory feedback on segmental parameters of vowels of postlingually deafened speakers.

    Science.gov (United States)

    Schenk, Barbara S; Baumgartner, Wolf Dieter; Hamzavi, Jafar Sasan

    2003-12-01

    The most obvious and best documented changes in the speech of postlingually deafened speakers are in rate, fundamental frequency, and volume (energy). These changes are due to the lack of auditory feedback. But auditory feedback affects not only the suprasegmental parameters of speech. The aim of this study was to determine the change at the segmental level of speech in terms of vowel formants. Twenty-three postlingually deafened and 18 normally hearing speakers were recorded reading a German text. The frequencies of the first and second formants and the vowel spaces of selected vowels in word-in-context condition were compared. All first formant frequencies (F1) of the postlingually deafened speakers were significantly different from those of the normally hearing people. The values of F1 were higher for the vowels /e/ (418 ± 61 Hz compared with 359 ± 52 Hz, P=0.006) and /o/ (459 ± 58 Hz compared with 390 ± 45 Hz, P=0.0003) and lower for /a/ (765 ± 115 Hz compared with 851 ± 146 Hz, P=0.038). The second formant frequency (F2) only showed a significant increase for the vowel /e/ (2016 ± 347 Hz compared with 2279 ± 250 Hz, P=0.012). The postlingually deafened people were divided into two subgroups according to duration of deafness (shorter/longer than 10 years of deafness). There was no significant difference in formant changes between the two groups. Our report demonstrated an effect of auditory feedback also on segmental features of the speech of postlingually deafened people.

  1. Comparing Voice Self-Assessment with Auditory Perceptual Analysis in Patients with Multiple Sclerosis

    Directory of Open Access Journals (Sweden)

    Bauer, Vladimir

    2015-01-01

    Introduction Disordered voice quality could be a symptom of multiple sclerosis (MS). The impact of MS on voice-related quality of life is still controversial. Objectives The aim of this study was to compare the results of voice self-assessment with the results of expert perceptual assessment in patients with MS. Methods The research included 38 patients with relapse-remitting MS (23 women and 15 men; ages 21 to 83, mean = 44). All participants filled out a Voice Handicap Index (VHI), and their voice sample was analyzed by speech and language professionals using the Grade Roughness Breathiness Asthenia Strain scale (GRBAS). Results The patients with MS had significantly higher VHI than control group participants (mean value 16.68 ± 16.2 compared with 5.29 ± 5.5, p = 0.0001). The study established a notable level of dysphonia in 55%, roughness and breathiness in 66%, asthenia in 34%, and strain in 55% of the vocal samples. A significant correlation was established between VHI and GRBAS scores (r = 0.3693, p = 0.0225), and VHI and asthenia and strain components (r = 0.4037 and 0.3775, p = 0.012 and 0.0195, respectively). The female group showed positive and significant correlation between claims for self-assessing one's voice (pVHI) and overall GRBAS scores, and between pVHI and grade, roughness, asthenia, and strain components. No significant correlation was found for male patients (p > 0.05). Conclusion A significant number of patients with MS experienced voice problems. The VHI is a good and effective tool to assess patient self-perception of voice quality, but it may not reflect the severity of dysphonia as perceived by voice and speech professionals.

  2. Comparing Voice Self-Assessment with Auditory Perceptual Analysis in Patients with Multiple Sclerosis

    Science.gov (United States)

    Bauer, Vladimir; Aleric, Zorica; Jancic, Ervin

    2014-01-01

    Introduction Disordered voice quality could be a symptom of multiple sclerosis (MS). The impact of MS on voice-related quality of life is still controversial. Objectives The aim of this study was to compare the results of voice self-assessment with the results of expert perceptual assessment in patients with MS. Methods The research included 38 patients with relapse-remitting MS (23 women and 15 men; ages 21 to 83, mean = 44). All participants filled out a Voice Handicap Index (VHI), and their voice sample was analyzed by speech and language professionals using the Grade Roughness Breathiness Asthenia Strain scale (GRBAS). Results The patients with MS had significantly higher VHI than control group participants (mean value 16.68 ± 16.2 compared with 5.29 ± 5.5, p = 0.0001). The study established a notable level of dysphonia in 55%, roughness and breathiness in 66%, asthenia in 34%, and strain in 55% of the vocal samples. A significant correlation was established between VHI and GRBAS scores (r = 0.3693, p = 0.0225), and VHI and asthenia and strain components (r = 0.4037 and 0.3775, p = 0.012 and 0.0195, respectively). The female group showed positive and significant correlation between claims for self-assessing one's voice (pVHI) and overall GRBAS scores, and between pVHI and grade, roughness, asthenia, and strain components. No significant correlation was found for male patients (p > 0.05). Conclusion A significant number of patients with MS experienced voice problems. The VHI is a good and effective tool to assess patient self-perception of voice quality, but it may not reflect the severity of dysphonia as perceived by voice and speech professionals. PMID:25992162
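The correlations reported above (e.g., r = 0.3693 between VHI and overall GRBAS) are presumably Pearson coefficients; a minimal stdlib sketch of that computation, for illustration only:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples,
    of the kind used to relate VHI scores to GRBAS ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

Significance (the reported p-values) would additionally require a t-test on r with n - 2 degrees of freedom, which is omitted here.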

  3. Behavioural evidence of a dissociation between voice gender categorization and phoneme categorization using auditory morphed stimuli

    Directory of Open Access Journals (Sweden)

    Cyril R Pernet

    2014-01-01

    Both voice gender and speech perception rely on neuronal populations located in the peri-sylvian areas. However, whilst functional imaging studies suggest a left versus right hemisphere and anterior versus posterior dissociation between voice and speech categorization, psycholinguistic studies on talker variability suggest that these two processes (voice and speech categorization) share common mechanisms. In this study, we investigated the categorical perception of voice gender (male vs. female) and phonemes (/pa/ vs. /ta/) using the same stimulus continua generated by morphing. This allowed the investigation of behavioural differences while controlling acoustic characteristics, since the same stimuli were used in both tasks. Despite a higher acoustic dissimilarity between items during the phoneme categorization task (a male and a female voice producing the same phonemes) than the gender task (the same person producing two phonemes), results showed that speech information is processed much faster than voice information. In addition, f0 or timbre equalization did not affect RT, which disagrees with the classical psycholinguistic models in which voice information is stripped away or normalized to access phonetic content. Also, despite similar response (percentage) and perceptual (d') curves, a reverse correlation analysis on acoustic features revealed, as expected, that the formant frequencies of the consonant distinguished stimuli in the phoneme task, but that only the vowel formant frequencies distinguished stimuli in the gender task. This second set of results thus also disagrees with models postulating that the same acoustic information is used for voice and speech. Altogether these results suggest that voice gender categorization and phoneme categorization are dissociated at an early stage on the basis of different enhanced acoustic features that are diagnostic to the task at hand.

  4. Atypical Bilateral Brain Synchronization in the Early Stage of Human Voice Auditory Processing in Young Children with Autism

    Science.gov (United States)

    Kurita, Toshiharu; Kikuchi, Mitsuru; Yoshimura, Yuko; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Hirosawa, Tetsu; Furutani, Naoki; Higashida, Haruhiro; Ikeda, Takashi; Mutou, Kouhei; Asada, Minoru; Minabe, Yoshio

    2016-01-01

    Autism spectrum disorder (ASD) has been postulated to involve impaired neuronal cooperation in large-scale neural networks, including cortico-cortical interhemispheric circuitry. In the context of ASD, alterations in both peripheral and central auditory processes have also attracted a great deal of interest because these changes appear to represent pathophysiological processes; therefore, many prior studies have focused on atypical auditory responses in ASD. The auditory evoked field (AEF), recorded by magnetoencephalography, and the synchronization of these processes between right and left hemispheres was recently suggested to reflect various cognitive abilities in children. However, to date, no previous study has focused on AEF synchronization in ASD subjects. To assess global coordination across spatially distributed brain regions, the analysis of Omega complexity from multichannel neurophysiological data was proposed. Using Omega complexity analysis, we investigated the global coordination of AEFs in 3–8-year-old typically developing (TD) children (n = 50) and children with ASD (n = 50) in 50-ms time-windows. Children with ASD displayed significantly higher Omega complexities compared with TD children in the time-window of 0–50 ms, suggesting lower whole brain synchronization in the early stage of the P1m component. When we analyzed the left and right hemispheres separately, no significant differences in any time-windows were observed. These results suggest lower right-left hemispheric synchronization in children with ASD compared with TD children. Our study provides new evidence of aberrant neural synchronization in young children with ASD by investigating auditory evoked neural responses to the human voice. PMID:27074011

  5. Effects of Consensus Training on the Reliability of Auditory Perceptual Ratings of Voice Quality

    DEFF Research Database (Denmark)

    Iwarsson, Jenny; Petersen, Niels Reinholt

    2012-01-01

    a multidimensional protocol with four-point equal-appearing interval scales. The stimuli consisted of text reading by authentic dysphonic patients. The consensus training for each perceptual voice parameter included (1) definition, (2) underlying physiology, (3) presentation of carefully selected sound examples...... training, including use of a reference voice sample material, to calibrate, equalize, and stabilize the internal standards held in memory by the listeners....

  6. Vowel generalization and its relation to adaptation during perturbations of auditory feedback.

    Science.gov (United States)

    Reilly, Kevin J; Pettibone, Chelsea

    2017-08-23

    Repeated perturbations of auditory feedback during vowel production elicit changes not only in the production of the perturbed vowel (adaptation) but also in the production of nearby vowels that were not perturbed (generalization). The finding that adaptation generalizes to other, non-perturbed vowels suggests that sensorimotor representations for vowels are not independent; instead the goals for producing any one vowel may depend in part on the goals for other vowels. The present study investigated the dependence or independence of vowel representations by evaluating adaptation and generalization in two groups of speakers exposed to auditory perturbations of their first formant (F1) during different vowels. The speakers in both groups who adapted to the perturbation exhibited generalization in two non-perturbed vowels that were produced under masking noise. Correlation testing was performed to evaluate the relations between adaptation and generalization as well as between the generalization in the two non-perturbed vowels. These tests identified significant coupling between the F1 changes of adjacent vowels but not non-adjacent vowels. The pattern of correlation findings indicates that generalization was due in part to feedforward representations that are partly shared across adjacent vowels, possibly to maintain their acoustic contrast. Copyright © 2016, Journal of Neurophysiology.

  7. Psycho-physiological assessment of a prosthetic hand sensory feedback system based on an auditory display: a preliminary study

    Directory of Open Access Journals (Sweden)

    Gonzalez Jose

    2012-06-01

    Background Prosthetic hand users have to rely extensively on visual feedback, which seems to lead to a high conscious burden for the users, in order to manipulate their prosthetic devices. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods were explored mainly by looking at the performance results, not taking into account measurements of the user's mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without the usage of a sensory substitution system based on auditory feedback, and how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. Methods Ten male subjects (26 ± years old) participated in this study and were asked to come for 2 consecutive days. On the first day the experiment objective, tasks, and experiment setting were explained. Then, they completed a 30-minute guided training. On the second day each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject had to answer the NASA TLX questionnaire. Also, during the test the subject's EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. Results The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal performance and better grasping performance were obtained in the audiovisual modality. Conclusions The performance

  8. Validation of the Spanish adaptation of the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V).

    Science.gov (United States)

    Núñez-Batalla, Faustino; Morato-Galán, Marta; García-López, Isabel; Ávila-Menéndez, Arántzazu

    2015-01-01

    The Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) was developed to promote a standardised approach to evaluating and documenting auditory perceptual judgments of vocal quality. This tool was originally developed in English, and no Spanish version yet existed. The aim of this study was to develop a Spanish adaptation of the CAPE-V and to examine the reliability and empirical validity of this Spanish version. To adapt the CAPE-V protocol to the Spanish language, we proposed 6 phrases phonetically designed according to the CAPE-V requirements. Prospective instrument validation was performed. The validity of the Spanish version of the CAPE-V was examined in several ways: intra-rater reliability, inter-rater reliability, and CAPE-V versus GRBAS judgments. Inter-rater reliability coefficients for the CAPE-V ranged from 0.93 for overall severity to 0.54 for intensity; intra-rater reliability ranged from 0.98 for overall severity to 0.85 for intensity. The comparison of judgments between GRBAS and CAPE-V ranged from 0.86 for overall severity to 0.61 for breathiness. The present study supports the use of the Spanish version of the CAPE-V because of its validity and reliability. Copyright © 2014 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  9. A software module for implementing auditory and visual feedback on a video-based eye tracking system

    Science.gov (United States)

    Rosanlall, Bharat; Gertner, Izidor; Geri, George A.; Arrington, Karl F.

    2016-05-01

    We describe here the design and implementation of a software module that provides both auditory and visual feedback of the eye position measured by a commercially available eye tracking system. The present audio-visual feedback module (AVFM) serves as an extension to the Arrington Research ViewPoint EyeTracker, but it can be easily modified for use with other similar systems. Two modes of audio feedback and one mode of visual feedback are provided in reference to a circular area-of-interest (AOI). Auditory feedback can be either a click tone emitted when the user's gaze point enters or leaves the AOI, or a sinusoidal waveform with frequency inversely proportional to the distance from the gaze point to the center of the AOI. Visual feedback is in the form of a small circular light patch that is presented whenever the gaze-point is within the AOI. The AVFM processes data that are sent to a dynamic-link library by the EyeTracker. The AVFM's multithreaded implementation also allows real-time data collection (1 kHz sampling rate) and graphics processing that allow display of the current/past gaze-points as well as the AOI. The feedback provided by the AVFM described here has applications in military target acquisition and personnel training, as well as in visual experimentation, clinical research, marketing research, and sports training.
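The two gating rules described above, a tone whose frequency is inversely proportional to the gaze-to-AOI distance and a circular AOI test, can be sketched as follows; `base_hz` and `min_dist` are illustrative constants, not parameters of the AVFM:

```python
import math

def feedback_frequency(gaze_x, gaze_y, aoi_x, aoi_y, base_hz=2000.0, min_dist=1.0):
    """Map gaze-to-AOI-center distance to a sinusoid frequency.

    Frequency is inversely proportional to distance, as in the AVFM's
    second audio mode; the distance is clamped to `min_dist` to avoid
    division by zero at the AOI center.
    """
    dist = math.hypot(gaze_x - aoi_x, gaze_y - aoi_y)
    return base_hz / max(dist, min_dist)

def in_aoi(gaze_x, gaze_y, aoi_x, aoi_y, radius):
    """True when the gaze point lies inside the circular AOI (the
    condition that gates the click tone and the visual light patch)."""
    return math.hypot(gaze_x - aoi_x, gaze_y - aoi_y) <= radius
```

A real implementation would drive an audio synthesizer with `feedback_frequency` at the tracker's sampling rate; this sketch only shows the mapping itself.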

  10. Perceiving a stranger's voice as being one's own: a 'rubber voice' illusion?

    Directory of Open Access Journals (Sweden)

    Zane Z Zheng

    We describe an illusion in which a stranger's voice, when presented as the auditory concomitant of a participant's own speech, is perceived as a modified version of their own voice. When the congruence between utterance and feedback breaks down, the illusion is also broken. Compared to a baseline condition in which participants heard their own voice as feedback, hearing a stranger's voice induced robust changes in the fundamental frequency (F0) of their production. Moreover, the shift in F0 appears to be feedback dependent, since shift patterns depended reliably on the relationship between the participant's own F0 and the stranger-voice F0. The shift in F0 was evident both when the illusion was present and after it was broken, suggesting that auditory feedback from production may be used separately for self-recognition and for vocal motor control. Our findings indicate that self-recognition of voices, like other body attributes, is malleable and context dependent.

  11. A frequency-selective feedback model of auditory efferent suppression and its implications for the recognition of speech in noise.

    Science.gov (United States)

    Clark, Nicholas R; Brown, Guy J; Jürgens, Tim; Meddis, Ray

    2012-09-01

    The potential contribution of the peripheral auditory efferent system to our understanding of speech in a background of competing noise was studied using a computer model of the auditory periphery and assessed using an automatic speech recognition system. A previous study had shown that a fixed efferent attenuation applied to all channels of a multi-channel model could improve the recognition of connected digit triplets in noise [G. J. Brown, R. T. Ferry, and R. Meddis, J. Acoust. Soc. Am. 127, 943-954 (2010)]. In the current study an anatomically justified feedback loop was used to automatically regulate separate attenuation values for each auditory channel. This arrangement resulted in a further enhancement of speech recognition over fixed-attenuation conditions. Comparisons between multi-talker babble and pink noise interference conditions suggest that the benefit originates from the model's ability to modify the amount of suppression in each channel separately according to the spectral shape of the interfering sounds.
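A per-channel attenuation rule of the kind described, where each auditory channel's suppression is regulated independently from that channel's sustained level, might be sketched as follows; the threshold, slope, and cap are illustrative constants, not the model's fitted parameters:

```python
def efferent_attenuations_db(channel_levels_db, threshold_db=30.0,
                             slope=0.5, max_atten_db=20.0):
    """Compute an efferent attenuation (in dB) separately for each channel.

    Channels whose sustained level exceeds `threshold_db` are attenuated
    in proportion to the excess, capped at `max_atten_db`, so suppression
    follows the spectral shape of the interfering sound rather than being
    a single fixed value applied to all channels.
    """
    return [min(max(0.0, slope * (lvl - threshold_db)), max_atten_db)
            for lvl in channel_levels_db]
```

Applied to a pink-noise interferer, this yields more attenuation in the noise-dominated low-frequency channels and little in quiet channels, which is the behavior the study credits for the recognition benefit.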

  12. Fast negative feedback enables mammalian auditory nerve fibers to encode a wide dynamic range of sound intensities.

    Directory of Open Access Journals (Sweden)

    Mark Ospeck

    Mammalian auditory nerve fibers (ANF) are remarkable for being able to encode a 40 dB, or hundred-fold, range of sound pressure levels into their firing rate. Most of the fibers are very sensitive and raise their quiescent spike rate by a small amount for a faint sound at auditory threshold. Then as the sound intensity is increased, they slowly increase their spike rate, with some fibers going up as high as ∼300 Hz. In this way mammals are able to combine sensitivity and wide dynamic range. They are also able to discern sounds embedded within background noise. ANF receive efferent feedback, which suggests that the fibers are readjusted according to the background noise in order to maximize the information content of their auditory spike trains. Inner hair cells activate currents in the unmyelinated distal dendrites of ANF, where sound intensity is rate-coded into action potentials. We model this spike generator compartment as an attenuator that employs fast negative feedback: input current induces rapid and proportional leak currents. This way ANF are able to have a linear frequency-to-input-current (f-I) curve that has a wide dynamic range. The ANF spike generator remains very sensitive to threshold currents, but efferent feedback is able to lower its gain in response to noise.
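The attenuator idea, a leak current proportional to the input that divides the drive and so produces a linear, wide-dynamic-range f-I curve whose gain efferent feedback can lower, can be sketched as a toy model; all constants here are illustrative, not values from the paper:

```python
def spike_rate(input_current, feedback_gain=1.0,
               quiescent_hz=50.0, max_hz=300.0):
    """Toy ANF spike-generator with fast proportional negative feedback.

    A leak current of strength `feedback_gain`, proportional to the
    input, divides the effective drive: rate rises linearly from the
    quiescent rate with slope 1/(1 + gain), until it saturates at
    `max_hz`. Raising the gain (as efferent feedback might in noise)
    flattens the slope, extending the encodable input range.
    """
    effective = input_current / (1.0 + feedback_gain)  # divisive feedback
    return min(quiescent_hz + effective, max_hz)
```

With `feedback_gain=1.0` an input of 100 units maps to 100 Hz; tripling the gain maps the same input to 75 Hz, illustrating how the gain knob trades sensitivity for range.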

  13. Auditory nerve fiber representation of cues to voicing in syllable-final stop consonants

    Energy Technology Data Exchange (ETDEWEB)

    Sinex, D.G. (Boys Town National Research Hospital, Omaha, NE (United States))

    1993-09-01

    Acoustic cues to the identity of consonants such as d and t vary according to contextual factors such as the position of the consonant within a syllable. However, investigations of the neural coding of consonants have almost always used stimuli in which the consonant occurs in the syllable-initial position. The present experiments examined the peripheral neural representation of spectral and temporal cues that can distinguish between the stop consonants d and t in syllable-final position. Stimulus sets consisting of the syllables hid, hit, hud, and hut were recorded by three different talkers. During the consonant closure interval, the spectrum of d was characterized by the presence of a low-frequency voice bar. Most neurons' responses were characterized by discharge rate decreases at the beginning of the closure interval and by rate increases that marked the release of the consonant closure. Exceptions were seen in the responses of neurons with characteristic frequencies (CFs) below approximately 0.7 kHz to syllables ending in d. These neurons responded to the voice bar with discharge rates that could approach the rates elicited by the vowel. The latencies of prominent discharge rate changes were measured for all neurons and used to compute the length of the encoded closure interval. The encoded interval was clearly longer for syllables ending in t than in d. The encoded interval increased with CF for both consonants but more rapidly for t. Differences in the encoded closure interval were small for syllables with different vowels or syllables produced by different talkers. 29 refs., 10 figs.

  14. Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke

    Directory of Open Access Journals (Sweden)

    Reinkensmeyer David J

    2011-04-01

    Background Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Methods Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis, and fourteen non-impaired healthy control participants tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Results Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Conclusions Visual distraction decreased participants' effort during a standard robot-assisted movement training task. This effect was greater for

  15. Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke

    Science.gov (United States)

    2011-01-01

    Background Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Methods Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis, and fourteen non-impaired healthy control participants tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Results Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Conclusions Visual distraction decreased participants' effort during a standard robot-assisted movement training task. This effect was greater for the hemiparetic arm.
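The error-dependent beep described in the Methods can be sketched as a simple mapping from tracking error to the interval between beeps; the constants are illustrative, since the study does not report its exact mapping:

```python
def beep_interval_s(tracking_error, base_rate_hz=1.0,
                    rate_per_unit=0.5, max_rate_hz=10.0):
    """Map tracking error to the time (seconds) between feedback beeps.

    Repetition rate grows linearly with error and is capped at
    `max_rate_hz`, so larger errors produce faster beeping, matching
    the described sound-feedback condition.
    """
    rate = min(base_rate_hz + rate_per_unit * tracking_error, max_rate_hz)
    return 1.0 / rate
```

In a training loop this interval would be recomputed each time a beep is scheduled, so the beeping audibly tracks the participant's current error.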

  16. Audio Feedback to Physiotherapy Students for Viva Voce: How Effective Is "The Living Voice"?

    Science.gov (United States)

    Munro, Wendy; Hollingworth, Linda

    2014-01-01

    Assessment and feedback remains one of the categories that students are least satisfied with within the United Kingdom National Student Survey. The Student Charter promotes the use of various formats of feedback to enhance student learning. This study evaluates the use of audio MP3 as an alternative feedback mechanism to written feedback for…

  18. Utility estimation of the application of auditory-visual-tactile sense feedback in respiratory gated radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Jung Hun; Kim, Byeong Jin; Roh, Shi Won; Lee, Hyeon Chan; Jang, Hyeong Jun; Kim, Hoi Nam [Dept. of Radiation Oncology, Biomedical Engineering, Seoul St. Mary's Hospital, Seoul (Korea, Republic of); Song, Jae Hoon [Dept. of Biomedical Engineering, Seoul St. Mary's Hospital, Seoul (Korea, Republic of); Kim, Young Jae [Dept. of Radiological Technology, Gwang Yang Health College, Gwangyang (Korea, Republic of)]

    2013-03-15

    The purpose of this study was to evaluate whether gated treatment delivery time can be optimized and stable respiration maintained by guiding the patient's breathing with auditory-visual-tactile feedback. Participants' respiration was measured with the ANZAI 4D system. During 10 minutes of beam-on time we recorded, via a real-time monitor, a natural breathing signal, a monitor-guided breathing signal, a monitor- and ventilator-guided breathing signal, and a breath-hold signal. To check stability, the respiratory signals in each group were compared in terms of mean, standard deviation, variation, and beam time, and the stability of each signal was assessed from the change in deviation over the course of breathing. The analysis showed that, for all participants, the breathing signal guided by both the real-time monitor and the ventilator was the most stable and yielded the shortest time. Respiratory-gated radiation therapy was thus evaluated with and without auditory-visual-tactile feedback. The study showed that gated delivery time could be significantly improved by video feedback combined with audio-tactile assistance. This delivery technique proved feasible for limiting tumor motion during treatment delivery to a defined value for all patients while maintaining accuracy, and demonstrated the applicability of the technique in a conventional clinical schedule.

  19. Effect of tonal native language on voice fundamental frequency responses to pitch feedback perturbations during sustained vocalizations.

    Science.gov (United States)

    Liu, Hanjun; Wang, Emily Q; Chen, Zhaocong; Liu, Peng; Larson, Charles R; Huang, Dongfeng

    2010-12-01

    The purpose of this cross-language study was to examine whether the online control of voice fundamental frequency (F(0)) during vowel phonation is influenced by language experience. Native speakers of Cantonese and Mandarin, both tonal languages spoken in China, participated in the experiments. Subjects were asked to vocalize a vowel sound /u/ at their comfortable habitual F(0), during which their voice pitch was unexpectedly shifted (±50, ±100, ±200, or ±500 cents, 200 ms duration) and fed back to them instantaneously over headphones. The results showed that Cantonese speakers produced significantly smaller responses than Mandarin speakers when the stimulus magnitude varied from 200 to 500 cents. Moreover, response magnitudes decreased as stimulus magnitude increased in Cantonese speakers, a pattern not observed in Mandarin speakers. These findings suggest that online control of voice F(0) during vocalization is sensitive to language experience, and that the systematic modulation of vocal responses across stimulus magnitudes in Cantonese but not Mandarin speakers indicates this highly automatic feedback mechanism is tuned to the specific tonal system of each language.
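
    The pitch shifts above are specified in cents; a cent is 1/1200 of an octave, so a shift maps onto frequency by the standard relation sketched below (the `shift_cents` helper and the 220 Hz baseline are illustrative, not from the study).

    ```python
    def shift_cents(f0_hz, cents):
        """Apply a pitch shift given in cents to a fundamental frequency
        in Hz. 100 cents = 1 semitone; 1200 cents = 1 octave."""
        return f0_hz * 2.0 ** (cents / 1200.0)

    # A +1200-cent shift doubles the frequency; +100 cents is one semitone.
    f0 = 220.0
    assert abs(shift_cents(f0, 1200) - 440.0) < 1e-9
    assert abs(shift_cents(f0, 100) - f0 * 2 ** (1 / 12)) < 1e-9
    # The study's smallest perturbation, ±50 cents, is a quarter tone.
    up, down = shift_cents(f0, 50), shift_cents(f0, -50)
    assert up > f0 > down
    ```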

  20. The Unresponsive Partner: Roles of Social Status, Auditory Feedback, and Animacy in Coordination of Joint Music Performance

    Science.gov (United States)

    Demos, Alexander P.; Carter, Daniel J.; Wanderley, Marcelo M.; Palmer, Caroline

    2017-01-01

    We examined temporal synchronization in joint music performance to determine how social status, auditory feedback, and animacy influence interpersonal coordination. A partner’s coordination can be bidirectional (partners adapt to the actions of one another) or unidirectional (one partner adapts). According to the dynamical systems framework, bidirectional coordination should be the optimal (preferred) state during live performance. To test this, 24 skilled pianists each performed with a confederate while their coordination was measured by the asynchrony in their tone onsets. To promote social balance, half of the participants were told the confederate was a fellow participant – an equal social status. To promote social imbalance, the other half was told the confederate was an experimenter – an unequal social status. In all conditions, the confederate’s arm and finger movements were occluded from the participant’s view to allow manipulation of animacy of the confederate’s performances (live or recorded). Unbeknownst to the participants, half of the confederate’s performances were replaced with pre-recordings, forcing the participant into unidirectional coordination during performance. The other half of the confederate’s performances were live, which permitted bidirectional coordination between performers. In a final manipulation, both performers heard the auditory feedback from one or both of the performers’ parts removed at unpredictable times to disrupt their performance. Consistently larger asynchronies were observed in performances of unidirectional (recorded) than bidirectional (live) performances across all conditions. Participants who were told the confederate was an experimenter reported their synchrony as more successful than when the partner was introduced as a fellow participant. Finally, asynchronies increased as auditory feedback was removed; removal of the confederate’s part hurt coordination more than removal of the participant

  1. The Unresponsive Partner: Roles of Social Status, Auditory Feedback, and Animacy in Coordination of Joint Music Performance.

    Science.gov (United States)

    Demos, Alexander P; Carter, Daniel J; Wanderley, Marcelo M; Palmer, Caroline

    2017-01-01

    We examined temporal synchronization in joint music performance to determine how social status, auditory feedback, and animacy influence interpersonal coordination. A partner's coordination can be bidirectional (partners adapt to the actions of one another) or unidirectional (one partner adapts). According to the dynamical systems framework, bidirectional coordination should be the optimal (preferred) state during live performance. To test this, 24 skilled pianists each performed with a confederate while their coordination was measured by the asynchrony in their tone onsets. To promote social balance, half of the participants were told the confederate was a fellow participant - an equal social status. To promote social imbalance, the other half was told the confederate was an experimenter - an unequal social status. In all conditions, the confederate's arm and finger movements were occluded from the participant's view to allow manipulation of animacy of the confederate's performances (live or recorded). Unbeknownst to the participants, half of the confederate's performances were replaced with pre-recordings, forcing the participant into unidirectional coordination during performance. The other half of the confederate's performances were live, which permitted bidirectional coordination between performers. In a final manipulation, both performers heard the auditory feedback from one or both of the performers' parts removed at unpredictable times to disrupt their performance. Consistently larger asynchronies were observed in performances of unidirectional (recorded) than bidirectional (live) performances across all conditions. Participants who were told the confederate was an experimenter reported their synchrony as more successful than when the partner was introduced as a fellow participant. Finally, asynchronies increased as auditory feedback was removed; removal of the confederate's part hurt coordination more than removal of the participant's part in live
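
    The coordination measure used above is the asynchrony between the two performers' tone onsets. A minimal sketch of mean absolute asynchrony, assuming the two onset streams are already matched event-for-event (the function name and sample onset times are hypothetical):

    ```python
    def mean_absolute_asynchrony(onsets_a, onsets_b):
        """Mean absolute asynchrony (in the onsets' time unit) between two
        performers' paired tone onsets. Assumes the lists are already
        matched one-to-one; real performance data would first require
        aligning the two note streams."""
        if len(onsets_a) != len(onsets_b):
            raise ValueError("onset streams must be paired one-to-one")
        diffs = [abs(a - b) for a, b in zip(onsets_a, onsets_b)]
        return sum(diffs) / len(diffs)

    # Hypothetical onset times (seconds) for a participant and a partner.
    participant = [0.00, 0.52, 1.01, 1.49]
    partner = [0.02, 0.50, 1.05, 1.50]
    asyn = mean_absolute_asynchrony(participant, partner)
    assert abs(asyn - 0.0225) < 1e-9  # (0.02 + 0.02 + 0.04 + 0.01) / 4
    ```

    Larger values of this measure correspond to the poorer coordination the study reports for unidirectional (recorded) partners.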

  2. A do-it-yourself membrane-activated auditory feedback device for weight bearing and gait training: a case report.

    Science.gov (United States)

    Batavia, M; Gianutsos, J G; Vaccaro, A; Gold, J T

    2001-04-01

    An augmented auditory feedback device composed of a thin membrane switch, mini-buzzer, and battery is described as a modification of a previously described feedback device. The membrane switch can be customized for the patient and is designed to fit inside a patient's shoe without altering the heel height. Its appeal lies in its simplicity of construction, low cost, and ease of implementation during a patient's training for weight bearing and gait. An ever-present source of information, it provides performance-relevant cues to both patient and clinician about the occurrence, duration, and location of a force component of motor performance. The report includes suggested applications of the device, instructions to construct it, and a case report in which the device was used to improve weight bearing and gait in a cognitively healthy person with spina bifida.

  3. Finding your mate at a cocktail party: frequency separation promotes auditory stream segregation of concurrent voices in multi-species frog choruses.

    Directory of Open Access Journals (Sweden)

    Vivek Nityananda

    Full Text Available Vocal communication in crowded social environments is a difficult problem for both humans and nonhuman animals. Yet many important social behaviors require listeners to detect, recognize, and discriminate among signals in a complex acoustic milieu comprising the overlapping signals of multiple individuals, often of multiple species. Humans exploit a relatively small number of acoustic cues to segregate overlapping voices (as well as other mixtures of concurrent sounds, like polyphonic music). By comparison, we know little about how nonhuman animals are adapted to solve similar communication problems. One important cue enabling source segregation in human speech communication is that of frequency separation between concurrent voices: differences in frequency promote perceptual segregation of overlapping voices into separate "auditory streams" that can be followed through time. In this study, we show that frequency separation (ΔF) also enables frogs to segregate concurrent vocalizations, such as those routinely encountered in mixed-species breeding choruses. We presented female gray treefrogs (Hyla chrysoscelis) with a pulsed target signal (simulating an attractive conspecific call) in the presence of a continuous stream of distractor pulses (simulating an overlapping, unattractive heterospecific call). When the ΔF between target and distractor was small (e.g., ≤3 semitones), females exhibited low levels of responsiveness, indicating a failure to recognize the target as an attractive signal when the distractor had a similar frequency. Subjects became increasingly more responsive to the target, as indicated by shorter latencies for phonotaxis, as the ΔF between target and distractor increased (e.g., ΔF = 6-12 semitones). These results support the conclusion that gray treefrogs, like humans, can exploit frequency separation as a perceptual cue to segregate concurrent voices in noisy social environments. The ability of these frogs to segregate
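
    The ΔF manipulation above is expressed in semitones. A semitone is a frequency ratio of 2^(1/12), so the separation between two tones follows from the standard relation sketched below (the helper name and example frequencies are illustrative, not taken from the study):

    ```python
    import math

    def semitone_separation(f1_hz, f2_hz):
        """Frequency separation (deltaF) between two tones in semitones:
        |12 * log2(f2 / f1)|. One semitone is a ratio of 2**(1/12)."""
        return abs(12.0 * math.log2(f2_hz / f1_hz))

    # An octave apart is 12 semitones; equal frequencies are 0 apart.
    assert abs(semitone_separation(500.0, 1000.0) - 12.0) < 1e-9
    assert semitone_separation(750.0, 750.0) == 0.0
    # A hypothetical target/distractor pair exactly 6 semitones apart.
    df = semitone_separation(2500.0, 2500.0 * 2 ** 0.5)
    assert abs(df - 6.0) < 1e-9
    ```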

  4. Distúrbio de voz em professores: autorreferência, avaliação perceptiva da voz e das pregas vocais Voice disorders in teachers: self-report, auditory-perceptive assessment of voice and vocal fold assessment

    Directory of Open Access Journals (Sweden)

    Maria Fabiana Bonfim de Lima-Silva

    2012-12-01

    Full Text Available PURPOSE: To analyze the presence of voice disorders in teachers and the agreement between self-report, auditory-perceptive assessment of voice quality, and vocal fold assessment. METHODS: The subjects of this cross-sectional study were 60 teachers from two public elementary, middle and high schools. After answering a self-perception questionnaire (Condição de Produção Vocal do Professor, CPV-P) used to characterize the sample and to collect self-reports of voice disorders, the teachers underwent speech-sample recording and nasofibrolaryngoscopic examination. Three speech-language pathologist judges rated the voices with the GRBASI scale, and an otorhinolaryngologist described the vocal fold alterations found. The data were analyzed descriptively and then submitted to association tests. RESULTS: In the questionnaire, 63.3% of the participants reported having, or having had, a voice disorder. Of the total, 43.3% were diagnosed with a voice alteration and 46.7% with a vocal fold alteration. There was no association between self-report and voice assessment, nor between self-report and vocal fold assessment, with low agreement among the three evaluations. However, there was an association between the voice and vocal fold assessments, with intermediate agreement between them. CONCLUSION: Voice disorders are self-reported more often than they are confirmed by auditory-perceptive assessment of voice and vocal fold assessment. The intermediate agreement between the two assessments indicates the need to perform at least one of them when screening teachers.

  5. The predictability of frequency-altered auditory feedback changes the weighting of feedback and feedforward input for speech motor control.

    Science.gov (United States)

    Scheerer, Nichole E; Jones, Jeffery A

    2014-12-01

    Speech production requires the combined effort of a feedback control system driven by sensory feedback, and a feedforward control system driven by internal models. However, the factors that dictate the relative weighting of these feedback and feedforward control systems are unclear. In this event-related potential (ERP) study, participants produced vocalisations while being exposed to blocks of frequency-altered feedback (FAF) perturbations that were either predictable in magnitude (consistently either 50 or 100 cents) or unpredictable in magnitude (50- and 100-cent perturbations varying randomly within each vocalisation). Vocal and P1-N1-P2 ERP responses revealed decreases in the magnitude and trial-to-trial variability of vocal responses, smaller N1 amplitudes, and shorter vocal, P1 and N1 response latencies following predictable FAF perturbation magnitudes. In addition, vocal response magnitudes correlated with N1 amplitudes, vocal response latencies, and P2 latencies. This pattern of results suggests that after repeated exposure to predictable FAF perturbations, the contribution of the feedforward control system increases. Examination of the presentation order of the FAF perturbations revealed smaller compensatory responses, smaller P1 and P2 amplitudes, and shorter N1 latencies when the block of predictable 100-cent perturbations occurred prior to the block of predictable 50-cent perturbations. These results suggest that exposure to large perturbations modulates responses to subsequent perturbations of equal or smaller size. Similarly, exposure to a 100-cent perturbation prior to a 50-cent perturbation within a vocalisation decreased the magnitude of vocal and N1 responses, but increased P1 and P2 latencies. Thus, exposure to a single perturbation can affect responses to subsequent perturbations.

  6. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation.

    Science.gov (United States)

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico
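
    The long-range temporal correlations (LRTC) discussed in this record are commonly quantified with detrended fluctuation analysis (DFA), whose scaling exponent is near 0.5 for uncorrelated fluctuations and clearly above 0.5 for long-range-dependent ones. The sketch below is a minimal textbook DFA, not the authors' exact analysis pipeline.

    ```python
    import numpy as np

    def dfa_exponent(x, scales=(4, 8, 16, 32, 64)):
        """Estimate the DFA scaling exponent alpha of a 1-D series:
        integrate the mean-subtracted series, detrend it linearly in
        windows of each scale, and fit the log-log slope of RMS
        fluctuation versus window size."""
        y = np.cumsum(x - np.mean(x))  # integrated profile
        flucts = []
        for s in scales:
            n_seg = len(y) // s
            segs = y[: n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            # Detrend each segment with a least-squares line, take RMS.
            rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2))
                   for seg in segs]
            flucts.append(np.mean(rms))
        alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
        return alpha

    rng = np.random.default_rng(0)
    white = rng.normal(size=4096)            # uncorrelated timing noise
    walk = np.cumsum(rng.normal(size=4096))  # strongly persistent series
    assert 0.35 < dfa_exponent(white) < 0.65  # near 0.5: random
    assert dfa_exponent(walk) > 1.2           # near 1.5: long-range dependent
    ```

    In the study's terms, the tremor-affected performance with DBS OFF would behave like the white-noise case, while performances with LRTC would show an exponent well above 0.5.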

  7. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation.

    Directory of Open Access Journals (Sweden)

    Maria eHerrojo Ruiz

    2014-09-01

    Full Text Available Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations are the first to demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN

  8. Hearing voices: does it give your patient a headache? A case of auditory hallucinations as acoustic aura in migraine

    Directory of Open Access Journals (Sweden)

    Van der Feltz-Cornelis CM

    2012-03-01

    Full Text Available Christina M van der Feltz-Cornelis (Clinical Centre for Body, Mind and Health, GGz Breburg, Tilburg; Faculty of Social and Behavioral Sciences, Tilburg University, Tilburg; Trimbos Instituut, Utrecht, The Netherlands), Henk Biemans and Jan Timmer (Clinical Centre for Body, Mind and Health, GGz Breburg, Tilburg, The Netherlands). Objective: Auditory hallucinations are generally considered to be a psychotic symptom. However, they occur without other psychotic symptoms in a substantial number of cases in the general population and can cause considerable individual distress because of the supposed association with schizophrenia. We describe a case of nonpsychotic auditory hallucinations occurring in the context of migraine. Method: Case report and literature review. Results: A 40-year-old man presented with imperative auditory hallucinations that caused depressive and anxiety symptoms. He also reported migraine with visual aura, which started at the same time as the auditory hallucinations. The auditory hallucinations occurred in the context of nocturnal migraine attacks, preceding them as aura. No psychotic disorder was present. After treatment of the migraine with propranolol 40 mg twice daily, explanation of the etiology of the hallucinations, and mirtazapine 45 mg daily, the migraine subsided and no further hallucinations occurred. The patient recovered. Discussion: Visual auras have been described in migraine and occur quite often. Auditory hallucinations as aura in migraine have been described in children without psychosis, but this is the first case describing auditory hallucinations without psychosis as aura in migraine in an adult. DSM-IV lacks an appropriate category for this kind of hallucination. Conclusion: Psychiatrists should consider migraine with acoustic aura as a possible etiological factor in patients without further psychotic symptoms presenting with auditory hallucinations, and they should ask about headache symptoms when taking the history. Prognosis may be

  9. Self-Generated Auditory Feedback as a Cue to Support Rhythmic Motor Stability

    Directory of Open Access Journals (Sweden)

    Gopher Daniel

    2011-12-01

    Full Text Available A goal of the SKILLS project is to develop Virtual Reality (VR)-based training simulators for different application domains, one of which is juggling. Within this context the value of multimodal VR environments for skill acquisition is investigated. In this study, we investigated whether it was necessary to render the sounds of virtual balls hitting virtual hands within the juggling training simulator. First, we recorded sounds at the jugglers' ears and found the sound of ball hitting hands to be audible. Second, we asked 24 jugglers to juggle under normal conditions (Audible) or while listening to pink noise intended to mask the juggling sounds (Inaudible). We found that although the jugglers themselves reported no difference in their juggling across these two conditions, external juggling experts rated rhythmic stability worse in the Inaudible condition than in the Audible condition. This result suggests that auditory information should be rendered in the VR juggling training simulator.

  10. Design of standard voice sample text for subjective auditory perceptual evaluation of voice disorders%嗓音障碍主观听感知评估中标准化朗读文本的设计

    Institute of Scientific and Technical Information of China (English)

    李进让; 孙雁雁; 徐文

    2010-01-01

    Objective: To design a speech voice sample text containing all the phonemes of Mandarin for subjective auditory-perceptual evaluation of voice disorders. Methods: The design principles were that the short text should include the 21 initials and 39 finals, thereby covering all the phonemes of Mandarin, and that it should carry a coherent meaning. Results: A short text of 155 Chinese characters was produced, covering 21 initials and 38 finals (the final (e) was excluded because it is rarely used in Mandarin). The text also contained 17 light tones and one "erhua". By the method of sample-population similarity, the constituent ratios of the initials and finals in the short text were statistically similar to those in Mandarin as reported by the Institute of Acoustics, Chinese Academy of Sciences (r = 0.742, P < 0.001 and r = 0.844, P < 0.001, respectively), whereas the constituent ratios of the tones were not (r = 0.731, P > 0.05). Conclusions: A speech voice sample text containing all the phonemes of Mandarin was produced, with initial and final constituent ratios similar to those of Mandarin. Its value for subjective auditory-perceptual evaluation of voice disorders needs further study.
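
    The similarity test above compares constituent ratios (occurrence proportions) of phonemic units between the sample text and the language at large via a correlation coefficient r. A minimal sketch, with made-up initials and population ratios standing in for the real phoneme statistics:

    ```python
    import math
    from collections import Counter

    def constituent_ratios(units):
        """Constituent ratio (proportion of occurrences) of each phonemic
        unit in a sequence."""
        counts = Counter(units)
        total = sum(counts.values())
        return {u: c / total for u, c in counts.items()}

    def pearson_r(xs, ys):
        """Plain Pearson correlation, the similarity statistic (r)
        reported in the abstract."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical initials from a sample text vs. population ratios.
    sample = constituent_ratios(list("bbppmmmfdtnl"))
    population = {"b": 0.18, "p": 0.15, "m": 0.26, "f": 0.08,
                  "d": 0.12, "t": 0.09, "n": 0.06, "l": 0.06}
    keys = sorted(population)
    r = pearson_r([sample.get(k, 0.0) for k in keys],
                  [population[k] for k in keys])
    assert -1.0 <= r <= 1.0
    ```

    A high, significant r between the sample and population ratios is what the study reports for initials and finals but not for tones.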

  11. Dynamics of vocalization-induced modulation of auditory cortical activity at mid-utterance.

    Directory of Open Access Journals (Sweden)

    Zhaocong Chen

    Full Text Available BACKGROUND: Recent research has addressed the suppression of cortical sensory responses to altered auditory feedback that occurs at utterance onset regarding speech. However, there is reason to assume that the mechanisms underlying sensorimotor processing at mid-utterance are different than those involved in sensorimotor control at utterance onset. The present study attempted to examine the dynamics of event-related potentials (ERPs) to different acoustic versions of auditory feedback at mid-utterance. METHODOLOGY/PRINCIPAL FINDINGS: Subjects produced a vowel sound while hearing their pitch-shifted voice (100 cents), a sum of their vocalization and pure tones, or a sum of their vocalization and white noise at mid-utterance via headphones. Subjects also passively listened to playback of what they heard during active vocalization. Cortical ERPs were recorded in response to different acoustic versions of feedback changes during both active vocalization and passive listening. The results showed that, relative to passive listening, active vocalization yielded enhanced P2 responses to the 100 cents pitch shifts, whereas suppression effects of P2 responses were observed when voice auditory feedback was distorted by pure tones or white noise. CONCLUSION/SIGNIFICANCE: The present findings, for the first time, demonstrate a dynamic modulation of cortical activity as a function of the quality of acoustic feedback at mid-utterance, suggesting that auditory cortical responses can be enhanced or suppressed to distinguish self-produced speech from externally-produced sounds.

  12. Feedback.

    Science.gov (United States)

    Richardson, Barbara K

    2004-12-01

    The emergency department provides a rich environment for diverse patient encounters, rapid clinical decision making, and opportunities to hone procedural skills. Well-prepared faculty can utilize this environment to teach residents and medical students and gain institutional recognition for their incomparable role and teamwork. Giving effective feedback is an essential skill for all teaching faculty. Feedback is ongoing appraisal of performance based on direct observation aimed at changing or sustaining a behavior. Tips from the literature and the author's experience are reviewed to provide formats for feedback, review of objectives, and elements of professionalism and how to deal with poorly performing students. Although the following examples pertain to medical student education, these techniques are applicable to the education of all adult learners, including residents and colleagues. Specific examples of redirection and reflection are offered, and pitfalls are reviewed. Suggestions for streamlining verbal and written feedback and obtaining feedback from others in a fast-paced environment are given. Ideas for further individual and group faculty development are presented.

  13. A linguistic comparison between auditory verbal hallucinations in patients with a psychotic disorder and in nonpsychotic individuals: Not just what the voices say, but how they say it.

    Science.gov (United States)

    de Boer, J N; Heringa, S M; van Dellen, E; Wijnen, F N K; Sommer, I E C

    2016-11-01

    Auditory verbal hallucinations (AVH) in psychotic patients are associated with activation of right hemisphere language areas, although this hemisphere is non-dominant in most people. Language generated in the right hemisphere can be observed in aphasia patients with left hemisphere damage. It is called "automatic speech", characterized by low syntactic complexity and negative emotional valence. AVH in nonpsychotic individuals, by contrast, predominantly have a neutral or positive emotional content and may be less dependent on right hemisphere activity. We hypothesize that right hemisphere language characteristics can be observed in the language of AVH, differentiating psychotic from nonpsychotic individuals. 17 patients with a psychotic disorder and 19 nonpsychotic individuals were instructed to repeat their AVH verbatim directly upon hearing them. Responses were recorded, transcribed and analyzed for total words, mean length of utterance, proportion of grammatical utterances, proportion of negations, literal and thematic perseverations, abuses, type-token ratio, embeddings, verb complexity, noun-verb ratio, and open-closed class ratio. Linguistic features of AVH overall differed between groups F(13,24)=3.920, p=0.002; Pillai's Trace 0.680. AVH of psychotic patients compared with AVH of nonpsychotic individuals had a shorter mean length of utterance, lower verb complexity, and more verbal abuses and perseverations (all p<0.05). Other features were similar between groups. AVH of psychotic patients showed lower syntactic complexity and higher levels of repetition and abuses than AVH of nonpsychotic individuals. These differences are in line with a stronger involvement of the right hemisphere in the origination of AVH in patients than in nonpsychotic voice hearers. Copyright © 2016 Elsevier Inc. All rights reserved.
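
    One of the lexical measures listed above, the type-token ratio, is simply the number of distinct words over the total number of words; a low ratio flags the repetitiveness (perseveration) the study reports in patients' AVH. A minimal sketch with invented utterances (whitespace tokenization is an assumption; the study's actual transcription and analysis conventions are not reproduced here):

    ```python
    def type_token_ratio(utterance):
        """Type-token ratio: distinct words / total words of an
        utterance, after lowercasing and whitespace tokenization."""
        tokens = utterance.lower().split()
        return len(set(tokens)) / len(tokens) if tokens else 0.0

    # A highly perseverative utterance yields a low ratio.
    assert type_token_ratio("go away go away go away") == 1 / 3
    # An utterance with no repeated words yields a ratio of 1.0.
    assert type_token_ratio("you should not be here") == 1.0
    ```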

  14. Combining video instruction followed by voice feedback in a self-learning station for acquisition of Basic Life Support skills: a randomised non-inferiority trial.

    Science.gov (United States)

    Mpotos, Nicolas; Lemoyne, Sabine; Calle, Paul A; Deschepper, Ellen; Valcke, Martin; Monsieurs, Koenraad G

    2011-07-01

    Current computerised self-learning (SL) stations for Basic Life Support (BLS) are an alternative to instructor-led (IL) refresher training but are not intended for initial skill acquisition. We developed a SL station for initial skill acquisition and evaluated its efficacy. In a non-inferiority trial, 120 pharmacy students were randomised to IL small group training or individual training in a SL station. In the IL group, instructors demonstrated the skills and provided feedback. In the SL group a shortened Mini Anne™ video, to acquire the skills, was followed by Resusci Anne Skills Station™ software (both Laerdal, Norway) with voice feedback for further refinement. Testing was performed individually, respecting a seven-week interval after training for every student. One hundred and seventeen participants were assessed (three drop-outs). The proportion of students achieving a mean compression depth of 40-50 mm was 24/56 (43%) IL vs. 31/61 (51%) SL, and 39/56 (70%) IL vs. 48/61 (79%) SL for a mean compression depth ≥ 40 mm. A compression rate of 80-120/min was achieved in 49/56 (88%) IL vs. 57/61 (93%) SL, and any incomplete release (≥ 5 mm) was observed in 31/56 (55%) IL and 35/61 (57%) SL. Adequate mean ventilation volume (400-1000 ml) was achieved in 29/56 (52%) IL vs. 36/61 (59%) SL. Non-inferiority was confirmed for depth and, although inconclusive, the other areas came close to demonstrating it. Compression skills acquired in a SL station combining video instruction with training using voice feedback were not inferior to IL training. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
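
    The outcome windows above (depth 40-50 mm, rate 80-120/min, ventilation volume 400-1000 ml) amount to range checks on each student's averaged metrics; a sketch with hypothetical student data (the function name, thresholds bundling, and sample values are illustrative, not the trial's analysis code):

    ```python
    def meets_targets(mean_depth_mm, mean_rate_per_min, mean_volume_ml):
        """Check one student's averaged CPR metrics against the windows
        used in the study: depth 40-50 mm, rate 80-120/min, ventilation
        volume 400-1000 ml."""
        return {
            "depth": 40 <= mean_depth_mm <= 50,
            "rate": 80 <= mean_rate_per_min <= 120,
            "volume": 400 <= mean_volume_ml <= 1000,
        }

    # Hypothetical students: (depth mm, rate /min, volume ml).
    students = [(44, 105, 620), (38, 118, 450), (47, 126, 900)]
    passed_depth = sum(meets_targets(*s)["depth"] for s in students)
    assert passed_depth == 2  # 44 and 47 mm are inside 40-50 mm
    ```

    Proportions such as 24/56 achieving the depth window are then just this count divided by the group size.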

  15. Voice Simulation in Nursing Education.

    Science.gov (United States)

    Kepler, Britney B; Lee, Heeyoung; Kane, Irene; Mitchell, Ann M

    2016-01-01

    The goal of this study was to improve prelicensure nursing students' attitudes toward and self-efficacy related to delivering nursing care to patients with auditory hallucinations. Based on the Hearing Voices That Are Distressing curriculum, 87 participants were instructed to complete 3 tasks while wearing headphones delivering distressing voices. Comparing presimulation and postsimulation results, this study suggests that the simulation significantly improved attitudes toward patients with auditory hallucinations; however, self-efficacy related to caring for these patients remained largely unchanged.

  16. Rehabilitation of the Upper Extremity after Stroke: A Case Series Evaluating REO Therapy and an Auditory Sensor Feedback for Trunk Control

    Directory of Open Access Journals (Sweden)

    G. Thielman

    2012-01-01

    Full Text Available Background and Purpose. Training in the virtual environment is being established as a new approach for post-stroke neurorehabilitation; specifically, ReoTherapy (REO), a robot-assisted virtual training device. Trunk stabilization strapping has been part of the concept with this device, and literature is lacking to support it for long-term functional changes in individuals after stroke. The purpose of this case series was to assess the feasibility of auditory trunk sensor feedback during REO therapy in moderately to severely impaired individuals after stroke. Case Description. Using an open-label crossover comparison design, 3 chronic stroke subjects were trained for 12 sessions over six weeks on either the REO or the control condition of task-related training (TRT); after a washout period of 4 weeks, the alternative therapy was given. Outcomes. With both interventions, clinically relevant improvements were found in measures of body function and structure, as well as of activity, for two participants. Providing auditory feedback for trunk control during REO training was found to be feasible. Discussion. The degree of change varied by protocol and may depend on the appropriateness of the technique chosen, as well as on the patients' impaired arm motor control.

  17. A Robotic Voice Simulator and the Interactive Training for Hearing-Impaired People

    Directory of Open Access Journals (Sweden)

    Hideyuki Sawada

    2008-01-01

    A talking and singing robot that adaptively learns vocalization skills by means of an auditory-feedback learning algorithm is being developed. The robot consists of motor-controlled vocal organs (vocal cords, a vocal tract, and a nasal cavity) that generate a natural voice imitating human vocalization. In this study, the robot is applied to a speech-articulation training system for hearing-impaired people, because the robot can reproduce their vocalizations and show them how to improve them to produce clearer speech. The paper briefly introduces the mechanical construction of the robot and describes how it autonomously acquires vocalization skill through auditory-feedback learning while listening to human speech. The training system is then described, together with an evaluation of the speech training by hearing-impaired people.

  18. Overreliance on auditory feedback may lead to sound/syllable repetitions: simulations of stuttering and fluency-inducing conditions with a neural model of speech production

    Science.gov (United States)

    Civier, Oren; Tasko, Stephen M.; Guenther, Frank H.

    2010-01-01

    This paper investigates the hypothesis that stuttering may result in part from impaired readout of feedforward control of speech, which forces persons who stutter (PWS) to produce speech with a motor strategy that is weighted too much toward auditory feedback control. Over-reliance on feedback control leads to production errors which, if they grow large enough, can cause the motor system to “reset” and repeat the current syllable. This hypothesis is investigated using computer simulations of a “neurally impaired” version of the DIVA model, a neural network model of speech acquisition and production. The model’s outputs are compared to published acoustic data from PWS’ fluent speech, and to combined acoustic and articulatory movement data collected from the dysfluent speech of one PWS. The simulations mimic the errors observed in the PWS subject’s speech, as well as the repairs of these errors. Additional simulations were able to account for enhancements of fluency gained by slowed/prolonged speech and masking noise. Together these results support the hypothesis that many dysfluencies in stuttering are due to a bias away from feedforward control and toward feedback control. PMID:20831971
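
The feedforward/feedback trade-off described above can be caricatured in a few lines of code. This is a toy sketch, not the DIVA model: all parameters (delay, gain, threshold) are hypothetical, and the "reset" merely stands in for a sound/syllable repetition.

```python
# Toy 1-D articulator tracking a target under mixed control.
# Feedback arrives with a delay; weighting the stale feedback term too
# heavily lets error build until it crosses a "reset" threshold,
# mimicking a syllable repetition. All values are illustrative.

def simulate(feedback_weight, steps=200, delay=8, gain=0.5, threshold=0.6):
    target = 1.0              # articulatory goal for the "syllable"
    pos = 0.0
    history = [0.0] * delay   # delayed observations of position
    resets = 0
    for _ in range(steps):
        delayed_pos = history.pop(0)
        history.append(pos)
        feedforward = gain * (target - pos)          # uses current state
        feedback = gain * (target - delayed_pos)     # uses stale state
        pos += (1 - feedback_weight) * feedforward + feedback_weight * feedback
        if pos - target > threshold:  # large overshoot past the goal
            pos = 0.0                 # "reset": restart the syllable
            resets += 1
    return resets
```

With `feedback_weight=0.0` (pure feedforward) the articulator settles smoothly and never resets; with `feedback_weight=1.0` the delayed feedback drives overshoot and repeated resets, loosely analogous to the repetitions the model produces.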

  19. Sentence Comprehension in Adolescents with Down Syndrome and Typically Developing Children: Role of Sentence Voice, Visual Context, and Auditory-Verbal Short-Term Memory.

    Science.gov (United States)

    Miolo, Giuliana; Chapman, Robins S.; Sindberg, Heidi A.

    2005-01-01

    The authors evaluated the roles of auditory-verbal short-term memory, visual short-term memory, and group membership in predicting language comprehension, as measured by an experimental sentence comprehension task (SCT) and the Test for Auditory Comprehension of Language--Third Edition (TACL-3; E. Carrow-Woolfolk, 1999) in 38 participants: 19 with…

  20. The effect of background music in auditory health persuasion

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    2013-01-01

    In auditory health persuasion, threatening information regarding health is communicated by voice only. One relevant context of auditory persuasion is the addition of background music. There are different mechanisms through which background music might influence persuasion, for example through mood…

  2. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  3. Hear You Later Alligator: How delayed auditory feedback affects non-musically trained people’s strumming

    DEFF Research Database (Denmark)

    Larsen, Jeppe Veirum; Knoche, Hendrik

    2017-01-01

    of an actuated guitar to a metronome at 60 bpm and 120 bpm. The long DAF matched a subdivision of the overall tempo. We compared their performance using two different input devices with feedback before or on activation. While 250 ms DAF hardly affected musically trained participants, non-musically trained…
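
Delayed auditory feedback of the kind studied here is, in signal-processing terms, a fixed delay line. A minimal sketch, assuming a 44.1 kHz sample rate (the function name and parameters are illustrative, not from the study):

```python
# Minimal delay line such as one used to create delayed auditory feedback
# (DAF): at 44.1 kHz, a 250 ms delay is 11,025 samples. Each input sample
# is returned again after the fixed delay; the buffer starts as silence.

from collections import deque

def make_daf(delay_ms, sample_rate=44100):
    n = int(sample_rate * delay_ms / 1000)   # delay expressed in samples
    buf = deque([0.0] * n, maxlen=n)         # pre-filled with silence
    def process(sample):
        out = buf[0]         # oldest sample: heard delay_ms later
        buf.append(sample)   # deque drops buf[0] automatically at maxlen
        return out
    return process, n
```

Feeding an impulse into `process` returns silence for the first `n` calls, after which the impulse re-emerges, i.e. the performer hears each note 250 ms late.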

  4. Speaker's voice as a memory cue.

    Science.gov (United States)

    Campeanu, Sandra; Craik, Fergus I M; Alain, Claude

    2015-02-01

    Speaker's voice occupies a central role as the cornerstone of auditory social interaction. Here, we review the evidence suggesting that speaker's voice constitutes an integral context cue in auditory memory. Investigation into the nature of voice representation as a memory cue is essential to understanding auditory memory and the neural correlates which underlie it. Evidence from behavioral and electrophysiological studies suggests that while specific voice reinstatement (i.e., same speaker) often appears to facilitate word memory even without attention to voice at study, the presence of a partial benefit of similar voices between study and test is less clear. In terms of explicit memory experiments utilizing unfamiliar voices, encoding methods appear to play a pivotal role. Voice congruency effects have been found when voice is specifically attended at study (i.e., when relatively shallow, perceptual encoding takes place). These behavioral findings coincide with neural indices of memory performance such as the parietal old/new recollection effect and the late right frontal effect. The former distinguishes between correctly identified old words and correctly identified new words, and reflects voice congruency only when voice is attended at study. Characterization of the latter likely depends upon voice memory, rather than word memory. There is also evidence to suggest that voice effects can be found in implicit memory paradigms. However, the presence of voice effects appears to depend greatly on the task employed. Using a word identification task, perceptual similarity between study and test conditions is, as for explicit memory tests, crucial. In addition, the type of noise employed appears to have a differential effect. While voice effects have been observed when white noise is used at both study and test, using multi-talker babble does not confer the same results. In terms of neuroimaging research modulations, characterization of an implicit memory effect

  5. Voice processing in monkey and human brains.

    Science.gov (United States)

    Scott, Sophie K

    2008-09-01

    Studies in humans have indicated that the anterior superior temporal sulcus has an important role in the processing of information about human voices, especially the identification of talkers from their voice. A new study using functional magnetic resonance imaging (fMRI) with macaques provides strong evidence that anterior auditory fields, part of the auditory 'what' pathway, preferentially respond to changes in the identity of conspecifics, rather than specific vocalizations from the same individual.

  6. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

    Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is, most generally, perception of a wrong stimulus or, more precisely, perception in the absence of a stimulus. Here we discuss four definitions of hallucination: 1. perceiving a stimulus without the presence of any subject; 2. hallucination proper: wrong perceptions that are not falsifications of real perception, although they manifest as a new subject and occur alongside, and synchronously with, a real perception; 3. hallucination as an out-of-body perception which has no accordance with a real subject; 4. in a stricter sense, perceptions in a conscious and awake state in the absence of external stimuli which have the qualities of real perception, in that they are vivid, substantial, and located in external objective space. We discuss these in detail here.

  7. Auditory efferent feedback system deficits precede age-related hearing loss: contralateral suppression of otoacoustic emissions in mice.

    Science.gov (United States)

    Zhu, Xiaoxia; Vasilyeva, Olga N; Kim, Sunghee; Jacobson, Michael; Romney, Joshua; Waterman, Marjorie S; Tuttle, David; Frisina, Robert D

    2007-08-10

    The C57BL/6J mouse has been a useful model of presbycusis, as it displays an accelerated age-related peripheral hearing loss. The medial olivocochlear efferent feedback (MOC) system plays a role in suppressing cochlear outer hair cell (OHC) responses, particularly for background noise. Neurons of the MOC system are located in the superior olivary complex, particularly in the dorsomedial periolivary nucleus (DMPO) and in the ventral nucleus of the trapezoid body (VNTB). We previously discovered that the function of the MOC system declines with age prior to OHC degeneration, as measured by contralateral suppression (CS) of distortion product otoacoustic emissions (DPOAEs) in humans and CBA mice. The present study aimed to determine the time course of age changes in MOC function in C57s. DPOAE amplitudes and CS of DPOAEs were collected for C57s from 6 to 40 weeks of age. MOC responses were observed at 6 weeks but were gone at middle (15-30 kHz) and high (30-45 kHz) frequencies by 8 weeks. Quantitative stereological analyses of Nissl sections revealed smaller neurons in the DMPO and VNTB of young adult C57s compared with CBAs. These findings suggest that reduced neuron size may underlie part of the noteworthy rapid decline of the C57 efferent system. In conclusion, the C57 mouse has MOC function at 6 weeks, but it declines quickly, preceding the progression of peripheral age-related sensitivity deficits and hearing loss in this mouse strain.

  8. Comparing acoustic and perceptual voice parameters in female teachers based on voice complaints

    Directory of Open Access Journals (Sweden)

    Maryam Faghani Abukeili

    2014-04-01

    Background and Aim: Teachers are a large group of professional voice users; several risk factors and heavy voice demands cause various voice complaints among them. As voice is multidimensional, the aim of this study was acoustic and perceptual measurement of teachers' voices, comparing the findings between two groups with many and few voice complaints. Methods: Sixty female high-school teachers in Sari, north of Iran, were chosen by convenience sampling to participate in this cross-sectional study. According to a voice-complaints questionnaire, 21 subjects were placed in the few-voice-complaints group and 31 in the many-voice-complaints group. After a working day, subjects completed a voice self-assessment questionnaire. Teachers' voices were also recorded during three tasks: sustained vowels /a/ and /i/, text reading, and conversational speech. Acoustic parameters were analyzed with Praat software, and 2 speech-language pathologists performed auditory-perceptual assessment with the GRBAS (Grade, Roughness, Breathiness, Asthenia, Strain) scale. Results: Comparison of the voice self-assessments between the two groups demonstrated a statistically significant difference (p<0.05); however, the acoustic and auditory-perceptual measurements did not show a significant difference. Conclusion: Despite prevalent voice problems in teachers, conditions vary in terms of complaints and assessment methods. In this study, a remarkable deviation was documented only in the client-based assessments of the many-voice-complaints group in comparison with the few-complaints group, which is probably related to differences in individuals' perception of voice problems between the two groups. These results support paying attention to self-assessments in the clinical evaluation of voice problems.
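
Acoustic analysis in Praat typically includes perturbation measures such as jitter (local): the mean absolute difference between consecutive glottal periods divided by the mean period. A pure-Python sketch on synthetic period data (the study itself used Praat; this helper is illustrative only):

```python
# Jitter (local): mean absolute difference between consecutive pitch
# periods, divided by the mean period. Praat reports this as a percentage;
# here it is returned as a plain ratio. Periods are in seconds.

def jitter_local(periods):
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

# A perfectly periodic voice has zero jitter:
print(jitter_local([0.005, 0.005, 0.005]))  # -> 0.0
```

Cycle-to-cycle variation raises the value; dysphonic voices typically show higher jitter than healthy ones.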

  9. Developmental sex-specific change in auditory-vocal integration: ERP evidence in children.

    Science.gov (United States)

    Liu, Peng; Chen, Zhaocong; Jones, Jeffery A; Wang, Emily Q; Chen, Shaozhen; Huang, Dongfeng; Liu, Hanjun

    2013-03-01

    The present event-related potential (ERP) study examined the developmental mechanisms of auditory-vocal integration in normally developing children. Neurophysiological responses to altered auditory feedback were recorded to determine whether they are affected by age and sex. Forty-two children were pairwise matched for sex and were divided into a group of younger (10-12 years) and a group of older (13-15 years) children. Twenty healthy young adults (20-25 years) also participated in the experiment. ERPs were recorded from the participants, who heard their voice pitch feedback unexpectedly shifted -50, -100, or -200 cents during sustained vocalization. P1 amplitudes became smaller as subjects increased in age from childhood to adulthood, and males produced larger N1 amplitudes than females. An age-related decrease in the P1-N1 latencies was also found: latencies were shorter in young adults than in school children. A complex age-by-sex interaction was found for the P2 component, where an age-related increase in P2 amplitudes existed only in girls, and boys produced longer P2 latencies than girls but only in the older children. These findings demonstrate that neurophysiological responses to pitch errors in voice auditory feedback depend on age and sex in normally developing children. The present study provides evidence that there is a sex-specific development of the neural mechanisms involved in auditory-vocal integration. Copyright © 2012 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
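
Pitch-shift magnitudes in these paradigms are specified in cents, a logarithmic unit in which 100 cents equal one semitone and 1200 cents one octave; a shift of c cents maps frequency f to f · 2^(c/1200). A small illustrative helper (names are hypothetical, not from the study):

```python
# Convert a pitch shift in cents to the resulting frequency in Hz:
#   f_shifted = f * 2 ** (cents / 1200)
# 100 cents = 1 semitone; negative values shift the pitch downward.

def shift_hz(f0, cents):
    return f0 * 2 ** (cents / 1200)

# A -200 cent shift of a 220 Hz voice lowers it by two semitones:
print(round(shift_hz(220.0, -200), 1))  # -> 196.0
```

So the -50, -100, and -200 cent stimuli above correspond to downward shifts of a quarter tone, a semitone, and a whole tone, respectively.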

  10. The auditory hallucination: a phenomenological survey.

    Science.gov (United States)

    Nayani, T H; David, A S

    1996-01-01

    A comprehensive semi-structured questionnaire was administered to 100 psychotic patients who had experienced auditory hallucinations. The aim was to extend the phenomenology of the hallucination into areas of both form and content and also to guide future theoretical development. All subjects heard 'voices' talking to or about them. The location of the voice, its characteristics and the nature of address were described. Precipitants and alleviating factors plus the effect of the hallucinations on the sufferer were identified. Other hallucinatory experiences, thought insertion and insight were examined for their inter-relationships. A pattern emerged of increasing complexity of the auditory-verbal hallucination over time by a process of accretion, with the addition of more voices and extended dialogues, and more intimacy between subject and voice. Such evolution seemed to relate to the lessening of distress and improved coping. These findings should inform both neurological and cognitive accounts of the pathogenesis of auditory hallucinations in psychotic disorders.

  11. The Performing Voice of the Audiobook

    DEFF Research Database (Denmark)

    Pedersen, Birgitte Stougaard; Have, Iben

    2014-01-01

    will be based on a reception-aesthetic and phenomenological approach, the latter as presented by Don Ihde in his book Listening and Voice: Phenomenologies of Sound, and my analytical sketches will be related to theoretical statements concerning the understanding of voice and media (Cavarero, Dolar, LaBelle, Neumark). Finally, the article will discuss the specific artistic combination and our auditory experience of mediated human voices and sculpturally projected faces in an art-museum context, under the general conditions of the societal panophonia of disembodied and mediated voices, as promoted by Steven…

  12. Voice Disorders

    Science.gov (United States)

    Voice is the sound made by air passing from your lungs through your larynx, or voice box. In your larynx are your vocal cords, ... to make sound. For most of us, our voices play a big part in who we are, ...

  13. Every Voice

    Science.gov (United States)

    Patrick, Penny

    2008-01-01

    This article discusses how the author develops an approach that allows her students, who are part of the marginalized population, to learn the power of their own voices--not just their writing voices, but their oral voices as well. The author calls it "TWIST": Thoughts, Writing folder, Inquiring mind, Supplies, and Teamwork. It is where…

  15. Voice restoration

    NARCIS (Netherlands)

    Hilgers, F.J.M.; Balm, A.J.M.; van den Brekel, M.W.M.; Tan, I.B.; Remacle, M.; Eckel, H.E.

    2010-01-01

    Surgical prosthetic voice restoration is the best possible option for patients to regain oral communication after total laryngectomy. It is considered to be the present "gold standard" for voice rehabilitation of laryngectomized individuals. Surgical prosthetic voice restoration, in essence, is alwa

  16. Avaliação perceptivo-auditiva e fatores associados à alteração vocal em professores Auditory vocal analysis and factors associated with voice disorders among teachers

    Directory of Open Access Journals (Sweden)

    Albanita Gomes da Costa de Ceballos

    2011-06-01

    the city of Salvador, Bahia. Teachers answered a questionnaire and were submitted to auditory vocal analysis. The GRBAS was used for the diagnosis of vocal disorders. RESULTS: The study population comprised 82.8% women; the teachers had an average age of 40.7 years, mostly held higher-education degrees (88.4%), worked an average of 38 hours per week, had an average of 11.5 years of professional practice, and had an average monthly income of R$1,817.18. The prevalence of voice disorders was 53.6% (255 teachers). The bivariate analysis showed statistically significant associations between vocal disorders and age above 40 years (PR = 1.83; 95% CI: 1.27-2.64), family history of dysphonia (PR = 1.72; 95% CI: 1.06-2.80), over 20 hours of weekly working time (PR = 1.66; 95% CI: 1.09-2.52), and presence of chalk dust in the classroom (PR = 1.70; 95% CI: 1.14-2.53). CONCLUSION: The study concluded that teachers aged 40 years and over, with a family history of dysphonia, working over 20 hours weekly, and teaching in classrooms with chalk dust are more likely than others to develop voice disorders.
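
The prevalence ratios (PR) reported above compare the prevalence of voice disorders between exposed and unexposed groups. A sketch with hypothetical counts (the paper's underlying 2x2 tables are not reproduced here):

```python
# Prevalence ratio for a 2x2 table: (exposed cases / exposed total)
# divided by (unexposed cases / unexposed total). Counts are invented
# for illustration and do not come from the study.

def prevalence_ratio(exposed_cases, exposed_total,
                     unexposed_cases, unexposed_total):
    exposed_prev = exposed_cases / exposed_total
    unexposed_prev = unexposed_cases / unexposed_total
    return exposed_prev / unexposed_prev

# e.g. 60 of 100 exposed vs 30 of 100 unexposed teachers with a disorder:
print(prevalence_ratio(60, 100, 30, 100))  # -> 2.0
```

A PR of 1.83, as reported for age above 40, means the disorder was about 83% more prevalent in that group.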

  17. Keyboard With Voice Output

    Science.gov (United States)

    Huber, W. C.

    1986-01-01

    Voice synthesizer tells what key is about to be depressed. Verbal feedback useful for blind operators or where dim light prevents sighted operator from seeing keyboard. Also used where operator is busy observing other things while keying data into control system. Used as training aid for touch typing, and to train blind operators to use both standard and braille keyboards. Concept adapted to such equipment as typewriters, computers, calculators, telephones, cash registers, and on/off controls.

  18. Translating Neurocognitive Models of Auditory-Verbal Hallucinations into Therapy: Using Real-time fMRI-Neurofeedback to Treat Voices

    Science.gov (United States)

    Fovet, Thomas; Orlov, Natasza; Dyck, Miriam; Allen, Paul; Mathiak, Klaus; Jardri, Renaud

    2016-01-01

    Auditory-verbal hallucinations (AVHs) are frequent and disabling symptoms, which can be refractory to conventional psychopharmacological treatment in more than 25% of the cases. Recent advances in brain imaging allow for a better understanding of the neural underpinnings of AVHs. These findings strengthened transdiagnostic neurocognitive models that characterize these frequent and disabling experiences. At the same time, technical improvements in real-time functional magnetic resonance imaging (fMRI) enabled the development of innovative and non-invasive methods with the potential to relieve psychiatric symptoms, such as fMRI-based neurofeedback (fMRI-NF). During fMRI-NF, brain activity is measured and fed back in real time to the participant in order to help subjects to progressively achieve voluntary control over their own neural activity. Precisely defining the target brain area/network(s) appears critical in fMRI-NF protocols. After reviewing the available neurocognitive models for AVHs, we elaborate on how recent findings in the field may help to develop strong a priori strategies for fMRI-NF target localization. The first approach relies on imaging-based “trait markers” (i.e., persistent traits or vulnerability markers that can also be detected in the presymptomatic and remitted phases of AVHs). The goal of such strategies is to target areas that show aberrant activations during AVHs or are known to be involved in compensatory activation (or resilience processes). Brain regions, from which the NF signal is derived, can be based on structural MRI and neurocognitive knowledge, or functional MRI information collected during specific cognitive tasks. Because hallucinations are acute and intrusive symptoms, a second strategy focuses more on “state markers.” In this case, the signal of interest relies on fMRI capture of the neural networks exhibiting increased activity during AVHs occurrences, by means of multivariate pattern recognition methods. The fine

  19. Functional overlap between regions involved in speech perception and in monitoring one's own voice during speech production.

    Science.gov (United States)

    Zheng, Zane Z; Munhall, Kevin G; Johnsrude, Ingrid S

    2010-08-01

    The fluency and the reliability of speech production suggest a mechanism that links motor commands and sensory feedback. Here, we examined the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or not and by examining the overlap with the network recruited during passive listening to speech sounds. We used real-time signal processing to compare brain activity when participants whispered a consonant-vowel-consonant word ("Ted") and either heard this clearly or heard voice-gated masking noise. We compared this to when they listened to yoked stimuli (identical recordings of "Ted" or noise) without speaking. Activity along the STS and superior temporal gyrus bilaterally was significantly greater if the auditory stimulus was (a) processed as the auditory concomitant of speaking and (b) did not match the predicted outcome (noise). The network exhibiting this Feedback Type x Production/Perception interaction includes a superior temporal gyrus/middle temporal gyrus region that is activated more when listening to speech than to noise. This is consistent with speech production and speech perception being linked in a control system that predicts the sensory outcome of speech acts and that processes an error signal in speech-sensitive regions when this and the sensory data do not match.

  20. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  1. Using Facebook to Reach People Who Experience Auditory Hallucinations

    OpenAIRE

    Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging...

  2. Modulations of the auditory M100 in an imitation task

    NARCIS (Netherlands)

    Franken, M.K.M.; Hagoort, P.; Acheson, D.J.

    2015-01-01

    Models of speech production explain event-related suppression of the auditory cortical response as reflecting a comparison between auditory predictions and feedback. The present MEG study was designed to test two predictions from this framework: (1) whether the reduced auditory response varies as a

  3. Superior voice timbre processing in musicians.

    Science.gov (United States)

    Chartrand, Jean-Pierre; Belin, Pascal

    2006-09-25

    After several years of exposure to musical instrument practice, musicians acquire a great expertise in processing auditory features like tonal pitch or timbre. Here we compared the performance of musicians and non-musicians in two timbre discrimination tasks: one using instrumental timbres, the other using voices. Both accuracy (d-prime) and reaction time measures were obtained. The results indicate that the musicians performed better than the non-musicians at both tasks. The musicians also took more time to respond at both tasks. One interpretation of this result is that the expertise musicians acquired with instrumental timbres during their training transferred to timbres of voice. The musician participants may also have used different cognitive strategies during the experiment. Higher response times found in musicians can be explained by a longer verbal-auditory memory and the use of a strategy to further process auditory features.
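
The d-prime accuracy measure used above is standard signal-detection sensitivity: the inverse normal CDF (z-transform) of the hit rate minus that of the false-alarm rate. A minimal sketch using only the Python standard library (function name illustrative):

```python
# d' = z(hit rate) - z(false-alarm rate), with z the inverse normal CDF.
# Rates must lie strictly between 0 and 1 (in practice, 0 and 1 are
# usually corrected before this step).

from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Equal hit and false-alarm rates mean no sensitivity:
print(d_prime(0.5, 0.5))  # -> 0.0
```

Higher d′ indicates better discrimination regardless of response bias, which is why it is preferred over raw accuracy in tasks like these.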

  4. Recovering from Hallucinations: A Qualitative Study of Coping with Voices Hearing of People with Schizophrenia in Hong Kong

    Directory of Open Access Journals (Sweden)

    Petrus Ng

    2012-01-01

    Auditory hallucination is a positive symptom of schizophrenia and has significant impacts on the lives of individuals. People with auditory hallucinations require considerable assistance from mental health professionals. Apart from medications, they may apply different lay methods to cope with their voice hearing. Results from qualitative interviews showed that people with schizophrenia in the Chinese sociocultural context of Hong Kong coped with auditory hallucinations in different ways, including (a) changing social contacts, (b) manipulating the voices, and (c) changing their perception of, and the meaning they attribute to, the voices. Implications for recovery from psychiatric illness for individuals with auditory hallucinations are discussed.

  5. Keeping Your Voice Healthy

    Science.gov (United States)

    Keeping Your Voice Healthy: patient health information ... heavily voice-related. Key steps for keeping your voice healthy: Drink plenty of water. Moisture is good ...

  6. Effective Connectivity Associated With Auditory Error Detection In Musicians With Absolute Pitch

    Directory of Open Access Journals (Sweden)

    Amy L Parkinson

    2014-03-01

    It is advantageous to study a wide range of vocal abilities in order to fully understand how vocal-control measures vary across the full spectrum. Individuals with absolute pitch (AP) are able to assign a verbal label to musical notes and have enhanced pitch-identification abilities without reliance on an external referent. In this study we used dynamic causal modeling (DCM) to model effective connectivity of ERP responses to pitch perturbation in voice auditory feedback in musicians with relative pitch (RP), musicians with absolute pitch, and non-musician controls. We identified a network comprising left- and right-hemisphere superior temporal gyrus (STG), primary motor cortex (M1), and premotor cortex (PM). We specified nine models and compared two main factors examining various combinations of STG involvement in the feedback pitch error detection/correction process. Our results suggest that modulation of left-to-right STG connections is important in the identification of self-voice error and in sensorimotor integration in AP musicians. We also identified reduced connectivity of left-hemisphere PM-to-STG connections in the AP and RP groups during the error detection and correction process, relative to non-musicians. We suggest that this suppression may allow for enhanced connectivity related to pitch identification in the right hemisphere in those with more precise pitch-matching abilities. Musicians with enhanced pitch-identification abilities likely have an improved auditory error detection and correction system involving connectivity of STG regions. Our findings also suggest that individuals with AP are more adept at using pitch-related feedback from the right hemisphere.

  7. Helping people to keep their voices healthy and to communicate effectively.

    Science.gov (United States)

    Comins, R

    1998-01-01

    Voice is essential for all spoken languages. The mechanism of voice is taken for granted and its potential in human communication is overlooked. The combined expertise of specialist speech and language therapists and voice teachers in the Voice Care Network is focused on disseminating knowledge about care, development and effective use of the speaking voice. They cooperate, exchange ideas and develop practical voice workshops to prevent vocal problems and to support teachers, lecturers and others who depend on voice. A countrywide network of tutors runs workshops in universities and schools. Feedback shows appreciation of the overall benefits.

  8. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments.

  9. Simultaneous face and voice processing in schizophrenia.

    Science.gov (United States)

    Liu, Taosheng; Pinheiro, Ana P; Zhao, Zhongxin; Nestor, Paul G; McCarley, Robert W; Niznikiewicz, Margaret

    2016-05-15

    While several studies have consistently demonstrated abnormalities in the unisensory processing of face and voice in schizophrenia (SZ), the extent of abnormalities in the simultaneous processing of both types of information remains unclear. To address this issue, we used event-related potentials (ERP) methodology to probe the multisensory integration of face and non-semantic sounds in schizophrenia. EEG was recorded from 18 schizophrenia patients and 19 healthy control (HC) subjects in three conditions: neutral faces (visual condition-VIS); neutral non-semantic sounds (auditory condition-AUD); neutral faces presented simultaneously with neutral non-semantic sounds (audiovisual condition-AUDVIS). When compared with HC, the schizophrenia group showed less negative N170 to both face and face-voice stimuli; later P270 peak latency in the multimodal condition of face-voice relative to unimodal condition of face (the reverse was true in HC); reduced P400 amplitude and earlier P400 peak latency in the face but not in the voice-face condition. Thus, the analysis of ERP components suggests that deficits in the encoding of facial information extend to multimodal face-voice stimuli and that delays exist in feature extraction from multimodal face-voice stimuli in schizophrenia. In contrast, categorization processes seem to benefit from the presentation of simultaneous face-voice information. Timepoint by timepoint tests of multimodal integration did not suggest impairment in the initial stages of processing in schizophrenia.

  10. Emotional feedback for mobile devices

    CERN Document Server

    Seebode, Julia

    2015-01-01

    This book investigates the functional adequacy as well as the affective impression made by feedback messages on mobile devices. It presents an easily adoptable experimental setup to examine context effects on various feedback messages, and applies it to auditory, tactile and auditory-tactile feedback messages. This approach provides insights into the relationship between the affective impression and functional applicability of these messages as well as an understanding of the influence of unimodal components on the perception of multimodal feedback messages. The developed paradigm can also be extended to investigate other aspects of context and used to investigate feedback messages in modalities other than those presented. The book uses questionnaires implemented on a Smartphone, which can easily be adopted for field studies to broaden the scope even wider. Finally, the book offers guidelines for the design of system feedback.

  11. Auditory and motor imagery modulate learning in music performance.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  12. The "VoiceForum" Platform for Spoken Interaction

    Science.gov (United States)

    Fynn, Fohn; Wigham, Chiara R.

    2011-01-01

    Showcased in the courseware exhibition, "VoiceForum" is a web-based software platform for asynchronous learner interaction in threaded discussions using voice and text. A dedicated space is provided for the tutor who can give feedback on a posted message and dialogue with the participants at a separate level from the main interactional…

  13. Auditory hallucinations in nonverbal quadriplegics.

    Science.gov (United States)

    Hamilton, J

    1985-11-01

    When a system for communicating with nonverbal, quadriplegic, institutionalized residents was developed, it was discovered that many were experiencing auditory hallucinations. Nine cases are presented in this study. The "voices" described have many similar characteristics, the primary one being that they give authoritarian commands that tell the residents how to behave and to which the residents feel compelled to respond. Both the relationship of this phenomenon to the theoretical work of Julian Jaynes and its effect on the lives of the residents are discussed.

  14. Varieties of Voice-Hearing: Psychics and the Psychosis Continuum.

    Science.gov (United States)

    Powers, Albert R; Kelley, Megan S; Corlett, Philip R

    2017-01-01

    Hearing voices that are not present is a prominent symptom of serious mental illness. However, these experiences may be common in the non-help-seeking population, leading some to propose the existence of a continuum of psychosis from health to disease. Thus far, research on this continuum has focused on what is impaired in help-seeking groups. Here we focus on protective factors in non-help-seeking voice-hearers. We introduce a new study population: clairaudient psychics who receive daily auditory messages. We conducted phenomenological interviews with these subjects, as well as with patients diagnosed with a psychotic disorder who hear voices, people with a diagnosis of a psychotic disorder who do not hear voices, and matched control subjects (without voices or a diagnosis). We found the hallucinatory experiences of psychic voice-hearers to be very similar to those of patients who were diagnosed. We employed techniques from forensic psychiatry to conclude that the psychics were not malingering. Critically, we found that this sample of non-help-seeking voice hearers were able to control the onset and offset of their voices, that they were less distressed by their voice-hearing experiences and that, the first time they admitted to voice-hearing, the reception by others was much more likely to be positive. Patients had much more negative voice-hearing experiences, were more likely to receive a negative reaction when sharing their voices with others for the first time, and this was subsequently more disruptive to their social relationships. We predict that this sub-population of healthy voice-hearers may have much to teach us about the neurobiology, cognitive psychology and ultimately the treatment of voices that are distressing. © The Author 2016. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  15. Noise perception in the workplace and auditory and extra-auditory symptoms referred by university professors.

    Science.gov (United States)

    Servilha, Emilse Aparecida Merlin; Delatti, Marina de Almeida

    2012-01-01

    To investigate the correlation between noise in the work environment and auditory and extra-auditory symptoms referred by university professors. Eighty five professors answered a questionnaire about identification, functional status, and health. The relationship between occupational noise and auditory and extra-auditory symptoms was investigated. Statistical analysis considered the significance level of 5%. None of the professors indicated absence of noise. Responses were grouped in Always (A) (n=21) and Not Always (NA) (n=63). Significant sources of noise were both the yard and another class, which were classified as high intensity; poor acoustic and echo. There was no association between referred noise and health complaints, such as digestive, hormonal, osteoarticular, dental, circulatory, respiratory and emotional complaints. There was also no association between referred noise and hearing complaints, and the group A showed higher occurrence of responses regarding noise nuisance, hearing difficulty and dizziness/vertigo, tinnitus, and earache. There was association between referred noise and voice alterations, and the group NA presented higher percentage of cases with voice alterations than the group A. The university environment was considered noisy; however, there was no association with auditory and extra-auditory symptoms. The hearing complaints were more evident among professors in the group A. Professors' health is a multi-dimensional product and, therefore, noise cannot be considered the only aggravation factor.

  16. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.

  17. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Science.gov (United States)

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
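The sketch procedure described in the two records above reduces a time-frequency representation to a few energy peaks per unit time. The studies' exact peak-picking algorithm is not given here, so the following is only a minimal illustration of the general idea; the function name and the toy spectrogram are assumptions for the example:

```python
import numpy as np

def sketch_spectrogram(spec, peaks_per_frame=1):
    """Keep only the strongest time-frequency bins in each frame and
    zero out the rest -- a crude peak-picking sparsification."""
    sketch = np.zeros_like(spec)
    for t in range(spec.shape[1]):
        frame = spec[:, t]
        top = np.argsort(frame)[-peaks_per_frame:]  # strongest bins in this frame
        sketch[top, t] = frame[top]
    return sketch

# toy magnitude spectrogram: 4 frequency bins x 3 time frames
spec = np.array([[0.10, 0.90, 0.20],
                 [0.80, 0.10, 0.70],
                 [0.30, 0.50, 0.60],
                 [0.05, 0.20, 0.10]])
sparse = sketch_spectrogram(spec)  # one surviving peak per frame
```

Varying `peaks_per_frame` corresponds loosely to the "degrees of sparsity" manipulated in the study (e.g., 10 peaks per second at a given frame rate).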

  18. Auditory N1 reveals planning and monitoring processes during music performance.

    Science.gov (United States)

    Mathias, Brian; Gehring, William J; Palmer, Caroline

    2017-02-01

    The current study investigated the relationship between planning processes and feedback monitoring during music performance, a complex task in which performers prepare upcoming events while monitoring their sensory outcomes. Theories of action planning in auditory-motor production tasks propose that the planning of future events co-occurs with the perception of auditory feedback. This study investigated the neural correlates of planning and feedback monitoring by manipulating the contents of auditory feedback during music performance. Pianists memorized and performed melodies at a cued tempo in a synchronization-continuation task while the EEG was recorded. During performance, auditory feedback associated with single melody tones was occasionally substituted with tones corresponding to future (next), present (current), or past (previous) melody tones. Only future-oriented altered feedback disrupted behavior: Future-oriented feedback caused pianists to slow down on the subsequent tone more than past-oriented feedback, and amplitudes of the auditory N1 potential elicited by the tone immediately following the altered feedback were larger for future-oriented than for past-oriented or noncontextual (unrelated) altered feedback; larger N1 amplitudes were associated with greater slowing following altered feedback in the future condition only. Feedback-related negativities were elicited in all altered feedback conditions. In sum, behavioral and neural evidence suggests that future-oriented feedback disrupts performance more than past-oriented feedback, consistent with planning theories that posit similarity-based interference between feedback and planning contents. Neural sensory processing of auditory feedback, reflected in the N1 ERP, may serve as a marker for temporal disruption caused by altered auditory feedback in auditory-motor production tasks.

  19. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians’ encoding (during Learning, as they practiced novel melodies) and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists’ pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the

  20. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    Science.gov (United States)

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  1. Voice-based assessments of trustworthiness, competence, and warmth in blind and sighted adults

    OpenAIRE

    Oleszkiewicz, Anna; Pisanski, Katarzyna; Lachowicz-Tabaczek, Kinga; Sorokowska, Agnieszka

    2016-01-01

    The study of voice perception in congenitally blind individuals allows researchers rare insight into how a lifetime of visual deprivation affects the development of voice perception. Previous studies have suggested that blind adults outperform their sighted counterparts in low-level auditory tasks testing spatial localization and pitch discrimination, as well as in verbal speech processing; however, blind persons generally show no advantage in nonverbal voice recognition or discrimination tas...

  2. Perceptual Wavelet packet transform based Wavelet Filter Banks Modeling of Human Auditory system for improving the intelligibility of voiced and unvoiced speech: A Case Study of a system development

    OpenAIRE

    Ranganadh Narayanam

    2015-01-01

    The objective of this project is to discuss a versatile speech enhancement method based on the human auditory model. In this project a speech enhancement scheme is being described which meets the demand for quality noise reduction algorithms which are capable of operating at a very low signal to noise ratio. We will be discussing how proposed speech enhancement system is capable of reducing noise with little speech degradation in diverse noise environments. In this model to reduce the resi...

  3. Estudo do efeito do apoio visual do traçado espectrográfico na confiabilidade da análise perceptivo-auditiva / Studying the effect of spectrogram visual support on the reliability of auditory-perceptual voice evaluation

    Directory of Open Access Journals (Sweden)

    Ana Cristina Côrtes Gama

    2011-04-01

    Full Text Available PURPOSE: to evaluate intra- and inter-rater agreement in auditory-perceptual voice evaluation performed in isolation and simultaneously with the presentation of the corresponding spectrogram, in order to verify whether simultaneous presentation of vocal and spectrographic stimuli increases agreement in the auditory-perceptual evaluation of voice. METHODS: in this longitudinal study, six speech-language pathologists evaluated 105 dysphonic and non-dysphonic voices auditory-perceptually at two distinct moments: first without and later with presentation of the corresponding spectrograms. Twenty percent of the voices were randomly repeated at both moments in order to analyze intra-rater agreement. The GRBASI scale was used for the vocal evaluation. The Fleiss kappa statistic was used to analyze inter-rater agreement, and the Spearman correlation coefficient to compute intra-rater agreement. RESULTS: there was no statistically significant difference between inter-rater auditory-perceptual evaluations with and without the spectrographic reading, although inter-rater agreement increased for the variables G, R, B, and S. There was no statistically significant difference between intra-rater auditory-perceptual evaluations without and with the visual support of the spectrogram; however, intra-rater agreement increased after presentation of the visual stimulus for the variables G, B, and I. CONCLUSION: the visual support of the spectrogram does not significantly increase the reliability of the auditory-perceptual evaluation of voice, but it assists it, since it promotes an increase in inter- and intra-rater agreement.
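The record above reports inter-rater agreement with the Fleiss kappa statistic, which generalizes Cohen's kappa to more than two raters. As a minimal pure-Python illustration (the ratings matrix is toy data, not the study's), kappa can be computed from a subjects-by-categories table of rating counts:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a (subjects x categories) table of rating counts,
    assuming the same number of raters rated every subject."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    n_categories = len(counts[0])
    # proportion of all ratings falling in each category
    p = [sum(row[j] for row in counts) / (n_subjects * n_raters)
         for j in range(n_categories)]
    # per-subject observed agreement
    P = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
         for row in counts]
    P_bar = sum(P) / n_subjects          # mean observed agreement
    P_e = sum(pj * pj for pj in p)       # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# toy data: 4 voices rated by 6 raters into 3 severity categories
ratings = [[6, 0, 0],
           [0, 6, 0],
           [1, 4, 1],
           [0, 2, 4]]
kappa = fleiss_kappa(ratings)
```

Perfect agreement (every rater choosing the same category for each subject) yields kappa = 1, and agreement at chance level yields kappa near 0.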

  4. Comparison of Voice Perceptual Characteristics between Speech-Language Pathologists', Dysphonic, and Normal-Voiced Adults' Views

    Directory of Open Access Journals (Sweden)

    Seyyedeh Maryam khoddami

    2010-06-01

    Full Text Available Background and Aim: In recent years, several tools for assessing patients' quality of life have been designed, especially for dysphonic individuals. Such assessments are now widely used in health systems to inform numerous clinical decisions. This investigation compares, for the first time, clinician and patient perceptions of voice in dysphonic and normal-voiced adults. Methods: This study was carried out on 30 dysphonic subjects and 30 subjects with normal voice, matched for age, sex, and occupation. In both groups, the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) was used to evaluate clinician perception, and the Voice Handicap Index-30 (VHI-30) to assess patient perception. The collected data were analyzed with Mann-Whitney and Wilcoxon tests. Results: The mean total and subsection scores of the VHI-30 differed significantly between the dysphonic and control groups (p<0.01). Comparison of the total and individual parameter scores of the CAPE-V, and of speed, also indicated significant differences between the two groups (p<0.01). Reliability between clinician and patient perception of voice in dysphonic subjects was weak (r=0.34). Conclusion: Dysphonic patients perceive their voice problem differently, and as more severe, than clinicians do, reflecting the physical, psychological, and social effects of dysphonia. This research confirms that patient-based assessment of voice should be part of the standard assessment of dysphonia.
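The group comparison in the record above uses the Mann-Whitney test, whose U statistic can be computed by counting, over all cross-group pairs, how often a score from one sample exceeds a score from the other (ties count as half). A dependency-free sketch with hypothetical scores (not the study's data):

```python
def mann_whitney_u(x, y):
    """U statistic for sample x versus sample y, computed in the
    pairwise-comparison form (ties count as half a win)."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# hypothetical VHI-style scores for a dysphonic and a control group
dysphonic = [42, 55, 61, 38]
control = [10, 15, 22, 12]
u = mann_whitney_u(dysphonic, control)
```

When every score in one group exceeds every score in the other, U reaches its maximum of len(x) * len(y), which is the case for this toy data.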

  5. Aspects of voice irregularity measurement in connected speech.

    Science.gov (United States)

    Fourcin, Adrian

    2009-01-01

    Applications of the use of connected speech material for the objective assessment of two primary physical aspects of voice quality are described and discussed. Simple auditory perceptual criteria are employed to guide the choice of analysis parameters for the physical correlate of pitch, and their utility is investigated by the measurement of the characteristics of particular examples of the normal-speaking voice. This approach is extended to the measurement of vocal fold contact phase control in connected speech and both techniques are applied to pathological voice data. Copyright 2009 S. Karger AG, Basel.

  6. Hearing an Illusory Vowel in Noise : Suppression of Auditory Cortical Activity

    NARCIS (Netherlands)

    Riecke, Lars; Vanbussel, Mieke; Hausfeld, Lars; Baskent, Deniz; Formisano, Elia; Esposito, Fabrizio

    2012-01-01

    Human hearing is constructive. For example, when a voice is partially replaced by an extraneous sound (e.g., on the telephone due to a transmission problem), the auditory system may restore the missing portion so that the voice can be perceived as continuous (Miller and Licklider, 1950; for review,

  7. Gilda's Voices

    DEFF Research Database (Denmark)

    Schneider, Magnus Tessing; Anthon, Nicolai Elver

    2005-01-01

    Searching for a way to deal with the meanings evolving from the interaction between text, music, staging and audience characteristic of the operatic performance, the article attempts to sketch the dramatic potential of Gilda from Verdi's Rigoletto. The approach assumes that a dramatic character...... the perspective from score to performance. Through the analysis of recordings by Lina Pagliughi, Maria Callas and Anna Moffo, it is demonstrated how the purely auditory aspect of opera can offer widely different approaches not only to Verdi's Gilda, but to operatic performance in general....

  8. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Science.gov (United States)

    Boyer, Eric O.; Babayan, Bénédicte M.; Bevilacqua, Frédéric; Noisternig, Markus; Warusfel, Olivier; Roby-Brami, Agnes; Hanneton, Sylvain; Viaud-Delmon, Isabelle

    2013-01-01

    Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements were studied in normal human subjects as they pointed toward unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six infrared camera tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short duration of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantages of dynamic changes of the acoustic cues due to changes in head orientation in order to process online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space. PMID:23626532

  9. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Directory of Open Access Journals (Sweden)

    Eric Olivier Boyer

    2013-04-01

    Full Text Available Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed towards unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short duration of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in the acoustic cues due to changes in head orientation in order to process online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space.

  10. Auditory-perceptual learning improves speech motor adaptation in children.

    Science.gov (United States)

    Shiller, Douglas M; Rochon, Marie-Lyne

    2014-08-01

    Auditory feedback plays an important role in children's speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback; however, it is not known whether their capacity for motor learning is limited by immature auditory-perceptual abilities. Here, the link between speech perceptual ability and the capacity for motor learning was explored in two groups of 5- to 7-year-old children who underwent a period of auditory perceptual training followed by tests of speech motor adaptation to altered auditory feedback. One group received perceptual training on a speech acoustic property relevant to the motor task while a control group received perceptual training on an irrelevant speech contrast. Learned perceptual improvements led to an enhancement in speech motor adaptation (proportional to the perceptual change) only for the experimental group. The results indicate that children's ability to perceive relevant speech acoustic properties has a direct influence on their capacity for sensory-based speech motor adaptation.

  11. Speaking and Nonspeaking Voice Professionals: Who Has the Better Voice?

    Science.gov (United States)

    Chitguppi, Chandala; Raj, Anoop; Meher, Ravi; Rathore, P K

    2017-04-18

    Voice professionals can be classified into two major subgroups: the primarily speaking and the primarily nonspeaking voice professionals. Nonspeaking voice professionals mainly include singers, whereas speaking voice professionals include the rest of the voice professionals. Although both of these groups have high vocal demands, it is currently unknown whether both groups show similar voice changes after their daily voice use. Comparison of these two subgroups of voice professionals has never been done before. This study aimed to compare the speaking voice of speaking and nonspeaking voice professionals with no obvious vocal fold pathology or voice-related complaints on the day of assessment. After obtaining relevant voice-related history, voice analysis and videostroboscopy were performed in 50 speaking and 50 nonspeaking voice professionals. Speaking voice professionals showed significantly higher incidence of voice-related complaints as compared with nonspeaking voice professionals. Voice analysis revealed that most acoustic parameters including fundamental frequency, jitter percent, and harmonic-to-noise ratio were significantly higher in speaking voice professionals, whereas videostroboscopy did not show any significant difference between the two groups. This is the first study of its kind to analyze the effect of daily voice use in the two subgroups of voice professionals with no obvious vocal fold pathology. We conclude that voice professionals should not be considered as a homogeneous group. The detrimental effects of excessive voice use were observed to occur more significantly in speaking voice professionals than in nonspeaking voice professionals. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  12. Visual Influences on Alignment to Voice Onset Time

    Science.gov (United States)

    Sanchez, Kauyumari; Miller, Rachel M.; Rosenblum, Lawrence D.

    2010-01-01

    Purpose: Speech shadowing experiments were conducted to test whether alignment (inadvertent imitation) to voice onset time (VOT) can be influenced by visual speech information. Method: Experiment 1 examined whether alignment would occur to auditory /pa/ syllables manipulated to have 3 different VOTs. Nineteen female participants were asked to…

  13. Voice Initiation and Termination Times in Stuttering and Nonstuttering Children.

    Science.gov (United States)

    Cullinan, Walter L.; Springer, Mark T.

    1980-01-01

    The times needed to initiate and terminate voicing in response to series of short segments of auditory signal were studied for 20 stuttering and 20 nonstuttering children (ages for both groups 5 to 12). The effects of random reward and nonreward on the phonatory response times also were studied. (Author/PHR)

  14. Neural circuits underlying mother's voice perception predict social communication abilities in children.

    Science.gov (United States)

    Abrams, Daniel A; Chen, Tianwen; Odriozola, Paola; Cheng, Katherine M; Baker, Amanda E; Padmanabhan, Aarthi; Ryali, Srikanth; Kochalka, John; Feinstein, Carl; Menon, Vinod

    2016-05-31

    The human voice is a critical social cue, and listeners are extremely sensitive to the voices in their environment. One of the most salient voices in a child's life is mother's voice: Infants discriminate their mother's voice from the first days of life, and this stimulus is associated with guiding emotional and social function during development. Little is known regarding the functional circuits that are selectively engaged in children by biologically salient voices such as mother's voice or whether this brain activity is related to children's social communication abilities. We used functional MRI to measure brain activity in 24 healthy children (mean age, 10.2 y) while they attended to brief (<1 s) nonsense words produced by their mothers and two female control voices, and related this activity to measures of social function. Compared to female control voices, mother's voice elicited greater activity in primary auditory regions in the midbrain and cortex; voice-selective superior temporal sulcus (STS); the amygdala, which is crucial for processing of affect; nucleus accumbens and orbitofrontal cortex of the reward circuit; anterior insula and cingulate of the salience network; and a subregion of fusiform gyrus associated with face perception. The strength of brain connectivity between voice-selective STS and reward, affective, salience, memory, and face-processing regions during mother's voice perception predicted social communication skills. Our findings provide a novel neurobiological template for investigation of typical social development as well as clinical disorders, such as autism, in which perception of biologically and socially salient voices may be impaired.

  15. Leveraging voice

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    This paper speculates on how researchers share research without diluting our credibility and how to make strategies for the future. It also calls for consideration of new traditions and practices for communicating knowledge to a wider audience across multiple media platforms. How might we researchers improve our practices, and how could digital online video help offer more positive stories about research and higher education? How can academics in higher education become better at telling about our research, thereby reclaiming and leveraging our voice in a post-factual era? As higher education continues to engage with digital and networked technologies, it becomes increasingly relevant to question why and how academics could (re)position research knowledge in the digital and online media landscape of today and the future. The paper highlights methodological issues that arise in relation...

  16. Feeling voices.

    Directory of Open Access Journals (Sweden)

    Paolo Ammirante

    Full Text Available Two experiments investigated deaf individuals' ability to discriminate between same-sex talkers based on vibrotactile stimulation alone. Nineteen participants made same/different judgments on pairs of utterances presented to the lower back through voice coils embedded in a conforming chair. Discrimination of stimuli matched for F0, duration, and perceived magnitude was successful for pairs of spoken sentences in Experiment 1 (median percent correct = 83%) and pairs of vowel utterances in Experiment 2 (median percent correct = 75%). Greater difference in spectral tilt between "different" pairs strongly predicted their discriminability in both experiments. The current findings support the hypothesis that discrimination of complex vibrotactile stimuli involves the cortical integration of spectral information filtered through frequency-tuned skin receptors.

  17. Voice disorders in mucosal leishmaniasis.

    Directory of Open Access Journals (Sweden)

    Ana Cristina Nunes Ruas

    Full Text Available INTRODUCTION: Leishmaniasis is considered one of the six most important infectious diseases because of its high detection coefficient and ability to produce deformities. In most cases, mucosal leishmaniasis (ML) occurs as a consequence of cutaneous leishmaniasis. If left untreated, mucosal lesions can leave sequelae, interfering in the swallowing, breathing, voice and speech processes and requiring rehabilitation. OBJECTIVE: To describe the anatomical characteristics and voice quality of ML patients. MATERIALS AND METHODS: A descriptive transversal study was conducted in a cohort of ML patients treated at the Laboratory for Leishmaniasis Surveillance of the Evandro Chagas National Institute of Infectious Diseases-Fiocruz, between 2010 and 2013. The patients were submitted to otorhinolaryngologic clinical examination by endoscopy of the upper airways and digestive tract and to speech-language assessment through directed anamnesis, auditory perception, phonation times and vocal acoustic analysis. The variables of interest were epidemiologic (sex and age) and clinical (lesion location, associated symptoms and voice quality). RESULTS: 26 patients under ML treatment and monitored by speech therapists were studied. 21 (81%) were male and five (19%) female, with ages ranging from 15 to 78 years (54.5±15.0 years). The lesions were distributed in the following structures: 88.5% nasal, 38.5% oral, 34.6% pharyngeal and 19.2% laryngeal, with some patients presenting lesions in more than one anatomic site. The main complaint was nasal obstruction (73.1%), followed by dysphonia (38.5%), odynophagia (30.8%) and dysphagia (26.9%). 23 patients (84.6%) presented voice quality perturbations. Dysphonia was significantly associated with lesions in the larynx, pharynx and oral cavity. CONCLUSION: We observed that vocal quality perturbations are frequent in patients with mucosal leishmaniasis, even without laryngeal lesions; they are probably associated with disorders of some

  18. Phonetic categorization in auditory word perception.

    Science.gov (United States)

    Ganong, W F

    1980-02-01

    To investigate the interaction in speech perception of auditory information and lexical knowledge (in particular, knowledge of which phonetic sequences are words), acoustic continua varying in voice onset time were constructed so that for each acoustic continuum, one of the two possible phonetic categorizations made a word and the other did not. For example, one continuum ranged between the word dash and the nonword tash; another used the nonword dask and the word task. In two experiments, subjects showed a significant lexical effect--that is, a tendency to make phonetic categorizations that make words. This lexical effect was greater at the phoneme boundary (where auditory information is ambiguous) than at the ends of the continua. Hence the lexical effect must arise at a stage of processing sensitive to both lexical knowledge and auditory information.

  19. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  20. Perceptual Wavelet packet transform based Wavelet Filter Banks Modeling of Human Auditory system for improving the intelligibility of voiced and unvoiced speech: A Case Study of a system development

    Directory of Open Access Journals (Sweden)

    Ranganadh Narayanam

    2015-10-01

    Full Text Available The objective of this project is to discuss a versatile speech enhancement method based on the human auditory model. In this project a speech enhancement scheme is described which meets the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. We discuss how the proposed speech enhancement system is capable of reducing noise with little speech degradation in diverse noise environments. In this model, a psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. This is a generalized time-frequency subtraction algorithm which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the discussed method. To increase the intelligibility of speech, an unvoiced speech enhancement algorithm is also integrated into the system.
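
    The wavelet-domain subtraction described above can be illustrated in a much-simplified form: a one-level Haar transform with soft thresholding of the detail coefficients. The psychoacoustically derived, adaptive masking threshold of the actual system is replaced here by a fixed threshold `t`, purely for illustration.

```python
# Minimal sketch of wavelet-domain denoising by coefficient thresholding.
# This illustrates only the principle behind perceptual wavelet subtraction;
# the real system adapts the threshold from a psychoacoustic masking model.

def haar_dwt(x):
    """One-level Haar transform: (approximation, detail) coefficients."""
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return a, d

def haar_idwt(a, d):
    """Invert the one-level Haar transform."""
    x = []
    for ai, di in zip(a, d):
        x.extend([ai + di, ai - di])
    return x

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; small (noisy) ones vanish."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

def denoise(x, t):
    """Threshold the detail band and reconstruct (even-length input)."""
    a, d = haar_dwt(x)
    return haar_idwt(a, soft_threshold(d, t))
```

    With `t = 0` the transform reconstructs the input exactly; raising `t` suppresses small detail coefficients (presumed noise) while preserving the coarse structure of the signal.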

  1. [Auditory fatigue].

    Science.gov (United States)

    Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar

    2015-01-01

    Given the relevance of possible hearing losses due to sound overloads and the scarcity of objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound pressure overload stimuli, as well as to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of the cochlear microphonic response, plus a function generator that provided us with stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure-tone acoustic stimulation, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB of the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. By means of complex random tone maskers or white noise, no fatigue was caused to the sensory receptors, not even at levels of 100 dB and over an hour of overstimulation. Deterioration of peripheral perception through intense overstimulation may be due to biochemical changes of desensitisation due to exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue tests found are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  2. Categorization of extremely brief auditory stimuli: domain-specific or domain-general processes?

    Directory of Open Access Journals (Sweden)

    Emmanuel Bigand

    Full Text Available The present study investigated the minimum amount of auditory stimulation that allows differentiation of spoken voices, instrumental music, and environmental sounds. Three new findings were reported. (1) All stimuli were categorized above chance level with 50-ms segments. (2) When a peak-level normalization was applied, music and voices started to be accurately categorized with 20-ms segments; when the root-mean-square (RMS) energy of the stimuli was equalized, voice stimuli were better recognized than music and environmental sounds. (3) Further psychoacoustical analyses suggest that the categorization of extremely brief auditory stimuli depends on the variability of their spectral envelope in the used set. These last two findings challenge the interpretation of the voice superiority effect reported in previously published studies and propose a more parsimonious interpretation in terms of an emerging property of auditory categorization processes.
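
    The contrast between peak-level and RMS equalization that drives the second finding can be sketched as follows; this is a generic illustration of the two normalization schemes, not the authors' exact stimulus-preparation procedure.

```python
import math

def peak_normalize(x, target=1.0):
    """Scale the signal so its maximum absolute amplitude equals `target`."""
    peak = max(abs(v) for v in x)
    return [v * target / peak for v in x]

def rms_normalize(x, target=0.5):
    """Scale the signal so its root-mean-square energy equals `target`."""
    rms = math.sqrt(sum(v * v for v in x) / len(x))
    return [v * target / rms for v in x]
```

    Two signals with equal peaks can carry very different RMS energy (and vice versa), which is why the two equalization schemes can yield different recognition results for the same stimuli.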

  3. Auditory feedback influences perceived driving speeds.

    Science.gov (United States)

    Horswill, Mark S; Plooy, Annaliese M

    2008-01-01

    Reducing the level of internal noise is seen as a goal when designing modern cars. One danger of such a philosophy is that one is systematically attempting to alter one of the cues that can be used by drivers to estimate speed and this could bias speed judgments and driving behaviour. Seven participants were presented with pairs of video-based driving scenes and asked to judge whether the second scene appeared faster or slower than the first (2-alternative forced-choice task using the method of constant stimuli). They either heard in-car noise at the level it occurred in the real world or reduced in volume by 5 dB. The reduction in noise led to participants judging speeds to be significantly slower and this effect was evident for all participants. This finding indicates that, when in-car noise is attenuated, drivers are likely to underestimate their speed, potentially encouraging them to drive faster and placing them at greater risk of crashing.
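
    The 5 dB attenuation used in this study can be related to a linear amplitude change with the standard conversion (a generic formula, not code from the study):

```python
import math

def db_to_amplitude_ratio(db):
    """Linear amplitude ratio corresponding to a level change in dB."""
    return 10.0 ** (db / 20.0)

def amplitude_ratio_to_db(ratio):
    """Level change in dB corresponding to a linear amplitude ratio."""
    return 20.0 * math.log10(ratio)
```

    A -5 dB reduction leaves roughly 56% of the original amplitude (about 32% of the original power), a substantial change in the auditory speed cue.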

  4. Audio Feedback -- Better Feedback?

    Science.gov (United States)

    Voelkel, Susanne; Mello, Luciane V.

    2014-01-01

    National Student Survey (NSS) results show that many students are dissatisfied with the amount and quality of feedback they get for their work. This study reports on two case studies in which we tried to address these issues by introducing audio feedback to one undergraduate (UG) and one postgraduate (PG) class, respectively. In case study one…

  5. How Do Batters Use Visual, Auditory, and Tactile Information about the Success of a Baseball Swing?

    Science.gov (United States)

    Gray, Rob

    2009-01-01

    Bat/ball contact produces visual (the ball leaving the bat), auditory (the "crack" of the bat), and tactile (bat vibration) feedback about the success of the swing. We used a batting simulation to investigate how college baseball players use visual, tactile, and auditory feedback. In Experiment 1, swing accuracy (i.e., the lateral separation…

  7. Implicit multisensory associations influence voice recognition.

    Directory of Open Access Journals (Sweden)

    Katharina von Kriegstein

    2006-10-01

    Full Text Available Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, while, and after participants learned to associate either sensory redundant stimuli, i.e. voices and faces, or arbitrary multimodal combinations, i.e. voices and written names, ring tones, and cell phones or brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in an increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations become thereafter available for unimodal perception and facilitate object recognition. These data suggest that for natural objects effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules.

  8. Tuning Shifts of the Auditory System By Corticocortical and Corticofugal Projections and Conditioning

    OpenAIRE

    Suga, Nobuo

    2011-01-01

    The central auditory system consists of the lemniscal and nonlemniscal systems. The thalamic lemniscal and nonlemniscal auditory nuclei differ from each other in response properties and neural connectivities. The cortical auditory areas receiving the projections from these thalamic nuclei interact with each other through corticocortical projections and project down to the subcortical auditory nuclei. This corticofugal (descending) system forms multiple feedback loops with the ascending...

  9. Accuracy of pitch matching significantly improved by live voice model.

    Science.gov (United States)

    Granot, Roni Y; Israel-Kolatt, Rona; Gilboa, Avi; Kolatt, Tsafrir

    2013-05-01

    Singing is, undoubtedly, the most fundamental expression of our musical capacity, yet an estimated 10-15% of Western population sings "out-of-tune (OOT)." Previous research in children and adults suggests, albeit inconsistently, that imitating a human voice can improve pitch matching. In the present study, we focus on the potentially beneficial effects of the human voice and especially the live human voice. Eighteen participants varying in their singing abilities were required to imitate in singing a set of nine ascending and descending intervals presented to them in five different randomized blocked conditions: live piano, recorded piano, live voice using optimal voice production, recorded voice using optimal voice production, and recorded voice using artificial forced voice production. Pitch and interval matching in singing were much more accurate when participants repeated sung intervals as compared with intervals played to them on the piano. The advantage of the vocal over the piano stimuli was robust and emerged clearly regardless of whether piano tones were played live and in full view or were presented via recording. Live vocal stimuli elicited higher accuracy than recorded vocal stimuli, especially when the recorded vocal stimuli were produced in a forced vocal production. Remarkably, even those who would be considered OOT singers on the basis of their performance when repeating piano tones were able to pitch match live vocal sounds, with deviations well within the range of what is considered accurate singing (M=46.0, standard deviation=39.2 cents). In fact, those participants who were most OOT gained the most from the live voice model. Results are discussed in light of the dual auditory-motor encoding of pitch analogous to that found in speech. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
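
    The cent, the unit in which the deviations above are reported, is defined from the frequency ratio between two pitches; a sketch of the conversion (the generic formula, not the study's analysis code):

```python
import math

def cents(f_ref, f):
    """Signed interval from f_ref to f in cents (100 cents = 1 semitone)."""
    return 1200.0 * math.log2(f / f_ref)
```

    An octave (a doubling of frequency) is 1200 cents, so the reported mean deviation of 46 cents is under half a semitone.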

  10. Dimensionality in voice quality.

    Science.gov (United States)

    Bele, Irene Velsvik

    2007-05-01

    This study concerns speaking voice quality in a group of male teachers (n = 35) and male actors (n = 36), as the purpose was to investigate normal and supranormal voices. The goal was the development of a method of valid perceptual evaluation for normal to supranormal and resonant voices. The voices (text reading at two loudness levels) had been evaluated by 10 listeners, for 15 vocal characteristics using VA scales. In this investigation, the results of an exploratory factor analysis of the vocal characteristics used in this method are presented, reflecting four dimensions of major importance for normal and supranormal voices. Special emphasis is placed on the effects on voice quality of a change in the loudness variable, as two loudness levels are studied. Furthermore, the vocal characteristics Sonority and Ringing voice quality are paid special attention, as the essence of the term "resonant voice" was a basic issue throughout a doctoral dissertation where this study was included.

  11. Voice box (image)

    Science.gov (United States)

    The larynx, or voice box, is located in the neck and performs several important functions in the body. The larynx is involved in swallowing, breathing, and voice production. Sound is produced when the air which ...

  12. Voice and Aging

    Science.gov (United States)

    ... dramatic voice changes are those during childhood and adolescence. The larynx (or voice box) and vocal cord tissues do not fully mature until late teenage years. Hormone-related changes during adolescence are ...

  13. Voice and endocrinology

    OpenAIRE

    KVS Hari Kumar; Anurag Garg; Ajai Chandra, N. S.; Singh, S. P.; Rakesh Datta

    2016-01-01

    Voice is one of the advanced features of natural evolution that differentiates human beings from other primates. The human voice is capable of conveying thoughts into spoken words along with a subtle emotion in the tone. This extraordinary character of the voice in expressing multiple emotions is the gift of God to human beings and helps in effective interpersonal communication. Voice generation involves close interaction between cerebral signals and the peripheral apparatus consisting...

  14. DLMS Voice Data Entry.

    Science.gov (United States)

    1980-06-01

    ...between operator and computer displayed on ADM-3A terminal. Possible hardware configuration for a multistation cartographic VDES. ...this program a Voice Recognition System (VRS) which can be used to explore the use of voice data entry (VDE) in the DLMS or other cartographic data... Multi-Station Cartographic Voice Data Entry System: an engineering development model voice data entry system (VDES) could be most efficiently...

  15. The Harmonic Organization of Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Xiaoqin eWang

    2013-12-01

    Full Text Available A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of harmonicity in many aspects of the hearing environment, it is natural to expect that it be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds.

  16. Writing with Voice

    Science.gov (United States)

    Kesler, Ted

    2012-01-01

    In this Teaching Tips article, the author argues for a dialogic conception of voice, based in the work of Mikhail Bakhtin. He demonstrates a dialogic view of voice in action, using two writing examples about the same topic from his daughter, a fifth-grade student. He then provides five practical tips for teaching a dialogic conception of voice in…

  17. Tips for Healthy Voices

    Science.gov (United States)

    ... social interaction as well as for most people’s occupation. Proper care and use of your voice will give you the best chance for having a healthy voice for your entire lifetime. Hoarseness or roughness in your voice is often ...

  18. Interhemispheric auditory connectivity: structure and function related to auditory verbal hallucinations.

    Science.gov (United States)

    Steinmann, Saskia; Leicht, Gregor; Mulert, Christoph

    2014-01-01

    Auditory verbal hallucinations (AVH) are one of the most common and most distressing symptoms of schizophrenia. Despite fundamental research, the underlying neurocognitive and neurobiological mechanisms are still a matter of debate. Previous studies suggested that "hearing voices" is associated with a number of factors including local deficits in the left auditory cortex and a disturbed connectivity of frontal and temporoparietal language-related areas. In addition, it is hypothesized that the interhemispheric pathways connecting the right and left auditory cortices might be involved in the pathogenesis of AVH. Findings based on Diffusion-Tensor-Imaging (DTI) measurements revealed a remarkable interindividual variability in the size and shape of the interhemispheric auditory pathways. Interestingly, schizophrenia patients suffering from AVH exhibited higher fractional anisotropy (FA) in the interhemispheric fibers than non-hallucinating patients; higher FA values thus indicate an increased severity of AVH. Moreover, a dichotic listening (DL) task showed that the interindividual variability in the interhemispheric auditory pathways was reflected in the behavioral outcome: stronger pathways supported a better information transfer and consequently improved speech perception. This finding indicates a specific structure-function relationship, which seems to be interindividually variable. This review focuses on recent findings concerning the structure-function relationship of the interhemispheric pathways in controls, hallucinating and non-hallucinating schizophrenia patients and concludes that changes in the structural and functional connectivity of auditory areas are involved in the pathophysiology of AVH.
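
    The fractional anisotropy measure used in such DTI studies is computed from the three eigenvalues of the diffusion tensor. A minimal sketch of the standard formula (not the review's own code):

```python
import math

def fractional_anisotropy(eigenvalues):
    """FA of a diffusion tensor from its three eigenvalues: 0 for perfectly
    isotropic diffusion, approaching 1 for strongly directional (fiber-like)
    diffusion."""
    l1, l2, l3 = eigenvalues
    md = (l1 + l2 + l3) / 3.0  # mean diffusivity
    num = math.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(1.5) * num / den if den else 0.0

print(fractional_anisotropy((1.0, 1.0, 1.0)))  # 0.0: isotropic diffusion
print(round(fractional_anisotropy((1.7, 0.2, 0.2)), 2))  # near 1: fiber-like
```

    In practice the eigenvalues come from fitting a tensor to diffusion-weighted MRI at each voxel; the values above are toy inputs.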

  19. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training, and then performed an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  20. Hearing Voices in Different Cultures: A Social Kindling Hypothesis.

    Science.gov (United States)

    Luhrmann, Tanya M; Padmavati, R; Tharoor, Hema; Osei, Akwasi

    2015-10-01

    This study compares the voice-hearing experiences of 20 subjects with serious psychotic disorder (all meeting inclusion criteria for schizophrenia) in each of three different settings. We find that while there is much that is similar, there are notable differences in the kinds of voices that people seem to experience. In a California sample, people were more likely to describe their voices as intrusive unreal thoughts; in the South Indian sample, they were more likely to describe them as providing useful guidance; and in our West African sample, they were more likely to describe them as morally good and causally powerful. What we think we may be observing is that people who fall ill with serious psychotic disorder pay selective attention to a constant stream of many different auditory and quasi-auditory events because of different "cultural invitations": variations in ways of thinking about minds, persons, spirits and so forth. Such a process is consistent with processes described in the cognitive psychology and psychiatric anthropology literature, but not yet described or understood with respect to cultural variations in auditory hallucinations. We call this process "social kindling."

  1. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Full Text Available Abstract Background Auditory sustained responses have been recently suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity but the relationship between the amplitudes of auditory evoked sustained responses and the repetitive rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5-, 10-, 50-, 200- and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low frequency temporal modulations in speech envelope.
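
    The "repeating frozen noise" stimuli described above can be constructed by generating one random noise segment of length 1/rate and tiling it for the full duration, unlike white noise, which never repeats. A minimal sketch (the sample rate and amplitude range are illustrative assumptions, not the study's actual stimulus parameters):

```python
import random

def frozen_noise(repetition_rate_hz, duration_s, sample_rate=44100, seed=0):
    """Repeating 'frozen' noise: a single random segment of length
    1/repetition_rate_hz seconds, tiled to cover the requested duration."""
    rng = random.Random(seed)
    seg_len = int(sample_rate / repetition_rate_hz)
    segment = [rng.uniform(-1.0, 1.0) for _ in range(seg_len)]
    n = int(duration_s * sample_rate)
    return [segment[i % seg_len] for i in range(n)]

# A 5 Hz frozen noise repeats exactly every 1/5 s (8820 samples at 44.1 kHz).
stim = frozen_noise(repetition_rate_hz=5, duration_s=2.0)
assert stim[:8820] == stim[8820:17640]
```

    White noise is the same construction without the tiling: every sample is drawn independently, so there is no periodicity for the auditory system to lock onto.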

  2. Voice handicap in singers.

    Science.gov (United States)

    Murry, Thomas; Zschommler, Anne; Prokop, Jan

    2009-05-01

    The study aimed to determine the differences in responses to the Voice Handicap Index (VHI-10) between singers and nonsingers and to evaluate the ranked order differences of the VHI-10 statements for both groups. The VHI-10 was modified to include statements related to the singing voice for comparison to the original VHI-10. Thirty-five nonsingers with documented voice disorders responded to the VHI-10. A second group, consisting of 35 singers with voice complaints, responded to the VHI-10 with three statements added specifically addressing the singing voice. Data from both groups were analyzed in terms of overall subject self-rating of voice handicap and the rank order of statements from least to most important. The difference between the mean VHI-10 for the singers and nonsingers was not statistically significant, thus supporting the validity of the VHI-10. However, the 10 statements were ranked differently in terms of their importance by the two groups. In addition, when three statements related specifically to the singing voice were substituted in the original VHI-10, the singers judged their voice problem to be more severe than when using the original VHI-10. The type of statements used to assess self-perception of voice handicap may be related to the subject population. Singers with voice problems do not rate their voices as more handicapped than nonsingers unless statements related specifically to singing are included.

  3. Auditory perceptual simulation: Simulating speech rates or accents?

    Science.gov (United States)

    Zhou, Peiyun; Christianson, Kiel

    2016-07-01

    When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and of the full sentence reveal that auditory perceptual simulation again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the speakers' speech rates, rather than the difficulty of simulating a non-native accent, are the primary mechanism underlying auditory perceptual simulation effects.

  4. Voice Recognition Algorithms using Mel Frequency Cepstral Coefficient (MFCC) and Dynamic Time Warping (DTW) Techniques

    CERN Document Server

    Muda, Lindasalwa; Elamvazuthi, I

    2010-01-01

    Digital processing of speech signals and voice recognition algorithms are very important for fast and accurate automatic voice recognition technology. The voice is a signal of infinite information. Direct analysis and synthesis of the complex voice signal is difficult because of the large amount of information contained in the signal. Therefore, digital signal processes such as feature extraction and feature matching are introduced to represent the voice signal. Several methods, such as Linear Predictive Coding (LPC), Hidden Markov Models (HMM), and Artificial Neural Networks (ANN), have been evaluated with a view to identifying a straightforward and effective method for the voice signal. The extraction and matching process is implemented right after the pre-processing or filtering of the signal is performed. Mel Frequency Cepstral Coefficients (MFCCs), a non-parametric method for modelling the human auditory perception system, are utilized as the extraction technique. The non-linear sequence alignment known as Dynamic Time Warping (DTW) intro...
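
    The DTW matching step can be sketched as the classic dynamic-programming recurrence over a cumulative-cost grid. This is an illustrative sketch rather than the paper's implementation, and scalar features stand in for MFCC vectors here for brevity:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping: the minimal cumulative distance between two
    feature sequences under a monotonic, continuity-preserving alignment."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best alignment cost of a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A time-stretched copy of a sequence aligns at zero cost, which is why DTW
# absorbs speaking-rate differences between two utterances of the same word.
print(dtw_distance([1, 2, 3], [1, 1, 2, 2, 3, 3]))  # 0.0
```

    With real MFCC features, each element of `a` and `b` would be a coefficient vector and the local distance a Euclidean norm; the recurrence is unchanged.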

  5. Multidimensional assessment of strongly irregular voices such as in substitution voicing and spasmodic dysphonia: a compilation of own research.

    Science.gov (United States)

    Moerman, Mieke; Martens, Jean-Pierre; Dejonckere, Philippe

    2015-04-01

    This article is a compilation of own research performed during the European COoperation in Science and Technology (COST) action 2103: 'Advance Voice Function Assessment', an initiative of voice and speech processing teams consisting of physicists, engineers, and clinicians. This manuscript concerns analyzing largely irregular voicing types, namely substitution voicing (SV) and adductor spasmodic dysphonia (AdSD). A specific perceptual rating scale (IINFVo) was developed, and the Auditory Model Based Pitch Extractor (AMPEX), a piece of software that automatically analyses running speech and generates pitch values in background noise, was applied. The IINFVo perceptual rating scale has been shown to be useful in evaluating SV. The analysis of strongly irregular voices stimulated a modification of the European Laryngological Society's assessment protocol which was originally designed for the common types of (less severe) dysphonia. Acoustic analysis with AMPEX demonstrates that the most informative features are, for SV, the voicing-related acoustic features and, for AdSD, the perturbation measures. Poor correlations between self-assessment and acoustic and perceptual dimensions in the assessment of highly irregular voices argue for a multidimensional approach.

  6. Discrimination of auditory stimuli during isoflurane anesthesia.

    Science.gov (United States)

    Rojas, Manuel J; Navas, Jinna A; Greene, Stephen A; Rector, David M

    2008-10-01

    Deep isoflurane anesthesia initiates a burst suppression pattern in which high-amplitude bursts are preceded by periods of nearly silent electroencephalogram. The burst suppression ratio (BSR) is the percentage of suppression (silent electroencephalogram) during the burst suppression pattern and is one parameter used to assess anesthesia depth. We investigated cortical burst activity in rats in response to different auditory stimuli presented during the burst suppression state. We noted a rapid appearance of bursts and a significant decrease in the BSR during stimulation. The BSR changes were distinctive for the different stimuli applied, and the BSR decreased significantly more when stimulated with a voice familiar to the rat as compared with an unfamiliar voice. These results show that the cortex can show differential sensory responses during deep isoflurane anesthesia.
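
    The burst suppression ratio described above can be computed from an amplitude-thresholded trace. A minimal sketch of one common operational definition (the threshold and minimum suppression duration below are illustrative assumptions, not the study's parameters):

```python
def burst_suppression_ratio(eeg, threshold, sample_rate, min_suppression_s=0.5):
    """BSR: percentage of the trace spent in suppression, where suppression is
    a run of samples below an amplitude threshold lasting at least
    min_suppression_s seconds."""
    min_run = int(min_suppression_s * sample_rate)
    suppressed = 0
    run = 0
    for x in eeg:
        if abs(x) < threshold:
            run += 1
        else:
            if run >= min_run:   # count only runs long enough to qualify
                suppressed += run
            run = 0
    if run >= min_run:           # trailing suppression run
        suppressed += run
    return 100.0 * suppressed / len(eeg)

# Toy trace: 1 s of bursts then 1 s of near-silence, at 100 Hz sampling.
trace = [1.0] * 100 + [0.01] * 100
print(burst_suppression_ratio(trace, threshold=0.1, sample_rate=100))  # 50.0
```

    A stimulus that triggers bursts, as in the study, shortens or interrupts the suppression runs and therefore drives the BSR down.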

  7. The efficacy of using a personal stereo to treat auditory hallucinations. Preliminary findings.

    Science.gov (United States)

    Johnston, Olwyn; Gallagher, Anthony G; McMahon, Patrick J; King, David J

    2002-09-01

    This article presents preliminary findings from the first participant to complete an experiment assessing the efficacy of the personal stereo in treating auditory hallucinations. O.C., a 50-year-old woman, took part in a controlled treatment trial in which 1-week baseline, personal stereo, and control treatment (nonfunctioning hearing aid) stages were alternated for 7 weeks. The Positive and Negative Syndrome Scale, Clinical Global Impression Scales, Beliefs About Voices Questionnaire, Rosenberg Self-Esteem Scale, and Topography of Voices Rating Scale were used. The personal stereo led to a decrease in the severity of O.C.'s auditory hallucinations. For example, she rated her voices as being fairly distressing during baseline and control treatment stages but neutral during personal stereo stages. A slight decrease in other psychopathology also occurred during personal stereo stages. Use of the personal stereo did not lead to a decrease in self-esteem, contradicting suggestions that counterstimulation treatments for auditory hallucinations may be disempowering.

  8. Supervisor Feedback.

    Science.gov (United States)

    Hayman, Marilyn J.

    1981-01-01

    Investigated the effectiveness of supervisor feedback in contributing to learning counseling skills. Counselor trainees (N=64) were assigned to supervisor feedback, no supervisor feedback, or control groups for three training sessions. Results indicated counseling skills were learned best by students with no supervisor feedback but self and peer…

  9. Singing voice outcomes following singing voice therapy.

    Science.gov (United States)

    Dastolfo-Hromack, Christina; Thomas, Tracey L; Rosen, Clark A; Gartner-Schmidt, Jackie

    2016-11-01

    The objectives of this study were to describe singing voice therapy (SVT), describe referred patient characteristics, and document the outcomes of SVT. Retrospective. Records of patients receiving SVT between June 2008 and June 2013 were reviewed (n = 51). All diagnoses were included. Demographic information, number of SVT sessions, and symptom severity were retrieved from the medical record. Symptom severity was measured via the 10-item Singing Voice Handicap Index (SVHI-10). Treatment outcome was analyzed by diagnosis, history of previous training, and SVHI-10. SVHI-10 scores decreased following SVT (mean change = 11, 40% decrease) (P singing lessons (n = 10) also completed an average of three SVT sessions. Primary muscle tension dysphonia (MTD1) and benign vocal fold lesion (lesion) were the most common diagnoses. Most patients (60%) had previous vocal training. SVHI-10 decrease was not significantly different between MTD and lesion. This is the first outcome-based study of SVT in a disordered population. Diagnosis of MTD or lesion did not influence treatment outcomes. Duration of SVT was short (approximately three sessions). Voice care providers are encouraged to partner with a singing voice therapist to provide optimal care for the singing voice. This study supports the use of SVT as a tool for the treatment of singing voice disorders. 4 Laryngoscope, 126:2546-2551, 2016. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  10. Auditory Imagery: Empirical Findings

    Science.gov (United States)

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  11. The Speaker Behind The Voice: Therapeutic Practice from the Perspective of Pragmatic Theory

    Directory of Open Access Journals (Sweden)

    Felicity eDeamer

    2015-06-01

    Full Text Available Many attempts at understanding auditory verbal hallucinations (AVHs) have tried to explain why there is an auditory experience in the absence of an appropriate stimulus. We suggest that many instances of voice-hearing should be approached differently. More specifically, they could be viewed primarily as hallucinated acts of communication, rather than hallucinated sounds. We suggest that this change of perspective is reflected in, and helps to explain, the successes of two recent therapeutic techniques: Relating Therapy for Voices and Avatar Therapy.

  12. "Hello, Mrs. Willman, It's Me!" Keep Kids Reading over the Summer by Using Voice Mail.

    Science.gov (United States)

    Willman, Ann Teresa

    1999-01-01

    Describes a summer reading program for 26 remedial readers in which each student left voice-mail messages on the school district's voice-mail system, either reading to the teacher for three minutes or summarizing a book chapter. Describes the teacher's responses and parental feedback and involvement. Notes that all students maintained their…

  13. Clinical Voices - an update

    DEFF Research Database (Denmark)

    Fusaroli, Riccardo; Weed, Ethan

    Anomalous aspects of speech and voice, including pitch, fluency, and voice quality, are reported to characterise many mental disorders. However, it has proven difficult to quantify and explain this oddness of speech by employing traditional statistical methods. In this talk we will show how the temporal dynamics of the voice in Asperger's patients enable us to automatically reconstruct the diagnosis, and assess the Autism quotient score. We then generalise the findings to Danish and American children with autism.

  14. Effects of Medications on Voice

    Science.gov (United States)

    ... ENT Doctor Near You Effects of Medications on Voice Effects of Medications on Voice Patient Health Information ... entnet.org . Could Your Medication Be Affecting Your Voice? Some medications including prescription, over-the-counter, and ...

  15. Neural Architecture of Auditory Object Categorization

    Directory of Open Access Journals (Sweden)

    Yune-Sang Lee

    2011-10-01

    Full Text Available We can identify objects by sight or by sound, yet far less is known about auditory object recognition than about visual recognition. Any exemplar of a dog (eg, a picture) can be recognized on multiple categorical levels (eg, animal, dog, poodle). Using fMRI combined with machine-learning techniques, we studied these levels of categorization with sounds rather than images. Subjects heard sounds of various animate and inanimate objects, and unrecognizable control sounds. We report four primary findings: (1) some distinct brain regions selectively coded for basic (“dog”) versus superordinate (“animal”) categorization; (2) classification at the basic level entailed more extended cortical networks than those for superordinate categorization; (3) human voices were recognized far better by multiple brain regions than were any other sound categories; (4) regions beyond temporal lobe auditory areas were able to distinguish and categorize auditory objects. We conclude that multiple representations of an object exist at different categorical levels. This neural instantiation of object categories is distributed across multiple brain regions, including so-called “visual association areas,” indicating that these regions support object knowledge even when the input is auditory. Moreover, our findings appear to conflict with prior well-established theories of category-specific modules in the brain.

  16. Older adults' recognition of bodily and auditory expressions of emotion.

    Science.gov (United States)

    Ruffman, Ted; Sullivan, Susan; Dittrich, Winand

    2009-09-01

    This study compared young and older adults' ability to recognize bodily and auditory expressions of emotion and to match bodily and facial expressions to vocal expressions. Using emotion discrimination and matching techniques, participants assessed emotion in voices (Experiment 1), point-light displays (Experiment 2), and still photos of bodies with faces digitally erased (Experiment 3). Older adults were worse, at least some of the time, at recognizing anger, sadness, fear, and happiness in bodily expressions and anger in vocal expressions. Compared with young adults, older adults also found it more difficult to match auditory expressions to facial expressions (5 of 6 emotions) and bodily expressions (3 of 6 emotions).

  17. Superior voice recognition in a patient with acquired prosopagnosia and object agnosia.

    Science.gov (United States)

    Hoover, Adria E N; Démonet, Jean-François; Steeves, Jennifer K E

    2010-11-01

    Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person identity cues such as hair, gait or the voice. Are they therefore superior at the use of non-face cues, specifically voices, to person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is perfectly able to recognize people's voices and car horns and bimodal stimuli. These data show a reverse shift in the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared to the controls revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia it is advantageous to develop a superior use of voices for person identity recognition in everyday life. Copyright © 2010 Elsevier Ltd. All rights reserved.

  18. Modulation of the motor cortex during singing-voice perception.

    Science.gov (United States)

    Lévêque, Yohana; Schön, Daniele

    2015-04-01

    Several studies on action observation have shown that the biological dimension of movement modulates sensorimotor interactions in perception. In the present fMRI study, we tested the hypothesis that the biological dimension of sound modulates the involvement of the motor system in human auditory perception, using musical tasks. We first localized the vocal motor cortex in each participant. Then we compared the BOLD response to vocal, semi-vocal and non-vocal melody perception, and found greater activity for voice perception in the right sensorimotor cortex. We additionally ran a psychophysiological interaction analysis with the right sensorimotor cortex as a seed, showing that the vocal dimension of the stimuli enhanced the connectivity between the seed region and other important nodes of the auditory dorsal stream. Finally, the participants' vocal ability was negatively correlated to the voice effect in the Inferior Parietal Lobule. These results suggest that the biological dimension of singing-voice impacts the activity within the auditory dorsal stream, probably via a facilitated matching between the perceived sound and the participant's motor representations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Voiced Reading and Rhythm

    Institute of Scientific and Technical Information of China (English)

    詹艳萍

    2007-01-01

    Since voiced reading is an important way of learning English, rhythm is the most critical factor in enabling one to read beautifully. This article illustrates the relationship between rhythm and voiced reading, the importance of rhythm, and methods for developing a sense of rhythm.

  1. Borderline Space for Voice

    Science.gov (United States)

    Batchelor, Denise

    2012-01-01

    Being on the borderline as a student in higher education is not always negative, to do with marginalisation, exclusion and having a voice that is vulnerable. Paradoxically, being on the edge also has positive connections with integration, inclusion and having a voice that is strong. Alternative understandings of the concept of borderline space can…

  2. Voice and endocrinology

    Directory of Open Access Journals (Sweden)

    KVS Hari Kumar

    2016-01-01

    Full Text Available Voice is one of the advanced features of natural evolution that differentiates human beings from other primates. The human voice is capable of conveying thoughts into spoken words along with a subtle emotion in the tone. This extraordinary character of the voice in expressing multiple emotions is the gift of God to human beings and helps in effective interpersonal communication. Voice generation involves close interaction between cerebral signals and the peripheral apparatus consisting of the larynx, vocal cords, and trachea. The human voice is susceptible to hormonal changes throughout life, right from puberty until senescence. Thyroid, gonadal and growth hormones have a tremendous impact on the structure and function of the vocal apparatus. Alteration of the voice is observed even in physiological states such as puberty and menstruation. Astute clinical observers make out the changes in the voice and refer the patients for endocrine evaluation. In this review, we shall discuss the hormonal influence on the voice apparatus in normal and endocrine disorders.

  3. Face the voice

    DEFF Research Database (Denmark)

    Lønstrup, Ansa

    2014-01-01

    will be based on a reception aesthetic and phenomenological approach, the latter as presented by Don Ihde in his book Listening and Voice: Phenomenologies of Sound, and my analytical sketches will be related to theoretical statements concerning the understanding of voice and media (Cavarero, Dolar, La...

  4. Voice integrated systems

    Science.gov (United States)

    Curran, P. Mike

    1977-01-01

    The program at the Naval Air Development Center was initiated to determine the desirability of interactive voice systems for use in airborne weapon systems crew stations. A voice recognition and synthesis system (VRAS) was developed and incorporated into a human centrifuge. The speech recognition aspect of VRAS was developed using a voice command system (VCS) developed by Scope Electronics. The speech synthesis capability was supplied by a Votrax VS-5 speech synthesis unit built by Vocal Interface. The effects of simulated flight on automatic speech recognition were determined by repeated trials in the VRAS-equipped centrifuge. The relationship of vibration, G, O2 mask use, mission duration, and cockpit temperature to voice quality was determined. The results showed that: (1) voice quality degrades after 0.5 hours with an O2 mask; (2) voice quality degrades under high vibration; and (3) voice quality degrades under high levels of G. The voice quality studies are summarized. These results were obtained against a baseline of 80 percent recognition accuracy with the VCS.

  5. Ontario's Student Voice Initiative

    Science.gov (United States)

    Courtney, Jean

    2014-01-01

    This article describes in some detail aspects of the Student Voice initiative funded and championed by Ontario's Ministry of Education since 2008. The project enables thousands of students to make their voices heard in meaningful ways and to participate in student-led research. Some students from grades 7 to 12 become members of the Student…

  6. EasyVoice: Integrating voice synthesis with Skype

    CERN Document Server

    Condado, Paulo A

    2007-01-01

    This paper presents EasyVoice, a system that integrates voice synthesis with Skype. EasyVoice allows a person with voice disabilities to talk with another person located anywhere in the world, removing an important obstacle that affects these people during a phone or VoIP-based conversation.

  7. [Use of standard protocols in the evaluation of voice disorders].

    Science.gov (United States)

    Arias, C; Bless, D M; Khidr, A

    1992-01-01

    The purpose of this paper is to present a protocol for the use of standard forms in the evaluation of laryngeal structure and function in patients with voice disorders. The forms are designed to cover all the essential parameters needed to reach an accurate descriptive diagnosis, which allows us to devise an appropriate therapy plan based on detailed observations of the individual. It also gives us a consistent standardized evaluation form to measure changes after therapy, whether behavioral, medical, or surgical, and to compare different observations across patients. Reporting observations in this consistent manner will make characteristic patterns of different vocal behaviors readily obvious to the researcher or the clinician and reduce the possibility of missing any important details. The protocols are: indirect laryngoscopy, video-stroboscopic evaluation form, functional voice evaluation, and auditory perceptual voice evaluation.

  8. Auditory imagery: empirical findings.

    Science.gov (United States)

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear).

  9. Auditory short-term memory activation during score reading

    OpenAIRE

    Simoens, Veerle L; Mari Tervaniemi

    2013-01-01

    Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during ...

  10. Voice-based assessments of trustworthiness, competence, and warmth in blind and sighted adults.

    Science.gov (United States)

    Oleszkiewicz, Anna; Pisanski, Katarzyna; Lachowicz-Tabaczek, Kinga; Sorokowska, Agnieszka

    2017-06-01

    The study of voice perception in congenitally blind individuals allows researchers rare insight into how a lifetime of visual deprivation affects the development of voice perception. Previous studies have suggested that blind adults outperform their sighted counterparts in low-level auditory tasks testing spatial localization and pitch discrimination, as well as in verbal speech processing; however, blind persons generally show no advantage in nonverbal voice recognition or discrimination tasks. The present study is the first to examine whether visual experience influences the development of social stereotypes that are formed on the basis of nonverbal vocal characteristics (i.e., voice pitch). Groups of 27 congenitally or early-blind adults and 23 sighted controls assessed the trustworthiness, competence, and warmth of men and women speaking a series of vowels, whose voice pitches had been experimentally raised or lowered. Blind and sighted listeners judged both men's and women's voices with lowered pitch as being more competent and trustworthy than voices with raised pitch. In contrast, raised-pitch voices were judged as being warmer than were lowered-pitch voices, but only for women's voices. Crucially, blind and sighted persons did not differ in their voice-based assessments of competence or warmth, or in their certainty of these assessments, whereas the association between low pitch and trustworthiness in women's voices was weaker among blind than sighted participants. This latter result suggests that blind persons may rely less heavily on nonverbal cues to trustworthiness compared to sighted persons. Ultimately, our findings suggest that robust perceptual associations that systematically link voice pitch to the social and personal dimensions of a speaker can develop without visual input.

  11. Talker-specific auditory imagery during reading

    Science.gov (United States)

    Nygaard, Lynne C.; Duke, Jessica; Kawar, Kathleen; Queen, Jennifer S.

    2004-05-01

    The present experiment was designed to determine if auditory imagery during reading includes talker-specific characteristics such as speaking rate. Following Kosslyn and Matt (1977), participants were familiarized with two talkers during a brief prerecorded conversation. One talker spoke at a fast speaking rate and one spoke at a slow speaking rate. During familiarization, participants were taught to identify each talker by name. At test, participants were asked to read two passages and told that either the slow or fast talker wrote each passage. In one condition, participants were asked to read each passage aloud, and in a second condition, they were asked to read each passage silently. Participants pressed a key when they had completed reading the passage, and reading times were collected. Reading times were significantly slower when participants thought they were reading a passage written by the slow talker than when reading a passage written by the fast talker. However, the effects of speaking rate were only present in the reading-aloud condition. Additional experiments were conducted to investigate the role of attention to talker's voice during familiarization. These results suggest that readers may engage in auditory imagery while reading that preserves perceptual details of an author's voice.

  12. Effects of auditory disruption of lingual tactile sensitivity in skilled and unskilled speaking conditions.

    Science.gov (United States)

    Krummel, S; Petrosino, L; Fucci, D

    1991-10-01

    The purpose of this study was to investigate the effect of disruption of the auditory feedback channel on lingual vibrotactile thresholds obtained under skilled and unskilled speaking conditions. Each of 22 adults was asked to read an English (skilled) and French (unskilled) passage under conditions of normal and altered auditory feedback. Bilateral presentation of masking noise was utilized to disrupt auditory feedback. Before each of the experimental sessions and immediately following a reading, lingual vibrotactile thresholds were obtained. Analysis indicated that the mean differences in the pre- and postvibrotactile threshold measurements of the skilled auditory disrupted condition varied significantly from the mean differences in the pre- and postvibrotactile threshold measurements of the three other conditions. The role of feedback in the speech production of skilled and unskilled speakers is discussed.

  13. Voice Savers for Music Teachers

    Science.gov (United States)

    Cookman, Starr

    2012-01-01

    Music teachers are in a class all their own when it comes to voice use. These elite vocal athletes require stamina, strength, and flexibility from their voices day in, day out for hours at a time. Voice rehabilitation clinics and research show that music education ranks high among the professionals most commonly affected by voice problems.…

  14. Neural mechanisms for voice recognition

    NARCIS (Netherlands)

    Andics, A.V.; McQueen, J.M.; Petersson, K.M.; Gal, V.; Rudas, G.; Vidnyanszky, Z.

    2010-01-01

    We investigated neural mechanisms that support voice recognition in a training paradigm with fMRI. The same listeners were trained on different weeks to categorize the mid-regions of voice-morph continua as an individual's voice. Stimuli implicitly defined a voice-acoustics space, and training expli

  15. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems.

    Science.gov (United States)

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-03-01

    One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers' visual and manual distractions with 'infotainment' technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual-manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox 'one-shot' voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory-vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers' interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation.

  16. Dissociating the cortical basis of memory for voices, words and tones.

    Science.gov (United States)

    Stevens, Alexander A

    2004-01-01

    Human speech carries both linguistic content and information about the speaker's identity and affect. While neuroimaging has been used extensively to study verbal memory, there has been little attention to the neural basis of memory for voices. Evidence from studies of aphasia and auditory agnosia suggests that voice memory may rely on anatomically distinct areas in the right temporal and parietal lobes, but there is little data on the broader neural systems involved in voice memory. The present study tested the hypothesis that the neural systems involved in voice memory are functionally distinct from the systems involved in word recognition and are primarily located in the right cerebral hemisphere. Subjects performed two-back tasks in which they were required to alternately remember the voices speaking (Voice condition), and the words they produced (Word condition). A tone memory condition was also included, as a non-speech comparison. The contrast between the Voice and Word conditions revealed greater Voice-related effects in left temporal, right frontal and right medial parietal areas, while the Word-related effects appeared in left frontal and bilateral parietal areas. These findings map out a partially right-lateralized fronto-parietal network associated with voice memory, which can be distinguished from predominantly left-hemisphere regions associated with verbal working memory. These results provide further evidence that distinct neural systems are associated with the carrier waves of speech and word identity.

  17. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 ‘training’ steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration, corresponding to training) with all feedback removed. Visual cues yielded a mean percentage error of 11.5% (SD ± 7.0%); auditory cues, 12.9% (SD ± 11.8%). Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, their mean accuracy approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.
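    The auditory cue described above (a 950 Hz tone with a bell-shaped gain envelope) can be sketched in a few lines. The Gaussian envelope shape, the sample rate, and the function name here are assumptions; the abstract only specifies the frequency and the "bell-shaped" envelope:

```python
import math

def auditory_cue(duration_s, sample_rate=44100, freq=950.0):
    """Sketch of the auditory step-duration cue: a 950 Hz tone with a
    bell-shaped gain envelope. A Gaussian envelope is assumed, since
    the study only says 'bell-shaped'."""
    n = int(duration_s * sample_rate)
    mid = duration_s / 2.0
    sigma = duration_s / 6.0  # bell effectively contained within the cue
    samples = []
    for i in range(n):
        t = i / sample_rate
        envelope = math.exp(-0.5 * ((t - mid) / sigma) ** 2)
        samples.append(envelope * math.sin(2 * math.pi * freq * t))
    return samples

cue = auditory_cue(0.6)  # e.g. a 600 ms cue, tapered at both ends
```

    The key design point is that the envelope, not the tone onset/offset, carries the duration percept, which keeps the cue free of clicks at its edges.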

  18. Auditory short-term memory activation during score reading.

    Directory of Open Access Journals (Sweden)

    Veerle L Simoens

    Full Text Available Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as it is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion according to which during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.

  19. Auditory short-term memory activation during score reading.

    Science.gov (United States)

    Simoens, Veerle L; Tervaniemi, Mari

    2013-01-01

    Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as it is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion according to which during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.

  20. Studying auditory verbal hallucinations using the RDoC framework.

    Science.gov (United States)

    Ford, Judith M

    2016-03-01

    In this paper, I explain why I adopted a Research Domain Criteria (RDoC) approach to study the neurobiology of auditory verbal hallucinations (AVH), or voices. I explain that the RDoC construct of "agency" fits well with AVH phenomenology. To the extent that voices sound nonself, voice hearers lack a sense of agency over the voices. Using a vocalization paradigm like those used with nonhuman primates to study mechanisms subserving the sense of agency, we find that the auditory N1 ERP is suppressed during vocalization, that EEG synchrony preceding speech onset is related to N1 suppression, and that both are reduced in patients with schizophrenia. Reduced cortical suppression is also seen across multiple psychotic disorders and in clinically high-risk youth, but it is not related to AVH. The motor activity preceding talking and connectivity between frontal and temporal lobes during talking have both proved sensitive to AVH, suggesting neural activity and connectivity associated with intentions to act may be a better way to study agency and predictions based on agency.

  1. Dominant Voice in Hamlet

    Institute of Scientific and Technical Information of China (English)

    李丹

    2015-01-01

    The Tragedy of Hamlet dramatizes the revenge Prince Hamlet exacts on his uncle Claudius for murdering King Hamlet, Claudius's brother and Prince Hamlet's father, and then succeeding to the throne and taking as his wife Gertrude, the old king's widow and Prince Hamlet's mother. This paper discusses the dominant voice in the play: the major voice in the country, the society, or the whole world. Those people who have the power or

  2. Making social robots more attractive: the effects of voice pitch, humor and empathy

    NARCIS (Netherlands)

    Niculescu, Andreea; van Dijk, Betsy; Nijholt, Anton; Li, Haizhou; See, Swan Lan; Ge, S.S.

    2013-01-01

    In this paper we explore how simple auditory/verbal features of the spoken language, such as voice characteristics (pitch) and language cues (empathy/humor expression) influence the quality of interaction with a social robot receptionist. For our experiment two robot characters were created: Olivia,

  3. PoLAR Voices: Informing Adult Learners about the Science and Story of Climate Change in the Polar Regions Through Audio Podcast

    Science.gov (United States)

    Quinney, A.; Murray, M. S.; Gobroski, K. A.; Topp, R. M.; Pfirman, S. L.

    2015-12-01

    The resurgence of audio programming with the advent of podcasting in the early 2000s spawned a new medium for communicating advances in science, research, and technology. To capitalize on this informal educational outlet, the Arctic Institute of North America partnered with the International Arctic Research Center, the University of Alaska Fairbanks, and the UA Museum of the North to develop a podcast series called PoLAR Voices for the Polar Learning and Responding (PoLAR) Climate Change Education Partnership. PoLAR Voices is a public education initiative that uses creative storytelling and novel narrative structures to immerse the listener in an auditory depiction of climate change. The programs will feature the science and story of climate change, approaching topics from both the points of view of researchers and Arctic indigenous peoples. This approach will engage the listener in the holistic story of climate change, addressing both scientific and personal perspectives, resulting in a program that is at once educational, entertaining and accessible. Feedback is being collected at each stage of development to ensure the content and format of the program satisfies listener interests and preferences. Once complete, the series will be released on thepolarhub.org and on iTunes. Additionally, blanket distribution of the programs will be accomplished via radio broadcast in urban, rural and remote areas, and in multiple languages to increase distribution and enhance accessibility.

  4. Acoustic cues for the recognition of self-voice and other-voice

    Directory of Open Access Journals (Sweden)

    Mingdi Xu

    2013-10-01

    Full Text Available Self-recognition, being indispensable for successful social communication, has become a major focus in current social neuroscience. The physical aspects of the self are most typically manifested in the face and voice. Compared with the wealth of studies on self-face recognition, self-voice recognition (SVR) has not gained much attention. Converging evidence has suggested that the fundamental frequency (F0) and formant structures serve as the key acoustic cues for other-voice recognition (OVR). However, little is known about which, and how, acoustic cues are utilized for SVR as opposed to OVR. To address this question, we independently manipulated the F0 and formant information of recorded voices and investigated their contributions to SVR and OVR. Japanese participants were presented with recorded vocal stimuli and were asked to identify the speaker: either themselves or one of their peers. Six groups of 5 peers of the same sex participated in the study. Under conditions where the formant information was fully preserved and where only the frequencies lower than the third formant (F3) were retained, accuracies of SVR deteriorated significantly with the modulation of the F0, and the results were comparable for OVR. By contrast, under a condition where only the frequencies higher than F3 were retained, the accuracy of SVR was significantly higher than that of OVR throughout the range of F0 modulations, and the F0 scarcely affected the accuracies of SVR and OVR. Our results indicate that while both F0 and formant information are involved in SVR, as well as in OVR, the advantage of SVR is manifested only when major formant information for speech intelligibility is absent. These findings imply the robustness of self-voice representation, possibly by virtue of auditory familiarity and other factors such as its association with motor/articulatory representation.
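    F0 manipulations like those described above are conventionally expressed in cents (100 cents = 1 semitone, 1200 cents = 1 octave). A minimal sketch of the standard conversion follows; the example values are illustrative and not taken from the study:

```python
def shift_f0(f0_hz, cents):
    """Shift a fundamental frequency by a given number of cents
    (100 cents = 1 semitone, 1200 cents = 1 octave)."""
    return f0_hz * 2 ** (cents / 1200.0)

# Illustrative values only: raising/lowering a 120 Hz voice by 50 cents.
raised = shift_f0(120.0, 50)    # about 123.5 Hz
lowered = shift_f0(120.0, -50)  # about 116.6 Hz
```

    Using a logarithmic unit keeps the perceptual size of a shift constant across speakers with different baseline pitches.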

  5. Ethnographic Voice Memo Narratives

    DEFF Research Database (Denmark)

    Rasmussen, Mette Apollo; Conradsen, Maria Bosse

    1800-01-01

    -based technique which actively involves actors in producing ethnography-based data concerning their everyday practice. With the help of smartphone technology it is possible to complement ethnography-based research methods by involving the actors and having them create small voice memo narratives. The voice...... memos create insights into actors' everyday practice, without the direct presence of a researcher, and could be considered a step towards meeting the dilemmas of research in complex fieldwork settings....

  6. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer the ideal auditory community had the shape of a village. Schafer’s work with the World Soundscape Project in the 70s represent an attempt to interpret contemporary environments through musical and auditory...

  7. Role of the auditory system in speech production.

    Science.gov (United States)

    Guenther, Frank H; Hickok, Gregory

    2015-01-01

    This chapter reviews evidence regarding the role of auditory perception in shaping speech output. Evidence indicates that speech movements are planned to follow auditory trajectories. This in turn is followed by a description of the Directions Into Velocities of Articulators (DIVA) model, which provides a detailed account of the role of auditory feedback in speech motor development and control. A brief description of the higher-order brain areas involved in speech sequencing (including the pre-supplementary motor area and inferior frontal sulcus) is then provided, followed by a description of the Hierarchical State Feedback Control (HSFC) model, which posits internal error detection and correction processes that can detect and correct speech production errors prior to articulation. The chapter closes with a treatment of promising future directions of research into auditory-motor interactions in speech, including the use of intracranial recording techniques such as electrocorticography in humans, the investigation of the potential roles of various large-scale brain rhythms in speech perception and production, and the development of brain-computer interfaces that use auditory feedback to allow profoundly paralyzed users to learn to produce speech using a speech synthesizer.

  8. Subjective Loudness and Reality of Auditory Verbal Hallucinations and Activation of the Inner Speech Processing Network

    NARCIS (Netherlands)

    Vercammen, Ans; Knegtering, Henderikus; Bruggeman, Richard; Aleman, Andre

    2011-01-01

    Background: One of the most influential cognitive models of auditory verbal hallucinations (AVH) suggests that a failure to adequately monitor the production of one's own inner speech leads to verbal thought being misidentified as an alien voice. However, it is unclear whether this theory can explai

  9. An EMG Study of the Lip Muscles during Covert Auditory Verbal Hallucinations in Schizophrenia

    Science.gov (United States)

    Rapin, Lucile; Dohen, Marion; Polosan, Mircea; Perrier, Pascal; Loevenbruck, Hélène

    2013-01-01

    Purpose: "Auditory verbal hallucinations" (AVHs) are speech perceptions in the absence of external stimulation. According to an influential theoretical account of AVHs in schizophrenia, a deficit in inner-speech monitoring may cause the patients' verbal thoughts to be perceived as external voices. The account is based on a…

  10. The effect of superior auditory skills on vocal accuracy

    Science.gov (United States)

    Amir, Ofer; Amir, Noam; Kishon-Rabin, Liat

    2003-02-01

    The relationship between auditory perception and vocal production has been typically investigated by evaluating the effect of either altered or degraded auditory feedback on speech production in either normal hearing or hearing-impaired individuals. Our goal in the present study was to examine this relationship in individuals with superior auditory abilities. Thirteen professional musicians and thirteen nonmusicians, with no vocal or singing training, participated in this study. For vocal production accuracy, subjects were presented with three tones. They were asked to reproduce the pitch using the vowel /a/. This procedure was repeated three times. The fundamental frequency of each production was measured using an autocorrelation pitch detection algorithm designed for this study. The musicians' superior auditory abilities (compared to the nonmusicians) were established in a frequency discrimination task reported elsewhere. Results indicate that (a) musicians had better vocal production accuracy than nonmusicians (production errors of 1/2 a semitone compared to 1.3 semitones, respectively); (b) frequency discrimination thresholds explain 43% of the variance of the production data, and (c) all subjects with superior frequency discrimination thresholds showed accurate vocal production; the reverse relationship, however, does not hold true. In this study we provide empirical evidence of the importance of auditory feedback for vocal production in listeners with superior auditory skills.
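    The abstract mentions an autocorrelation pitch detection algorithm and reports production errors in semitones. A minimal sketch of both ideas follows; the study's own detector is not described, so the search range, scoring, and function names here are assumptions:

```python
import math

def estimate_f0(samples, sample_rate, fmin=75.0, fmax=500.0):
    """Naive autocorrelation pitch estimate: pick the lag with the
    highest autocorrelation within a plausible voice range. A sketch
    only; production detectors add normalization and interpolation."""
    lo = int(sample_rate / fmax)
    hi = int(sample_rate / fmin)
    best_lag, best_corr = lo, float("-inf")
    for lag in range(lo, min(hi, len(samples) - 1)):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sample_rate / best_lag

def semitone_error(produced_hz, target_hz):
    """Production error in semitones, the unit used in the abstract."""
    return 12.0 * math.log2(produced_hz / target_hz)
```

    On a clean synthetic tone this recovers the pitch to within a fraction of a semitone; real voice signals would need windowing and voicing detection on top.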

  11. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    Science.gov (United States)

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line

  12. Presentation of dynamically overlapping auditory messages in user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Papp, III, Albert Louis [Univ. of California, Davis, CA (United States)

    1997-09-01

    This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on the research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made from all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between over-lapping sounds. Low scores indicate fewer and less serious conflicts. 
A user study was conducted to validate that the perceptual difficulties predicted by
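The penalty-scoring idea described in this abstract can be sketched in a few lines: candidate schedules of overlapping messages are each assigned a conflict score, and the least-conflicting schedule is selected. The penalty weights and the pitch-band masking model below are illustrative assumptions, not the dissertation's actual heuristics.

```python
import itertools

def schedule_penalty(schedule, duration=1.0):
    """Score a candidate schedule of (start_time, pitch_band) messages.
    Overlapping messages in the same band are assumed to mask each other
    heavily; overlaps across different bands conflict less."""
    penalty = 0.0
    for (s1, b1), (s2, b2) in itertools.combinations(schedule, 2):
        overlap = max(0.0, min(s1, s2) + duration - max(s1, s2))
        if overlap > 0:
            penalty += (10.0 if b1 == b2 else 2.0) * overlap
    return penalty

def best_schedule(candidates):
    """Select the presentation with the lowest perceptual-conflict score."""
    return min(candidates, key=schedule_penalty)

serial = [(0.0, "low"), (1.0, "low")]      # messages played back to back
clashing = [(0.0, "low"), (0.2, "low")]    # same band, heavy overlap
layered = [(0.0, "low"), (0.2, "high")]    # overlap, but separate bands

assert schedule_penalty(serial) == 0.0
assert schedule_penalty(clashing) > schedule_penalty(layered)
assert best_schedule([serial, clashing, layered]) == serial
```

As in the dissertation's description, low scores indicate fewer and less serious conflicts, so minimizing the penalty picks the schedule in which each message stays perceptually distinct.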

  13. On the definition and interpretation of voice selective activation in the temporal cortex

    Directory of Open Access Journals (Sweden)

    Anja eBethmann

    2014-07-01

    Full Text Available Regions along the superior temporal sulci and in the anterior temporal lobes have been found to be involved in voice processing. It has even been argued that parts of the temporal cortices serve as voice-selective areas. Yet, evidence for voice-selective activation in the strict sense is still missing. The current fMRI study aimed at assessing the degree of voice-specific processing in different parts of the superior and middle temporal cortices. To this end, voices of famous persons were contrasted with widely different categories, namely sounds of animals and musical instruments. The reasoning was that only brain regions with statistically proven absence of activation by the control stimuli may be considered as candidates for voice-selective areas. Neural activity was found to be stronger in response to human voices in all analyzed parts of the temporal lobes except for the middle and posterior STG. More importantly, the activation differences between voices and the other environmental sounds increased continuously from the mid-posterior STG to the anterior MTG. Here, only voices, but not the control stimuli, elicited an increase in the BOLD response above a resting baseline level. The findings are discussed with reference to the function of the anterior temporal lobes in person recognition and the general question of how to define selectivity of brain regions for a specific class of stimuli or tasks. In addition, our results corroborate recent assumptions about the hierarchical organization of auditory processing, building on a processing stream from the primary auditory cortices to anterior portions of the temporal lobes.

  14. FLOWER VOICE: VIRTUAL ASSISTANT FOR OPEN DATA

    Directory of Open Access Journals (Sweden)

    Takahiro Kawamura

    2013-05-01

    Full Text Available Open Data is now attracting attention for innovative service creation, mainly in the areas of government, bioscience, and smart X projects. However, to promote its application further in consumer services, a search engine for Open Data that shows what kinds of data exist would be of help. This paper presents a voice assistant which uses Open Data as its knowledge source. It is characterized by improvement of accuracy according to user feedback, and by acquisition of unregistered data through user participation. We also show an application to support field work and confirm its effectiveness.

  15. Tuning up the developing auditory CNS.

    Science.gov (United States)

    Sanes, Dan H; Bao, Shaowen

    2009-04-01

    Although the auditory system has limited information processing resources, the acoustic environment is infinitely variable. To properly encode the natural environment, the developing central auditory system becomes somewhat specialized through experience-dependent adaptive mechanisms that operate during a sensitive time window. Recent studies have demonstrated that cellular and synaptic plasticity occurs throughout the central auditory pathway. Acoustic-rearing experiments can lead to an over-representation of the exposed sound frequency, and this is associated with specific changes in frequency discrimination. These forms of cellular plasticity are manifest in brain regions, such as midbrain and cortex, which interact through feed-forward and feedback pathways. Hearing loss leads to a profound re-weighting of excitatory and inhibitory synaptic gain throughout the auditory CNS, and this is associated with an over-excitability that is observed in vivo. Further behavioral and computational analyses may provide insights into how these cellular and systems-level plasticity effects underlie the development of cognitive functions such as speech perception.

  16. Changes in brain activity following intensive voice treatment in children with cerebral palsy.

    Science.gov (United States)

    Bakhtiari, Reyhaneh; Cummine, Jacqueline; Reed, Alesha; Fox, Cynthia M; Chouinard, Brea; Cribben, Ivor; Boliek, Carol A

    2017-09-01

    Eight children (3 females; 8-16 years) with motor speech disorders secondary to cerebral palsy underwent 4 weeks of an intensive neuroplasticity-principled voice treatment protocol, LSVT LOUD®, followed by a structured 12-week maintenance program. Children were asked to overtly produce phonation (ah) at conversational loudness, cued phonation at perceived twice-conversational loudness, a series of single words, and a prosodic imitation task while being scanned using fMRI, immediately pre- and post-treatment and 12 weeks following a maintenance program. Eight age- and sex-matched controls were scanned at each of the same three time points. Based on the speech and language literature, 16 bilateral regions of interest were selected a priori to detect potential neural changes following treatment. Reduced neural activity in the motor areas (decreased motor system effort) before and immediately after treatment, and increased activity in the anterior cingulate gyrus after treatment (increased contribution of decision-making processes) were observed in the group with cerebral palsy compared to the control group. Using graphical models, post-treatment changes in connectivity were observed between the left supramarginal gyrus and the right supramarginal gyrus and the left precentral gyrus for the children with cerebral palsy, suggesting that LSVT LOUD enhanced contributions of the feedback system in the speech production network rather than a high reliance on the feedforward control system and the somatosensory target map for regulating vocal effort. Network pruning indicates greater processing efficiency and the recruitment of the auditory and somatosensory feedback control systems following intensive treatment. Hum Brain Mapp 38:4413-4429, 2017. © 2017 Wiley Periodicals, Inc.

  17. Formativ Feedback

    DEFF Research Database (Denmark)

    Hyldahl, Kirsten Kofod

    This book examines how teachers can use feedback to improve classroom teaching. In this context John Hattie, professor at the University of Melbourne, has developed a model of feedback based on syntheses of meta-analyses. In 2009 he published the book "Visible...

  18. Voice Matching Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Abhishek Bal

    2014-03-01

    Full Text Available In this paper, the use of the Genetic Algorithm (GA) for voice recognition is described. The practical application of the GA to the solution of engineering problems is a rapidly emerging approach in the fields of control engineering and signal processing. Genetic algorithms are useful for searching large or poorly defined spaces in a multi-directional way. Voice is a signal carrying a wealth of information. Digital processing of the voice signal is very important for automatic voice recognition technology. Nowadays, voice processing is particularly important in security mechanisms because voices can be mimicked, so studying voice feature extraction is necessary in military, hospital, telephone-system, and investigative applications. In order to extract valuable information from the voice signal, make decisions on the process, and obtain results, the data need to be manipulated and analyzed. In this paper, if the instant voice does not match the same person's reference voices in the database, the GA is applied between two randomly chosen reference voices. The instant voice is then compared with the result of the GA, which comprises three main steps: selection, crossover and mutation. We illustrate our approach with different voice samples from people in our institution.
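The GA loop the abstract describes (selection, crossover, and mutation applied between two reference voices) can be sketched as follows. The feature representation, similarity measure, and GA parameters here are illustrative assumptions; the two references are seeded into the initial population so that, with elitist selection, the evolved match can only improve on them.

```python
import random

random.seed(7)

def similarity(a, b):
    """Inverse Euclidean distance between two voice feature vectors
    (e.g. averaged spectral features); higher means a closer match."""
    return 1.0 / (1.0 + sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5)

def crossover(p1, p2):
    """Single-point crossover of two feature vectors."""
    point = random.randrange(1, len(p1))
    return p1[:point] + p2[point:]

def mutate(v, rate=0.1, scale=0.05):
    """Perturb each feature with probability `rate`."""
    return [x + random.gauss(0, scale) if random.random() < rate else x
            for x in v]

def ga_match(instant, ref_a, ref_b, generations=50, pop_size=20):
    """Evolve blends of two reference voices toward the instant voice."""
    fitness = lambda v: similarity(instant, v)
    population = [list(ref_a), list(ref_b)] + [
        mutate(crossover(ref_a, ref_b), rate=1.0)
        for _ in range(pop_size - 2)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]      # selection keeps the fittest half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children           # elitism: the best always survives
    return max(population, key=fitness)

instant = [0.9, 0.1, 0.5, 0.3]                    # hypothetical instant-voice features
ref_a = [1.0, 0.0, 0.2, 0.2]                      # hypothetical reference voices
ref_b = [0.5, 0.4, 0.9, 0.6]
best = ga_match(instant, ref_a, ref_b)
assert similarity(instant, best) >= similarity(instant, ref_a)
assert similarity(instant, best) >= similarity(instant, ref_b)
```

Because the unmutated references survive elitist selection unless something fitter displaces them, the returned blend is guaranteed to match the instant voice at least as well as either reference alone.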

  19. Auditory stimuli mimicking ambient sounds drive temporal "delta-brushes" in premature infants.

    Directory of Open Access Journals (Sweden)

    Mathilde Chipaux

    Full Text Available In the premature infant, somatosensory and visual stimuli trigger an immature electroencephalographic (EEG) pattern, "delta-brushes," in the corresponding sensory cortical areas. Whether auditory stimuli evoke delta-brushes in the premature auditory cortex has not been reported. Here, responses to auditory stimuli were studied in 46 premature infants without neurologic risk, aged 31 to 38 postmenstrual weeks (PMW), during routine EEG recording. Stimuli consisted of either low-volume technogenic "clicks" near the background noise level of the neonatal care unit, or a human voice at conversational sound level. Stimuli were administered pseudo-randomly during quiet and active sleep. In another protocol, the cortical response to a composite stimulus ("click" and voice) was manually triggered during EEG hypoactive periods of quiet sleep. Cortical responses were analyzed by event detection, power frequency analysis and stimulus-locked averaging. Before 34 PMW, both voice and "click" stimuli evoked cortical responses with similar frequency-power topographic characteristics, namely a temporal negative slow-wave and rapid oscillations similar to spontaneous delta-brushes. Responses to composite stimuli also showed a maximal frequency-power increase in temporal areas before 35 PMW. From 34 PMW the topography of responses in quiet sleep was different for "click" and voice stimuli: responses to "clicks" became diffuse but responses to voice remained limited to temporal areas. After the age of 35 PMW auditory evoked delta-brushes progressively disappeared and were replaced by a low amplitude response in the same location. Our data show that auditory stimuli mimicking ambient sounds efficiently evoke delta-brushes in temporal areas in the premature infant before 35 PMW. Along with findings in other sensory modalities (visual and somatosensory), these findings suggest that sensory-driven delta-brushes represent a ubiquitous feature of the human sensory cortex.
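Stimulus-locked averaging, one of the analysis steps mentioned above, can be sketched with synthetic data: epochs are cut around each stimulus marker, baseline-corrected, and averaged so that phase-locked responses survive while unrelated background EEG cancels out. The sampling rate, epoch window, and evoked waveform below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                    # sampling rate in Hz, illustrative
pre, post = 50, 200         # samples before/after each stimulus marker

# Synthetic single-channel EEG: background noise plus a fixed slow
# negative deflection after each stimulus onset.
n_samples = 60_000
eeg = rng.normal(0.0, 5.0, n_samples)
onsets = np.arange(1_000, 55_000, 1_000)
evoked = -8.0 * np.hanning(100)          # illustrative evoked slow wave
for t in onsets:
    eeg[t:t + 100] += evoked

def stimulus_locked_average(signal, markers, pre, post):
    """Cut an epoch around each marker, subtract the pre-stimulus
    baseline, and average across epochs."""
    epochs = []
    for t in markers:
        epoch = signal[t - pre:t + post].copy()
        epoch -= epoch[:pre].mean()      # baseline correction
        epochs.append(epoch)
    return np.mean(epochs, axis=0)

erp = stimulus_locked_average(eeg, onsets, pre, post)
assert erp.shape == (pre + post,)
assert erp[pre:pre + 100].min() < -4.0   # the evoked negativity survives averaging
```

Averaging over the 54 epochs shrinks the noise by roughly the square root of the epoch count, which is why the negative slow wave stands out in the averaged trace even though it is buried in single trials.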

  20. Using Voice Boards: pedagogical design, technological implementation, evaluation and reflections

    Directory of Open Access Journals (Sweden)

    Elisabeth Yaneske

    2010-12-01

    Full Text Available We present a case study to evaluate the use of a Wimba Voice Board to support asynchronous audio discussion. We discuss the learning strategy and pedagogic rationale when a Voice Board was implemented within an MA module for language learners, enabling students to create learning objects and facilitating peer-to-peer learning. Previously, students studying the module had communicated using text-based synchronous and asynchronous discussion only. A common criticism of text-based media is the lack of non-verbal communication. Audio communication is a richer medium where use of pitch, tone, emphasis and inflection can increase personalisation and prevent misinterpretation. Feedback from staff and students on the affordances and constraints of voice communication is presented. Evaluations show that while there were several issues with the usability of the Wimba Voice Board, both staff and students felt the use of voice communication in an online environment had many advantages, including increased personalisation, motivation, and the opportunity to practice speaking and listening skills. However, some students were inhibited by feelings of embarrassment. The case study provides an in-depth study of Voice Boards, which makes an important contribution to the learning technology literature.

  1. Translation and adaptation of functional auditory performance indicators (FAPI)

    Directory of Open Access Journals (Sweden)

    Karina Ferreira

    2011-12-01

    Full Text Available Work with deaf children has gained new attention since the expectation and goal of therapy has expanded to language development and subsequent language learning. Many clinical tests were developed for evaluation of speech sound perception in young children in response to the need for accurate assessment of hearing skills that developed from the use of individual hearing aids or cochlear implants. These tests also allow the evaluation of the rehabilitation program. However, few of these tests are available in Portuguese. Evaluation with the Functional Auditory Performance Indicators (FAPI) generates a child's functional auditory skills profile, which lists auditory skills in an integrated and hierarchical order. It has seven hierarchical categories, including sound awareness, meaningful sound, auditory feedback, sound source localizing, auditory discrimination, short-term auditory memory, and linguistic auditory processing. FAPI evaluation allows the therapist to map the child's hearing profile performance, determine the target for increasing the hearing abilities, and develop an effective therapeutic plan. Objective: Since the FAPI is an American test, the inventory was adapted for application in the Brazilian population. Material and Methods: The translation was done following the steps of translation and back translation, and reproducibility was evaluated. Four translated versions (two original and two back-translated) were compared, and revisions were done to ensure language adaptation and grammatical and idiomatic equivalence. Results: The inventory was duly translated and adapted. Conclusion: Further studies about the application of the translated FAPI are necessary to make the test practicable in Brazilian clinical use.

  2. Contextual modulation of primary visual cortex by auditory signals.

    Science.gov (United States)

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'.

  3. Contextual modulation of primary visual cortex by auditory signals

    Science.gov (United States)

    Paton, A. T.

    2017-01-01

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044015

  4. The plastic ear and perceptual relearning in auditory spatial perception.

    Science.gov (United States)

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues that result in significant degradation in localization performance. Following chronic exposure (10-60 days) performance recovers to some extent and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question as to the teacher signal for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prosthesis.

  5. The plastic ear and perceptual relearning in auditory spatial perception.

    Directory of Open Access Journals (Sweden)

    Simon eCarlile

    2014-08-01

    Full Text Available The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear moulds to modify the spectral cues that result in significant degradation in localization performance. Following chronic exposure (10-60 days) performance recovers to some extent and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question as to the teacher signal for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localisation, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear moulds or through virtual auditory space stimulation using non-individualised spectral cues. The work with ear moulds demonstrates that a relatively short period of training involving sensory-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide a spatial code but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prosthesis.

  6. Voice Therapy Practices and Techniques: A Survey of Voice Clinicians.

    Science.gov (United States)

    Mueller, Peter B.; Larson, George W.

    1992-01-01

    Eighty-three voice disorder therapists' ratings of statements regarding voice therapy practices indicated that vocal nodules are the most frequent disorder treated; vocal abuse and hard glottal attack elimination, counseling, and relaxation were preferred treatment approaches; and voice therapy is more effective with adults than with children.…

  7. Voice in early glottic cancer compared to benign voice pathology

    NARCIS (Netherlands)

    Van Gogh, C. D. L.; Mahieu, H. F.; Kuik, D. J.; Rinkel, R. N. P. M.; Langendijk, J. A.; Verdonck-de Leeuw, I. M.

    2007-01-01

    The purpose of this study is to compare (Dutch) Voice Handicap Index (VHIvumc) scores from a selected group of patients with voice problems after treatment for early glottic cancer with patients with benign voice disorders and subjects from the normal population. The study included a group of 35 pat

  8. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  9. The inner voice

    Directory of Open Access Journals (Sweden)

    Anthony James Ridgway

    2009-12-01

    Full Text Available The inner voice: we all know what it is because we all have it and use it when we are thinking or reading, for example. Little work has been done on it in our field, with the notable exception of Brian Tomlinson, but presumably it must be a cognitive phenomenon which is of great importance in thinking, language learning, and reading in a foreign language. The inner voice will be discussed as a cognitive psychological phenomenon associated with short-term memory, and distinguished from the inner ear. The process of speech recoding (the process of converting written language into the inner voice) will be examined, and the importance of developing the inner voice, as a means of both facilitating the production of a new language and enhancing the comprehension of a text in a foreign language, will be emphasized. Finally, ways of developing the inner voice in beginning and intermediate readers of a foreign language will be explored and recommended.

  10. Smartphone App for Voice Disorders

    Science.gov (United States)

    Past Issues / Fall 2013 ... developed a mobile monitoring device that relies on smartphone technology to gather a week's worth of talking, ...

  11. Neural basis of the time window for subjective motor-auditory integration

    Directory of Open Access Journals (Sweden)

    Koichi eToida

    2016-01-01

    Full Text Available Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and an N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms) and hence with the reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally-deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components.

  12. Relationship between neuroticism, childhood trauma and cognitive-affective responses to auditory verbal hallucinations

    Science.gov (United States)

    So, Suzanne Ho-wai; Begemann, Marieke J. H.; Gong, Xianmin; Sommer, Iris E.

    2016-01-01

    Neuroticism has been shown to adversely influence the development and outcome of psychosis. However, how this personality trait associates with the individual’s responses to psychotic symptoms is less well known. Auditory verbal hallucinations (AVHs) have been reported by patients with psychosis and non-clinical individuals. There is evidence that voice-hearers who are more distressed by and resistant against the voices, as well as those who appraise the voices as malevolent and powerful, have poorer outcome. This study aimed to examine the mechanistic association of neuroticism with the cognitive-affective reactions to AVH. We assessed 40 psychotic patients experiencing frequent AVHs, 135 non-clinical participants experiencing frequent AVHs, and 126 healthy individuals. In both clinical and non-clinical voice-hearers alike, a higher level of neuroticism was associated with more distress and behavioral resistance in response to AVHs, as well as a stronger tendency to perceive voices as malevolent and powerful. Neuroticism fully mediated the found associations between childhood trauma and the individuals’ cognitive-affective reactions to voices. Our results supported the role of neurotic personality in shaping maladaptive reactions to voices. Neuroticism may also serve as a putative mechanism linking childhood trauma and psychological reactions to voices. Implications for psychological models of hallucinations are discussed. PMID:27698407

  13. Relationship between neuroticism, childhood trauma and cognitive-affective responses to auditory verbal hallucinations.

    Science.gov (United States)

    So, Suzanne Ho-Wai; Begemann, Marieke J H; Gong, Xianmin; Sommer, Iris E

    2016-10-04

    Neuroticism has been shown to adversely influence the development and outcome of psychosis. However, how this personality trait associates with the individual's responses to psychotic symptoms is less well known. Auditory verbal hallucinations (AVHs) have been reported by patients with psychosis and non-clinical individuals. There is evidence that voice-hearers who are more distressed by and resistant against the voices, as well as those who appraise the voices as malevolent and powerful, have poorer outcome. This study aimed to examine the mechanistic association of neuroticism with the cognitive-affective reactions to AVH. We assessed 40 psychotic patients experiencing frequent AVHs, 135 non-clinical participants experiencing frequent AVHs, and 126 healthy individuals. In both clinical and non-clinical voice-hearers alike, a higher level of neuroticism was associated with more distress and behavioral resistance in response to AVHs, as well as a stronger tendency to perceive voices as malevolent and powerful. Neuroticism fully mediated the found associations between childhood trauma and the individuals' cognitive-affective reactions to voices. Our results supported the role of neurotic personality in shaping maladaptive reactions to voices. Neuroticism may also serve as a putative mechanism linking childhood trauma and psychological reactions to voices. Implications for psychological models of hallucinations are discussed.

  14. Auditory Responses of Infants

    Science.gov (United States)

    Watrous, Betty Springer; And Others

    1975-01-01

    Forty infants, 3- to 12-months-old, participated in a study designed to differentiate the auditory response characteristics of normally developing infants in the age ranges 3 - 5 months, 6 - 8 months, and 9 - 12 months. (Author)

  15. Multimodal processing of emotional information in 9-month-old infants I: emotional faces and voices.

    Science.gov (United States)

    Otte, R A; Donkers, F C L; Braeken, M A K A; Van den Bergh, B R H

    2015-04-01

    Making sense of emotions manifesting in human voice is an important social skill which is influenced by emotions in other modalities, such as that of the corresponding face. Although processing emotional information from voices and faces simultaneously has been studied in adults, little is known about the neural mechanisms underlying the development of this ability in infancy. Here we investigated multimodal processing of fearful and happy face/voice pairs using event-related potential (ERP) measures in a group of 84 9-month-olds. Infants were presented with emotional vocalisations (fearful/happy) preceded by the same or a different facial expression (fearful/happy). The ERP data revealed that the processing of emotional information appearing in human voice was modulated by the emotional expression appearing on the corresponding face: Infants responded with larger auditory ERPs after fearful compared to happy facial primes. This finding suggests that infants dedicate more processing capacities to potentially threatening than to non-threatening stimuli.

  16. Representation of speech in human auditory cortex: is it special?

    Science.gov (United States)

    Steinschneider, Mitchell; Nourski, Kirill V; Fishman, Yonatan I

    2013-11-01

    Successful categorization of phonemes in speech requires that the brain analyze the acoustic signal along both spectral and temporal dimensions. Neural encoding of the stimulus amplitude envelope is critical for parsing the speech stream into syllabic units. Encoding of voice onset time (VOT) and place of articulation (POA), cues necessary for determining phonemic identity, occurs within shorter time frames. An unresolved question is whether the neural representation of speech is based on processing mechanisms that are unique to humans and shaped by learning and experience, or is based on rules governing general auditory processing that are also present in non-human animals. This question was examined by comparing the neural activity elicited by speech and other complex vocalizations in primary auditory cortex of macaques, who are limited vocal learners, with that in Heschl's gyrus, the putative location of primary auditory cortex in humans. Entrainment to the amplitude envelope is neither specific to humans nor to human speech. VOT is represented by responses time-locked to consonant release and voicing onset in both humans and monkeys. Temporal representation of VOT is observed both for isolated syllables and for syllables embedded in the more naturalistic context of running speech. The fundamental frequency of male speakers is represented by more rapid neural activity phase-locked to the glottal pulsation rate in both humans and monkeys. In both species, the differential representation of stop consonants varying in their POA can be predicted by the relationship between the frequency selectivity of neurons and the onset spectra of the speech sounds. These findings indicate that the neurophysiology of primary auditory cortex is similar in monkeys and humans despite their vastly different experience with human speech, and that Heschl's gyrus is engaged in general auditory, and not language-specific, processing. This article is part of a Special Issue.

  17. Temporal sequence of visuo-auditory interaction in multiple areas of the guinea pig visual cortex.

    Directory of Open Access Journals (Sweden)

    Masataka Nishimura

    Full Text Available Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences can be generated in V1, either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical responses to visual and/or acoustic stimulation were recorded using optical imaging at high spatiotemporal resolution from multiple areas of the guinea pig visual cortex. Visuo-auditory interactions were evaluated as the difference between the response evoked by combined auditory and visual stimulation and the sum of the responses evoked by separate visual and auditory stimulation. Simultaneous presentation of visual and acoustic stimulation resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction.

  18. Sustainable Consumer Voices

    DEFF Research Database (Denmark)

    Klitmøller, Anders; Rask, Morten; Jensen, Nevena

    2011-01-01

    Aiming to explore how user driven innovation can inform high level design strategies, an in-depth empirical study was carried out, based on data from 50 observations of private vehicle users. This paper reports the resulting 5 consumer voices: Technology Enthusiast, Environmentalist, Design Lover, Pragmatist and Status Seeker. Expedient use of the voices in creating design strategies is discussed, thus contributing directly to the practice of high level design managers. The main academic contribution of this paper is demonstrating how applied anthropology can be used to generate insights into disruptive emergence of product service systems, where quantitative user analyses rely on historical continuation.

  19. Duration reproduction with sensory feedback delay: Differential involvement of perception and action time

    Directory of Open Access Journals (Sweden)

    Stephanie eGanzenmüller

    2012-10-01

    Full Text Available Previous research has shown that voluntary action can attract subsequent, delayed feedback events towards the action, and adaptation to the sensorimotor delay can even reverse motor-sensory temporal-order judgments. However, whether and how sensorimotor delay affects duration reproduction is still unclear. To investigate this, we injected an onset- or offset-delay into the sensory feedback signal of a duration reproduction task. We compared duration reproductions within modalities (visual, auditory) and across audiovisual modalities under feedback-signal onset- and offset-delay manipulations. We found that the reproduced duration was lengthened in both visual and auditory feedback-signal onset-delay conditions. The lengthening effect was evident immediately, on the first trial with the onset delay. However, when the onset of the feedback signal was prior to the action, the lengthening effect was diminished. In contrast, a shortening effect was found with feedback-signal offset-delay, though the effect was weaker and manifested only in the auditory offset-delay condition. These findings indicate that participants tend to conflate the onset of the action and of the feedback signal when the feedback is delayed, and that they rely heavily on motor-stop signals for duration reproduction. Furthermore, auditory duration was overestimated compared to visual duration in crossmodal feedback conditions, and the overestimation of auditory duration (or the underestimation of visual duration) was independent of the delay manipulation.

  20. VoiceRelay: voice key operation using visual basic.

    Science.gov (United States)

    Abrams, Lise; Jennings, David T

    2004-11-01

    Using a voice key is a popular method for recording vocal response times in a variety of language production tasks. This article describes a class module called VoiceRelay that can be easily utilized in Visual Basic programs for voice key operation. This software-based voice key offers the precision of traditional voice keys (although accuracy is system dependent), as well as the flexibility of volume and sensitivity control. However, VoiceRelay is a considerably less expensive alternative for recording vocal response times because it operates with existing PC hardware and does not require the purchase of external response boxes or additional experiment-generation software. A sample project demonstrating implementation of the VoiceRelay class module may be downloaded from the Psychonomic Society Web archive, www.psychonomic.org/archive.
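    A software voice key of this kind is, at its core, an amplitude-threshold onset detector applied to the audio buffer. The Python sketch below illustrates the general idea only; it is not the VoiceRelay class itself (which is written in Visual Basic), and the function name, threshold value, and debouncing rule are illustrative assumptions:

```python
def voice_key(samples, sample_rate, threshold=0.1, min_run=5):
    """Return the vocal onset time in ms: the first point where
    min_run consecutive samples exceed the amplitude threshold
    (the run requirement keeps brief clicks from triggering)."""
    run = 0
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            run += 1
            if run == min_run:
                onset = i - min_run + 1
                return 1000.0 * onset / sample_rate
        else:
            run = 0
    return None  # no response detected

# Silence for 100 ms at 44.1 kHz, then a voiced burst:
rt = voice_key([0.0] * 4410 + [0.5] * 200, 44100)
print(rt)  # -> 100.0
```

    Raising `threshold` or `min_run` plays the role of the sensitivity control mentioned above; in a real experiment the clock would start at stimulus onset and the samples would stream from the sound card.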

  1. Voice application development for Android

    CERN Document Server

    McTear, Michael

    2013-01-01

    This book will give beginners an introduction to building voice-based applications on Android. It will begin by covering the basic concepts and will build up to creating a voice-based personal assistant. By the end of this book, you should be in a position to create your own voice-based applications on Android from scratch in next to no time. Voice Application Development for Android is for all those who are interested in speech technology and for those who, as owners of Android devices, are keen to experiment with developing voice apps for their devices. It will also be useful as a starting point.

  2. The maximum intelligible range of the human voice

    Science.gov (United States)

    Boren, Braxton

    This dissertation examines the acoustics of the spoken voice at high levels and the maximum number of people that could hear such a voice unamplified in the open air. In particular, it examines an early auditory experiment by Benjamin Franklin which sought to determine the maximum intelligible crowd for the Anglican preacher George Whitefield in the eighteenth century. Using Franklin's description of the experiment and a noise source on Front Street, the geometry and diffraction effects of such a noise source are examined to more precisely pinpoint Franklin's position when Whitefield's voice ceased to be intelligible. Based on historical maps, drawings, and prints, the geometry and material of Market Street is constructed as a computer model which is then used to construct an acoustic cone tracing model. Based on minimal values of the Speech Transmission Index (STI) at Franklin's position, Whitefield's on-axis Sound Pressure Level (SPL) at 1 m is determined, leading to estimates centering around 90 dBA. Recordings are carried out on trained actors and singers to determine their maximum time-averaged SPL at 1 m. This suggests that the greatest average SPL achievable by the human voice is 90-91 dBA, similar to the median estimates for Whitefield's voice. The sites of Whitefield's largest crowds are acoustically modeled based on historical evidence and maps. Based on Whitefield's SPL, the minimal STI value, and the crowd's background noise, this allows a prediction of the minimally intelligible area for each site. These yield maximum crowd estimates of 50,000 under ideal conditions, while crowds of 20,000 to 30,000 seem more reasonable when the crowd was reasonably quiet and Whitefield's voice was near 90 dBA.
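    The acoustic reasoning above can be sketched numerically. Assuming simple free-field spherical spreading (6 dB loss per doubling of distance) and ignoring directivity, air absorption, and reflections, one can estimate the distance at which a 90 dBA voice falls to the level of the crowd noise, and the audience that fits in the resulting area. The 0 dB SNR criterion, the 45 dBA noise floor, and the one-listener-per-square-metre density are illustrative assumptions, not Boren's actual model:

```python
import math

def spl_at_distance(spl_1m, r_m):
    # Free-field spherical spreading: level drops 20*log10(r) dB re 1 m.
    return spl_1m - 20 * math.log10(r_m)

def intelligible_radius(spl_1m, noise_dba, required_snr_db=0.0):
    # Largest distance at which speech still exceeds noise by required_snr_db.
    return 10 ** ((spl_1m - noise_dba - required_snr_db) / 20)

r = intelligible_radius(90, 45)  # quiet 45 dBA crowd
area = math.pi * r ** 2 / 2      # semicircular audience in front of the speaker
crowd = int(area * 1.0)          # ~1 listener per square metre
# r comes to about 178 m and crowd to roughly 50,000 under these assumptions,
# in line with the dissertation's upper estimate for ideal conditions.
```

    With a noisier crowd (say 55 dBA) the same sketch gives a radius of about 56 m and only a few thousand listeners, which illustrates why the estimates above are so sensitive to the background-noise assumption.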

  4. Voices of courage

    Directory of Open Access Journals (Sweden)

    Noraida Abdullah Karim

    2007-07-01

    Full Text Available In May 2007 the Women's Commission for Refugee Women and Children presented its annual Voices of Courage awards to three displaced people who have dedicated their lives to promoting economic opportunities for refugee and displaced women and youth. These are their (edited) testimonies.

  5. Listen to a voice

    DEFF Research Database (Denmark)

    Hølge-Hazelton, Bibi

    2001-01-01

    Listen to the voice of a young girl, Lonnie, who was diagnosed with Type 1 diabetes at 16. Imagine that she is deeply involved in the social security system. She lives with her mother and two siblings in a working-class part of a small town. She is at a special school for problematic youth.

  6. Political animal voices

    NARCIS (Netherlands)

    Meijer, E.R.

    2017-01-01

    In this thesis, I develop a theory of political animal voices. The first part of the thesis focuses on non-human animal languages and forming interspecies worlds. I first investigate the relation between viewing language as exclusively human and seeing humans as categorically different from other

  7. Finding a Voice

    Science.gov (United States)

    Stuart, Shannon

    2012-01-01

    Schools have struggled for decades to provide expensive augmentative and alternative communication (AAC) resources for autistic students with communication challenges. Clunky voice output devices, often included in students' individualized education plans, cost about $8,000, a difficult expense to cover in hard times. However, mobile technology is…

  8. the Voice of Tomorrow

    Institute of Scientific and Technical Information of China (English)

    AlanBurdick

    2003-01-01

    Have you heard Mike? Could be. Mike is a professional reader, and he's everywhere these days. On MapQuest, the Web-based map service, he'll read aloud whatever directions you ask for. If you like to have AOL or Yahoo! e-mail read aloud to you over the phone, that's Mike's voice you're hearing.

  9. What the voice reveals.

    NARCIS (Netherlands)

    Ko, Sei Jin

    2007-01-01

    Given that the voice is our main form of communication, we know surprisingly little about how it impacts judgment and behavior. Furthermore, the modern advancement of telecommunication systems, such as cellular phones, has meant that a large proportion of our everyday interactions are conducted vocally.

  10. The Inner Voice

    Science.gov (United States)

    Ridgway, Anthony James

    2009-01-01

    The inner voice--we all know what it is because we all have it and use it when we are thinking or reading, for example. Little work has been done on it in our field, with the notable exception of Brian Tomlinson, but presumably it must be a cognitive phenomenon which is of great importance in thinking, language learning, and reading in a foreign…

  11. Moving beyond Youth Voice

    Science.gov (United States)

    Serido, Joyce; Borden, Lynne M.; Perkins, Daniel F.

    2011-01-01

    This study combines research documenting the benefits of positive relationships between youth and caring adults on a young person's positive development with studies on youth voice to examine the mechanisms through which participation in youth programs contributes to positive developmental outcomes. Specifically, the study explores whether youth's…

  12. Bodies and Voices

    DEFF Research Database (Denmark)

    A wide-ranging collection of essays centred on readings of the body in contemporary literary and socio-anthropological discourse, from slavery and rape to female genital mutilation, from clothing, ocular pornography, voice, deformation and transmutation to the imprisoned, dismembered, remembered...

  13. Voices for Careers.

    Science.gov (United States)

    York, Edwin G.; Kapadia, Madhu

    Listed in this annotated bibliography are 502 cassette tapes of value to career exploration for Grade 7 through the adult level, whether as individualized instruction, small group study, or total class activity. Available to New Jersey educators at no charge, this Voices for Careers System is also available for duplication on request from the New…

  16. Emotional expressions in voice and music: same code, same effect?

    Science.gov (United States)

    Escoffier, Nicolas; Zhong, Jidan; Schirmer, Annett; Qiu, Anqi

    2013-08-01

    Scholars have documented similarities in the way voice and music convey emotions. By using functional magnetic resonance imaging (fMRI) we explored whether these similarities imply overlapping processing substrates. We asked participants to trace changes in either the emotion or pitch of vocalizations and music using a joystick. Compared to music, vocalizations more strongly activated superior and middle temporal cortex, cuneus, and precuneus. However, despite these differences, overlapping rather than differing regions emerged when comparing emotion with pitch tracing for music and vocalizations, respectively. Relative to pitch tracing, emotion tracing activated medial superior frontal and anterior cingulate cortex regardless of stimulus type. Additionally, we observed emotion-specific effects in primary and secondary auditory cortex as well as in medial frontal cortex that were comparable for voice and music. Together these results indicate that similar mechanisms support emotional inferences from vocalizations and music and that these mechanisms tap into a general system involved in social cognition.

  17. Auditory hallucinations induced by trazodone.

    Science.gov (United States)

    Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji

    2014-04-03

    A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients.

  18. Tuning shifts of the auditory system by corticocortical and corticofugal projections and conditioning.

    Science.gov (United States)

    Suga, Nobuo

    2012-02-01

    The central auditory system consists of the lemniscal and nonlemniscal systems. The thalamic lemniscal and nonlemniscal auditory nuclei are different from each other in response properties and neural connectivities. The cortical auditory areas receiving the projections from these thalamic nuclei interact with each other through corticocortical projections and project down to the subcortical auditory nuclei. This corticofugal (descending) system forms multiple feedback loops with the ascending system. The corticocortical and corticofugal projections modulate auditory signal processing and play an essential role in the plasticity of the auditory system. Focal electric stimulation - comparable to repetitive tonal stimulation - of the lemniscal system evokes three major types of changes in the physiological properties, such as the tuning to specific values of acoustic parameters of cortical and subcortical auditory neurons through different combinations of facilitation and inhibition. For such changes, a neuromodulator, acetylcholine, plays an essential role. Electric stimulation of the nonlemniscal system evokes changes in the lemniscal system that is different from those evoked by the lemniscal stimulation. Auditory signals ascending from the lemniscal and nonlemniscal thalamic nuclei to the cortical auditory areas appear to be selected or adjusted by a "differential" gating mechanism. Conditioning for associative learning and pseudo-conditioning for nonassociative learning respectively elicit tone-specific and nonspecific plastic changes. The lemniscal, corticofugal and cholinergic systems are involved in eliciting the former, but not the latter. The current article reviews the recent progress in the research of corticocortical and corticofugal modulations of the auditory system and its plasticity elicited by conditioning and pseudo-conditioning.

  19. Effect of Vocal Fry on Voice and on Velopharyngeal Sphincter

    Directory of Open Access Journals (Sweden)

    Elias, Vanessa Santos

    2016-01-01

    Full Text Available Introduction It is known that the basal sound promotes shortening and adduction of the vocal folds and leaves the mucosa looser. However, there are few studies that address the supralaryngeal physiological findings obtained using the technique. Objective To check the effectiveness of using vocal fry on the voice and velopharyngeal port closure of five adult subjects whose cleft palates had been surgically corrected. Methods Case study with five subjects who underwent otolaryngologic examination by means of nasopharyngoscopy for imaging and measurement of the region of velopharyngeal port closure before and after using the vocal fry technique for three minutes. During the exam, the subjects sustained the isolated vowel /a:/ in their usual pitch and loudness. The emission of the vowel /a:/ was also used for perceptual analysis and spectrographic evaluation of their voices. Results Four subjects had an improvement in the region of velopharyngeal port closure; the results of the spectrographic evaluation were indicative of decreased hypernasality, and the results of the auditory-perceptual analysis suggested improved overall vocal quality, adequacy of loudness, decreased hypernasality, improvement of type of voice, and decreased hoarseness. Conclusion This study showed a positive effect of vocal fry on voice and greater velopharyngeal port closure.

  20. Developmental Changes in Locating Voice and Sound in Space

    Directory of Open Access Journals (Sweden)

    Emiko Kezuka

    2017-09-01

    Full Text Available We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment, the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter's voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless, even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (present only in the 7-month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year.

  1. Effect of singing training on total laryngectomees wearing a tracheoesophageal voice prosthesis.

    Science.gov (United States)

    Onofre, Fernanda; Ricz, Hilton Marcos Alves; Takeshita-Monaretti, Telma Kioko; Prado, Maria Yuka de Almeida; Aguiar-Ricz, Lílian Neto

    2013-02-01

    To assess the effect of a program of singing training on the voice of total laryngectomees wearing a tracheoesophageal voice prosthesis, considering the quality of alaryngeal phonation, vocal range, and the musical elements of tuning and legato. Five laryngectomees wearing tracheoesophageal voice prostheses completed the singing training program over a period of three months, with exploration of the strengthening of the respiratory muscles and vocalization, and with auditory-perceptual and singing-voice evaluation performed before and after 12 sessions of singing therapy. After the program of singing voice training, the quality of the tracheoesophageal voice showed improvement in, or persistence of, the general degree of dysphonia for the emitted vowels and for the parameters of roughness and breathiness. For the vowel "a", the pitch shifted lower (grave) in two participants and higher (acute) in one, and remained adequate in the others. A similar situation was observed for the vowel "i". After the singing program, all participants presented tuning and most of them showed a greater presence of legato. The vocal range improved in all participants. Singing training seems to have a favorable effect on the quality of tracheoesophageal phonation and on the singing voice.

  2. Cross-cultural adaptation and validation of the Voice Handicap Index into Croatian.

    Science.gov (United States)

    Bonetti, Ana; Bonetti, Luka

    2013-01-01

    This article presents preliminary results of the cultural adaptation and validation of the Croatian version of the Voice Handicap Index (VHI). The translated version was completed by 38 subjects with voice disorders and 30 subjects without voice complaints. Compared with the subjects in the control group, subjects with voice disorders had a significantly higher average total VHI score and higher scores in each of the three VHI domains (functional, physical, and emotional). Cronbach alpha for the total VHI was .94, and the coefficients obtained for the three VHI subscales were as follows: α = .87 for the functional, α = .88 for the physical, and α = .85 for the emotional subscale. Intraclass correlation coefficient estimates were also high, for both the total VHI (0.92) and the subscales (0.85 for the functional subscale, 0.87 for the physical subscale, and 0.81 for the emotional subscale). The overall VHI score correlated positively with the auditorily perceived grade of dysphonia. In the group with voice disorders, age was not correlated with the total VHI or the subscales. Also, there was no significant difference between male and female subjects in the total VHI or the subscales. The preliminary findings of this research indicate that the Croatian VHI could provide a reliable and clinically valid measure of a patient's current perception of a voice problem and its reflection on quality of life.
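    For readers unfamiliar with the reliability statistic reported here, Cronbach's alpha can be computed directly from item-level scores. The following is a minimal Python sketch of the standard formula with made-up data, not the authors' analysis:

```python
def cronbach_alpha(items):
    """items: one list of scores per questionnaire item,
    all lists covering the same respondents in the same order."""
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def var(xs):              # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across items:
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Two perfectly correlated items over four respondents:
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8]])
print(alpha)  # -> 8/9 ≈ 0.889
```

    Values near 1 indicate high internal consistency; the .85-.94 coefficients reported above are therefore strong for a clinical questionnaire.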

  3. Using Facebook to Reach People Who Experience Auditory Hallucinations.

    Science.gov (United States)

    Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror

    2016-06-14

    Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout, including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially, and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Recruitment netted a total sample of N=264 over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience auditory hallucinations.

  5. Feedback and Incentives

    DEFF Research Database (Denmark)

    Eriksson, Tor Viking; Poulsen, Anders; Villeval, Marie Claire

    2009-01-01

    This paper experimentally investigates the impact of different pay schemes and relative performance feedback policies on employee effort. We explore three feedback rules: no feedback on relative performance, feedback given halfway through the production period, and continuously updated feedback. ...

  6. Spectral distribution of solo voice and accompaniment in pop music.

    Science.gov (United States)

    Borch, Daniel Zangger; Sundberg, Johan

    2002-01-01

    Singers performing in popular styles of music mostly rely on feedback provided by monitor loudspeakers on the stage. The highest sound level that these loudspeakers can provide without feedback noise is often too low to be heard over the ambient sound level on the stage. Long-term-average spectra of some orchestral accompaniments typically used in pop music are compared with those of classical symphonic orchestras. In loud pop accompaniment the sound level difference between 0.5 and 2.5 kHz is similar to that of a Wagner orchestra. Long-term-average spectra of pop singers' voices showed no signs of a singer's formant but a peak near 3.5 kHz. It is suggested that pop singers' difficulties to hear their own voices may be reduced if the frequency range 3-4 kHz is boosted in the monitor sound.
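    A long-term-average spectrum (LTAS) of the kind compared here is simply the power spectrum averaged over many short windowed frames of the signal. A pure-Python sketch follows; the frame size, hop, and Hann window are illustrative choices, and production code would use an FFT library rather than this direct DFT:

```python
import cmath
import math

def ltas(samples, frame=64, hop=32):
    """Long-term-average spectrum: mean power per DFT bin,
    averaged over overlapping Hann-windowed frames."""
    bins = frame // 2 + 1
    acc = [0.0] * bins
    nframes = 0
    for start in range(0, len(samples) - frame + 1, hop):
        seg = samples[start:start + frame]
        # Hann window reduces spectral leakage between bins.
        win = [s * (0.5 - 0.5 * math.cos(2 * math.pi * i / (frame - 1)))
               for i, s in enumerate(seg)]
        for k in range(bins):
            X = sum(w * cmath.exp(-2j * math.pi * k * i / frame)
                    for i, w in enumerate(win))
            acc[k] += abs(X) ** 2
        nframes += 1
    return [a / nframes for a in acc]

# A pure tone at DFT bin 4 of a 64-point frame should peak in bin 4:
tone = [math.sin(2 * math.pi * 4 * i / 64) for i in range(512)]
spec = ltas(tone, frame=64, hop=32)
print(spec.index(max(spec)))  # -> 4
```

    Applied to a voice recording, the relative energy in the bins covering roughly 3-4 kHz is what would reveal the peak near 3.5 kHz described above, as opposed to the classical singer's formant.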

  7. Feedback-based error monitoring processes during musical performance: an ERP study.

    Science.gov (United States)

    Katahira, Kentaro; Abla, Dilshat; Masuda, Sayaka; Okanoya, Kazuo

    2008-05-01

    Auditory feedback is important in detecting and correcting errors during sound production when a current performance is compared to an intended performance. In the context of vocal production, a forward model, in which a prediction of action consequence (corollary discharge) is created, has been proposed to explain the dampened activity of the auditory cortex while producing self-generated vocal sounds. However, it is unclear how auditory feedback is processed and what neural mechanism underlies the process during other sound production behavior, such as musical performances. We investigated the neural correlates of human auditory feedback-based error detection using event-related potentials (ERPs) recorded during musical performances. Keyboard players of two different skill levels played simple melodies using a musical score. During the performance, the auditory feedback was occasionally altered. Subjects with early and extensive piano training produced a negative ERP component N210, which was absent in non-trained players. When subjects listened to music that deviated from a corresponding score without playing the piece, N210 did not emerge but the imaginary mismatch negativity (iMMN) did. Therefore, N210 may reflect a process of mismatch between the intended auditory image evoked by motor activity, and actual auditory feedback.

  8. April 16th : The World Voice Day

    NARCIS (Netherlands)

    Svec, Jan G.; Behlau, Mara

    2007-01-01

    Although the voice is used every day as the basis of speech, most people realize its importance only when a voice problem arises. Increasing public awareness of the importance of the voice and alertness to voice problems are the main goals of the World Voice Day, which is celebrated yearly on April 16.

  9. Risk factors for voice problems in teachers

    NARCIS (Netherlands)

    Kooijman, P. G. C.; de Jong, F. I. C. R. S.; Thomas, G.; Huinck, W.; Donders, R.; Graamans, K.; Schutte, H. K.

    2006-01-01

    In order to identify factors that are associated with voice problems and voice-related absenteeism in teachers, 1,878 questionnaires were analysed. The questionnaires inquired about personal data, voice complaints, voice-related absenteeism from work and conditions that may lead to voice complaints

  10. You're a What? Voice Actor

    Science.gov (United States)

    Liming, Drew

    2009-01-01

    This article talks about voice actors and features Tony Oliver, a professional voice actor. Voice actors help to bring one's favorite cartoon and video game characters to life. They also do voice-overs for radio and television commercials and movie trailers. These actors use the sound of their voice to sell a character's emotions--or an advertised…

  13. Visual abilities are important for auditory-only speech recognition: evidence from autism spectrum disorder.

    Science.gov (United States)

    Schelinski, Stefanie; Riedel, Philipp; von Kriegstein, Katharina

    2014-12-01

    In auditory-only conditions, for example when we listen to someone on the phone, it is essential to fast and accurately recognize what is said (speech recognition). Previous studies have shown that speech recognition performance in auditory-only conditions is better if the speaker is known not only by voice, but also by face. Here, we tested the hypothesis that such an improvement in auditory-only speech recognition depends on the ability to lip-read. To test this we recruited a group of adults with autism spectrum disorder (ASD), a condition associated with difficulties in lip-reading, and typically developed controls. All participants were trained to identify six speakers by name and voice. Three speakers were learned by a video showing their face and three others were learned in a matched control condition without face. After training, participants performed an auditory-only speech recognition test that consisted of sentences spoken by the trained speakers. As a control condition, the test also included speaker identity recognition on the same auditory material. The results showed that, in the control group, performance in speech recognition was improved for speakers known by face in comparison to speakers learned in the matched control condition without face. The ASD group lacked such a performance benefit. For the ASD group auditory-only speech recognition was even worse for speakers known by face compared to speakers not known by face. In speaker identity recognition, the ASD group performed worse than the control group independent of whether the speakers were learned with or without face. Two additional visual experiments showed that the ASD group performed worse in lip-reading whereas face identity recognition was within the normal range. The findings support the view that auditory-only communication involves specific visual mechanisms. Further, they indicate that in ASD, speaker-specific dynamic visual information is not available to optimize auditory

  14. Whose voice matters? Learners

    Directory of Open Access Journals (Sweden)

    Sarah Bansilal

    2010-01-01

    Full Text Available International and national mathematics studies have revealed the poor mathematics skills of South African learners. An essential tool that can be used to improve learners' mathematical skills is for educators to use effective feedback. Our purpose in this study was to elicit learners' understanding and expectations of teacher assessment feedback. The study was conducted with five Grade 9 mathematics learners. Data were generated from one group interview, seven journal entries by each learner, video-taped classroom observations and researcher field notes. The study revealed that the learners have insightful perceptions of the concept of educator feedback. While some learners viewed educator feedback as a tool to probe their understanding, others viewed it as a mechanism to get the educator's point of view. A significant finding of the study was that learners viewed educator assessment feedback as instrumental in building or breaking their self-confidence.

  15. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  16. Is the auditory evoked P2 response a biomarker of learning?

    Science.gov (United States)

    Tremblay, Kelly L; Ross, Bernhard; Inoue, Kayo; McClannahan, Katrina; Collet, Gregory

    2014-01-01

    Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography (EEG) and magnetoencephalography (MEG) have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEP), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences; including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared it to participants, matched in time, and stimulus exposure, that did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What's more, these effects are retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. 
A further finding was the expression of a late negativity (LN) wave 600-900 ms post-stimulus onset, post-training exclusively for the group that learned to identify the pre-voiced contrast.

  18. Exploring the anatomical encoding of voice with a mathematical model of the vocal system.

    Science.gov (United States)

    Assaneo, M Florencia; Sitt, Jacobo; Varoquaux, Gael; Sigman, Mariano; Cohen, Laurent; Trevisan, Marcos A

    2016-11-01

    The faculty of language depends on the interplay between the production and perception of speech sounds. A relevant open question is whether the dimensions that organize voice perception in the brain are acoustical or depend on properties of the vocal system that produced it. One of the main empirical difficulties in answering this question is to generate sounds that vary along a continuum according to the anatomical properties of the vocal apparatus that produced them. Here we use a mathematical model that offers the unique possibility of synthesizing vocal sounds by controlling a small set of anatomically based parameters. In a first stage the quality of the synthetic voice was evaluated. Using specific time traces for sub-glottal pressure and tension of the vocal folds, the synthetic voices generated perceptual responses that are indistinguishable from those of real speech. The synthesizer was then used to investigate how the auditory cortex responds to the perception of voice depending on the anatomy of the vocal apparatus. Our fMRI results show that sounds are perceived as human vocalizations when produced by a vocal system that follows a simple relationship between the size of the vocal folds and the vocal tract. We found that these anatomical parameters encode the perceptual vocal identity (male, female, child) and show that the brain areas that respond to human speech also encode vocal identity. On the basis of these results, we propose that this low-dimensional model of the vocal system is capable of generating realistic voices and represents a novel tool to explore voice perception with precise control of the anatomical variables that generate speech. Furthermore, the model provides an explanation of how auditory cortices encode voices in terms of the anatomical parameters of the vocal system.

  19. Tuned with a tune: Talker normalization via general auditory processes

    Directory of Open Access Journals (Sweden)

    Erika J C Laing

    2012-06-01

    Full Text Available Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker’s speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS of a talker’s speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences’ LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by nonspeech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.

  20. Gender differences in identifying emotions from auditory and visual stimuli.

    Science.gov (United States)

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to get a better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or shared native language of the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual stimuli than auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.

  1. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex.

    Science.gov (United States)

    Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate "auditory objects" with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas.
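The F0-based segregation described in this abstract can be sketched with toy additive synthesis: two harmonic series with F0s in a 5:4 ratio, each weighted by a crude formant envelope. The formant frequencies are textbook ballpark values for /a/ and /i/, not taken from the study, and the Gaussian envelope is a simplification.

```python
import numpy as np

def vowel(f0, formants, fs=16000, dur=0.4):
    """Additive synthesis: harmonics of f0 weighted by a crude formant envelope."""
    t = np.arange(int(fs * dur)) / fs
    x = np.zeros_like(t)
    for n in range(1, int(fs / 2 / f0)):
        fh = n * f0
        # Gaussian bumps stand in for the vowel's spectral (formant) envelope.
        amp = sum(np.exp(-0.5 * ((fh - fc) / 150.0) ** 2) for fc in formants)
        x = x + amp * np.sin(2 * np.pi * fh * t)
    return x

fs = 16000
# Rough textbook formants for /a/ (700, 1200 Hz) and /i/ (300, 2300 Hz).
double = vowel(100, (700, 1200), fs) + vowel(125, (300, 2300), fs)
spec = np.abs(np.fft.rfft(double))
freqs = np.fft.rfftfreq(len(double), 1 / fs)
# Lower harmonics of both F0 series appear as separate, resolved spectral peaks.
```

In the mixture's spectrum, energy sits only at multiples of 100 Hz and 125 Hz, so the two harmonic series remain individually resolvable, the same rate-place information the study reads out from A1 population responses.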

  2. Hearing an illusory vowel in noise: suppression of auditory cortical activity.

    Science.gov (United States)

    Riecke, Lars; Vanbussel, Mieke; Hausfeld, Lars; Başkent, Deniz; Formisano, Elia; Esposito, Fabrizio

    2012-06-06

    Human hearing is constructive. For example, when a voice is partially replaced by an extraneous sound (e.g., on the telephone due to a transmission problem), the auditory system may restore the missing portion so that the voice can be perceived as continuous (Miller and Licklider, 1950; for review, see Bregman, 1990; Warren, 1999). The neural mechanisms underlying this continuity illusion have been studied mostly with schematic stimuli (e.g., simple tones) and are still a matter of debate (for review, see Petkov and Sutter, 2011). The goal of the present study was to elucidate how these mechanisms operate under more natural conditions. Using psychophysics and electroencephalography (EEG), we assessed simultaneously the perceived continuity of a human vowel sound through interrupting noise and the concurrent neural activity. We found that vowel continuity illusions were accompanied by a suppression of the 4 Hz EEG power in auditory cortex (AC) that was evoked by the vowel interruption. This suppression was stronger than the suppression accompanying continuity illusions of a simple tone. Finally, continuity perception and 4 Hz power depended on the intactness of the sound that preceded the vowel (i.e., the auditory context). These findings show that a natural sound may be restored during noise due to the suppression of 4 Hz AC activity evoked early during the noise. This mechanism may attenuate sudden pitch changes, adapt the resistance of the auditory system to extraneous sounds across auditory scenes, and provide a useful model for assisted hearing devices.

  3. Effects of digital vibrotactile speech feedback on overt stuttering frequency.

    Science.gov (United States)

    Snyder, Gregory J; Blanchet, Paul; Waddell, Dwight; Ivy, Lennette J

    2009-02-01

    Fluency-enhancing speech feedback is not restricted to a specific sensory modality or point of origin: it may be internally or externally generated and delivered via auditory or visual channels. Research suggests that externally generated digital vibrotactile speech feedback serves as an effective fluency enhancer. The present purpose was to test the fluency-enhancing effects of self-generated digital vibrotactile speech feedback on stuttering frequency. Adults who stutter read passages aloud over the telephone, both with and without digital vibrotactile speech feedback. Digital vibrotactile speech feedback was operationally defined as feeling the vibrations of the thyroid cartilage with the thumb and index finger while speaking. Analysis indicated that self-generated digital vibrotactile speech feedback reduced overt stuttering frequency by an average of 72%. As the specific neural mechanisms associated with stuttering and fluency enhancement from tactile speech feedback remain unknown, theoretical implications and clinical applications were discussed.

  4. Why Is My Voice Changing? (For Teens)

    Science.gov (United States)

    ... deeper than a girl's, though. What Causes My Voice to Change? At puberty, guys' bodies begin producing ...

  5. Common Problems That Can Affect Your Voice

    Science.gov (United States)

    ... that traditionally accompany gastroesophageal reflux disease (GERD). Voice Misuse and Overuse: Speaking is a physical task ...

  6. Voice and silence in organizations

    Directory of Open Access Journals (Sweden)

    Moaşa, H.

    2011-01-01

    Full Text Available Unlike previous research on voice and silence, this article breaks the distance between the two and declines to treat them as opposites. Voice and silence are interrelated and intertwined strategic forms of communication which presuppose each other in such a way that the absence of one would minimize completely the other's presence. Social actors are not voice, or silence. Social actors can have voice or silence; they can do both because they operate at multiple levels and deal with multiple issues at different moments in time.

  7. VOICE REHABILITATION FOLLOWING TOTAL LARYNGECTOMY

    Directory of Open Access Journals (Sweden)

    Balasubramanian Thiagarajan

    2015-03-01

    Full Text Available Despite continuing advances in surgical management of laryngeal malignancy, total laryngectomy is still the treatment of choice in advanced laryngeal malignancies. Considering the longevity of the patient following total laryngectomy, various measures have been adopted in order to provide voice function to the patient. Significant advancements have taken place in voice rehabilitation of post laryngectomy patients. Advancements in oncological surgical techniques and irradiation techniques have literally cured laryngeal malignancies. Among the various voice rehabilitation techniques available TEP (Tracheo oesophageal puncture is considered to be the gold standard. This article attempts to explore the various voice rehabilitation technique available with primary focus on TEP.

  8. A study of auditory preferences in nonhandicapped infants and infants with Down's syndrome.

    Science.gov (United States)

    Glenn, S M; Cunningham, C C; Joyce, P F

    1981-01-01

    11 infants with Down's syndrome (MA 9.2 months, CA 12.7 months) and 10 of 11 nonhandicapped infants (MA 9.6 months, CA 9.3 months) demonstrated that they could operate an automated device which enabled them to choose to listen to 1 of a pair of auditory signals. All subjects showed preferential responding. Both groups of infants showed a significant preference for nursery rhymes sung by a female voice rather than played on musical instruments. The infants with Down's syndrome had much longer response durations for the more complex auditory stimuli. The apparatus provides a useful technique for studying language development in both normal and abnormal populations.

  9. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new si

  10. Virtual Auditory Displays

    Science.gov (United States)

    2000-01-01

    Keywords: timbre, intensity, distance, room modeling, radio communication. Virtual Environments Handbook, Chapter 4: Virtual Auditory Displays (Russell D...). For the musical note "A" as a pure sinusoid, there will be 440 condensations and rarefactions per second. The distance between two adjacent condensations or... and complexity are pitch, loudness, and timbre, respectively. This distinction between physical and perceptual measures of sound properties is an...

  11. The impact of voice on speech realization

    OpenAIRE

    Jelka Breznik

    2014-01-01

    The study discusses spoken literary language and the impact of voice on speech realization. The voice consists of a sound made by a human being using the vocal folds for talking, singing, laughing, crying, screaming… The human voice is specifically the part of human sound production in which the vocal folds (vocal cords) are the primary sound source. Our voice is our instrument and identity card. How does the voice (voice tone) affect others and how do they respond, positively or negatively? ...

  12. The neglected neglect: auditory neglect.

    Science.gov (United States)

    Gokhale, Sankalp; Lahoti, Sourabh; Caplan, Louis R

    2013-08-01

    Whereas visual and somatosensory forms of neglect are commonly recognized by clinicians, auditory neglect is often not assessed and therefore neglected. The auditory cortical processing system can be functionally classified into 2 distinct pathways. These 2 distinct functional pathways deal with recognition of sound ("what" pathway) and the directional attributes of the sound ("where" pathway). Lesions of higher auditory pathways produce distinct clinical features. Clinical bedside evaluation of auditory neglect is often difficult because of coexisting neurological deficits and the binaural nature of auditory inputs. In addition, auditory neglect and auditory extinction may show varying degrees of overlap, which makes the assessment even harder. Shielding one ear from the other as well as separating the ear from space is therefore critical for accurate assessment of auditory neglect. This can be achieved by use of specialized auditory tests (dichotic tasks and sound localization tests) for accurate interpretation of deficits. Herein, we have reviewed auditory neglect with an emphasis on the functional anatomy, clinical evaluation, and basic principles of specialized auditory tests.

  13. Auditory-motor learning during speech production in 9-11-year-old children.

    Directory of Open Access Journals (Sweden)

    Douglas M Shiller

    Full Text Available BACKGROUND: Hearing ability is essential for normal speech development, however the precise mechanisms linking auditory input and the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children. METHODOLOGY/PRINCIPAL FINDINGS: In the present study, we manipulated auditory feedback during speech production in a group of 9-11-year old children, as well as in adults. Following a period of speech practice under conditions of altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output that was similar in magnitude to that of the adults, however the children showed no reliable compensatory effect on their perceptual representations. CONCLUSIONS: The results indicate that 9-11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.
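Altered-auditory-feedback paradigms like the one above typically perturb the pitch of the fed-back voice by a fixed number of cents. As a unit-conversion sketch (using only the standard definition that 100 cents equal one equal-tempered semitone; the specific frequencies are illustrative):

```python
def shift_cents(freq_hz, cents):
    """Frequency after a pitch shift of `cents` (100 cents = 1 semitone)."""
    return freq_hz * 2 ** (cents / 1200)

# A +100-cent perturbation raises a 200 Hz voice to about 211.9 Hz;
# -100 cents lowers it to about 188.8 Hz.
print(shift_cents(200, 100), shift_cents(200, -100))
```

Because the scale is logarithmic, a given shift in cents corresponds to the same perceived pitch change regardless of the speaker's baseline F0, which is why such studies report perturbations in cents rather than hertz.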

  14. Can You Help Me with My Pitch? Studying a Tool for Real-Time Automated Feedback

    Science.gov (United States)

    Schneider, Jan; Borner, Dirk; van Rosmalen, Peter; Specht, Marcus

    2016-01-01

    In our pursue to study effective real-time feedback in Technology Enhanced Learning, we developed the Presentation Trainer, a tool designed to support the practice of nonverbal communication skills for public speaking. The tool tracks the user's voice and body to analyze her performance, and selects the type of real-time feedback to be presented.…

  15. The Voice Handicap Index with Post-Laryngectomy Male Voices

    Science.gov (United States)

    Evans, Eryl; Carding, Paul; Drinnan, Michael

    2009-01-01

    Background: Surgical treatment for advanced laryngeal cancer involves complete removal of the larynx ("laryngectomy") and initial total loss of voice. Post-laryngectomy rehabilitation involves implementation of different means of "voicing" for these patients wherever possible. There is little information about laryngectomees'…

  16. Pedagogic Voice: Student Voice in Teaching and Engagement Pedagogies

    Science.gov (United States)

    Baroutsis, Aspa; McGregor, Glenda; Mills, Martin

    2016-01-01

    In this paper, we are concerned with the notion of "pedagogic voice" as it relates to the presence of student "voice" in teaching, learning and curriculum matters at an alternative, or second chance, school in Australia. This school draws upon many of the principles of democratic schooling via its utilisation of student voice…

  17. Cutaneous sensory nerve as a substitute for auditory nerve in solving deaf-mutes’ hearing problem: an innovation in multi-channel-array skin-hearing technology

    OpenAIRE

    Li, Jianwen; Li, Yan; Ming ZHANG; Ma, Weifang; Ma, Xuezong

    2014-01-01

    The current use of hearing aids and artificial cochleas for deaf-mute individuals depends on their auditory nerve. Skin-hearing technology, a patented system developed by our group, uses a cutaneous sensory nerve to substitute for the auditory nerve to help deaf-mutes to hear sound. This paper introduces a new solution, multi-channel-array skin-hearing technology, to solve the problem of speech discrimination. Based on the filtering principle of hair cells, external voice signals at different...

  18. DEVELOPING ‘STANDARD NOVEL ‘VAD’ TECHNIQUE’ AND ‘NOISE FREE SIGNALS’ FOR SPEECH AUDITORY BRAINSTEM RESPONSES FOR HUMAN SUBJECTS

    OpenAIRE

    Narayanam, Ranganadh

    2016-01-01

    In this research, as a first step, we concentrated on collecting non-intracortical EEG data of brainstem speech-evoked potentials from human subjects in an audiology lab at the University of Ottawa. We considered two central problems in auditory neural signal processing: the first is Voice Activity Detection (VAD) in speech Auditory Brainstem Responses (ABR); the second is to identify the best De-...
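
    The record does not state which VAD algorithm the study developed; as orientation, the classical baseline is an energy threshold over short frames. The sketch below is a generic illustration, not the study's method, and the frame values and threshold are made up.

```python
def energy_vad(frames, threshold):
    """Generic energy-based voice activity detection.

    Flags a frame as active (1) when its mean-square energy exceeds
    the threshold; this is a textbook baseline, not the 'standard
    novel VAD technique' of the record above.
    """
    return [1 if sum(s * s for s in frame) / len(frame) > threshold else 0
            for frame in frames]

# Hypothetical frames: near-silence, low-level noise, and a voiced burst
silence = [0.0, 0.001, -0.001, 0.0]
noise = [0.01, -0.02, 0.015, -0.01]
voiced = [0.4, -0.5, 0.45, -0.35]
print(energy_vad([silence, noise, voiced], threshold=0.001))  # → [0, 0, 1]
```

    Real VAD front-ends add frame windowing, noise-floor tracking, and hangover smoothing; the threshold here would be adapted to the recording conditions.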

  19. Haptic Foot Feedback for Kicking Training in Virtual Reality

    OpenAIRE

    Huang, Hank; Tan, Hong

    2016-01-01

    As the demand for ways to supplement athletic performance increases, virtual reality is becoming helpful to sports in terms of cognitive training such as reaction, mentality, and game strategies. With the aid of haptic feedback, interaction with virtual objects gains another dimension, in addition to the presence of visual and auditory feedback. This research presents an integrated system comprising a virtual reality environment, a motion tracking system, and a haptic unit designed for the dorsal foo...

  20. Comparison of Perceptual Signs of Voice before and after Vocal Hygiene Program in Adults with Dysphonia

    Directory of Open Access Journals (Sweden)

    Seyyedeh Maryam khoddami

    2011-12-01

    Full Text Available Background and Aim: Vocal abuse and misuse are the most frequent causes of voice disorders, so therapy is needed to stop or modify such behaviors. This research studied the effectiveness of a vocal hygiene program on perceptual signs of voice in people with dysphonia. Methods: A vocal hygiene program was delivered to 8 adults with dysphonia for 6 weeks. First, the Consensus Auditory-Perceptual Evaluation of Voice was used to assess perceptual signs. The program was then delivered, and individuals were followed up at visits in the second and fourth weeks. In the last session, the perceptual assessment was repeated and participants' opinions were collected. Perceptual findings were compared before and after the therapy. Results: After the program, the mean score of the perceptual assessment decreased. The mean score of every perceptual sign differed significantly before versus after therapy (p≤0.0001). «Loudness» had the maximum score, and coordination between speech and respiration the minimum. All participants confirmed the efficiency of the therapy. Conclusion: The vocal hygiene program improves all perceptual signs of voice, although not equally. This conclusion is supported by both clinician-based and patient-based assessments. A vocal hygiene program is therefore necessary for comprehensive voice therapy, but is not by itself sufficient to resolve all voice problems.

  1. A voice service for user feedback on school meals

    CSIR Research Space (South Africa)

    Sharma Grover, AS

    2012-03-01

    Full Text Available No significant differences were observed in performance across the prototypes, but there were strong preferences for speech (input modality) and English (language). Focus group discussions revealed rich information on learners' perceptions around trust...

  2. Sense of agency over speech and proneness to auditory hallucinations: the reality-monitoring paradigm.

    Science.gov (United States)

    Sugimori, Eriko; Asai, Tomohisa; Tanno, Yoshihiko

    2011-01-01

    This study investigated the effects of imagining speaking aloud, sensorimotor feedback, and auditory feedback on respondents' reports of having spoken aloud and examined the relationship between responses to "spoken aloud" in the reality-monitoring task and the sense of agency over speech. After speaking aloud, lip-synching, or imagining speaking, participants were asked whether each word had actually been spoken. The number of endorsements of "spoken aloud" was higher for words spoken aloud than for those lip-synched and higher for words lip-synched than for those imagined as having been spoken aloud. When participants were prevented by white noise from receiving auditory feedback, the discriminability of words spoken aloud decreased, and when auditory feedback was altered, reports of having spoken aloud decreased even though participants had actually done so. It was also found that those who have had auditory hallucination-like experiences were less able than were those without such experiences to discriminate the words spoken aloud, suggesting that endorsements of having "spoken aloud" in the reality-monitoring task reflected a sense of agency over speech. These results were explained in terms of the source-monitoring framework, and we proposed a revised forward model of speech in order to investigate auditory hallucinations.

  3. A Comprehensive Review of Auditory Verbal Hallucinations: Lifetime Prevalence, Correlates and Mechanisms in Healthy and Clinical Individuals

    Directory of Open Access Journals (Sweden)

    Saskia de Leede-Smith

    2013-07-01

    Full Text Available Over the years, the prevalence of auditory verbal hallucinations (AVHs) has been documented across the lifespan in varied contexts, and with a range of potential long-term outcomes. Initially the emphasis focused on whether AVHs conferred risk for psychosis. However, recent research has identified significant differences in the presentation and outcomes of AVH in patients compared to those in non-clinical populations. For this reason, it has been suggested that auditory hallucinations are an entity by themselves and not necessarily indicative of transition along the psychosis continuum. This review will examine the presentation of auditory hallucinations across the life span. The stages described include childhood, adolescence, adult non-clinical populations, hypnagogic/hypnopompic experiences, high schizotypal traits, schizophrenia, substance-induced AVH, AVH in epilepsy, and AVH in the elderly. In children, need for care depends upon whether the child associates the voice with negative beliefs, appraisals and other symptoms of psychosis. This theme appears to carry right through to healthy voice hearers in adulthood, in which a negative impact of the voice usually only exists if the individual has negative experiences as a result of their voice(s). This includes features of the voices such as the negative content, frequency and emotional valence, as well as anxiety and depression, whether independent of or caused by the voices' presence. It seems possible that the mechanisms which maintain AVH in non-clinical populations are different from those which are behind AVH presentations in psychotic illness. For example, the existence of maladaptive coping strategies in patient populations is one significant difference between clinical and non-clinical groups which is associated with a need for care. Whether or not these mechanisms start out the same and have differential trajectories is not yet evidenced. Future research needs to focus on the comparison of underlying…

  4. A comprehensive review of auditory verbal hallucinations: lifetime prevalence, correlates and mechanisms in healthy and clinical individuals.

    Science.gov (United States)

    de Leede-Smith, Saskia; Barkus, Emma

    2013-01-01

    Over the years, the prevalence of auditory verbal hallucinations (AVHs) has been documented across the lifespan in varied contexts, and with a range of potential long-term outcomes. Initially the emphasis focused on whether AVHs conferred risk for psychosis. However, recent research has identified significant differences in the presentation and outcomes of AVH in patients compared to those in non-clinical populations. For this reason, it has been suggested that auditory hallucinations are an entity by themselves and not necessarily indicative of transition along the psychosis continuum. This review will examine the presentation of auditory hallucinations across the life span, as well as in various clinical groups. The stages described include childhood, adolescence, adult non-clinical populations, hypnagogic/hypnopompic experiences, high schizotypal traits, schizophrenia, substance-induced AVH, AVH in epilepsy, and AVH in the elderly. In children, need for care depends upon whether the child associates the voice with negative beliefs, appraisals and other symptoms of psychosis. This theme appears to carry right through to healthy voice hearers in adulthood, in which a negative impact of the voice usually only exists if the individual has negative experiences as a result of their voice(s). This includes features of the voices such as the negative content, frequency, and emotional valence, as well as anxiety and depression, whether independent of or caused by the voices' presence. It seems possible that the mechanisms which maintain AVH in non-clinical populations are different from those which are behind AVH presentations in psychotic illness. For example, the existence of maladaptive coping strategies in patient populations is one significant difference between clinical and non-clinical groups which is associated with a need for care. Whether or not these mechanisms start out the same and have differential trajectories is not yet evidenced. Future research needs to focus on the…

  5. Voice and Speech after Laryngectomy

    Science.gov (United States)

    Stajner-Katusic, Smiljka; Horga, Damir; Musura, Maja; Globlek, Dubravka

    2006-01-01

    The aim of the investigation is to compare voice and speech quality in alaryngeal patients using esophageal speech (ESOP, eight subjects), electroacoustical speech aid (EACA, six subjects) and tracheoesophageal voice prosthesis (TEVP, three subjects). The subjects reading a short story were recorded in the sound-proof booth and the speech samples…

  6. Voice Quality of Psychological Origin

    Science.gov (United States)

    Teixeira, Antonio; Nunes, Ana; Coimbra, Rosa Lidia; Lima, Rosa; Moutinho, Lurdes

    2008-01-01

    Variations in voice quality are essentially related to modifications of the glottal source parameters, such as: F[subscript 0], jitter, and shimmer. Voice quality is affected by prosody, emotional state, and vocal pathologies. Psychogenic vocal pathology is particularly interesting. In the present case study, the speaker naturally presented a…

  7. Voice handicap index in Swedish.

    Science.gov (United States)

    Ohlsson, Ann-Christine; Dotevall, Hans

    2009-01-01

    The objective of this study was to evaluate a Swedish version of the voice handicap index questionnaire (Sw-VHI). A total of 57 adult, dysphonic patients and 15 healthy controls completed the Sw-VHI and rated the degree of vocal fatigue and hoarseness on visual analogue scales. A perceptual voice evaluation was also performed. Test-retest reliability was analyzed in 38 subjects without voice complaints. Sw-VHI distinguished between dysphonic subjects and controls. Internal consistency (>0.84) and test-retest reliability (intraclass correlation coefficient >0.75) were good. Only moderate or weak correlations were found between Sw-VHI and the subjective and perceptual voice ratings. The data indicate that a difference above 13 points for the total Sw-VHI score and above 6 points for the Sw-VHI subscales is significant for an individual when comparing two different occasions. In conclusion, the Sw-VHI appears to be a robust instrument for assessment of the psycho-social impact of a voice disorder. However, the Sw-VHI seems, at least partly, to capture different aspects of voice function from the subjective voice ratings and the perceptual voice evaluation.

  8. Enhancing Author's Voice through Scripting

    Science.gov (United States)

    Young, Chase J.; Rasinski, Timothy V.

    2011-01-01

    The authors suggest using scripting as a strategy to mentor and enhance author's voice in writing. Through gradual release, students use authentic literature as a model for writing with voice. The authors also propose possible extensions for independent practice, integration across content areas, and tips for evaluation.

  9. Voice, Schooling, Inequality, and Scale

    Science.gov (United States)

    Collins, James

    2013-01-01

    The rich studies in this collection show that the investigation of voice requires analysis of "recognition" across layered spatial-temporal and sociolinguistic scales. I argue that the concepts of voice, recognition, and scale provide insight into contemporary educational inequality and that their study benefits, in turn, from paying attention to…

  10. Voices in History

    Directory of Open Access Journals (Sweden)

    Ivan Leudar

    2001-06-01

    Full Text Available Experiences of “hearing voices” nowadays usually count as verbal hallucinations, and they indicate serious mental illness. Some are first-rank symptoms of schizophrenia, and the mass media, at least in Britain, tend to present them as antecedents of impulsive violence. They are, however, also found in other psychiatric conditions, and epidemiological surveys reveal that even individuals with no need of psychiatric help can hear voices, sometimes following bereavement or abuse, but sometimes for no discernible reason. So do these experiences necessarily mean insanity and violence, and must they be thought of as pathogenic hallucinations; or are there other ways to understand them and live with them, and with what consequences? One way to make our thinking more flexible is to turn to history. We find that hearing voices was always an enigmatic experience, and the people who had it were rare. The gallery of voice hearers is, though, distinguished: it includes Galilei, Bunyan and St Teresa. Socrates heard a daemon who guided his actions, but in his time this did not signify madness, nor was it described as a hallucination. Yet in 19th-century French psychological medicine the daemon became a hallucination and Socrates was retrospectively diagnosed as mentally ill. This paper examines the controversies which surrounded the experience at different points in history, as well as the practice of retrospective psychiatry. The conclusion reached on the basis of the historical materials is that the experience and the ontological status it is ascribed are not trans-cultural or trans-historic but situated both in history and in contemporary conflicts.

  11. Auditory pathways: anatomy and physiology.

    Science.gov (United States)

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described.

  12. Animal models for auditory streaming.

    Science.gov (United States)

    Itatani, Naoya; Klump, Georg M

    2017-02-19

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons' response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.

  13. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex

    Science.gov (United States)

    Zhuo, Ran; Xue, Hongbo; Chambers, Anna R.; Kolaczyk, Eric; Polley, Daniel B.

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices. PMID:27622211

  14. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    Science.gov (United States)

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
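
    The interaction analysis described in the two records above can be sketched numerically: fit the response once with purely additive terms per stimulus dimension and once with an interaction term, and see whether the interaction term reduces the residual. The sketch below uses polynomial bases and synthetic data as a simplified stand-in for the spline smoothers of a real GAM (a full analysis would use a GAM library such as R's mgcv); all variable names and values are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-dimensional stimulus space (e.g. tone frequency and
# level, rescaled to [-1, 1]) with a response that is NOT purely
# additive: the 0.8*x1*x2 term makes the dimensions interact.
n = 500
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = np.sin(2 * x1) + 0.5 * x2**2 + 0.8 * x1 * x2 + rng.normal(0, 0.1, n)

def design(x1, x2, interaction):
    # Low-order polynomial bases stand in for GAM spline smoothers.
    cols = [np.ones_like(x1), x1, x1**2, x1**3, x2, x2**2, x2**3]
    if interaction:
        cols.append(x1 * x2)  # tensor-product-like interaction term
    return np.column_stack(cols)

def rss(X, y):
    # Residual sum of squares of the least-squares fit.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

rss_additive = rss(design(x1, x2, interaction=False), y)
rss_interact = rss(design(x1, x2, interaction=True), y)

# If adding the interaction term substantially reduces the residual,
# the simulated "neuron" integrates across stimulus dimensions.
print(rss_additive > 1.5 * rss_interact)
```

    A real GAM additionally penalizes smoothness and reports significance of each term; the comparison-of-residuals logic, however, is the same.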

  15. Training-induced plasticity of auditory localization in adult mammals.

    Directory of Open Access Journals (Sweden)

    Oliver Kacelnik

    2006-04-01

    Full Text Available Accurate auditory localization relies on neural computations based on spatial cues present in the sound waves at each ear. The values of these cues depend on the size, shape, and separation of the two ears and can therefore vary from one individual to another. As with other perceptual skills, the neural circuits involved in spatial hearing are shaped by experience during development and retain some capacity for plasticity in later life. However, the factors that enable and promote plasticity of auditory localization in the adult brain are unknown. Here we show that mature ferrets can rapidly relearn to localize sounds after having their spatial cues altered by reversibly occluding one ear, but only if they are trained to use these cues in a behaviorally relevant task, with greater and more rapid improvement occurring with more frequent training. We also found that auditory adaptation is possible in the absence of vision or error feedback. Finally, we show that this process involves a shift in sensitivity away from the abnormal auditory spatial cues to other cues that are less affected by the earplug. The mature auditory system is therefore capable of adapting to abnormal spatial information by reweighting different localization cues. These results suggest that training should facilitate acclimatization to hearing aids in the hearing impaired.
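
    The binaural spatial cues mentioned above follow from simple geometry. As one worked example, the interaural time difference (ITD) for a distant source can be approximated with Woodworth's spherical-head formula, ITD = r(θ + sin θ)/c. The sketch below uses a typical adult-human head radius purely for illustration; it is not a parameter from the ferret study.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C
HEAD_RADIUS = 0.0875     # m; typical adult human (ferret heads are smaller)

def itd_seconds(azimuth_deg, head_radius=HEAD_RADIUS):
    """Interaural time difference for a distant source, using
    Woodworth's spherical-head approximation: ITD = r*(theta + sin(theta))/c.
    """
    theta = math.radians(azimuth_deg)
    return head_radius * (theta + math.sin(theta)) / SPEED_OF_SOUND

# A source straight ahead gives zero ITD; a source 90 degrees to one
# side gives roughly 0.66 ms for a human-sized head.
print(round(itd_seconds(90.0) * 1e6))  # microseconds
```

    Occluding one ear (as in the study) leaves ITDs largely intact but distorts interaural level differences and spectral cues, which is why reweighting between cue types is possible.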

  16. Facing Sound - Voicing Art

    DEFF Research Database (Denmark)

    Lønstrup, Ansa

    2013-01-01

    This article is based on examples of contemporary audiovisual art, with a special focus on the Tony Oursler exhibition Face to Face at Aarhus Art Museum ARoS in Denmark in March-July 2012. My investigation combines qualitative interviews with visitors, observations of the audience's interactions with the exhibition and the artworks in the museum space, and short analyses of individual works of art based on reception aesthetics and phenomenology, inspired by newer writings on sound, voice and listening.

  17. Voice over IP Security

    CERN Document Server

    Keromytis, Angelos D

    2011-01-01

    Voice over IP (VoIP) and Internet Multimedia Subsystem (IMS) technologies are rapidly being adopted by consumers, enterprises, governments and militaries. These technologies offer higher flexibility and more features than traditional telephony (PSTN) infrastructures, as well as the potential for lower cost through equipment consolidation and, for the consumer market, new business models. However, VoIP systems also represent a higher complexity in terms of architecture, protocols and implementation, with a corresponding increase in the potential for misuse. In this book, the authors examine the…

  18. Effects on vocal range and voice quality of singing voice training: the classically trained female voice.

    Science.gov (United States)

    Pabon, Peter; Stallinga, Rob; Södersten, Maria; Ternström, Sten

    2014-01-01

    A longitudinal study was performed on the acoustical effects of singing voice training under a given study program, using the voice range profile (VRP). Pretraining and posttraining recordings were made of students who participated in a 3-year bachelor singing study program. A questionnaire that included questions on optimal range, register use, classification, vocal health and hygiene, mixing technique, and training goals was used to rate and categorize self-assessed voice changes. Based on the responses, a subgroup of 10 classically trained female voices was selected, which was homogeneous enough for effects of training to be identified. The VRP perimeter contour was analyzed for effects of voice training. Also, a mapping within the VRP of voice quality, as expressed by the crest factor, was used to indicate the register boundaries and to monitor the acoustical consequences of the newly learned vocal technique of "mixed voice." VRPs were averaged across subjects. Findings were compared with the self-assessed vocal changes. Pre/post comparison of the average VRPs showed, in the midrange, (1) a decrease in the VRP area that was associated with the loud chest voice, (2) a reduction of the crest factor values, and (3) a reduction of maximum sound pressure level values. The students' self-evaluations of the voice changes appeared in some cases to contradict the VRP findings. VRPs of individual voices were seen to change over the course of a singing education. These changes were manifest also in the average group. High-resolution computerized recording, complemented with an acoustic register marker, allows a meaningful assessment of some effects of training, on an individual basis and for groups that comprise singers of a specific genre. It is argued that this kind of investigation is possible only within a focused training program, given by a faculty who has agreed on the goals. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
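
    The crest factor used above as an acoustic register marker is simply the peak absolute amplitude divided by the RMS amplitude of the waveform. A minimal sketch (the test signal is illustrative, not VRP data):

```python
import math

def crest_factor(samples):
    """Crest factor: peak absolute amplitude divided by RMS amplitude.

    Higher values indicate a more 'peaked' waveform; in voice range
    profiles it can serve as a rough marker of vocal register.
    """
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return peak / rms

# A pure sine wave has a crest factor of sqrt(2), about 1.414.
n = 1000
sine = [math.sin(2 * math.pi * k / n) for k in range(n)]
print(round(crest_factor(sine), 2))  # → 1.41
```

    A pressed chest-voice waveform has sharper glottal pulses and hence a higher crest factor than a breathy or mixed phonation, which is what makes it usable as a register boundary indicator.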

  19. Questioning Photovoice Research: Whose Voice?

    Science.gov (United States)

    Evans-Agnew, Robin A; Rosemberg, Marie-Anne S

    2016-07-01

    Photovoice is an important participatory research tool for advancing health equity. Our purpose is to critically review how participant voice is promoted through the photovoice process of taking and discussing photos and adding text/captions. PubMed, Scopus, PsycINFO, and Web of Science databases were searched from the years 2008 to 2014 using the keywords photovoice, photonovella, photovoice and social justice, and photovoice and participatory action research. Research articles were reviewed for how participant voice was (a) analyzed, (b) exhibited in community forums, and (c) disseminated through published manuscripts. Of 21 studies, 13 described participant voice in the data analysis, 14 described participants' control over exhibiting photo-texts, seven manuscripts included a comprehensive set of photo-texts, and none described participant input on choice of manuscript photo-texts. Photovoice designs vary in the advancement of participant voice, with the least advancement occurring in manuscript publication. Future photovoice researchers should expand approaches to advancing participant voice.

  20. Voice quality of psychological origin.

    Science.gov (United States)

    Teixeira, Antonio; Nunes, Ana; Coimbra, Rosa Lídia; Lima, Rosa; Moutinho, Lurdes

    2008-01-01

    Variations in voice quality are essentially related to modifications of the glottal source parameters, such as: F0, jitter, and shimmer. Voice quality is affected by prosody, emotional state, and vocal pathologies. Psychogenic vocal pathology is particularly interesting. In the present case study, the speaker naturally presented a ventricular band voice whereas in a controlled production he was able to use a more normal phonation process. A small corpus was recorded which included sustained vowels and short sentences in both registers. A normal speaker was also recorded in similar tasks. Annotation and extraction of parameters were made using Praat's voice report function. Application of the Hoarseness Diagram to sustained productions situates this case in the pseudo-glottic phonation region. Analysis of several different parameters related to F0, jitter, shimmer, and harmonicity revealed that the speaker with psychogenic voice was capable of controlling certain parameters (e.g. F0 maximum) but was unable to correct others such as shimmer.
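
    The glottal source parameters named in this record have several variants; Praat's voice report includes "local" jitter and shimmer, which are the mean absolute difference between consecutive glottal periods (or peak amplitudes) normalized by the mean. A minimal sketch with hypothetical cycle data (the numbers are illustrative, not the patient's measurements):

```python
def local_perturbation(values):
    """'Local' jitter (for periods) or shimmer (for peak amplitudes):
    mean absolute difference between consecutive values, divided by
    the mean value. Returned as a fraction; multiply by 100 for percent.
    """
    diffs = [abs(a - b) for a, b in zip(values, values[1:])]
    return (sum(diffs) / len(diffs)) / (sum(values) / len(values))

# Hypothetical glottal cycle durations in seconds (a ~200 Hz voice)
periods = [0.00500, 0.00504, 0.00498, 0.00502, 0.00499]
# Hypothetical peak amplitudes of the same cycles (arbitrary units)
amplitudes = [0.81, 0.79, 0.82, 0.80, 0.81]

jitter = local_perturbation(periods)     # fraction; here under 1 %
shimmer = local_perturbation(amplitudes)
print(jitter < 0.02 and shimmer < 0.05)  # healthy-voice-like values
```

    Pathological voices such as the ventricular band phonation described above typically show elevated jitter and especially shimmer relative to these healthy-range values.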

  1. Muscular tension and body posture in relation to voice handicap and voice quality in teachers with persistent voice complaints.

    NARCIS (Netherlands)

    Kooijman, P.G.C.; Jong, F.I.C.R.S. de; Oudes, M.J.; Huinck, W.J.; Acht, H. van; Graamans, K.

    2005-01-01

    The aim of this study was to investigate the relationship between extrinsic laryngeal muscular hypertonicity and deviant body posture on the one hand, and voice handicap and voice quality on the other, in teachers with persistent voice complaints and a history of voice-related absenteeism. The st…

  2. Feedforward and Feedback Control in Apraxia of Speech: Effects of Noise Masking on Vowel Production

    Science.gov (United States)

    Maas, Edwin; Mailend, Marja-Liisa; Guenther, Frank H.

    2015-01-01

    Purpose: This study was designed to test two hypotheses about apraxia of speech (AOS) derived from the Directions Into Velocities of Articulators (DIVA) model (Guenther et al., 2006): the feedforward system deficit hypothesis and the feedback system deficit hypothesis. Method: The authors used noise masking to minimize auditory feedback during…

  4. Auditory-motor mapping for pitch control in singers and nonsingers.

    Science.gov (United States)

    Jones, Jeffery A; Keough, Dwayne

    2008-09-01

    Little is known about the basic processes underlying the behavior of singing. This experiment was designed to examine differences in the representation of the mapping between fundamental frequency (F0) feedback and the vocal production system in singers and nonsingers. Auditory feedback regarding F0 was shifted down in frequency while participants sang the consonant-vowel /ta/. During the initial frequency-altered trials, singers compensated to a lesser degree than nonsingers, but this difference was reduced with continued exposure to frequency-altered feedback. After brief exposure to frequency altered auditory feedback, both singers and nonsingers suddenly heard their F0 unaltered. When participants received this unaltered feedback, only singers' F0 values were found to be significantly higher than their F0 values produced during baseline and control trials. These aftereffects in singers were replicated when participants sang a different note than the note they produced while hearing altered feedback. Together, these results suggest that singers rely more on internal models than nonsingers to regulate vocal productions rather than real time auditory feedback.
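
    Frequency shifts in such experiments are specified in cents, the log-frequency unit with 1200 cents per octave: cents = 1200·log2(f/f_ref). A small conversion sketch (the 220 Hz reference is illustrative, not a value from the study):

```python
import math

def cents(f, f_ref):
    """Interval from f_ref to f in cents (1200 cents = one octave)."""
    return 1200.0 * math.log2(f / f_ref)

def shift(f_ref, n_cents):
    """Frequency obtained by shifting f_ref by n_cents."""
    return f_ref * 2.0 ** (n_cents / 1200.0)

# A -100 cent perturbation (one semitone down) applied to a 220 Hz voice:
f_shifted = shift(220.0, -100.0)
print(round(f_shifted, 1))             # → 207.7
print(round(cents(f_shifted, 220.0)))  # → -100
```

    Working in cents rather than hertz makes perturbations comparable across speakers with different baseline F0, which is why compensation magnitudes are usually reported in cents.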

  5. Bodies, Spaces, Voices, Silences

    Directory of Open Access Journals (Sweden)

    Donatella Mazzoleni

    2013-07-01

    Full Text Available A good architecture should not only provide functional, formal, and technical quality for urban spaces, but also let the voice of the city be perceived, listened to, and enjoyed. Every city has its own specific sound identity, or “ISO” (R. O. Benenzon), made up of a complex texture of background noises and fluctuating sound figures that emerge and disappear in a game of continuous fadings. For instance, the ISO of Naples is characterized by a widespread need to hear the sonic return of one's own and others' voices, and by an aversion to silence. Cities may fall ill: illness from noise, within super-crowded neighbourhoods, or illness from silence, in the forced isolation of peripheries. The proposal of an urban music therapy denotes a novel, broadly interdisciplinary research path, where architecture, music, medicine, psychology, and communication science may converge in order to rebalance the spaces and relational life of the urban collectivity, through care for the body and sound dimensions.

  6. Material differences of auditory source retrieval:Evidence from event-related potential studies

    Institute of Scientific and Technical Information of China (English)

    NIE AiQing; GUO ChunYan; SHEN MoWei

    2008-01-01

    Two event-related potential experiments were conducted to investigate the temporal and spatial distributions of the old/new effects for an item recognition task and an auditory source retrieval task, using pictures and Chinese characters as stimuli, respectively. Stimuli were presented at the center of the screen with their names read out by either a female or a male voice during the study phase, and two tests were then performed separately. One test task was to differentiate the old items from the new ones; the other was to judge items read out by a particular voice during the study phase as targets and all others as non-targets. The results showed that the old/new effect of the auditory source retrieval task was more sustained over time than that of the item recognition task in both experiments, and the spatial distribution of the former effect was wider than that of the latter. Both experiments recorded a reliable old/new effect over the prefrontal cortex during the source retrieval task. However, there were some differences in the old/new effect for the auditory source retrieval task between pictures and Chinese characters, and LORETA source analysis indicated that the differences might be rooted in the temporal lobe. These findings demonstrate that the relationship between the old/new effects of the item recognition task and the auditory source retrieval task supports the dual-process model; the spatial and temporal distributions of the old/new effect elicited by the auditory source retrieval task are regulated by both the features of the experimental material and the perceptual attributes of the voice.

  7. The effects of speech motor preparation on auditory perception

    Science.gov (United States)

    Myers, John

    Perception and action are coupled via bidirectional relationships between sensory and motor systems. Motor systems influence sensory areas by imparting a feedforward influence on sensory processing termed "motor efference copy" (MEC). MEC is suggested to occur in humans because speech preparation and production modulate neural measures of auditory cortical activity. However, it is not known if MEC can affect auditory perception. We tested the hypothesis that during speech preparation auditory thresholds will increase relative to a control condition, and that the increase would be most evident for frequencies that match the upcoming vocal response. Participants performed trials in a speech condition that contained a visual cue indicating a vocal response to prepare (one of two frequencies), followed by a go signal to speak. To determine threshold shifts, voice-matched or -mismatched pure tones were presented at one of three time points between the cue and target. The control condition was the same except the visual cues did not specify a response and subjects did not speak. For each participant, we measured f0 thresholds in isolation from the task in order to establish baselines. Results indicated that auditory thresholds were highest during speech preparation, relative to baselines and a non-speech control condition, especially at suprathreshold levels. Thresholds for tones that matched the frequency of planned responses gradually increased over time, but sharply declined for the mismatched tones shortly before targets. Findings support the hypothesis that MEC influences auditory perception by modulating thresholds during speech preparation, with some specificity relative to the planned response. The threshold increase in tasks vs. baseline may reflect attentional demands of the tasks.

  8. Can you hear me now? Musical training shapes functional brain networks for selective auditory attention and hearing speech in noise

    Directory of Open Access Journals (Sweden)

    Dana L Strait

    2011-06-01

    Full Text Available Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker’s voice amidst others. Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and nonmusicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not nonmusicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work from our laboratory documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development of language-related skills, musical training may aid in the prevention, habilitation and remediation of children with a wide range of attention-based language and learning impairments.

  9. Conservative approaches to the management of voice disorders

    Directory of Open Access Journals (Sweden)

    Kruse, Eberhard

    2005-09-01

    Full Text Available The presence of a voice disorder not only affects social interaction but potentially also has a major impact on the work environment. The latter is becoming more important given the increasing demands employers make in terms of competency in both communication skills and adequacy of phonation. The development of newer and more precise phono-microsurgical techniques for the treatment of an increasing variety of voice disorders has not entirely replaced a conservative approach to voice rehabilitation. Nevertheless, conservative methods have to demonstrate a higher effectiveness than microsurgical intervention, given the alternative indications. This is especially true the more specifically and systematically a given individual glottic pathophysiology can be shifted toward individual phonatory physiology or a supplementary phonation mechanism. This desired change depends not only on the theoretical concepts but also on maintaining strict therapeutic principles during their clinical application. Conservative management of voice disorders has to be intensive and comprehensive, especially if one accepts our model of Laryngeal Double Phonation Function and the existence of a phonatory feedback loop.

  10. Crossing Cultures with Multi-Voiced Journals

    Science.gov (United States)

    Styslinger, Mary E.; Whisenant, Alison

    2004-01-01

    In this article, the authors discuss the benefits of using multi-voiced journals as a teaching strategy in reading instruction. Multi-voiced journals, an adaptation of dual-voiced journals, encourage responses to reading in the varied, cultured voices of characters. They are similar to reading journals in that they prod students to connect to the lives…

  11. Postoperative functional voice changes after conventional open or robotic thyroidectomy: a prospective trial.

    Science.gov (United States)

    Lee, Jeonghun; Na, Kuk Young; Kim, Ra Mi; Oh, Yeonju; Lee, Ji Hyun; Lee, Jandee; Lee, Jin-Seok; Kim, Chul-Ho; Soh, Euy-Young; Chung, Woong Youn

    2012-09-01

    To use objective and subjective voice function analysis to compare outcomes in patients who had undergone conventional open thyroidectomy or robotic thyroidectomy. The study involved 88 consecutive patients who underwent thyroid surgery between May 2009 and December 2009; 46 patients underwent a conventional open thyroidectomy, and 42 underwent a robotic thyroidectomy. Auditory perceptual evaluation was used to make subjective assessments of voice function, and videolaryngostroboscopy, acoustic voice analysis with aerodynamic study, electroglottography, and voice range profile were used to make objective assessments. Each assessment was made before surgery, and at 1 week and 3 months after surgery. The conventional open and robotic thyroidectomy groups were similar in terms of age, gender ratio, and disease profile. We found that 18 (20.5%) of the 88 patients showed some level of voice dysfunction at 1 week after surgery; that the dysfunction resolved by 3 months after surgery in all cases; and that it was not permanent according to postoperative videolaryngostroboscopy. The conventional open and robotic thyroidectomy groups were found to have similar levels of dysfunction at 1 week after surgery, except for jitter, which was greater in the robotic group. For both groups, any such dysfunction spontaneously resolved by 3 months after surgery, and there were no significant differences between the groups in terms of any voice function parameter. Voice dysfunction was present after both open and robotic thyroidectomy (without any evident laryngeal nerve injury). However, function subsequently normalized to preoperative levels at 3 months after surgery in both groups. Voice function outcomes after robotic thyroidectomy are similar to those after conventional open thyroidectomy.

  12. Lexical frequency and voice assimilation.

    Science.gov (United States)

    Ernestus, Mirjam; Lahey, Mybeth; Verhees, Femke; Baayen, R Harald

    2006-08-01

    Acoustic duration and degree of vowel reduction are known to correlate with a word's frequency of occurrence. The present study broadens the research on the role of frequency in speech production to voice assimilation. The test case was regressive voice assimilation in Dutch. Clusters from a corpus of read speech were more often perceived as unassimilated in lower-frequency words and as either completely voiced (regressive assimilation) or, unexpectedly, as completely voiceless (progressive assimilation) in higher-frequency words. Frequency did not predict the voice classifications over and above important acoustic cues to voicing, suggesting that the frequency effects on the classifications were carried exclusively by the acoustic signal. The duration of the cluster and the period of glottal vibration during the cluster decreased while the duration of the release noises increased with frequency. This indicates that speakers reduce articulatory effort for higher-frequency words, with some acoustic cues signaling more voicing and others less voicing. A higher frequency leads not only to acoustic reduction but also to more assimilation.

  13. Religiosity in young adolescents with auditory vocal hallucinations.

    Science.gov (United States)

    Steenhuis, Laura A; Bartels-Velthuis, Agna A; Jenner, Jack A; Aleman, André; Bruggeman, Richard; Nauta, Maaike H; Pijnenborg, Gerdina H M

    2016-02-28

    The current exploratory study examined the associations between auditory vocal hallucinations (AVH), delusions, and religiosity in young adolescents. 337 children with and without AVH from a population-based case-control study were assessed after five years, at age 12 and 13, on the presence and appraisal of AVH, delusions, and religiosity. AVH status (persistent, remittent, incident, or control) was examined in relation to religiosity. Results demonstrated a non-linear association between AVH and religiosity. Moderately religious adolescents were more likely to report AVH than non-religious adolescents (O.R.=2.6). Prospectively, moderately religious adolescents were more likely to have recently developed AVH than non-religious adolescents (O.R.=3.6) and strongly religious adolescents (O.R.=7.9). Of the adolescents reporting voices in this sample (16.3%), more than half reported positive voices. Religious beliefs were often described as supportive, useful, or neutral (82%), regardless of the level of religiosity, for adolescents both with and without AVH. Co-occurrence of AVH and delusions, and severity of AVH, were not related to religiosity. The present findings suggest there may be a non-linear association between religiosity and hearing voices in young adolescents. A speculative explanation is that religious practices were adopted in response to AVH as a method of coping. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. Interdisciplinary approaches to the phenomenology of auditory verbal hallucinations.

    Science.gov (United States)

    Woods, Angela; Jones, Nev; Bernini, Marco; Callard, Felicity; Alderson-Day, Ben; Badcock, Johanna C; Bell, Vaughan; Cook, Chris C H; Csordas, Thomas; Humpston, Clara; Krueger, Joel; Larøi, Frank; McCarthy-Jones, Simon; Moseley, Peter; Powell, Hilary; Raballo, Andrea; Smailes, David; Fernyhough, Charles

    2014-07-01

    Despite the recent proliferation of scientific, clinical, and narrative accounts of auditory verbal hallucinations (AVHs), the phenomenology of voice hearing remains opaque and undertheorized. In this article, we outline an interdisciplinary approach to understanding hallucinatory experiences which seeks to demonstrate the value of the humanities and social sciences to advancing knowledge in clinical research and practice. We argue that an interdisciplinary approach to the phenomenology of AVH utilizes rigorous and context-appropriate methodologies to analyze a wider range of first-person accounts of AVH at 3 contextual levels: (1) cultural, social, and historical; (2) experiential; and (3) biographical. We go on to show that there are significant potential benefits for voice hearers, clinicians, and researchers. These include (1) informing the development and refinement of subtypes of hallucinations within and across diagnostic categories; (2) "front-loading" research in cognitive neuroscience; and (3) suggesting new possibilities for therapeutic intervention. In conclusion, we argue that an interdisciplinary approach to the phenomenology of AVH can nourish the ethical core of scientific enquiry by challenging its interpretive paradigms, and offer voice hearers richer, potentially more empowering ways to make sense of their experiences.

  15. Behind the Scenes of Auditory Perception

    OpenAIRE

    Shamma, Shihab A.; Micheyl, Christophe

    2010-01-01

    “Auditory scenes” often contain contributions from multiple acoustic sources. These are usually heard as separate auditory “streams”, which can be selectively followed over time. How and where these auditory streams are formed in the auditory system is one of the most fascinating questions facing auditory scientists today. Findings published within the last two years indicate that both cortical and sub-cortical processes contribute to the formation of auditory streams, and they raise importan…

  16. Variations in voice level and fundamental frequency with changing background noise level and talker-to-listener distance while wearing hearing protectors: A pilot study.

    Science.gov (United States)

    Bouserhal, Rachel E; Macdonald, Ewen N; Falk, Tiago H; Voix, Jérémie

    2016-01-01

    Speech production in noise with varying talker-to-listener distance has been well studied for the open-ear condition. However, occluding the ear canal affects auditory feedback and causes deviations from the models presented for the open-ear condition. Communication is a main concern for people wearing hearing protection devices (HPDs). Although practical, radio communication is cumbersome, as it does not distinguish designated receivers. A smarter radio communication protocol must be developed to alleviate this problem, and thus it is necessary to model speech production in noise while wearing HPDs. Such a model opens the door to radio communication systems that distinguish receivers and offer more efficient communication between persons wearing HPDs. This paper presents the results of a pilot study investigating the effects of occluding the ear on changes in voice level and fundamental frequency in noise and with varying talker-to-listener distance. Twelve participants (mean age 28) took part in the study. Compared to existing data, the results show a trend similar to the open-ear condition, with the exception of the occluded quiet condition. This implies that a model can be developed to better understand speech production with the occluded ear.

  17. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mec…

  19. Voice Habits and Behaviors: Voice Care Among Flamenco Singers.

    Science.gov (United States)

    Garzón García, Marina; Muñoz López, Juana; Y Mendoza Lara, Elvira

    2017-03-01

    The purpose of this study is to analyze the vocal behavior of flamenco singers, as compared with classical music singers, to establish a differential vocal profile of voice habits and behaviors in flamenco music. Bibliographic review was conducted, and the Singer's Vocal Habits Questionnaire, an experimental tool designed by the authors to gather data regarding hygiene behavior, drinking and smoking habits, type of practice, voice care, and symptomatology perceived in both the singing and the speaking voice, was administered. We interviewed 94 singers, divided into two groups: the flamenco experimental group (FEG, n = 48) and the classical control group (CCG, n = 46). Frequency analysis, a Likert scale, and discriminant and exploratory factor analysis were used to obtain a differential profile for each group. The FEG scored higher than the CCG in speaking voice symptomatology. The FEG scored significantly higher than the CCG in use of "inadequate vocal technique" when singing. Regarding voice habits, the FEG scored higher in "lack of practice and warm-up" and "environmental habits." A total of 92.6% of the subjects classified themselves correctly in each group. The Singer's Vocal Habits Questionnaire has proven effective in differentiating flamenco and classical singers. Flamenco singers are exposed to numerous vocal risk factors that make them more prone to vocal fatigue, mucosa dehydration, phonotrauma, and muscle stiffness than classical singers. Further research is needed in voice training in flamenco music, as a means to strengthen the voice and enable it to meet the requirements of this musical genre. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  20. Voices of the Unheard

    DEFF Research Database (Denmark)

    Matthiesen, Noomi Christine Linde

    2014-01-01

    … They were in two different classes at both schools, i.e. four classes in total. The families were followed for 18 months. Formal interviews were conducted with mothers and teachers, parent-teacher conferences were recorded, and participant observations were conducted in classrooms, playgrounds, and afterschool… The central argument is that Somali diaspora parents (with a special focus on mothers, as these were the parents who took most responsibility in the four cases of this research) have difficulty expressing their opinions, as structural, historical, and social dynamics create conditions in which their voices… are silenced, or at least significantly restricted, resulting in marginalizing consequences. The focus in each article is on here-and-now interactional dynamics, but in order to understand these constitutive negotiations, it is argued that the analysis must be situated in a description of the constituted…

  1. Passing on power & voice

    DEFF Research Database (Denmark)

    Noer, Vibeke Røn; Nielsen, Cathrine Sand

    2014-01-01

    … The education lasts 3.5 years, and the landmark of the educational model is the continuous shifting between classroom teaching and teaching in clinical practice. Clinical teaching takes place at approved clinical placement institutions in hospitals and in the social and health care services outside… …intention of gaining knowledge about other possible ways to perform the education. The class, named the E-class, followed what in the field was named ‘an experimental educational model based on experience-based learning’ (Nielsen et al. 2011). The experiential educational model is argued as an experiment… A higher degree of student involvement in planning as well as teaching was presented in the field as part of ‘the overall educational approach’. In the course ‘Acute, Critical Nursing & Terminal, Palliative Care’ this was translated into an innovative pedagogy intended to pass on power and voice…

  2. Voice stress analysis

    Science.gov (United States)

    Brenner, Malcolm; Shipp, Thomas

    1988-01-01

    In a study of the validity of eight candidate voice measures (fundamental frequency, amplitude, speech rate, frequency jitter, amplitude shimmer, Psychological Stress Evaluator scores, energy distribution, and a composite derived from the above measures) for determining psychological stress, 17 males aged 21 to 35 were subjected to a tracking task on a microcomputer CRT while parameters of vocal production as well as heart rate were measured. The findings confirm those of earlier studies that increases in fundamental frequency, amplitude, and speech rate are found in speakers under extreme levels of stress. In addition, the same changes appear to occur in a regular fashion at a more subtle level of stress that may be characteristic, for example, of routine flying situations. None of the individual speech measures performed as robustly as heart rate did.

  3. Voice over IP

    OpenAIRE

    Mantula, Juha

    2006-01-01

    This thesis examines Voice over Internet Protocol technology and the opportunities it offers for business. The theoretical part covers the protocols and standards that are important for VoIP and the characteristics of VoIP, and presents various voice applications that make use of VoIP technology. The empirical part studies the use of the Skype application at Viestintä Ky Pitkäranta. The aim of the work is to identify the advantages and disadvantages of VoIP and how the technology can be exploited in daily…

  4. Left temporal lobe structural and functional abnormality underlying auditory hallucinations

    Directory of Open Access Journals (Sweden)

    Kenneth Hugdahl

    2009-05-01

    Full Text Available In this article, we review recent findings from our laboratory indicating that auditory hallucinations in schizophrenia are internally generated speech misrepresentations lateralized to the left superior temporal gyrus and sulcus. Such experiences are, moreover, not cognitively suppressed, owing to enhanced attention to the voices and failure of fronto-parietal executive control functions. An overview of diagnostic questionnaires for the scoring of symptoms is presented, together with a review of behavioural, structural, and functional MRI data. Functional imaging data have shown either increased or decreased activation, depending on whether patients were presented with an external stimulus during scanning. Structural imaging data have shown reductions of grey matter density and volume in the same temporal lobe areas. The behavioural and neuroimaging findings are, moreover, hypothesized to be related to glutamate hypofunction in schizophrenia. We propose a model for the understanding of auditory hallucinations that traces their origin to uncontrolled neuronal firing in the speech areas of the left temporal lobe, which is not suppressed by volitional cognitive control processes, owing to dysfunctional fronto-parietal executive cortical networks.

  5. Synchrony of auditory brain responses predicts behavioral ability to keep still in children with autism spectrum disorder: Auditory-evoked response in children with autism spectrum disorder.

    Science.gov (United States)

    Yoshimura, Yuko; Kikuchi, Mitsuru; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Remijn, Gerard B; Oi, Manabu; Munesue, Toshio; Higashida, Haruhiro; Minabe, Yoshio

    2016-01-01

    The auditory-evoked P1m, recorded by magnetoencephalography, reflects a central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, based on a previous report of an asynchrony of P1m in children with ADHD, to clarify whether the P1m right-left hemispheric synchronization is related to the symptom of hyperactivity in children with ASD, we investigated the relationship between voice-evoked P1m right-left hemispheric synchronization and hyperactivity in children with ASD. In addition to synchronization, we investigated the right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observational Schedule, Generic (ADOS-G) subscale. However, there was a significant correlation between the degrees of hemispheric synchronization and the ability to keep still during 12-minute MEG recording periods. Our results also suggested that asynchrony in the bilateral brain auditory processing system is associated with ADHD-like symptoms in children with ASD.

  6. Auditory discrimination of force of impact.

    Science.gov (United States)

    Lutfi, Robert A; Liu, Ching-Ju; Stoelinga, Christophe N J

    2011-04-01

    The auditory discrimination of force of impact was measured for three groups of listeners using sounds synthesized according to first-order equations of motion for the homogeneous, isotropic bar [Morse and Ingard (1968). Theoretical Acoustics, pp. 175-191]. The three groups were professional percussionists, nonmusicians, and individuals recruited from the general population without regard to musical background. In the two-interval, forced-choice procedure, listeners chose the sound corresponding to the greater force of impact as the length of the bar varied from one presentation to the next. From the equations of motion, a maximum-likelihood test for the task was determined to be of the form Δlog A + α Δlog f > 0, where A and f are the amplitude and frequency of any one partial and α = 0.5. Relative decision weights on Δlog f were obtained from the trial-by-trial responses of listeners and compared to α. Percussionists generally outperformed the other groups; however, the obtained decision weights of all listeners deviated significantly from α and showed variability within groups far in excess of the variability associated with replication. Providing correct feedback after each trial had little effect on the decision weights. The variability in these measures was comparable to that seen in studies involving the auditory discrimination of other source attributes.
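    The maximum-likelihood rule quoted in the abstract can be sketched directly. This is an illustrative reading of the formula rather than code from the study; the amplitude and frequency values used below are hypothetical:

    ```python
    import math

    def choose_greater_force(a1: float, f1: float, a2: float, f2: float,
                             alpha: float = 0.5) -> int:
        """Pick the interval (1 or 2) whose sound implies the greater impact force.

        Implements the decision rule Δlog A + α Δlog f > 0, where Δ is the
        difference between intervals 1 and 2, A and f are the amplitude and
        frequency of any one partial, and α = 0.5 follows from the bar's
        equations of motion.
        """
        statistic = (math.log(a1) - math.log(a2)) + alpha * (math.log(f1) - math.log(f2))
        return 1 if statistic > 0 else 2
    ```

    With equal amplitudes but a higher partial frequency in interval 1, the rule attributes the greater force to interval 1; a listener whose fitted weight on Δlog f deviates from α = 0.5 departs from this ideal observer.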

  7. Voice and choice by delegation.

    Science.gov (United States)

    van de Bovenkamp, Hester; Vollaard, Hans; Trappenburg, Margo; Grit, Kor

    2013-02-01

    In many Western countries, options for citizens to influence public services are increased to improve the quality of services and democratize decision making. Possibilities to influence are often cast into Albert Hirschman's taxonomy of exit (choice), voice, and loyalty. In this article we identify delegation as an important addition to this framework. Delegation gives individuals the chance to practice exit/choice or voice without all the hard work that is usually involved in these options. Empirical research shows that not many people use their individual options of exit and voice, which could lead to inequality between users and nonusers. We identify delegation as a possible solution to this problem, using Dutch health care as a case study to explore this option. Notwithstanding various advantages, we show that voice and choice by delegation also entail problems of inequality and representativeness.

  8. The Christian voice in philosophy

    Directory of Open Access Journals (Sweden)

    Stuart Fowler

    1982-03-01

    Full Text Available In this paper the Rev. Stuart Fowler outlines a Christian voice in Philosophy and urges the Christian philosopher to investigate his position and his stance with integrity and honesty.

  9. Voice Force tulekul / Tõnu Ojala

    Index Scriptorium Estoniae

    Ojala, Tõnu, 1969-

    2005-01-01

    On an event in the jubilee season of the Academic Male Choir of Tallinn University of Technology, which is celebrating its 60th anniversary: the a cappella pop-group festival Voice Force (concerts on 12 Nov at the club Parlament and on 3 Dec at the Russian Cultural Centre).

  11. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  12. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  13. Feature Extraction of Voice Segments Using Cepstral Analysis for Voice Regeneration

    OpenAIRE

    Banerjee, P. S.; Baisakhi Chakraborty; Jaya Banerjee

    2015-01-01

    Although much work has been done on speech-to-text conversion and vice versa, voice detection, and similarity analysis of two voice samples, far less emphasis has been given to voice regeneration. General algorithms for distinguishing between two voice sources paved the way for our endeavor to reconstruct the voice from the source voice samples provided. By utilizing these algorithms and putting further stress on the feature extraction part, we tried to fabricate the source voice wi...
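
    The abstract does not specify the algorithm, but the cepstral analysis named in the title is conventionally computed as the inverse FFT of the log magnitude spectrum, with voiced pitch appearing as a peak at the quefrency of the glottal period. A minimal sketch in Python/NumPy (the synthetic pulse-train signal and all parameters are illustrative, not taken from the paper):

```python
import numpy as np

def real_cepstrum(frame: np.ndarray) -> np.ndarray:
    """Real cepstrum: inverse FFT of the log magnitude spectrum."""
    spectrum = np.fft.fft(frame)
    log_mag = np.log(np.abs(spectrum) + 1e-12)  # epsilon avoids log(0)
    return np.fft.ifft(log_mag).real

# Synthetic "voiced" frame: a 100 Hz impulse train at 8 kHz (period = 80 samples).
fs = 8000
frame = np.zeros(800)
frame[::80] = 1.0

ceps = real_cepstrum(frame)
# The pitch period shows up as a cepstral peak away from the origin;
# search quefrencies of 5-15 ms (40-120 samples), i.e. 67-200 Hz.
quefrency = 40 + np.argmax(ceps[40:120])
f0_estimate = fs / quefrency
print(f0_estimate)  # 100.0
```

    The same peak-picking step, applied frame by frame to real speech, yields the pitch track that a regeneration stage would need.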

  14. Associative Plasticity in the Medial Auditory Thalamus and Cerebellar Interpositus Nucleus During Eyeblink Conditioning

    Science.gov (United States)

    Halverson, Hunter E.; Lee, Inah; Freeman, John H.

    2010-01-01

    Eyeblink conditioning, a type of associative motor learning, requires the cerebellum. The medial auditory thalamus is a necessary source of stimulus input to the cerebellum during auditory eyeblink conditioning. Nothing is currently known about interactions between the thalamus and cerebellum during associative learning. In the current study, neuronal activity was recorded in the cerebellar interpositus nucleus and medial auditory thalamus simultaneously from multiple tetrodes during auditory eyeblink conditioning to examine the relative timing of learning-related plasticity within these interconnected areas. Learning-related changes in neuronal activity correlated with the eyeblink conditioned response were evident in the cerebellum before the medial auditory thalamus over the course of training and within conditioning trials, suggesting that thalamic plasticity may be driven by cerebellar feedback. Short-latency plasticity developed in the thalamus during the first conditioning session and may reflect attention to the conditioned stimulus. Extinction training resulted in a decrease in learning-related activity in both structures and an increase in inhibition within the cerebellum. A feedback projection from the cerebellar nuclei to the medial auditory thalamus was identified, which may play a role in learning by facilitating stimulus input to the cerebellum via the thalamo-pontine projection. PMID:20592200

  15. Work-related voice disorder

    OpenAIRE

    Paulo Eduardo Przysiezny; Luciana Tironi Sanson Przysiezny

    2015-01-01

    INTRODUCTION: Dysphonia is the main symptom of the disorders of oral communication. However, voice disorders also present with other symptoms such as difficulty in maintaining the voice (asthenia), vocal fatigue, variation in habitual vocal fundamental frequency, hoarseness, lack of vocal volume and projection, loss of vocal efficiency, and weakness when speaking. There are several proposals for the etiologic classification of dysphonia: functional, organofunctional, organic, and work-related...

  16. Tracheostomy cannulas and voice prosthesis.

    Science.gov (United States)

    Kramp, Burkhard; Dommerich, Steffen

    2009-01-01

    Cannulas and voice prostheses are mechanical aids for patients who have had to undergo tracheotomy or laryngectomy for various reasons. For a better understanding of the function of these artificial devices, the indications and particularities of the preceding surgical intervention are first described in the context of this review. Despite the established procedure of percutaneous dilatation tracheotomy, e.g. in intensive care units, the creation of epithelized tracheostomas has its own place, especially when airway obstruction is persistent (e.g. caused by trauma, inflammation, or tumors) and longer artificial ventilation or special care of the patient is required. In order to keep the airways open after tracheotomy, tracheostomy cannulas of different materials and with different functions are available. For each patient, the most appropriate type of cannula must be found. Voice prostheses are meanwhile the device of choice for rapid and efficient voice rehabilitation after laryngectomy. Individual sizes and materials allow adaptation of the voice prosthesis to the individual anatomical situation of the patient. The combined application of voice prostheses with an HME (Heat and Moisture Exchanger) allows good vocal as well as pulmonary rehabilitation. A precondition for an efficient voice prosthesis is the observation of certain surgical principles during laryngectomy. The lifetime of the prosthesis depends mainly on material properties and biofilms, mostly consisting of fungi and bacteria. The quality of voice with a valve prosthesis is clearly superior to esophageal voice or electrolaryngeal speech. Whenever possible, tracheostoma valves for hands-free speech should be applied. Physicians taking care of patients with voice prostheses after laryngectomy should know exactly what to do in case the device fails or is lost.

  17. Voice Collection under Different Spectrum

    Directory of Open Access Journals (Sweden)

    Min Li

    2013-05-01

    Full Text Available Based on short-time Fourier transform theory and the principles of digital filtering, this paper established a mathematical model for the collection of voice signals at different spectral bands. The voice signal is a non-stationary process, whereas the standard Fourier transform applies only to periodic signals, transient signals, or stationary random signals; it therefore cannot be used directly on a speech signal. By controlling different input types and parameters, this paper analyzed the spectrum of the collected original voice signal using the MATLAB software platform, and realized extraction, recording, and playback of the speech signal at different frequencies. The waveforms could thus be displayed clearly on the graphical user interface and the voice effect heard more distinctly. The result was also verified on a hardware platform consisting of a TMS320VC5509A [1] DSP chip and a TLV320AIC23 voice codec chip. The results showed that the model for extracting voice signals at different spectral bands is scientific, rational, and effective.
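
    The short-time Fourier transform at the heart of the paper's model can be written in a few lines. The sketch below uses Python/NumPy as a stand-in for the paper's MATLAB implementation, with an illustrative 500 Hz test tone rather than real voice data; the frame length, hop, and sampling rate are assumptions:

```python
import numpy as np

def stft(x, frame_len=256, hop=128, fs=8000):
    """Short-time Fourier transform with a Hann window (magnitudes only)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))   # shape (n_frames, frame_len//2 + 1)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
    return freqs, spec

# One second of a 500 Hz sine at an 8 kHz sampling rate.
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 500 * t)

freqs, spec = stft(x, fs=fs)
# The average spectral peak should sit in the 500 Hz bin (bin spacing fs/256 = 31.25 Hz).
peak_bin = spec.mean(axis=0).argmax()
print(freqs[peak_bin])  # 500.0
```

    Band-limited extraction of the kind described in the abstract then amounts to zeroing the unwanted bins before resynthesis.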

  18. Acute effects of radioiodine therapy on the voice and larynx of basedow-Graves patients

    Energy Technology Data Exchange (ETDEWEB)

    Isolan-Cury, Roberta Werlang; Cury, Adriano Namo [Sao Paulo Santa Casa de Misericordia, SP (Brazil). Medical Science School (FCMSCSP); Monte, Osmar [Sao Paulo Santa Casa de Misericordia, SP (Brazil). Physiology Department; Silva, Marta Assumpcao de Andrada e [Sao Paulo Santa Casa de Misericordia, SP (Brazil). Medical Science School (FCMSCSP). Speech Therapy School; Duprat, Andre [Sao Paulo Santa Casa de Misericordia, SP (Brazil). Medical Science School (FCMSCSP). Otorhinolaryngology Department; Marone, Marilia [Nuclimagem - Irmanity of the Sao Paulo Santa Casa de Misericordia, SP (Brazil). Nuclear Medicine Unit; Almeida, Renata de; Iglesias, Alexandre [Sao Paulo Santa Casa de Misericordia, SP (Brazil). Medical Science School (FCMSCSP). Otorhinolaryngology Department. Endocrinology and Metabology Unit

    2008-07-01

    Graves' disease is the most common cause of hyperthyroidism. There are three current therapeutic options: anti-thyroid medication, surgery, and radioactive iodine (I-131). There are few data in the literature regarding the effects of radioiodine therapy on the larynx and voice. The aim of this study was to assess the effect of radioiodine therapy on the voice of Basedow-Graves patients. Material and method: A prospective study was done. Following the diagnosis of Graves' disease, patients underwent investigation of their voice, measurement of maximum phonatory time (/a/) and the s/z ratio, fundamental frequency analysis (Praat software), laryngoscopy, and perceptual-auditory analysis in three different conditions: pre-treatment, 4 days, and 20 days post-radioiodine therapy. The conditions are based on the inflammatory pattern of thyroid tissue (Jones et al. 1999). Results: No statistically significant differences were found in voice characteristics across these three conditions. Conclusion: Radioiodine therapy does not affect voice quality. (author)

  19. Phonological systems of pediatric cochlear implant users: The acquisition of voicing

    Science.gov (United States)

    Chin, Steven B.; Oglesbee, Eric N.; Kirk, Andrew K.; Krug, Joseph E.

    2005-04-01

    Although cochlear implants are primarily auditory prostheses, they have also demonstrated their usefulness as aids to speech production and the acquisition of spoken language in children. This presentation reports on research currently being conducted at the Indiana University Medical Center on the development of phonological systems by children with five or more years of cochlear implant use in English-speaking environments. Characteristics of the feature [voice] will be examined in children with cochlear implants and in two comparison groups: adults with normal hearing and children with normal hearing. Specific aspects of voicing to be discussed include characteristic error patterns, phonetic implementation of the voicing contrast, and phonetic implementation of neutralization of the voicing contrast. Much of the evidence obtained thus far indicates that voicing acquisition in children with cochlear implants is not radically different from that of children with normal hearing. Many differences between the systems of children with cochlear implants and the ambient system thus appear to reflect the children's age as much as their hearing status. [Work supported by grants from the National Institutes of Health to Indiana University: R01DC005594 and R03DC003852.]

  20. The impact of voice on speech realization

    Directory of Open Access Journals (Sweden)

    Jelka Breznik

    2014-12-01

    Full Text Available The study discusses spoken literary language and the impact of voice on speech realization. The voice is the sound a human being makes using the vocal folds for talking, singing, laughing, crying, screaming… The human voice is specifically the part of human sound production in which the vocal folds (vocal cords) are the primary sound source. Our voice is our instrument and identity card. How does the voice (voice tone) affect others, and how do they respond, positively or negatively? How important is voice tone in the communication process? The study presents how certain individuals perceive voice. It reports the results of research on the relationships between the spoken word, the excellent speaker, the voice, and the description/definition/identification of specific voices, carried out with experts in the field of speech and voice as well as with non-professionals. The study encompasses two focus groups: one consists of amateurs (non-specialists in the field of speech or voice) who have no knowledge of this field, and the other of professionals who work with speech, language, or voice. The questions progressed from general to specific, directly related to the topic. The purpose of this method of questioning was to create a relaxed atmosphere, promote discussion, allow participants to interact and complement one another, and to encourage self-listening and additional comments.

  1. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  2. Auditory Cortex Characteristics in Schizophrenia: Associations With Auditory Hallucinations.

    Science.gov (United States)

    Mørch-Johnsen, Lynn; Nesvåg, Ragnar; Jørgensen, Kjetil N; Lange, Elisabeth H; Hartberg, Cecilie B; Haukvik, Unn K; Kompus, Kristiina; Westerhausen, René; Osnes, Kåre; Andreassen, Ole A; Melle, Ingrid; Hugdahl, Kenneth; Agartz, Ingrid

    2017-01-01

    Neuroimaging studies have demonstrated associations between smaller auditory cortex volume and auditory hallucinations (AH) in schizophrenia. Reduced cortical volume can result from a reduction of either cortical thickness or cortical surface area, which may reflect different neuropathology. We investigate for the first time how thickness and surface area of the auditory cortex relate to AH in a large sample of schizophrenia spectrum patients. Schizophrenia spectrum patients (n = 194) underwent magnetic resonance imaging. Mean cortical thickness and surface area in auditory cortex regions (Heschl's gyrus [HG], planum temporale [PT], and superior temporal gyrus [STG]) were compared between patients with (AH+, n = 145) and without (AH-, n = 49) a lifetime history of AH and 279 healthy controls. AH+ patients showed significantly thinner cortex in the left HG compared to AH- patients (d = 0.43, P = .0096). There were no significant differences between AH+ and AH- patients in cortical thickness in the PT or STG, or in auditory cortex surface area in any of the regions investigated. The group difference in cortical thickness in the left HG was not affected by duration of illness or current antipsychotic medication. AH in schizophrenia patients were related to thinner cortex, but not smaller surface area, of the left HG, a region which includes the primary auditory cortex. The results support the view that structural abnormalities of the auditory cortex underlie AH in schizophrenia.

  3. Multimodal information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays

    Science.gov (United States)

    Begault, Durand R.; Bittner, Rachel M.; Anderson, Mark R.

    2012-01-01

    Auditory communication displays within the NextGen data link system may use multiple synthetic speech messages replacing traditional ATC and company communications. The design of an interface for selecting amongst multiple incoming messages can impact both performance (time to select, audit and release a message) and preference. Two design factors were evaluated: physical pressure-sensitive switches versus flat panel "virtual switches", and the presence or absence of auditory feedback from switch contact. Performance with stimuli using physical switches was 1.2 s faster than virtual switches (2.0 s vs. 3.2 s); auditory feedback provided a 0.54 s performance advantage (2.33 s vs. 2.87 s). There was no interaction between these variables. Preference data were highly correlated with performance.

  4. Exploring the Impact of Role-Playing on Peer Feedback in an Online Case-Based Learning Activity

    Science.gov (United States)

    Ching, Yu-Hui

    2014-01-01

    This study explored the impact of role-playing on the quality of peer feedback and learners' perception of this strategy in a case-based learning activity with VoiceThread in an online course. The findings revealed potential positive impact of role-playing on learners' generation of constructive feedback as role-playing was associated…

  5. Early development of polyphonic sound encoding and the high voice superiority effect.

    Science.gov (United States)

    Marie, Céline; Trainor, Laurel J

    2014-05-01

    Previous research suggests that when two streams of pitched tones are presented simultaneously, adults process each stream in a separate memory trace, as reflected by mismatch negativity (MMN), a component of the event-related potential (ERP). Furthermore, a superior encoding of the higher tone or voice in polyphonic sounds has been found for 7-month-old infants and both musician and non-musician adults in terms of a larger amplitude MMN in response to pitch deviant stimuli in the higher than the lower voice. These results, in conjunction with modeling work, suggest that the high voice superiority effect might originate in characteristics of the peripheral auditory system. If this is the case, the high voice superiority effect should be present in infants younger than 7 months. In the present study we tested 3-month-old infants as there is no evidence at this age of perceptual narrowing or specialization of musical processing according to the pitch or rhythmic structure of music experienced in the infant's environment. We presented two simultaneous streams of tones (high and low) with 50% of trials modified by 1 semitone (up or down), either on the higher or the lower tone, leaving 50% standard trials. Results indicate that like the 7-month-olds, 3-month-old infants process each tone in a separate memory trace and show greater saliency for the higher tone. Although MMN was smaller and later in both voices for the group of sixteen 3-month-olds compared to the group of sixteen 7-month-olds, the size of the difference in MMN for the high compared to low voice was similar across ages. These results support the hypothesis of an innate peripheral origin of the high voice superiority effect.

  6. Effect of adenoid hypertrophy on the voice and laryngeal mucosa in children.

    Science.gov (United States)

    Gomaa, Mohammed A; Mohammed, Haitham M; Abdalla, Adel A; Nasr, Dalia M

    2013-12-01

    The adenoids, or pharyngeal tonsils, are lymphatic tissue located in the mucous layer of the roof and posterior wall of the nasopharynx. Dysphonia is defined as a perceptually audible change in a patient's habitual voice, as judged by the patient or by his or her listeners. The diagnosis of dysphonia relies on clinical judgment based on phoniatric symptoms, auditory-perceptual assessment of voice (APA), and full laryngeal examination. Our study was conducted to evaluate the effect of adenoid hypertrophy on the voice and laryngeal mucosa. The study sample comprised sixty children: forty with adenoid hypertrophy (patient group) and twenty healthy children (control group). The patient group consisted of 17 boys (42.5%) and 23 girls (57.5%), while the control group consisted of 8 boys (40%) and 12 girls (60%). All patients and controls underwent history taking, clinical examination, lateral soft-tissue X-ray of the nasopharynx, APA based on the modified GRBAS scale, and full laryngeal examination. The data were collected and analyzed statistically using SPSS software. Our results showed a significant association between adenoid hypertrophy and degree of dysphonia, leaky voice, pitch of voice, and laryngeal lesions. Adenoid hypertrophy was not associated with loudness of voice or with voice character (irregular, breathy, and strained). Laryngeal lesions were detected in thirteen children from the patient group (32.5%): nodules (n = 6), thickening (n = 5), and congestion (n = 2), while only one child out of the 20 children in the control group had congestion (5.0%). Our results show the importance of voice assessment and laryngeal examination in patients with adenoid hypertrophy; treatment of the minimal mucosal lesions that result from adenoid hypertrophy should also be taken into consideration.

  7. [Physiognomy-accompanying auditory hallucinations in schizophrenia: psychopathological investigation of 10 patients].

    Science.gov (United States)

    Nagashima, Hideaki; Kobayashi, Toshiyuki

    2010-01-01

    We previously reported two schizophrenic patients with characteristic hallucinations consisting of auditory hallucinations accompanied by visual hallucinations of the speaker's face. The patient sees the face of the hallucinatory speaker in his/her mind and hears the voice talking inwardly. We termed these experiences physiognomy-accompanying auditory hallucinations. In this report, we present 10 patients with schizophrenia showing physiognomy-accompanying auditory hallucinations and evaluate the characteristics of these clinical symptoms. Moreover we consider what the symptoms mean for patients and the metabasis from structural aspects. Lastly, we consider how we can treat these patients living autistic lives with the symptoms. During physiognomy-accompanying auditory hallucinations, the realistic face moves its mouth and talks to the patient expressively. In early onset cases, the faces of various real people appear talking about ordinary things while in late onset cases, the faces can be imaginary but are mainly real people talking about ordinary or delusional things. We suppose that these characteristics of the symptoms unify the schizophrenic world overwhelmed by "a force of non-sense" to "the sense field". "The force of non-sense" is a substantial power but cannot be reduced to the real meaning. And we suppose that not visual reality but the intensity of auditory hallucinations of the face brings about the overwhelming intensity of symptoms and the substantiality of this intensity depends on the states of excessive fullness of "the force of non-sense". With these symptoms patients see the narration of auditory hallucinations through the facial image and the content of auditory hallucinations is compressed into the movement of visual hallucinations of the speaker's face. The form of symptoms is realistic but the speaker's face and voice are beyond ordinary time and space. The symptoms are essentially different from ordinary perception. The visual

  8. Intraoperative auditory monitoring in vestibular schwannoma surgery: new trends.

    Science.gov (United States)

    Schmerber, Sébastien; Lavieille, Jean-Pierre; Dumas, Georges; Herve, Thierry

    2004-01-01

    To investigate the efficiency of a new method of brainstem auditory-evoked potential (BAEP) monitoring during complete vestibular schwannoma (VS) resection with attempted hearing preservation. Dedicated software providing near real-time recording was developed using a rejection strategy of artifacts based on spectral analysis. A small sample number (maximum 200) is required and results are obtained within 10 s. Fourteen consecutive patients with hearing class A operated on for VS, in an attempt to preserve hearing, participated in the investigation. Postoperatively, 7 patients (50%) had useful hearing (hearing class A, 4/14; hearing class B, 3/14) on the operated side. Seven patients (50%) were reduced to hearing class D. Drilling of the internal auditory canal (IAC) and tumor removal at the lateral end of the IAC were identified as the two most critical steps for achieving hearing preservation. Intraoperative BAEP monitoring was sensitive in detecting auditory damage with useful feedback but its effectiveness in preventing irreversible hearing impairment was not demonstrated in this study. Combined BAEP and direct auditory nerve monitoring using the same equipment will be performed in the future in an attempt to enhance the chances of preventing irreversible hearing damage, and possibly to improve the hearing outcome significantly.

  9. Auditory Signal Processing in Communication: Perception and Performance of Vocal Sounds

    Science.gov (United States)

    Prather, Jonathan F.

    2013-01-01

    Learning and maintaining the sounds we use in vocal communication require accurate perception of the sounds we hear performed by others and feedback-dependent imitation of those sounds to produce our own vocalizations. Understanding how the central nervous system integrates auditory and vocal-motor information to enable communication is a fundamental goal of systems neuroscience, and insights into the mechanisms of those processes will profoundly enhance clinical therapies for communication disorders. Gaining the high-resolution insight necessary to define the circuits and cellular mechanisms underlying human vocal communication is presently impractical. Songbirds are the best animal model of human speech, and this review highlights recent insights into the neural basis of auditory perception and feedback-dependent imitation in those animals. Neural correlates of song perception are present in auditory areas, and those correlates are preserved in the auditory responses of downstream neurons that are also active when the bird sings. Initial tests indicate that singing-related activity in those downstream neurons is associated with vocal-motor performance as opposed to the bird simply hearing itself sing. Therefore, action potentials related to auditory perception and action potentials related to vocal performance are co-localized in individual neurons. Conceptual models of song learning involve comparison of vocal commands and the associated auditory feedback to compute an error signal that is used to guide refinement of subsequent song performances, yet the sites of that comparison remain unknown. Convergence of sensory and motor activity onto individual neurons points to a possible mechanism through which auditory and vocal-motor signals may be linked to enable learning and maintenance of the sounds used in vocal communication. PMID:23827717

  10. Performance of the phonatory deviation diagram in the evaluation of rough and breathy synthesized voices.

    Science.gov (United States)

    Lopes, Leonardo Wanderley; Freitas, Jonas Almeida de; Almeida, Anna Alice; Silva, Priscila Oliveira Costa; Alves, Giorvan Ânderson Dos Santos

    2017-07-05

    Voice disorders alter the sound signal in several ways, combining several types of vocal emission disturbance and noise. The Phonatory Deviation Diagram (PDD) is a two-dimensional chart that allows evaluation of the vocal signal based on the combination of periodicity (jitter, shimmer, and correlation coefficient) and noise (Glottal-to-Noise Excitation ratio, GNE) measurements. The use of synthesized signals, for which the production conditions are better controlled and known, may allow a better understanding of the physiological and acoustic mechanisms underlying vocal emission and its main auditory-perceptual correlates regarding the intensity of the deviation and types of vocal quality. The aim was to analyze the performance of the PDD in discriminating the presence and degree of roughness and breathiness in synthesized voices. A total of 871 synthesized vocal signals corresponding to the vowel /ɛ/ were used. Auditory-perceptual analysis of the degree of roughness and breathiness of the synthesized signals was performed using a visual analogue scale (VAS). Subsequently, the signals were categorized regarding the presence/absence of these parameters based on the VAS cutoff values. Acoustic analysis was performed by assessing the distribution of vocal signals according to PDD area, quadrant, shape, and density. The equality-of-proportions and chi-square tests were performed to compare the variables. Rough and breathy vocal signals were located predominantly outside the normal range and in the lower right quadrant of the PDD. Voices with higher degrees of roughness and breathiness were located outside the area of normality, in the lower right quadrant, and had concentrated density. The normality area and the PDD quadrant can discriminate healthy voices from rough and breathy ones. Voices with higher degrees of roughness and breathiness are proportionally located outside the area of normality, in the lower right quadrant, and with concentrated density.
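
    The periodicity measures feeding the PDD's axes have simple standard definitions. A minimal sketch of local jitter and shimmer in Python/NumPy (these are the conventional definitions, not code from the study; the cycle data below are illustrative):

```python
import numpy as np

def local_jitter(periods):
    """Jitter (local), in %: mean absolute difference between consecutive
    glottal periods, divided by the mean period."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes):
    """Shimmer (local), in %: the same measure applied to cycle peak amplitudes."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# A perfectly periodic signal has zero jitter and shimmer.
assert local_jitter([10.0, 10.0, 10.0, 10.0]) == 0.0
# Alternating 10.0 / 10.2 ms periods: |diff| = 0.2 everywhere, mean period 10.1 ms.
print(round(local_jitter([10.0, 10.2, 10.0, 10.2]), 2))  # 1.98
```

    Plotting a periodicity score against a noise score such as the GNE for each signal reproduces the two-dimensional layout of the PDD.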

  11. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual integration.

  12. Study of Harmonics-to-Noise Ratio and Critical-Band Energy Spectrum of Speech as Acoustic Indicators of Laryngeal and Voice Pathology

    Directory of Open Access Journals (Sweden)

    Niranjan U. Cholayya

    2007-01-01

    Full Text Available Acoustic analysis of speech signals is a noninvasive technique that has been proved to be an effective tool for the objective support of vocal and voice disease screening. In the present study acoustic analysis of sustained vowels is considered. A simple k-means nearest neighbor classifier is designed to test the efficacy of a harmonics-to-noise ratio (HNR) measure and the critical-band energy spectrum of the voiced speech signal as tools for the detection of laryngeal pathologies. It groups the given voice signal sample into pathologic and normal. The voiced speech signal is decomposed into harmonic and noise components using an iterative signal extrapolation algorithm. The HNRs at four different frequency bands are estimated and used as features. Voiced speech is also filtered with 21 critical-bandpass filters that mimic the human auditory neurons. Normalized energies of these filter outputs are used as another set of features. The results obtained have shown that the HNR and the critical-band energy spectrum can be used to correlate laryngeal pathology and voice alteration, using previously classified voice samples. This method could be an additional acoustic indicator that supplements the clinical diagnostic features for voice evaluation.
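
    As a point of reference, an HNR can also be estimated from the normalized autocorrelation peak at the pitch lag, HNR(dB) = 10·log10(r / (1 − r)). The sketch below illustrates that simpler idea in Python/NumPy; it is not the iterative signal-extrapolation algorithm the study actually used, and the test signal and parameters are illustrative:

```python
import numpy as np

def hnr_db(x, fs, f0_min=75.0, f0_max=500.0):
    """Rough harmonics-to-noise ratio from the normalized autocorrelation
    peak at the pitch lag (a greatly simplified, Praat-style estimate)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0 .. N-1
    ac = ac / ac[0]                                    # normalize: lag 0 == 1
    lo, hi = int(fs / f0_max), int(fs / f0_min)        # plausible pitch lags
    r = min(ac[lo:hi].max(), 1.0 - 1e-9)               # guard against log(0)
    return 10.0 * np.log10(r / (1.0 - r))

fs = 8000
t = np.arange(fs // 2) / fs                    # half a second of signal
harmonic = np.sin(2 * np.pi * 200 * t)         # nearly pure 200 Hz tone
rng = np.random.default_rng(0)
noisy = harmonic + 0.1 * rng.standard_normal(len(t))

# The pure tone should score well above its noise-corrupted version.
print(hnr_db(harmonic, fs) > hnr_db(noisy, fs))  # True
```

    Real implementations refine this with windowing and correction for the autocorrelation taper, but the ordering of clean versus noisy voices already emerges from the raw estimate.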

  14. Mechanics of human voice production and control

    Science.gov (United States)

    Zhang, Zhaoyan

    2016-01-01

    As the primary means of communication, voice plays an important role in daily life. Voice also conveys personal information such as social status, personal traits, and the emotional state of the speaker. Mechanically, voice production involves complex fluid-structure interaction within the glottis and its control by laryngeal muscle activation. An important goal of voice research is to establish a causal theory linking voice physiology and biomechanics to how speakers use and control voice to communicate meaning and personal information. Establishing such a causal theory has important implications for clinical voice management, voice training, and many speech technology applications. This paper provides a review of voice physiology and biomechanics, the physics of vocal fold vibration and sound production, and laryngeal muscular control of the fundamental frequency of voice, vocal intensity, and voice quality. Current efforts to develop mechanical and computational models of voice production are also critically reviewed. Finally, issues and future challenges in developing a causal theory of voice production and perception are discussed. PMID:27794319
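As a first-order illustration of the laryngeal control of fundamental frequency that this review surveys, voice science commonly starts from the ideal-string approximation for vocal-fold vibration. This formula is standard textbook material, not a result of this particular paper:

```latex
% Ideal-string approximation for vocal-fold fundamental frequency:
%   L      = length of the vibrating portion of the vocal fold
%   \sigma = effective longitudinal stress (tension) in the fold tissue
%   \rho   = tissue density
F_0 \approx \frac{1}{2L}\sqrt{\frac{\sigma}{\rho}}
```

Raising tissue tension (cricothyroid activation) raises F0, while lengthening the fold has competing effects through both L and σ, which is one reason laryngeal muscular control of pitch is non-trivial.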

  15. Two distinct auditory-motor circuits for monitoring speech production as revealed by content-specific suppression of auditory cortex.

    Science.gov (United States)

    Ylinen, Sari; Nora, Anni; Leminen, Alina; Hakala, Tero; Huotilainen, Minna; Shtyrov, Yury; Mäkelä, Jyrki P; Service, Elisabet

    2015-06-01

    Speech production, both overt and covert, down-regulates the activation of auditory cortex. This is thought to be due to forward prediction of the sensory consequences of speech, contributing to a feedback control mechanism for speech production. Critically, however, these regulatory effects should be specific to speech content to enable accurate speech monitoring. To determine the extent to which such forward prediction is content-specific, we recorded the brain's neuromagnetic responses to heard multisyllabic pseudowords during covert rehearsal in working memory, contrasted with a control task. The cortical auditory processing of target syllables was significantly suppressed during rehearsal compared with control, but only when they matched the rehearsed items. This critical specificity to speech content enables accurate speech monitoring by forward prediction, as proposed by current models of speech production. The one-to-one phonological motor-to-auditory mappings also appear to serve the maintenance of information in phonological working memory. Further findings of right-hemispheric suppression in the case of whole-item matches and left-hemispheric enhancement for last-syllable mismatches suggest that speech production is monitored by 2 auditory-motor circuits operating on different timescales: Finer grain in the left versus coarser grain in the right hemisphere. Taken together, our findings provide hemisphere-specific evidence of the interface between inner and heard speech.

  16. Syllogisms delivered in an angry voice lead to improved performance and engagement of a different neural system compared to neutral voice

    Directory of Open Access Journals (Sweden)

    Kathleen Walton Smith

    2015-05-01

    Full Text Available Despite the fact that most real-world reasoning occurs in some emotional context, very little is known about the underlying behavioral and neural implications of such context. To further understand the role of emotional context in logical reasoning we scanned 15 participants with fMRI while they engaged in logical reasoning about neutral syllogisms presented through the auditory channel in a sad, angry, or neutral tone of voice. Exposure to angry voice led to improved reasoning performance compared to exposure to sad and neutral voice. A likely explanation for this effect is that exposure to expressions of anger increases selective attention toward the relevant features of target stimuli, in this case the reasoning task. Supporting this interpretation, reasoning in the context of angry voice was accompanied by activation in the superior frontal gyrus—a region known to be associated with selective attention. Our findings contribute to a greater understanding of the neural processes that underlie reasoning in an emotional context by demonstrating that two emotional contexts, despite being of the same (negative) valence, have different effects on reasoning.

  17. Native voice, self-concept and the moral case for personalized voice technology.

    Science.gov (United States)

    Nathanson, Esther

    2017-01-01

    Purpose (1) To explore the role of native voice and effects of voice loss on self-concept and identity, and survey the state of assistive voice technology; (2) to establish the moral case for developing personalized voice technology. Methods This narrative review examines published literature on the human significance of voice, the impact of voice loss on self-concept and identity, and the strengths and limitations of current voice technology. Based on the impact of voice loss on self and identity, and voice technology limitations, the moral case for personalized voice technology is developed. Results Given the richness of information conveyed by voice, loss of voice constrains expression of the self, but the full impact is poorly understood. Augmentative and alternative communication (AAC) devices facilitate communication but, despite advances in this field, voice output cannot yet express the unique nuances of individual voice. The ethical principles of autonomy, beneficence and equality of opportunity establish the moral responsibility to invest in accessible, cost-effective, personalized voice technology. Conclusions Although further research is needed to elucidate the full effects of voice loss on self-concept, identity and social functioning, current understanding of the profoundly negative impact of voice loss establishes the moral case for developing personalized voice technology. Implications for Rehabilitation Rehabilitation of voice-disordered patients should facilitate self-expression, interpersonal connectedness and social/occupational participation. Proactive questioning about the psychological and social experiences of patients with voice loss is a valuable entry point for rehabilitation planning. Personalized voice technology would enhance sense of self, communicative participation and autonomy and promote shared healthcare decision-making. Further research is needed to identify the best strategies to preserve and strengthen identity and sense of

  18. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  19. Auditory-olfactory synesthesia coexisting with auditory-visual synesthesia.

    Science.gov (United States)

    Jackson, Thomas E; Sandramouli, Soupramanien

    2012-09-01

    Synesthesia is an unusual condition in which stimulation of one sensory modality causes an experience in another sensory modality or when a sensation in one sensory modality causes another sensation within the same modality. We describe a previously unreported association of auditory-olfactory synesthesia coexisting with auditory-visual synesthesia. Given that many types of synesthesias involve vision, it is important that the clinician provide these patients with the necessary information and support that is available.

  20. Children's Voice or Children's Voices? How Educational Research Can Be at the Heart of Schooling

    Science.gov (United States)

    Stern, Julian

    2015-01-01

    There are problems with considering children and young people in schools as quite separate individuals, and with considering them as members of a single collectivity. The tension is represented in the use of "voice" and "voices" in educational debates. Voices in dialogue, in contrast to "children's voice", are…

  1. Voice complaints, risk factors for voice problems and history of voice problems in relation to puberty in female student teachers.

    NARCIS (Netherlands)

    Thomas, G.; Jong, F.I.C.R.S. de; Kooijman, P.G.C.; Donders, A.R.T.; Cremers, C.W.R.J.

    2006-01-01

    The aim of the study was to estimate voice complaints, risk factors for voice complaints and history of voice problems in student teachers before they embarked on their professional teaching career. A cross-sectional questionnaire survey was performed among female student teachers. The response rate

  3. Bimanual Coordination Learning with Different Augmented Feedback Modalities and Information Types.

    Science.gov (United States)

    Chiou, Shiau-Chuen; Chang, Erik Chihhung

    2016-01-01

    Previous studies have shown that bimanual coordination learning is more resistant to the removal of augmented feedback when acquired through the auditory channel than through the visual channel. However, it is unclear whether this differential "guidance effect" between feedback modalities is due to enhanced sensorimotor integration via the non-dominant auditory channel or to a strengthened linkage to kinesthetic information under rhythmic input. The current study aimed to examine how the modality (visual vs. auditory) and information type (continuous visuospatial vs. discrete rhythmic) of concurrent augmented feedback influence bimanual coordination learning. Participants learned a 90°-out-of-phase pattern for three consecutive days, either with Lissajous feedback indicating the integrated position of both arms, or with visual or auditory rhythmic feedback reflecting the relative timing of the movement. The results showed divergent performance changes after practice, once feedback was removed, between the Lissajous group and the two rhythmic groups, indicating that the guidance effect may be modulated by the type of information provided during practice. Moreover, significant performance improvement in the dual-task condition, in which an irregular rhythm-counting task was applied as a secondary task, also suggested that lower involvement of conscious control may result in better performance in bimanual coordination.

  4. Human voice recognition depends on language ability.

    Science.gov (United States)

    Perrachione, Tyler K; Del Tufo, Stephanie N; Gabrieli, John D E

    2011-07-29

    The ability to recognize people by their voice is an important social behavior. Individuals differ in how they pronounce words, and listeners may take advantage of language-specific knowledge of speech phonology to facilitate recognizing voices. Impaired phonological processing is characteristic of dyslexia and thought to be a basis for difficulty in learning to read. We tested voice-recognition abilities of dyslexic and control listeners for voices speaking listeners' native language or an unfamiliar language. Individuals with dyslexia exhibited impaired voice-recognition abilities compared with controls only for voices speaking their native language. These results demonstrate the importance of linguistic representations for voice recognition. Humans appear to identify voices by making comparisons between talkers' pronunciations of words and listeners' stored abstract representations of the sounds in those words.

  5. Quick Statistics about Voice, Speech, and Language

    Science.gov (United States)

    Quick Statistics About Voice, Speech, Language. Voice, Speech, Language, and ... no 205. Hyattsville, MD: National Center for Health Statistics. 2015. Hoffman HJ, Li C-M, Losonczy K, ...

  6. Introduction: Textual and contextual voices of translation

    DEFF Research Database (Denmark)

    2017-01-01

    Voices – marks of the tangle of subjectivities involved in textual processes – constitute the very fabric of texts in general and translations in particular. The title of this book, Textual and Contextual Voices of Translation, refers both to textual voices, that is, the voices found within the translated texts, and to contextual voices, that is, the voices of those involved in shaping, commenting, or otherwise influencing the textual voices. The latter appear in prefaces, reviews, and other texts that surround the translated texts and provide them with a context. Our main claim is that studying both the textual and contextual voices helps us better understand and explain the complexity of both the translation process and the translation product. The dovetailed approach to translation research that is advocated in this book aims at highlighting the diversity of participants, power positions...

  7. Auditory Neuropathy - A Case of Auditory Neuropathy after Hyperbilirubinemia

    Directory of Open Access Journals (Sweden)

    Maliheh Mazaher Yazdi

    2007-12-01

    Full Text Available Background and Aim: Auditory neuropathy is a hearing disorder in which peripheral hearing is normal, but the eighth nerve and brainstem are abnormal. By clinical definition, patients with this disorder have normal OAEs but exhibit an absent or severely abnormal ABR. Auditory neuropathy was first reported in the late 1970s, when different methods could identify a discrepancy between an absent ABR and a present hearing threshold. Speech understanding difficulties are worse than can be predicted from other tests of hearing function. Auditory neuropathy may also affect vestibular function. Case Report: This article presents electrophysiological and behavioral data from a case of auditory neuropathy in a child with normal hearing after hyperbilirubinemia, over a 5-year follow-up. Audiological findings demonstrate remarkable changes after multidisciplinary rehabilitation. Conclusion: Auditory neuropathy may involve damage to the inner hair cells, the specialized sensory cells in the inner ear that transmit information about sound through the nervous system to the brain. Other causes may include faulty connections between the inner hair cells and the nerve leading from the inner ear to the brain, or damage to that nerve itself. People with auditory neuropathy have OAE responses but an absent ABR, and a hearing loss threshold that can be permanent, get worse, or get better.

  8. Use of auditory learning to manage listening problems in children

    National Research Council Canada - National Science Library

    David R Moore; Lorna F Halliday; Sygal Amitay

    2009-01-01

    .... It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications...

  9. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... CAPD often have trouble maintaining attention, although health, motivation, and attitude also can play a role. Auditory ... programs. Several computer-assisted programs are geared toward children with APD. They mainly help the brain do ...

  10. Voicing Consciousness: The Mind in Writing

    Science.gov (United States)

    Luce-Kapler, Rebecca; Catlin, Susan; Sumara, Dennis; Kocher, Philomene

    2011-01-01

    In this paper, the authors investigate the enduring power of voice as a concept in writing pedagogy. They argue that one can benefit from considering Elbow's assertion that both text and voice be considered as important aspects of written discourse. In particular, voice is a powerful metaphor for the material, social and historical nature of…

  11. Understanding the 'Anorexic Voice' in Anorexia Nervosa.

    Science.gov (United States)

    Pugh, Matthew; Waller, Glenn

    2016-07-20

    In common with individuals experiencing a number of disorders, people with anorexia nervosa report experiencing an internal 'voice'. The anorexic voice comments on the individual's eating, weight and shape and instructs the individual to restrict or compensate. However, the core characteristics of the anorexic voice are not known. This study aimed to develop a parsimonious model of the voice characteristics that are related to key features of eating disorder pathology and to determine whether patients with anorexia nervosa fall into groups with different voice experiences. The participants were 49 women with full diagnoses of anorexia nervosa. Each completed validated measures of the power and nature of their voice experience and of their responses to the voice. Different voice characteristics were associated with current body mass index, duration of disorder and eating cognitions. Two subgroups emerged, with 'weaker' and 'stronger' voice experiences. Those with stronger voices were characterized by having more negative eating attitudes, more severe compensatory behaviours, a longer duration of illness and a greater likelihood of having the binge-purge subtype of anorexia nervosa. The findings indicate that the anorexic voice is an important element of the psychopathology of anorexia nervosa. Addressing the anorexic voice might be helpful in enhancing outcomes of treatments for anorexia nervosa, but that conclusion might apply only to patients with more severe eating psychopathology. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Voice and culture: A prospect theory approach

    NARCIS (Netherlands)

    Paddock, E.L.; Ko, Junsu; Cropanzano, R.; Bagger, J.; El Akremi, A.; Camerman, A.; Greguras, G. J.; Mladinic, A.; Moliner, C.; Nam, K.; Törnblom, K.; Van den Bos, Kees

    2015-01-01

    The present study examines the congruence of individuals' minimum preferred amounts of voice with the prospect theory value function across nine countries. Accounting for previously ignored minimum preferred amounts of voice and actual voice amounts integral to testing the steepness of gain and loss

  13. Finding Voice: Learning about Language and Power

    Science.gov (United States)

    Christensen, Linda

    2011-01-01

    Christensen discusses why teachers need to teach students "voice" in its social and political context, to show the intersection of voice and power, to encourage students to ask, "Whose voices get heard? Whose are marginalized?" As Christensen writes, "Once students begin to understand that Standard English is one language among many, we can help…

  14. Analyzing the mediated voice - a datasession

    DEFF Research Database (Denmark)

    Lawaetz, Anna

    Broadcast voices are technologically manipulated. In order to achieve a certain authenticity or sound of “reality”, the voices are, paradoxically, filtered and trained in order to reach the listeners. This “mise-en-scène” is important knowledge when it comes to the development of a consistent method of analysis of the mediated voice...

  17. "Voice Forum" The Human Voice as Primary Instrument in Music Therapy

    DEFF Research Database (Denmark)

    Pedersen, Inge Nygaard; Storm, Sanne

    2009-01-01

    Aspects will be drawn on the human voice as a tool for embodying our psychological and physiological state and for attempting integration of feelings. Presentations and dialogues on different methods and techniques in “therapy-related body and voice work”, as well as on the human voice as a tool for nonverbal orientation and information, both to ourselves and to others. Focus on training the voice instrument, the effect and impact of the human voice, and listening perspectives...

  18. The Voice of Anger: Oscillatory EEG Responses to Emotional Prosody.

    Science.gov (United States)

    Del Giudice, Renata; Blume, Christine; Wislowska, Malgorzata; Wielek, Tomasz; Heib, Dominik P J; Schabus, Manuel

    2016-01-01

    Emotionally relevant stimuli and in particular anger are, due to their evolutionary relevance, often processed automatically and able to modulate attention independent of conscious access. Here, we tested whether attention allocation is enhanced when auditory stimuli are uttered by an angry voice. We recorded EEG and presented healthy individuals with a passive condition where unfamiliar names as well as the subject's own name were spoken both with an angry and neutral prosody. The active condition instead, required participants to actively count one of the presented (angry) names. Results revealed that in the passive condition the angry prosody only elicited slightly stronger delta synchronization as compared to a neutral voice. In the active condition the attended (angry) target was related to enhanced delta/theta synchronization as well as alpha desynchronization suggesting enhanced allocation of attention and utilization of working memory resources. Altogether, the current results are in line with previous findings and highlight that attention orientation can be systematically related to specific oscillatory brain responses. Potential applications include assessment of non-communicative clinical groups such as post-comatose patients.
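The delta/theta synchronization and alpha desynchronization reported here are conventionally quantified as the percent change in band power relative to a pre-stimulus baseline (event-related synchronization/desynchronization, ERS/ERD). A minimal single-channel sketch of that measure, assuming numpy and a simple periodogram power estimate; the function names are ours, not from the study:

```python
import numpy as np

def band_power(x, sr, lo, hi):
    """Mean spectral power of signal x in the [lo, hi] Hz band (periodogram)."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    return spec[(freqs >= lo) & (freqs <= hi)].mean()

def ers_percent(epoch, baseline, sr, lo, hi):
    """Event-related (de)synchronization: % band-power change vs. baseline.
    Positive values = synchronization (e.g. the delta/theta increase here),
    negative values = desynchronization (e.g. the alpha decrease)."""
    p_ev = band_power(epoch, sr, lo, hi)
    p_bl = band_power(baseline, sr, lo, hi)
    return 100.0 * (p_ev - p_bl) / p_bl
```

In practice this is computed per trial and channel and then averaged, but the sign convention above is all that is needed to read the delta/theta vs. alpha findings in this abstract.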

  20. The Voice of Anger: Oscillatory EEG Responses to Emotional Prosody

    Science.gov (United States)

    del Giudice, Renata; Blume, Christine; Wislowska, Malgorzata; Wielek, Tomasz; Heib, Dominik P. J.; Schabus, Manuel

    2016-01-01

    Emotionally relevant stimuli and in particular anger are, due to their evolutionary relevance, often processed automatically and able to modulate attention independent of conscious access. Here, we tested whether attention allocation is enhanced when auditory stimuli are uttered by an angry voice. We recorded EEG and presented healthy individuals with a passive condition where unfamiliar names as well as the subject’s own name were spoken both with an angry and neutral prosody. The active condition instead, required participants to actively count one of the presented (angry) names. Results revealed that in the passive condition the angry prosody only elicited slightly stronger delta synchronization as compared to a neutral voice. In the active condition the attended (angry) target was related to enhanced delta/theta synchronization as well as alpha desynchronization suggesting enhanced allocation of attention and utilization of working memory resources. Altogether, the current results are in line with previous findings and highlight that attention orientation can be systematically related to specific oscillatory brain responses. Potential applications include assessment of non-communicative clinical groups such as post-comatose patients. PMID:27442445

  1. Voice-Specialized Speech-Language Pathologist's Criteria for Discharge from Voice Therapy.

    Science.gov (United States)

    Gillespie, Amanda I; Gartner-Schmidt, Jackie

    2017-08-07

    No standard protocol exists to determine when a patient is ready and able to be discharged from voice therapy. The aim of the present study was to determine what factors speech-language pathologists (SLPs) deem most important when discharging a patient from voice therapy. A second aim was to determine if responses differed based on years of voice experience. Step 1: Seven voice-specialized SLPs generated a list of items thought to be relevant to voice therapy discharge. Step 2: Fifty voice-specialized SLPs rated each item on the list in terms of importance in determining discharge from voice therapy. Step 1: Four themes emerged-outcome measures, laryngeal appearance, SLP perceptions, and patient factors-as important items when determining discharge from voice therapy. Step 2: The top five most important criteria for discharge readiness were that the patient had to be able to (1) independently use a better voice (transfer), (2) function with his or her new voice production in activities of daily living (transfer), (3) differentiate between good and bad voice, (4) take responsibility for voice, and (5) sound better from baseline. Novice and experienced clinicians agreed between 94% and 97% concerning what was deemed "very important." SLPs agree that a patient's ability to use voice techniques in conversation and real-life situations outside of the therapy room are the most important determinants for voice therapy discharge. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  2. Electrostimulation mapping of comprehension of auditory and visual words.

    Science.gov (United States)

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing.

  3. The development of the Spanish verb ir into auxiliary of voice

    DEFF Research Database (Denmark)

    Vinther, Thora

    2005-01-01

    Spanish, syntax, grammaticalisation, past participle, passive voice, middle voice, language development.

  4. Objective Voice Parameters in Colombian School Workers with Healthy Voices

    Directory of Open Access Journals (Sweden)

    Lady Catherine Cantor Cutiva

    2015-09-01

    Full Text Available Objectives: To characterize objective voice parameters among school workers, and to identify factors associated with three objective voice parameters, namely fundamental frequency (F0), sound pressure level (SPL) and maximum phonation time (MPT). Materials and methods: We conducted a cross-sectional study among 116 Colombian teachers and 20 Colombian non-teachers. After signing the informed consent form, participants filled out a questionnaire. Then, a voice sample was recorded and evaluated perceptually by a speech therapist and by objective voice analysis with Praat software. Short-term environmental measurements of sound level, temperature, humidity, and reverberation time were conducted during visits at the workplaces, such as classrooms and offices. Linear regression analysis was used to determine associations between individual and work-related factors and objective voice parameters. Results: Compared with men, women had higher fundamental frequency (201 Hz for teachers and 209 Hz for non-teachers vs. 120 Hz for teachers and 127 Hz for non-teachers) and sound pressure level (82 dB vs. 80 dB), and shorter maximum phonation time (around 14 seconds vs. around 16 seconds). Female teachers younger than 50 years of age evidenced a significant tendency to speak with lower fundamental frequency and shorter MPT compared with female teachers older than 50 years of age. Female teachers had significantly higher fundamental frequency (66 Hz), higher sound pressure level (2 dB) and shorter phonation time (2 seconds) than male teachers. Conclusion: Female teachers younger than 50 years of age had significantly lower F0 and shorter MPT compared with those older than 50 years of age. The multivariate analysis showed that gender was a much more important determinant of variations in F0, SPL and MPT than age and teaching occupation. Objectively measured temperature also contributed to the changes in SPL among school workers.
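The three objective parameters in this study (F0, SPL, MPT) can each be approximated directly from a calibrated recording of sustained phonation. The sketch below is a rough numpy illustration, not the Praat algorithms the authors used: F0 from a framewise autocorrelation peak, SPL from overall RMS assuming the waveform is calibrated in pascals, and MPT as the voiced duration of the recording; all names and thresholds are our assumptions.

```python
import numpy as np

def frame_f0(frame, sr, f0_min=75, f0_max=500):
    """Autocorrelation F0 estimate for one frame; returns 0.0 if unvoiced."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return 0.0
    ac = ac / ac[0]
    lo, hi = int(sr / f0_max), int(sr / f0_min)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag if ac[lag] > 0.4 else 0.0   # 0.4: crude voicing threshold

def voice_report(x, sr, frame_len=1024, hop=512, p_ref=2e-5):
    """Median F0 (Hz), SPL (dB re 20 uPa; x calibrated in Pa), MPT (s)."""
    f0s = [frame_f0(x[i:i + frame_len], sr)
           for i in range(0, len(x) - frame_len, hop)]
    voiced = [f for f in f0s if f > 0]
    spl = 20 * np.log10(np.sqrt(np.mean(x ** 2)) / p_ref + 1e-12)
    mpt = len(voiced) * hop / sr                # seconds judged as phonation
    return {"F0": float(np.median(voiced)) if voiced else 0.0,
            "SPL": float(spl), "MPT": float(mpt)}
```

In the study itself MPT is measured from a maximally sustained /a/, so the voiced-duration proxy above only matches when the recording is a single sustained phonation.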

  5. Playful Interaction with Voice Sensing Modular Robots

    DEFF Research Database (Denmark)

    Heesche, Bjarke; MacDonald, Ewen; Fogh, Rune

    2013-01-01

    This paper describes a voice sensor, suitable for modular robotic systems, which estimates the energy and fundamental frequency, F0, of the user's voice. Through a number of example applications and tests with children, we observe how the voice sensor facilitates playful interaction between children and two different robot configurations. In future work, we will investigate if such a system can motivate children to improve voice control and explore how to extend the sensor to detect emotions in the user's voice.

  6. Climate Voices: Bridging Scientist Citizens and Local Communities across the United States

    Science.gov (United States)

    Wegner, K.; Ristvey, J. D., Jr.

    2016-12-01

    Based out of the University Corporation for Atmospheric Research (UCAR), the Climate Voices Science Speakers Network (climatevoices.org) has more than 400 participants across the United States that volunteer their time as scientist citizens in their local communities. Climate Voices experts engage in nonpartisan conversations about the local impacts of climate change with groups such as Rotary clubs, collaborate with faith-based groups on climate action initiatives, and disseminate their research findings to K-12 teachers and classrooms through webinars. To support their participants, Climate Voices develops partnerships with networks of community groups, provides trainings on how to engage these communities, and actively seeks community feedback. In this presentation, we will share case studies of science-community collaborations, including meta-analyses of collaborations and lessons learned.

  7. Psychology of auditory perception.

    Science.gov (United States)

    Lotto, Andrew; Holt, Lori

    2011-09-01

    Audition is often treated as a 'secondary' sensory system behind vision in the study of cognitive science. In this review, we focus on three seemingly simple perceptual tasks to demonstrate the complexity of the perceptual-cognitive processing involved in everyday audition. After providing a short overview of the characteristics of sound and their neural encoding, we present a description of the perceptual task of segregating multiple sound events that are mixed together in the signal reaching the ears. Then, we discuss the ability to localize the sound source in the environment. Finally, we provide some data and theory on how listeners categorize complex sounds, such as speech. In particular, we present research on how listeners weigh multiple acoustic cues in making a categorization decision. One conclusion of this review is that it is time for auditory cognitive science to be developed to match what has been done in vision in order for us to better understand how humans communicate with speech and music. WIREs Cogn Sci 2011 2 479–489 DOI: 10.1002/wcs.123 For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.

  8. Auditory gist: recognition of very short sounds from timbre cues.

    Science.gov (United States)

    Suied, Clara; Agus, Trevor R; Thorpe, Simon J; Mesgarani, Nima; Pressnitzer, Daniel

    2014-03-01

    Sounds such as the voice or musical instruments can be recognized on the basis of timbre alone. Here, sound recognition was investigated with severely reduced timbre cues. Short snippets of naturally recorded sounds were extracted from a large corpus. Listeners were asked to report a target category (e.g., sung voices) among other sounds (e.g., musical instruments). All sound categories covered the same pitch range, so the task had to be solved on timbre cues alone. The minimum duration for which performance was above chance was found to be short, on the order of a few milliseconds, with the best performance for voice targets. Performance was independent of pitch and was maintained when stimuli contained less than a full waveform cycle. Recognition was not generally better when the sound snippets were time-aligned with the sound onset compared to when they were extracted with a random starting time. Finally, performance did not depend on feedback or training, suggesting that the cues used by listeners in the artificial gating task were similar to those relevant for longer, more familiar sounds. The results show that timbre cues for sound recognition are available at a variety of time scales, including very short ones.

  9. VOICE QUALITY BEFORE AND AFTER THYROIDECTOMY

    Directory of Open Access Journals (Sweden)

    Dora CVELBAR

    2016-04-01

    Full Text Available Introduction: Voice disorders are a well-known complication often associated with thyroid gland diseases, and because voice is still the basic means of communication, it is very important to maintain its quality. Objectives: The aim of this study was to determine whether there is a statistically significant difference between the results of voice self-assessment, perceptual voice assessment, and acoustic voice analysis before and after thyroidectomy, and whether there are statistically significant correlations between variables of voice self-assessment, perceptual assessment, and acoustic analysis before and after thyroidectomy. Methods: The study included 12 participants aged between 41 and 76. Voice self-assessment was conducted with the Croatian version of the Voice Handicap Index (VHI). Recorded reading samples were used for perceptual assessment and later evaluated by two clinical speech and language therapists. Recorded samples of phonation were used for acoustic analysis, which was conducted with the acoustic program Praat. All data were processed through descriptive statistics and nonparametric statistical methods. Results: There were statistically significant differences between the results of voice self-assessment and the results of acoustic analysis before and after thyroidectomy. Statistically significant correlations were found between variables of perceptual assessment and acoustic analysis. Conclusion: The obtained results indicate the importance of multidimensional preoperative and postoperative assessment. Such assessment allows the clinician to describe all voice features and to give the patient an appropriate recommendation for further rehabilitation in order to optimize voice outcomes.

  10. Beyond Insularity: Releasing the Voices.

    Science.gov (United States)

    Greene, Maxine

    1993-01-01

    Aspects of English-as-a-Second-Language are discussed from the standpoint of a teacher-educator with a particular interest in philosophy, the arts, and humanities and what they signify for the schools. The idea of giving voice to all viewpoints and sociocultural circumstances is considered for content learning and heterogeneous grouping. (Contains…

  11. A voice and nothing more

    DEFF Research Database (Denmark)

    Mebus, Andreas Nozic Lindgren

    2012-01-01

    Andreas Mebus then focuses on a very concrete aspect of speech, namely "the voice", in his article "A voice and nothing more – a philosophical account of the voice". Through Mladen Dolar's theory of the voice, Mebus examines its different aspects: as a bearer of meaning, as aesthetic…

  12. Voice, Citizenship, and Civic Action

    DEFF Research Database (Denmark)

    Tufte, Thomas

    2014-01-01

    In recent years the world has experienced a resurgence in practices of bottom-up communication for social change, a plethora of agency in which claims for voice and citizenship through massive civic action have conquered center stage in the public debate. This resurgence has sparked a series...

  13. FILTWAM and Voice Emotion Recognition

    NARCIS (Netherlands)

    Bahreini, Kiavash; Nadolski, Rob; Westera, Wim

    2014-01-01

    This paper introduces the voice emotion recognition part of our framework for improving learning through webcams and microphones (FILTWAM). This framework enables multimodal emotion recognition of learners during game-based learning. The main goal of this study is to validate the use of microphone d

  14. The Performing Voice of Radio

    DEFF Research Database (Denmark)

    Lawaetz, Anna

    The ongoing international development of opening media archives for researchers as well as for broader audiences calls for a closer discussion of the mediated voice and how to analyse it. Which parameters can be analysed and which parameters are not covered by the analysis? Furthermore, how do we...

  15. Voice and choice by delegation

    NARCIS (Netherlands)

    van de Bovenkamp, H.; Vollaard, H.; Trappenburg, M.; Grit, K

    2013-01-01

    In many Western countries, options for citizens to influence public services are increased to improve the quality of services and democratize decision making. Possibilities to influence are often cast into Albert Hirschman's taxonomy of exit (choice), voice, and loyalty. In this article we identify

  16. Work-related voice disorder

    Directory of Open Access Journals (Sweden)

    Paulo Eduardo Przysiezny

    2015-04-01

    Full Text Available INTRODUCTION: Dysphonia is the main symptom of the disorders of oral communication. However, voice disorders also present with other symptoms such as difficulty in maintaining the voice (asthenia), vocal fatigue, variation in habitual vocal fundamental frequency, hoarseness, lack of vocal volume and projection, loss of vocal efficiency, and weakness when speaking. There are several proposals for the etiologic classification of dysphonia: functional, organofunctional, organic, and work-related voice disorder (WRVD). OBJECTIVE: To conduct a literature review on WRVD and on the current Brazilian labor legislation. METHODS: This was a review article with bibliographical research conducted on the PubMed and Bireme databases, using the terms "work-related voice disorder", "occupational dysphonia", and "dysphonia and labor legislation", and a review of relevant labor and social security laws. CONCLUSION: WRVD is frequently listed as a reason for work absenteeism, functional rehabilitation, or prolonged absence from work. Currently, forensic physicians have no comparative parameters to help with the analysis of vocal disorders. In certain situations, WRVD may cause work disability. This disorder may be labor-related, or be an adjuvant factor to work-related diseases.

  18. Adolescent Leadership: The Female Voice

    Science.gov (United States)

    Archard, Nicole

    2013-01-01

    This research investigated the female adolescent view of leadership by giving voice to student leaders through focus group discussions. The questions: What is leadership? Where/how was leadership taught?, and How was leadership practised? were explored within the context of girls' schools located in Australia, with one school located in South…

  20. Voicing children's critique and utopias

    DEFF Research Database (Denmark)

    Husted, Mia; Lind, Unni

    2016-01-01

    …designed to accommodate children's participation through graphic illustrations of young children's critique and utopias. The study is informed by a commitment to democratic participation and processes (Reason and Bradbury 2001, Gunnarsson et al. 2016). Ethical guidelines implied dialogues and discussions… children's voice, critique and utopias, pedagogical work…

  1. Women's Voices in Experiential Education.

    Science.gov (United States)

    Warren, Karen, Ed.

    This book is a collection of feminist analyses of various topics in experiential education, particularly as it applies to outdoors and adventure education, as well as practical examples of how women's experiences can contribute to the field as a whole. Following an introduction, "The Quilt of Women's Voices" (Maya Angelou), the 25…

  2. Promoting smoke-free homes: a novel behavioral intervention using real-time audio-visual feedback on airborne particle levels.

    Directory of Open Access Journals (Sweden)

    Neil E Klepeis

    Full Text Available Interventions are needed to protect the health of children who live with smokers. We pilot-tested a real-time intervention for promoting behavior change in homes that reduces secondhand tobacco smoke (SHS) levels. The intervention uses a monitor and feedback system to provide immediate auditory and visual signals triggered at defined thresholds of fine particle concentration. Dynamic graphs of real-time particle levels are also shown on a computer screen. We experimentally evaluated the system, field-tested it in homes with smokers, and conducted focus groups to obtain general opinions. Laboratory tests of the monitor demonstrated SHS sensitivity, stability, precision equivalent to at least 1 µg/m³, and low noise. A linear relationship (R² = 0.98) was observed between the monitor and average SHS mass concentrations up to 150 µg/m³. Focus groups and interviews with intervention participants showed in-home use to be acceptable and feasible. The intervention was evaluated in 3 homes with combined baseline and intervention periods lasting 9 to 15 full days. Two families modified their behavior by opening windows or doors, smoking outdoors, or smoking less. We observed evidence of lower SHS levels in these homes. The remaining household voiced reluctance to change their smoking activity and did not exhibit lower SHS levels in main smoking areas or clear behavior change; however, family members expressed receptivity to smoking outdoors. This study established the feasibility of the real-time intervention, laying the groundwork for controlled trials with larger sample sizes. Visual and auditory cues may prompt family members to take immediate action to reduce SHS levels. Dynamic graphs of SHS levels may help families make decisions about specific mitigation approaches.
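    The threshold-triggered feedback described above can be sketched as a simple mapping from a particle reading to an alert level. The cut points and labels below are illustrative assumptions, not the study's calibrated thresholds:

    ```python
    # Hypothetical sketch of threshold-triggered feedback on fine-particle levels.
    # The 25 and 100 µg/m³ cut points are made-up illustrations, not the study's.
    THRESHOLDS = [(100.0, "alarm"), (25.0, "warning")]  # highest first

    def feedback_signal(concentration_ugm3):
        """Map a fine-particle reading to the alert level it triggers."""
        for cutoff, label in THRESHOLDS:
            if concentration_ugm3 >= cutoff:
                return label
        return "ok"

    readings = [5.0, 30.0, 150.0]
    print([feedback_signal(r) for r in readings])  # → ['ok', 'warning', 'alarm']
    ```

    In the deployed system this mapping would drive the auditory/visual cues while the raw readings feed the on-screen dynamic graph.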

  3. Fault Tolerant Feedback Control

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, H.

    2001-01-01

    An architecture for fault tolerant feedback controllers based on the Youla parameterization is suggested. It is shown that the Youla parameterization will give a residual vector directly in connection with the fault diagnosis part of the fault tolerant feedback controller. It turns out that there is a separation between the feedback controller and the fault tolerant part. The closed-loop feedback properties are handled by the nominal feedback controller and the fault tolerant part is handled by the design of the Youla parameter. The design of the fault tolerant part will not affect the design of the nominal feedback controller.

  4. Temporal voice areas exist in autism spectrum disorder but are dysfunctional for voice identity recognition

    Science.gov (United States)

    Borowiak, Kamila; von Kriegstein, Katharina

    2016-01-01

    The ability to recognise the identity of others is a key requirement for successful communication. Brain regions that respond selectively to voices exist in humans from early infancy on. Currently, it is unclear whether dysfunction of these voice-sensitive regions can explain voice identity recognition impairments. Here, we used two independent functional magnetic resonance imaging studies to investigate voice processing in a population that has been reported to have no voice-sensitive regions: autism spectrum disorder (ASD). Our results refute the earlier report that individuals with ASD have no responses in voice-sensitive regions: Passive listening to vocal, compared to non-vocal, sounds elicited typical responses in voice-sensitive regions in the high-functioning ASD group and controls. In contrast, the ASD group had a dysfunction in voice-sensitive regions during voice identity but not speech recognition in the right posterior superior temporal sulcus/gyrus (STS/STG)—a region implicated in processing complex spectrotemporal voice features and unfamiliar voices. The right anterior STS/STG correlated with voice identity recognition performance in controls but not in the ASD group. The findings suggest that right STS/STG dysfunction is critical for explaining voice recognition impairments in high-functioning ASD and show that ASD is not characterised by a general lack of voice-sensitive responses. PMID:27369067

  5. Evaluation of Multi-sensory Feedback on the Usability of a Virtual Assembly Environment

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2007-02-01

    Full Text Available Virtual assembly environment (VAE) technology has great potential for benefiting manufacturing applications in industry. Usability is an important aspect of the VAE. This paper presents the usability evaluation of a developed multi-sensory VAE. The evaluation is conducted using three attributes: (a) efficiency of use; (b) user satisfaction; and (c) reliability. These are addressed by using task completion times (TCTs), questionnaires, and human performance error rates (HPERs), respectively. A peg-in-a-hole and a Sener electronic box assembly task have been used to perform the experiments, using sixteen participants. The outcomes showed that the introduction of 3D auditory and/or visual feedback could improve usability. They also indicated that the integrated feedback (visual plus auditory) offered better usability than either feedback used in isolation. Most participants preferred the integrated feedback to either single feedback (visual or auditory) or no feedback. The participants' comments demonstrated that unrealistic or inappropriate feedback had negative effects on usability, and easily made them feel frustrated. The possible reasons behind the outcomes are also analysed.
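    The two quantitative usability measures named above reduce to simple statistics over trial logs. A sketch with made-up trial data (the HPER definition here, share of trials containing an error, is an assumption about how the rate is computed):

    ```python
    # Sketch of the usability metrics above: mean task completion time (TCT)
    # and a human performance error rate (HPER). The trial data are invented,
    # and HPER is assumed here to be the share of trials with at least one error.
    def mean_tct(times_s):
        """Mean task completion time in seconds."""
        return sum(times_s) / len(times_s)

    def hper(error_trials, total_trials):
        """Share of trials containing at least one error, as a percentage."""
        return 100.0 * error_trials / total_trials

    trials = [(41.0, False), (38.5, False), (44.0, True)]  # (TCT s, had_error)
    print(round(mean_tct([t for t, _ in trials]), 1))                   # → 41.2
    print(round(hper(sum(1 for _, e in trials if e), len(trials)), 1))  # → 33.3
    ```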

  6. The role of the medial temporal limbic system in processing emotions in voice and music.

    Science.gov (United States)

    Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier

    2014-12-01

    Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations.

  7. Listen, you are writing! Speeding up online spelling with a dynamic auditory BCI

    Directory of Open Access Journals (Sweden)

    Martijn eSchreuder

    2011-10-01

    Full Text Available Representing an intuitive spelling interface for Brain-Computer Interfaces (BCI) in the auditory domain is not straightforward. In consequence, all existing approaches based on event-related potentials (ERP) rely at least partially on a visual representation of the interface. This online study introduces an auditory spelling interface that eliminates the necessity for such a visualization. In up to two sessions, a group of healthy subjects (N=21) was asked to use a text entry application, utilizing the spatial cues of the AMUSE paradigm (Auditory Multiclass Spatial ERP). The speller relies on the auditory sense both for stimulation and the core feedback. Without prior BCI experience, 76% of the participants were able to write a full sentence during the first session. By exploiting the advantages of a newly introduced dynamic stopping method, a maximum writing speed of 1.41 characters/minute (7.55 bits/minute) could be reached during the second session (average: 0.94 char/min, 5.26 bits/min). For the first time, the presented work shows that an auditory BCI can reach performances similar to state-of-the-art visual BCIs based on covert attention. These results represent an important step towards a purely auditory BCI.
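    Bits/minute figures like those above are conventionally derived from the Wolpaw information transfer rate, which combines the number of selectable classes, selection accuracy, and selection speed. A sketch (the parameter values below are illustrative, not the study's):

    ```python
    # Sketch of the Wolpaw information-transfer-rate formula commonly used to
    # report BCI spelling speed. The example values are illustrative assumptions,
    # not parameters taken from the study above.
    import math

    def bits_per_selection(n_classes, accuracy):
        """Wolpaw bits carried by one selection among n_classes options."""
        p, n = accuracy, n_classes
        b = math.log2(n)
        if 0.0 < p < 1.0:
            b += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
        return b

    def bits_per_minute(n_classes, accuracy, selections_per_min):
        return bits_per_selection(n_classes, accuracy) * selections_per_min

    # A perfectly accurate binary choice carries exactly 1 bit per selection.
    print(bits_per_selection(2, 1.0))  # → 1.0
    ```

    At chance accuracy the rate collapses to zero, which is why dynamic stopping (collecting evidence only as long as needed per selection) raises the effective bits/minute.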

  8. Multiplexed and robust representations of sound features in auditory cortex.

    Science.gov (United States)

    Walker, Kerry M M; Bizley, Jennifer K; King, Andrew J; Schnupp, Jan W H

    2011-10-12

    We can recognize the melody of a familiar song when it is played on different musical instruments. Similarly, an animal must be able to recognize a warning call whether the caller has a high-pitched female or a lower-pitched male voice, and whether they are sitting in a tree to the left or right. This type of perceptual invariance to "nuisance" parameters comes easily to listeners, but it is unknown whether or how such robust representations of sounds are formed at the level of sensory cortex. In this study, we investigate whether neurons in both core and belt areas of ferret auditory cortex can robustly represent the pitch, formant frequencies, or azimuthal location of artificial vowel sounds while the other two attributes vary. We found that the spike rates of the majority of cortical neurons that are driven by artificial vowels carry robust representations of these features, but the most informative temporal response windows differ from neuron to neuron and across five auditory cortical fields. Furthermore, individual neurons can represent multiple features of sounds unambiguously by independently modulating their spike rates within distinct time windows. Such multiplexing may be critical to identifying sounds that vary along more than one perceptual dimension. Finally, we observed that formant information is encoded in cortex earlier than pitch information, and we show that this time course matches ferrets' behavioral reaction time differences on a change detection task.

  9. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
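    The LT codes the paper redesigns work by XOR-ing randomly chosen source blocks into packets, which a receiver recovers by "peeling" degree-1 packets. A toy sketch of that mechanism (the fixed degree distribution here is an illustrative stand-in, not the paper's feedback-tuned distributions):

    ```python
    # Toy LT-code sketch: encode blocks as XORs of randomly chosen source blocks,
    # then recover them with the standard peeling decoder. The degree
    # distribution is illustrative, not the paper's feedback-adapted one.
    import random

    def lt_encode(blocks, n_packets, rng):
        packets = []
        for _ in range(n_packets):
            degree = rng.choice([1, 1, 2, 2, 3, 4])  # toy degree distribution
            idxs = rng.sample(range(len(blocks)), min(degree, len(blocks)))
            payload = 0
            for i in idxs:
                payload ^= blocks[i]
            packets.append((set(idxs), payload))
        return packets

    def lt_decode(packets, n_blocks):
        decoded = {}
        packets = [(set(i), p) for i, p in packets]
        progress = True
        while progress and len(decoded) < n_blocks:
            progress = False
            for idxs, payload in packets:
                pending = idxs - decoded.keys()
                if len(pending) == 1:  # effectively degree-1: peel it
                    i = pending.pop()
                    for j in idxs & decoded.keys():
                        payload ^= decoded[j]
                    decoded[i] = payload
                    progress = True
        return [decoded.get(i) for i in range(n_blocks)]

    source = [0x11, 0x22, 0x33, 0x44]
    packets = [({0}, 0x11), ({0, 1}, 0x33), ({1, 2}, 0x11), ({2, 3}, 0x77)]
    print(lt_decode(packets, 4) == source)  # → True
    ```

    Feedback helps precisely here: if the receiver can occasionally report which blocks are already decoded, the sender can reshape the degree distribution to avoid redundant packets, cutting overhead.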

  10. Approaches to the cortical analysis of auditory objects.

    Science.gov (United States)

    Griffiths, Timothy D; Kumar, Sukhbinder; Warren, Jason D; Stewart, Lauren; Stephan, Klaas Enno; Friston, Karl J

    2007-07-01

    We describe work that addresses the cortical basis for the analysis of auditory objects using 'generic' sounds that do not correspond to any particular events or sources (like vowels or voices) that have semantic association. The experiments involve the manipulation of synthetic sounds to produce systematic changes of stimulus features, such as spectral envelope. Conventional analyses of normal functional imaging data demonstrate that the analysis of spectral envelope and perceived timbral change involves a network consisting of planum temporale (PT) bilaterally and the right superior temporal sulcus (STS). Further analysis of imaging data using dynamic causal modelling (DCM) and Bayesian model selection was carried out in the right hemisphere areas to determine the effective connectivity between these auditory areas. Specifically, the objective was to determine whether the analysis of spectral envelope in the network is done in a serial fashion (that is, from HG to PT to STS) or in a parallel fashion (that is, PT and STS receive input from HG simultaneously). Two families of models, serial and parallel (16 in total), representing different hypotheses about the connectivity between HG, PT and STS, were selected. The models within a family differ with respect to the pathway that is modulated by the analysis of spectral envelope. After the models are identified, the Bayesian model selection procedure is then used to select the 'optimal' model from the specified models. The data strongly support a particular serial model containing modulation of the HG to PT effective connectivity during spectral envelope variation. Parallel work in neurological subjects addresses the effect of lesions to different parts of this network. We have recently studied in detail subjects with 'dystimbria': an alteration in the perceived quality of auditory objects distinct from pitch or loudness change. The subjects have lesions of the normal network described above with normal perception of pitch strength

  11. VoiceForum, a software platform for spoken interaction: a model for the "Call Triangle"?

    OpenAIRE

    Fynn, John; Wigham, Ciara R.

    2011-01-01

    VoiceForum is a pedagogical project created as a response to learners' needs in the spoken language observed mainly at the Hypermedia Language Centre of Blaise Pascal University, France. It comprises a web-based forum approach for posting interactive audio and text with a dedicated unintrusive space for teacher feedback. The software platform (freely available via download), thus, offers a means of providing guidance through contextualised help to individual learners on their spoken discourse...

  12. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke, mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study was to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed in the period between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have post-cortical-stroke auditory hallucinations. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  13. Preventing Feedback Fizzle

    Science.gov (United States)

    Brookhart, Susan M.

    2012-01-01

    Feedback is certainly about saying or writing helpful, learning-focused comments. But that is only part of it. What happens beforehand? What happens afterward? Feedback that is helpful and learning-focused fits into a context. Before a teacher gives feedback, students need to know the learning target so they have a purpose for using the feedback…

  14. Developing Sustainable Feedback Practices

    Science.gov (United States)

    Carless, David; Salter, Diane; Yang, Min; Lam, Joy

    2011-01-01

    Feedback is central to the development of student learning, but within the constraints of modularized learning in higher education it is increasingly difficult to handle effectively. This article makes a case for sustainable feedback as a contribution to the reconceptualization of feedback processes. The data derive from the Student Assessment and…

  15. Development of auditory-vocal perceptual skills in songbirds.

    Directory of Open Access Journals (Sweden)

    Vanessa C Miller-Sims

    Full Text Available Songbirds are one of the few groups of animals that learn the sounds used for vocal communication during development. Like humans, songbirds memorize vocal sounds based on auditory experience with vocalizations of adult "tutors", and then use auditory feedback of self-produced vocalizations to gradually match their motor output to the memory of tutor sounds. In humans, investigations of early vocal learning have focused mainly on perceptual skills of infants, whereas studies of songbirds have focused on measures of vocal production. In order to fully exploit songbirds as a model for human speech, understand the neural basis of learned vocal behavior, and investigate links between vocal perception and production, studies of songbirds must examine both behavioral measures of perception and neural measures of discrimination during development. Here we used behavioral and electrophysiological assays of the ability of songbirds to distinguish vocal calls of varying frequencies at different stages of vocal learning. The results show that neural tuning in auditory cortex mirrors behavioral improvements in the ability to make perceptual distinctions of vocal calls as birds are engaged in vocal learning. Thus, separate measures of neural discrimination and behavioral perception yielded highly similar trends during the course of vocal development. The timing of this improvement in the ability to distinguish vocal sounds correlates with our previous work showing substantial refinement of axonal connectivity in cortico-basal ganglia pathways necessary for vocal learning.

  17. The Role of Occupational Voice Demand and Patient-Rated Impairment in Predicting Voice Therapy Adherence.

    Science.gov (United States)

    Ebersole, Barbara; Soni, Resha S; Moran, Kathleen; Lango, Miriam; Devarajan, Karthik; Jamal, Nausheen

    2017-07-11

    Examine the relationship among the severity of patient-perceived voice impairment, perceptual dysphonia severity, occupational voice demand, and voice therapy adherence, and identify clinical predictors of increased risk for therapy nonadherence. A retrospective cohort study was conducted of patients presenting with a chief complaint of persistent dysphonia at an interdisciplinary voice center. The Voice Handicap Index-10 (VHI-10) and Voice-Related Quality of Life (V-RQOL) survey scores, clinician rating of dysphonia severity using the Grade score from the Grade, Roughness, Breathiness, Asthenia, and Strain (GRBAS) scale, occupational voice demand, and patient demographics were tested for associations with therapy adherence, defined as completion of the treatment plan. Classification and Regression Tree (CART) analysis was performed to establish thresholds for nonadherence risk. Of 166 patients evaluated, 111 were recommended for voice therapy. The therapy nonadherence rate was 56%. Occupational voice demand category, VHI-10 score, and V-RQOL score were the only factors significantly correlated with therapy adherence. Patients with low occupational voice demand were significantly more likely to be nonadherent with therapy than those with high occupational voice demand, and a VHI-10 score of ≤9 or a V-RQOL score of >40 was a significant cutoff point for predicting therapy nonadherence. Occupational voice demand and patient perception of impairment are significantly and independently correlated with therapy adherence. A VHI-10 score of ≤9 or a V-RQOL score of >40 is a significant cutoff point for predicting nonadherence risk. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
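    The reported cutoffs lend themselves to a simple screening rule. The sketch below is an illustrative assumption, not the authors' published CART model: it flags elevated nonadherence risk when self-rated impairment is low (VHI-10 ≤ 9 or V-RQOL > 40) or occupational voice demand is low; the function name and the OR-combination of predictors are hypothetical.

```python
def nonadherence_risk(vhi10: int, vrqol: int, high_voice_demand: bool) -> bool:
    """Flag elevated risk of voice-therapy nonadherence.

    The thresholds (VHI-10 <= 9, V-RQOL > 40) come from the abstract;
    combining them with occupational demand via a simple OR is an
    illustrative assumption, not the published CART tree.
    """
    low_perceived_impairment = vhi10 <= 9 or vrqol > 40
    return low_perceived_impairment or not high_voice_demand

# A patient with little self-rated handicap and low occupational
# voice demand is flagged as high risk for nonadherence.
print(nonadherence_risk(vhi10=5, vrqol=60, high_voice_demand=False))  # True
```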

  18. Speech-induced suppression of evoked auditory fields in children who stutter.

    Science.gov (United States)

    Beal, Deryk S; Quraan, Maher A; Cheyne, Douglas O; Taylor, Margot J; Gracco, Vincent L; De Nil, Luc F

    2011-02-14

    Auditory responses to speech sounds that are self-initiated are suppressed compared to responses to the same speech sounds during passive listening. This phenomenon is referred to as speech-induced suppression, a potentially important feedback-mediated speech-motor control process. In an earlier study, we found that both adults who do and do not stutter demonstrated a reduced amplitude of the auditory M50 and M100 responses to speech during active production relative to passive listening. It is unknown if auditory responses to self-initiated speech-motor acts are suppressed in children, or if the phenomenon differs between children who do and do not stutter. As stuttering is a developmental speech disorder, examining speech-induced suppression in children may identify possible neural differences underlying stuttering close to its time of onset. We used magnetoencephalography to determine the presence of speech-induced suppression in children and to characterize the properties of speech-induced suppression in children who stutter. We examined the auditory M50, as this was the earliest robust response reproducible across our child participants and the most likely to reflect a motor-to-auditory relation. Both children who do and do not stutter demonstrated speech-induced suppression of the auditory M50. However, children who stutter had a delayed auditory M50 peak latency to vowel sounds compared to children who do not stutter, indicating a possible deficiency in their ability to efficiently integrate auditory speech information for the purpose of establishing neural representations of speech sounds. Copyright © 2010 Elsevier Inc. All rights reserved.
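    Speech-induced suppression is commonly quantified as the percentage reduction in evoked response amplitude during speaking relative to passive listening. The sketch below assumes that conventional formula; the study's own MEG analysis is more involved, and the function name and sample amplitudes are hypothetical.

```python
from statistics import mean

def suppression_percent(listen_amps, speak_amps):
    """Percent reduction of the mean evoked amplitude (e.g. the
    auditory M50) during active speech relative to passive listening.
    Positive values indicate suppression."""
    listen = mean(listen_amps)
    speak = mean(speak_amps)
    return 100.0 * (listen - speak) / listen

# Hypothetical trial-mean amplitudes (arbitrary units): speaking
# reduces the response from 40 to 30, i.e. 25% suppression.
print(suppression_percent([40.0], [30.0]))  # 25.0
```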

  19. Sex differences in the representation of call stimuli in a songbird secondary auditory area

    Directory of Open Access Journals (Sweden)

    Nicolas eGiret

    2015-10-01

    Full Text Available Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered as playing a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males, and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, and familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differs between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird’s own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmitting information about self-generated calls in males and to storage of
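    One standard way to express how strongly a neuron's spike rate discriminates two call types is d′: the difference of mean evoked rates divided by the pooled standard deviation. This is an illustrative stand-in, not the spike-rate and temporal analyses the authors actually performed, and the sample firing rates are hypothetical.

```python
from statistics import mean, stdev

def dprime(rates_a, rates_b):
    """Discriminability (d') between two spike-rate distributions:
    absolute difference of means over the pooled standard deviation."""
    pooled_sd = ((stdev(rates_a) ** 2 + stdev(rates_b) ** 2) / 2) ** 0.5
    return abs(mean(rates_a) - mean(rates_b)) / pooled_sd

# Hypothetical evoked rates (spikes/s) to a mate call vs. an
# unfamiliar call; a larger d' means better single-neuron separation.
print(dprime([10.0, 12.0, 14.0], [20.0, 22.0, 24.0]))  # 5.0
```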

  20. Effects of Auditory Input in Individuation Tasks

    Science.gov (United States)

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2008-01-01

    Under many conditions, auditory input interferes with visual processing, especially early in development. These interference effects are often more pronounced when the auditory input is unfamiliar than when it is familiar (e.g., human speech or pre-familiarized sounds). The current study extends this research by examining how…