WorldWideScience

Sample records for auditory perception

  1. The Perception of Auditory Motion.

    Science.gov (United States)

    Carlile, Simon; Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  2. Psychology of auditory perception.

    Science.gov (United States)

    Lotto, Andrew; Holt, Lori

    2011-09-01

    Audition is often treated as a 'secondary' sensory system behind vision in the study of cognitive science. In this review, we focus on three seemingly simple perceptual tasks to demonstrate the complexity of perceptual-cognitive processing involved in everyday audition. After providing a short overview of the characteristics of sound and their neural encoding, we present a description of the perceptual task of segregating multiple sound events that are mixed together in the signal reaching the ears. Then, we discuss the ability to localize the sound source in the environment. Finally, we provide some data and theory on how listeners categorize complex sounds, such as speech. In particular, we present research on how listeners weigh multiple acoustic cues in making a categorization decision. One conclusion of this review is that it is time for auditory cognitive science to be developed to match what has been done in vision in order for us to better understand how humans communicate with speech and music. WIREs Cogn Sci 2011 2 479-489 DOI: 10.1002/wcs.123 For further resources related to this article, please visit the WIREs website. PMID:26302301

  3. Music perception, pitch, and the auditory system

    OpenAIRE

    McDermott, Josh H.; Oxenham, Andrew J.

    2008-01-01

    The perception of music depends on many culture-specific factors, but is also constrained by properties of the auditory system. This has been best characterized for those aspects of music that involve pitch. Pitch sequences are heard in terms of relative, as well as absolute, pitch. Pitch combinations give rise to emergent properties not present in the component notes. In this review we discuss the basic auditory mechanisms contributing to these and other perceptual effects in music.

  4. A Comparison of Three Auditory Discrimination-Perception Tests

    Science.gov (United States)

    Koenke, Karl

    1978-01-01

    Comparisons were made between scores of 52 third graders on three measures of auditory discrimination: Wepman's Auditory Discrimination Test, the Goldman-Fristoe Woodcock (GFW) Test of Auditory Discrimination, and the Kimmell-Wahl Screening Test of Auditory Perception (STAP). (CL)

  5. McGurk illusion recalibrates subsequent auditory perception.

    Science.gov (United States)

    Lüttke, Claudia S; Ekman, Matthias; van Gerven, Marcel A J; de Lange, Floris P

    2016-01-01

    Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged into the percept of 'ada'. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e., whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as 'ada'. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as 'ada', activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960

  6. Speech Perception Within an Auditory Cognitive Science Framework

    OpenAIRE

    Holt, Lori L.; Lotto, Andrew J.

    2008-01-01

    The complexities of the acoustic speech signal pose many significant challenges for listeners. Although perceiving speech begins with auditory processing, investigation of speech perception has progressed mostly independently of study of the auditory system. Nevertheless, a growing body of evidence demonstrates that cross-fertilization between the two areas of research can be productive. We briefly describe research bridging the study of general auditory processing and speech perception, show...

  7. Music perception: information flow within the human auditory cortices.

    Science.gov (United States)

    Angulo-Perkins, Arafat; Concha, Luis

    2014-01-01

    Information processing of all acoustic stimuli involves temporal lobe regions referred to as auditory cortices, which receive direct afferents from the auditory thalamus. However, the perception of music (as well as speech or spoken language) is a complex process that also involves secondary and association cortices that conform a large functional network. Using different analytical techniques and stimulation paradigms, several studies have shown that certain areas are particularly sensitive to specific acoustic characteristics inherent to music (e.g., rhythm). This chapter reviews the functional anatomy of the auditory cortices, and highlights specific experiments that suggest the existence of distinct cortical networks for the perception of music and speech. PMID:25358716

  8. Context, Contrast, and Tone of Voice in Auditory Sarcasm Perception

    Science.gov (United States)

    Voyer, Daniel; Thibodeau, Sophie-Hélène; Delong, Breanna J.

    2016-01-01

    Four experiments were conducted to investigate the interplay between context and tone of voice in the perception of sarcasm. These experiments emphasized the role of contrast effects in sarcasm perception exclusively by means of auditory stimuli whereas most past research has relied on written material. In all experiments, a positive or negative…

  9. Neural correlates of auditory-somatosensory interaction in speech perception

    OpenAIRE

    Ito, Takayuki; Gracco, Vincent; Ostry, David J.

    2015-01-01

    Speech perception is known to rely on both auditory and visual information. However, sound specific somatosensory input has been shown also to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by addressing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech p...

  10. Listener orientation and spatial judgments of elevated auditory percepts

    Science.gov (United States)

    Parks, Anthony J.

    How do listener head rotations affect auditory perception of elevation? This investigation addresses this question in the hopes that perceptual judgments of elevated auditory percepts may be more thoroughly understood in terms of dynamic listening cues engendered by listener head rotations, and that this phenomenon can be psychophysically and computationally modeled. Two listening tests were conducted and a psychophysical model was constructed to this end. The first listening test prompted listeners to detect an elevated auditory event produced by a virtual noise source orbiting the median plane via 24-channel ambisonic spatialization. Head rotations were tracked using computer vision algorithms facilitated by camera tracking. The data were used to construct a dichotomous criteria model using a factorial binary logistic regression model. The second auditory test investigated the validity of the historically supported frequency dependence of auditory elevation perception using narrow-band noise for continuous and brief stimuli with fixed and free-head rotation conditions. The data were used to construct a multinomial logistic regression model to predict categorical judgments of above, below, and behind. Finally, in light of the psychophysical data found from the above studies, a functional model of elevation perception for point sources along the cone of confusion was constructed using physiologically-inspired signal processing methods along with top-down processing utilizing principles of memory and orientation. The model is evaluated using white noise bursts for 42 subjects' head-related transfer functions. The investigation concludes with study limitations, possible implications, and speculation on future research trajectories.
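
The multinomial logistic regression described above can be sketched in plain NumPy; the predictors, data, and hyperparameters below are invented for illustration and are not the thesis's actual model or dataset:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax, shifted for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_multinomial_logreg(X, y, n_classes, lr=0.5, n_iter=500):
    """Fit softmax (multinomial logistic) regression by gradient descent."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]              # one-hot targets
    for _ in range(n_iter):
        P = softmax(X @ W + b)
        W -= lr * (X.T @ (P - Y)) / n     # gradient of cross-entropy loss
        b -= lr * (P - Y).mean(axis=0)
    return W, b

rng = np.random.default_rng(0)
n = 300
# Hypothetical predictors: noise-band center frequency (Hz) and head-rotation
# magnitude (degrees); standardized before fitting.
X = np.column_stack([rng.uniform(500, 8000, n), rng.uniform(0, 45, n)])
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = rng.integers(0, 3, n)                 # 0 = above, 1 = below, 2 = behind

W, b = fit_multinomial_logreg(X, y, n_classes=3)
P = softmax(X @ W + b)                    # per-trial category probabilities
```

Each row of `P` gives the model's probabilities for the three categorical judgments on one trial; the class with the highest probability is the predicted response.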

  11. Auditory Perception of Statistically Blurred Sound Textures

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; MacDonald, Ewen; Dau, Torsten

    Sound textures have been identified as a category of sounds which are processed by the peripheral auditory system and captured with running time-averaged statistics. Although sound textures are temporally homogeneous, they offer a listener enough information to identify and differentiate … sources. This experiment investigated the ability of the auditory system to identify statistically blurred sound textures and the perceptual relationship between sound textures. Identification performance of statistically blurred sound textures presented at a fixed blur increased over those presented as a … gradual blur. The results suggest that the correct identification of sound textures is influenced by the preceding blurred stimulus. These findings draw parallels to the recognition of blurred images…

  12. Auditory Spectral Integration in the Perception of Static Vowels

    Science.gov (United States)

    Fox, Robert Allen; Jacewicz, Ewa; Chang, Chiung-Yun

    2011-01-01

    Purpose: To evaluate potential contributions of broadband spectral integration in the perception of static vowels. Specifically, can the auditory system infer formant frequency information from changes in the intensity weighting across harmonics when the formant itself is missing? Does this type of integration produce the same results in the lower…

  13. Auditory Perception, Phonological Processing, and Reading Ability/Disability.

    Science.gov (United States)

    Watson, Betty U.; Miller, Theodore K.

    1993-01-01

    This study of 94 college undergraduates, including 24 with a reading disability, found that speech perception was strongly related to 3 of 4 phonological variables, including short-term and long-term auditory memory and phoneme segmentation, which were in turn strongly related to reading. Nonverbal temporal processing was not related to any…

  14. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu eYue

    2016-04-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants were shown either a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a

  15. Odors Bias Time Perception in Visual and Auditory Modalities.

    Science.gov (United States)

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants were shown either a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of

  17. Music perception and cognition following bilateral lesions of auditory cortex.

    Science.gov (United States)

    Tramo, M J; Bharucha, J J; Musiek, F E

    1990-01-01

    We present experimental and anatomical data from a case study of impaired auditory perception following bilateral hemispheric strokes. To consider the cortical representation of sensory, perceptual, and cognitive functions mediating tonal information processing in music, pure tone sensation thresholds, spectral intonation judgments, and the associative priming of spectral intonation judgments by harmonic context were examined, and lesion localization was analyzed quantitatively using straight-line two-dimensional maps of the cortical surface reconstructed from magnetic resonance images. Despite normal pure tone sensation thresholds at 250-8000 Hz, the perception of tonal spectra was severely impaired, such that harmonic structures (major triads) were almost uniformly judged to sound dissonant; yet, the associative priming of spectral intonation judgments by harmonic context was preserved, indicating that cognitive representations of tonal hierarchies in music remained intact and accessible. Brainprints demonstrated complete bilateral lesions of the transverse gyri of Heschl and partial lesions of the right and left superior temporal gyri involving 98 and 20% of their surface areas, respectively. In the right hemisphere, there was partial sparing of the planum temporale, temporoparietal junction, and inferior parietal cortex. In the left hemisphere, all of the superior temporal region anterior to the transverse gyrus and parts of the planum temporale, temporoparietal junction, inferior parietal cortex, and insula were spared. These observations suggest that (1) sensory, perceptual, and cognitive functions mediating tonal information processing in music are neurologically dissociable; (2) complete bilateral lesions of primary auditory cortex combined with partial bilateral lesions of auditory association cortex chronically impair tonal consonance perception; (3) cognitive functions that hierarchically structure pitch information and generate harmonic expectancies

  18. The plastic ear and perceptual relearning in auditory spatial perception.

    Science.gov (United States)

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question as to the teacher signal for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497

  19. The plastic ear and perceptual relearning in auditory spatial perception.

    Directory of Open Access Journals (Sweden)

    Simon eCarlile

    2014-08-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear moulds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question as to the teacher signal for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localisation, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear moulds or through virtual auditory space stimulation using non-individualised spectral cues. The work with ear moulds demonstrates that a relatively short period of training involving sensory-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide a spatial code but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.

  20. Task-dependent calibration of auditory spatial perception through environmental visual observation

    OpenAIRE

    Luca Brayda

    2015-01-01

    Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve the precision of the final multisensory estimate. However, the amount or temporal extent of visual information that is sufficient to influence auditory perception is still unknown. It is therefore interesting to know whether vision can improve auditory precision through a short-term environmental observation preceding the audio tas...

  1. Beat gestures modulate auditory integration in speech perception

    OpenAIRE

    Biau, Emmanuel; Soto-Faraco, Salvador, 1970-

    2013-01-01

    Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words accompanied by beats elicited a positive shift in ERPs at an early sensory stage (before 100 ms) and at a later time window coinciding with the auditory com...

  2. Auditory Perception of Self-Similarity in Water Sounds

    OpenAIRE

    Geffen, Maria N.; Gervain, Judit; Werker, Janet F.; Magnasco, Marcelo O.

    2011-01-01

    Many natural signals, including environmental sounds, exhibit scale-invariant statistics: their structure is repeated at multiple scales. Such scale-invariance has been identified separately across spectral and temporal correlations of natural sounds (Clarke and Voss, 1975; Attias and Schreiner, 1997; Escabi et al., 2003; Singh and Theunissen, 2003). Yet the role of scale-invariance across overall spectro-temporal structure of the sound has not been explored directly in auditory perception. H...

  3. Auditory looming perception in rhesus monkeys

    OpenAIRE

    Ghazanfar, Asif A.; Neuhoff, John G.; Logothetis, Nikos K.

    2002-01-01

    The detection of approaching objects can be crucial to the survival of an organism. The perception of looming has been studied extensively in the visual system, but remains largely unexplored in audition. Here we show a behavioral bias in rhesus monkeys orienting to “looming” sounds. As in humans, the bias occurred for harmonic tones (which can reliably indicate single sources), but not for broadband noise. These response biases to looming sounds are consistent with an evolved neural mechanis...

  4. Auditory-Visual Perception of Acoustically Degraded Prosodic Contrastive Focus in French

    OpenAIRE

    Dohen, Marion; Loevenbruck, Hélène

    2007-01-01

    Previous studies have shown that visual-only perception of prosodic contrastive focus in French is possible. The aim of this study was to determine whether the visual modality could be combined with the auditory one and lead to a perceptual enhancement of prosodic focus. In order to examine this question, we carried out auditory-only, audiovisual, and visual-only perception tests. In order to avoid a ceiling effect (auditory-only perception of prosodic focus being very high), we used whispered sp...

  5. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception.

    Science.gov (United States)

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  6. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Directory of Open Access Journals (Sweden)

    Yi-Huang Su

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  7. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass modulation filterbank, a constant-variance internal noise, and an optimal detector stage. The model was evaluated in experimental conditions that reflect, to a different degree, effects of compression as well as spectral and temporal resolution in auditory processing. The experiments include intensity…
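
Two of the stages listed above, envelope extraction and a 150-Hz lowpass modulation filter, can be sketched as follows. This is an illustrative fragment assuming SciPy, not the published model; the other stages (outer/middle ear, nonlinear basilar membrane, adaptation, internal noise, detector) are omitted, and the test signal is invented:

```python
# Envelope extraction followed by a 150-Hz lowpass modulation filter,
# two stages of the model family described above (illustrative only).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000                               # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)
# Test signal: 1-kHz carrier with 8-Hz amplitude modulation.
x = (1 + 0.8 * np.sin(2 * np.pi * 8 * t)) * np.sin(2 * np.pi * 1000 * t)

envelope = np.abs(hilbert(x))            # crude envelope (hair-cell-like stage)
sos = butter(2, 150, btype="low", fs=fs, output="sos")
modulation = sosfiltfilt(sos, envelope)  # 150-Hz lowpass modulation stage
```

The lowpass output retains the slow 8-Hz amplitude fluctuation while discarding the carrier; in the full model this signal would feed the bandpass modulation filterbank and detector stages.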

  8. Classification of Underwater Target Echoes Based on Auditory Perception Characteristics

    Institute of Scientific and Technical Information of China (English)

    Xiukun Li; Xiangxia Meng; Hang Liu; Mingye Liu

    2014-01-01

    In underwater target detection, bottom reverberation shares many properties with the target echo, which severely degrades detection performance; it is therefore essential to characterize the differences between target echoes and reverberation. In this paper, motivated by the human auditory system's ability to distinguish objects by their sounds, the Gammatone filter is taken as the auditory model. In addition, time-frequency perception features and auditory spectral features are extracted to separate active sonar target echoes from bottom reverberation. The features of the experimental data cluster tightly within each class and differ substantially between classes, showing that this method can effectively distinguish target echoes from reverberation.
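The Gammatone filter mentioned above has a standard closed-form impulse response, t^(n-1)·exp(−2πbt)·cos(2πf_c t). A minimal sketch follows; the Glasberg–Moore ERB formula and the 1.019·ERB bandwidth are the conventional parameter choices, assumed here rather than taken from this paper.

```python
import math

def erb(f):
    """Equivalent rectangular bandwidth (Hz) of the auditory filter
    centered at f Hz (Glasberg-Moore approximation)."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.025, order=4):
    """Impulse response of a 4th-order gammatone filter at center
    frequency fc (Hz), sampled at fs (Hz), normalized to unit peak."""
    b = 1.019 * erb(fc)  # bandwidth parameter (conventional correction factor)
    n = int(duration * fs)
    ir = []
    for i in range(n):
        t = i / fs
        ir.append(t ** (order - 1) * math.exp(-2.0 * math.pi * b * t)
                  * math.cos(2.0 * math.pi * fc * t))
    peak = max(abs(x) for x in ir)
    return [x / peak for x in ir]

ir = gammatone_ir(fc=1000.0, fs=16000.0)
```

A bank of such filters at ERB-spaced center frequencies gives the cochlea-like frequency decomposition from which time-frequency and spectral features can be extracted.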

  9. Anxiety sensitivity and auditory perception of heartbeat.

    Science.gov (United States)

    Pollock, R A; Carter, A S; Amir, N; Marks, L E

    2006-12-01

    Anxiety sensitivity (AS) is the fear of sensations associated with autonomic arousal. AS has been associated with the development and maintenance of panic disorder. Given that panic patients often rate cardiac symptoms as the most fear-provoking feature of a panic attack, AS individuals may be especially responsive to cardiac stimuli. Consequently, we developed a signal-in-white-noise detection paradigm to examine the strategies that high and low AS individuals use to detect and discriminate normal and abnormal heartbeat sounds. Compared to low AS individuals, high AS individuals demonstrated a greater propensity to report the presence of normal, but not abnormal, heartbeat sounds. High and low AS individuals did not differ in their ability to perceive normal heartbeat sounds against a background of white noise; however, high AS individuals consistently demonstrated lower ability to discriminate abnormal heartbeats from background noise and between abnormal and normal heartbeats. AS was characterized by an elevated false alarm rate across all tasks. These results suggest that heartbeat sounds may be fear-relevant cues for AS individuals, and may affect their attention and perception in tasks involving threat signals. PMID:16513082
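Signal-in-noise detection results like those above (hit propensity vs. false alarm rate) are conventionally summarized with signal detection theory: sensitivity d′ = z(hit rate) − z(false-alarm rate), and a criterion measure for response bias. The sketch below shows the standard computation, not this study's analysis; the 1/(2N) correction for extreme rates and the trial count are illustrative assumptions.

```python
from statistics import NormalDist

def _clamp(p, n):
    """Nudge rates of exactly 0 or 1 by the common 1/(2N) correction."""
    return min(max(p, 1.0 / (2 * n)), 1.0 - 1.0 / (2 * n))

def d_prime(hit_rate, fa_rate, n=100):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(_clamp(hit_rate, n)) - z(_clamp(fa_rate, n))

def criterion(hit_rate, fa_rate, n=100):
    """Response bias c = -(z(hit) + z(fa)) / 2; negative c indicates a
    liberal criterion, i.e. a propensity to report the signal as present."""
    z = NormalDist().inv_cdf
    return -(z(_clamp(hit_rate, n)) + z(_clamp(fa_rate, n))) / 2.0

dp = d_prime(0.69, 0.31)      # moderate sensitivity, roughly 0.99
c = criterion(0.90, 0.50)     # negative: biased toward reporting "present"
```

On this analysis, an elevated false alarm rate with unchanged detection ability shows up as a more negative criterion rather than a higher d′.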

  10. Toward a neurobiology of auditory object perception: What can we learn from the songbird forebrain?

    Institute of Scientific and Technical Information of China (English)

    Kai LU; David S. VICARIO

    2011-01-01

    In the acoustic world, no sounds occur entirely in isolation; they always reach the ears in combination with other sounds. How any given sound is discriminated and perceived as an independent auditory object is a challenging question in neuroscience. Although our knowledge of neural processing in the auditory pathway has expanded over the years, no good theory exists to explain how perception of auditory objects is achieved. A growing body of evidence suggests that the selectivity of neurons in the auditory forebrain is under dynamic modulation, and this plasticity may contribute to auditory object perception. We propose that stimulus-specific adaptation in the auditory forebrain of the songbird (and perhaps in other systems) may play an important role in modulating sensitivity in a way that aids discrimination, and thus can potentially contribute to auditory object perception [Current Zoology 57 (6): 671-683, 2011].

  11. Toward a neurobiology of auditory object perception: What can we learn from the songbird forebrain?

    Directory of Open Access Journals (Sweden)

    Kai LU, David S. VICARIO

    2011-12-01

    In the acoustic world, no sounds occur entirely in isolation; they always reach the ears in combination with other sounds. How any given sound is discriminated and perceived as an independent auditory object is a challenging question in neuroscience. Although our knowledge of neural processing in the auditory pathway has expanded over the years, no good theory exists to explain how perception of auditory objects is achieved. A growing body of evidence suggests that the selectivity of neurons in the auditory forebrain is under dynamic modulation, and this plasticity may contribute to auditory object perception. We propose that stimulus-specific adaptation in the auditory forebrain of the songbird (and perhaps in other systems) may play an important role in modulating sensitivity in a way that aids discrimination, and thus can potentially contribute to auditory object perception [Current Zoology 57 (6): 671–683, 2011].

  12. Auditory Speech Perception Tests in Relation to the Coding Strategy in Cochlear Implant

    OpenAIRE

    Bazon, Aline Cristine; Mantello, Erika Barioni; Gonçales, Alina Sanches; Isaac, Myriam de Lima; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa

    2015-01-01

    Introduction  The objective of the evaluation of auditory perception of cochlear implant users is to determine how the acoustic signal is processed, leading to the recognition and understanding of sound. Objective  To investigate the differences in the process of auditory speech perception in individuals with postlingual hearing loss wearing a cochlear implant, using two different speech coding strategies, and to analyze speech perception and handicap perception in relation to the strategy us...

  13. Vision of tongue movements bias auditory speech perception.

    Science.gov (United States)

    D'Ausilio, Alessandro; Bartoli, Eleonora; Maffongelli, Laura; Berry, Jeffrey James; Fadiga, Luciano

    2014-10-01

    Audiovisual speech perception is likely based on the association of auditory and visual information into stable audiovisual maps. Conflicting audiovisual inputs generate perceptual illusions such as the McGurk effect. Audiovisual mismatch effects could be driven either by the detection of violations of standard audiovisual statistics or by sensorimotor reconstruction of the distal articulatory event that generated the audiovisual ambiguity. To disambiguate between the two hypotheses, we exploited the fact that the tongue is hidden from vision. For this reason, tongue movement encoding can only be learned via speech production, not via observing others' speech. Here we asked participants to identify speech sounds while viewing matching or mismatching visual representations of tongue movements. Vision of congruent tongue movements facilitated auditory speech identification relative to incongruent trials. This result suggests that direct visual experience of an articulator's movement is not necessary for the generation of audiovisual mismatch effects. Furthermore, we suggest that audiovisual integration in speech may benefit from speech production learning. PMID:25172391

  14. Auditory spatial perception dynamically realigns with changing eye position.

    Science.gov (United States)

    Razavi, Babak; O'Neill, William E; Paige, Gary D

    2007-09-19

    Audition and vision both form spatial maps of the environment in the brain, and their congruency requires alignment and calibration. Because audition is referenced to the head and vision is referenced to movable eyes, the brain must accurately account for eye position to maintain alignment between the two modalities as well as perceptual space constancy. Changes in eye position are known to variably, but inconsistently, shift sound localization, suggesting subtle shortcomings in the accuracy or use of eye position signals. We systematically and directly quantified sound localization across a broad spatial range and over time after changes in eye position. A sustained fixation task addressed the spatial (steady-state) attributes of eye position-dependent effects on sound localization. Subjects continuously fixated visual reference spots straight ahead (center), to the left (20 degrees), or to the right (20 degrees) of the midline in separate sessions while localizing auditory targets using a laser pointer guided by peripheral vision. An alternating fixation task focused on the temporal (dynamic) aspects of auditory spatial shifts after changes in eye position. Localization proceeded as in sustained fixation, except that eye position alternated between the three fixation references over multiple epochs, each lasting minutes. Auditory space shifted by approximately 40% toward the new eye position and dynamically over several minutes. We propose that this spatial shift reflects an adaptation mechanism for aligning the "straight-ahead" of perceived sensory-motor maps, particularly during early childhood when normal ocular alignment is achieved, but also resolving challenges to normal spatial perception throughout life. PMID:17881531

  15. Neural coding and perception of pitch in the normal and impaired human auditory system

    DEFF Research Database (Denmark)

    Santurette, Sébastien

    2011-01-01

    Pitch is an important attribute of hearing that allows us to perceive the musical quality of sounds. Besides music perception, pitch contributes to speech communication, auditory grouping, and perceptual segregation of sound sources. In this work, several aspects of pitch perception in humans were ... for a variety of basic auditory tasks, indicating that it may be a crucial measure to consider for hearing-loss characterization. In contrast to hearing-impaired listeners, adults with dyslexia showed no deficits in binaural pitch perception, suggesting intact low-level auditory mechanisms. The second part ... that the use of spectral cues remained plausible. Simulations of auditory-nerve representations of the complex tones further suggested that a spectrotemporal mechanism combining precise timing information across auditory channels might best account for the behavioral data. Overall, this work provides insights...

  16. A loudspeaker-based room auralisation (LoRA) system for auditory perception research

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Favrot, Sylvain Emmanuel

    Most research on the signal processing of the auditory system has been carried out in anechoic or nearly anechoic environments. The knowledge derived from these experiments cannot be directly transferred to reverberant environments. In order to investigate the auditory signal processing ... cross correlation coefficient) were considered. The subjective evaluation included speech intelligibility and distance perception measures.

  17. The Perception of Cooperativeness Without Any Visual or Auditory Communication.

    Science.gov (United States)

    Chang, Dong-Seon; Burger, Franziska; Bülthoff, Heinrich H; de la Rosa, Stephan

    2015-12-01

    Perceiving social information such as the cooperativeness of another person is an important part of human interaction. But can people perceive the cooperativeness of others even without any visual or auditory information? In a novel experimental setup, we connected two people with a rope and made them accomplish a point-collecting task together while they could not see or hear each other. We observed a consistently emerging turn-taking behavior in the interactions and installed a confederate in a subsequent experiment who either minimized or maximized this behavior. Participants experienced this only through the haptic force-feedback of the rope and made evaluations about the confederate after each interaction. We found that perception of cooperativeness was significantly affected only by the manipulation of this turn-taking behavior. Gender- and size-related judgments also significantly differed. Our results suggest that people can perceive social information such as the cooperativeness of other people even in situations where possibilities for communication are minimal. PMID:27551362

  18. Auditory feedback affects perception of effort when exercising with a Pulley machine

    DEFF Research Database (Denmark)

    Bordegoni, Monica; Ferrise, Francesco; Grani, Francesco;

    2013-01-01

    In this paper we describe an experiment that investigates the role of auditory feedback in affecting the perception of effort when using a physical pulley machine. Specifically, we investigated whether variations in the amplitude and frequency content of the pulley sound affect perception of effo...

  19. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Directory of Open Access Journals (Sweden)

    Juan Huang

    Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information to "feeling" musical rhythm, we investigated the degree to which auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like) rhythms and 'triple' (waltz-like) rhythms, presented in three conditions: 1) unimodal inputs (auditory or tactile alone); 2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and 3) simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects are presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.

  20. Fundamental deficits of auditory perception in Wernicke’s aphasia

    OpenAIRE

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew; Griffiths, Timothy; Sage, Karen

    2012-01-01

    Objective: This work investigates the nature of the comprehension impairment in Wernicke’s aphasia, by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. Wernicke’s aphasia, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated wit...

  1. A Psychophysical Imaging Method Evidencing Auditory Cue Extraction during Speech Perception: A Group Analysis of Auditory Classification Images

    OpenAIRE

    Varnet, Léo; Knoblauch, Kenneth; Serniclaes, Willy; Meunier, Fanny; Hoen, Michel

    2015-01-01

    Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique t...

  2. Neural coding and perception of pitch in the normal and impaired human auditory system

    OpenAIRE

    Santurette, Sébastien; Dau, Torsten; Buchholz, Jörg; Wouters, Jan; Andrew J Oxenham

    2011-01-01

    Pitch is an important attribute of hearing that allows us to perceive the musical quality of sounds. Besides music perception, pitch contributes to speech communication, auditory grouping, and perceptual segregation of sound sources. In this work, several aspects of pitch perception in humans were investigated using psychophysical methods. First, hearing loss was found to affect the perception of binaural pitch, a pitch sensation created by the binaural interaction of noise stimuli. Specifica...

  3. Effect of Auditory Interference on Memory of Haptic Perceptions.

    Science.gov (United States)

    Anater, Paul F.

    1980-01-01

    The effect of auditory interference on the processing of haptic information by 61 visually impaired students (8 to 20 years old) was the focus of the research described in this article. It was assumed that as the auditory interference approximated the verbalized activity of the haptic task, accuracy of recall would decline. (Author)

  4. A Psychophysical Imaging Method Evidencing Auditory Cue Extraction during Speech Perception: A Group Analysis of Auditory Classification Images : Auditory Classification Images

    OpenAIRE

    Varnet, Léo; Knoblauch, Kenneth; Serniclaes, Willy; Meunier, Fanny; Hoen, Michel

    2015-01-01

    Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique t...

  5. Feel What You Say: An Auditory Effect on Somatosensory Perception

    OpenAIRE

    François Champoux; Shiller, Douglas M.; Zatorre, Robert J.

    2011-01-01

    In the present study, we demonstrate an audiotactile effect in which amplitude modulation of auditory feedback during voiced speech induces a throbbing sensation over the lip and laryngeal regions. Control tasks coupled with the examination of speech acoustic parameters allow us to rule out the possibility that the effect may have been due to cognitive factors or motor compensatory effects. We interpret the effect as reflecting the tight interplay between auditory and tactile modalities durin...

  6. Auditory Signal Processing in Communication: Perception and Performance of Vocal Sounds

    Science.gov (United States)

    Prather, Jonathan F.

    2013-01-01

    Learning and maintaining the sounds we use in vocal communication require accurate perception of the sounds we hear performed by others and feedback-dependent imitation of those sounds to produce our own vocalizations. Understanding how the central nervous system integrates auditory and vocal-motor information to enable communication is a fundamental goal of systems neuroscience, and insights into the mechanisms of those processes will profoundly enhance clinical therapies for communication disorders. Gaining the high-resolution insight necessary to define the circuits and cellular mechanisms underlying human vocal communication is presently impractical. Songbirds are the best animal model of human speech, and this review highlights recent insights into the neural basis of auditory perception and feedback-dependent imitation in those animals. Neural correlates of song perception are present in auditory areas, and those correlates are preserved in the auditory responses of downstream neurons that are also active when the bird sings. Initial tests indicate that singing-related activity in those downstream neurons is associated with vocal-motor performance as opposed to the bird simply hearing itself sing. Therefore, action potentials related to auditory perception and action potentials related to vocal performance are co-localized in individual neurons. Conceptual models of song learning involve comparison of vocal commands and the associated auditory feedback to compute an error signal that is used to guide refinement of subsequent song performances, yet the sites of that comparison remain unknown. Convergence of sensory and motor activity onto individual neurons points to a possible mechanism through which auditory and vocal-motor signals may be linked to enable learning and maintenance of the sounds used in vocal communication. PMID:23827717

  7. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    Science.gov (United States)

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. PMID:23664946

  8. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    Science.gov (United States)

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, can also affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of sound level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss. PMID:26590050
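The sound-level cue described above follows, in free field, the inverse-square law: level falls by about 6 dB for every doubling of source distance. A small sketch of that textbook relationship (the reference level and distances are illustrative, and real rooms deviate from this because of reverberation):

```python
import math

def level_at_distance(level_ref_db, d_ref, d):
    """Free-field (anechoic) sound level at distance d (same units as
    d_ref), given the level at reference distance d_ref: inverse-square
    law, i.e. a drop of 20*log10(d/d_ref) dB, ~6 dB per doubling."""
    return level_ref_db - 20.0 * math.log10(d / d_ref)

drop = level_at_distance(70.0, 1.0, 2.0)   # 70 dB at 1 m -> ~64 dB at 2 m
far = level_at_distance(70.0, 1.0, 10.0)   # 70 dB at 1 m -> 50 dB at 10 m
```

In reverberant spaces the direct sound follows this law while the reverberant field stays roughly constant, which is why the direct-to-reverberant ratio provides an additional distance cue.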

  9. The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis.

    Directory of Open Access Journals (Sweden)

    Aniruddh D. Patel

    2014-05-01

    Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This action simulation for auditory prediction (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in nonhuman primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.

  10. Temporal asymmetries in auditory coding and perception reflect multi-layered nonlinearities.

    Science.gov (United States)

    Deneux, Thomas; Kempf, Alexandre; Daret, Aurélie; Ponsot, Emmanuel; Bathellier, Brice

    2016-01-01

    Sound recognition relies not only on spectral cues, but also on temporal cues, as demonstrated by the profound impact of time reversals on perception of common sounds. To address the coding principles underlying such auditory asymmetries, we recorded a large sample of auditory cortex neurons using two-photon calcium imaging in awake mice, while playing sounds ramping up or down in intensity. We observed clear asymmetries in cortical population responses, including stronger cortical activity for up-ramping sounds, which matches perceptual saliency assessments in mice and previous measures in humans. Analysis of cortical activity patterns revealed that auditory cortex implements a map of spatially clustered neuronal ensembles, detecting specific combinations of spectral and intensity modulation features. Comparing different models, we show that cortical responses result from multi-layered nonlinearities, which, contrary to standard receptive field models of auditory cortex function, build divergent representations of sounds with similar spectral content, but different temporal structure. PMID:27580932

  11. Categorical vowel perception enhances the effectiveness and generalization of auditory feedback in human-machine-interfaces.

    Directory of Open Access Journals (Sweden)

    Eric Larson

    Human-machine interface (HMI) designs offer the possibility of improving quality of life for patient populations as well as augmenting normal user function. Despite pragmatic benefits, auditory feedback for HMI control remains underutilized, in part due to observed limitations in effectiveness. The goal of this study was to determine the extent to which categorical speech perception could be used to improve an auditory HMI. Using surface electromyography, 24 healthy speakers of American English participated in 4 sessions to learn to control an HMI using auditory feedback (provided via vowel synthesis). Participants trained on 3 targets in sessions 1-3 and were tested on 3 novel targets in session 4. An "established categories with text cues" group of eight participants was trained and tested on auditory targets corresponding to standard American English vowels using auditory and text target cues. An "established categories without text cues" group of eight participants was trained and tested on the same targets using only auditory cuing of target vowel identity. A "new categories" group of eight participants was trained and tested on targets that corresponded to vowel-like sounds not part of American English. Analyses of user performance revealed significant effects of session and group (established categories groups vs. the new categories group), and a trend for an interaction between session and group. Results suggest that auditory feedback can be effectively used for HMI operation when paired with established categorical (native) vowel targets with an unambiguous cue.

  12. Electrically evoked hearing perception by functional neurostimulation of the central auditory system.

    Science.gov (United States)

    Tatagiba, M; Gharabaghi, A

    2005-01-01

    Perceptional benefits and potential risks of electrical stimulation of the central auditory system are constantly changing due to ongoing developments and technical modifications. Therefore, we would like to introduce current treatment protocols and strategies that might have an impact on functional results of auditory brainstem implants (ABI) in profoundly deaf patients. Patients with bilateral tumours as a result of neurofibromatosis type 2 with complete dysfunction of the eighth cranial nerves are the most frequent candidates for auditory brainstem implants. Worldwide, about 300 patients have already received an ABI through a translabyrinthine or suboccipital approach supported by multimodality electrophysiological monitoring. Patient selection is based on disease course, clinical signs, audiological, radiological and psycho-social criteria. The ABI provides the patients with access to auditory information such as environmental sound awareness together with distinct hearing cues in speech. In addition, this device markedly improves speech reception in combination with lip-reading. Nonetheless, there is only limited open-set speech understanding. Results of hearing function are correlated with electrode design, number of activated electrodes, speech processing strategies, duration of pre-existing deafness and extent of brainstem deformation. Functional neurostimulation of the central auditory system by a brainstem implant is a safe and beneficial procedure, which may considerably improve the quality of life in patients suffering from deafness due to bilateral retrocochlear lesions. The auditory outcome may be improved by a new generation of microelectrodes capable of penetrating the surface of the brainstem to access more directly the auditory neurons. PMID:15986735

  13. The perception of coherent and non-coherent auditory objects: a signature in gamma frequency band.

    Science.gov (United States)

    Knief, A; Schulte, M; Bertran, O; Pantev, C

    2000-07-01

    The pertinence of gamma band activity in magnetoencephalographic and electroencephalographic recordings for the performance of a gestalt recognition process is a question at issue. We investigated the functional relevance of gamma band activity for the perception of auditory objects. An auditory experiment was performed as an analog to the Kanizsa experiment in the visual modality, comprising four different coherent and non-coherent stimuli. For the first time functional differences of evoked gamma band activity due to the perception of these stimuli were demonstrated by various methods (localization of sources, wavelet analysis and independent component analysis, ICA). Responses to coherent stimuli were found to have more features in common compared to non-coherent stimuli (e.g. closer located sources and smaller number of ICA components). The results point to the existence of a pitch processor in the auditory pathway. PMID:10867289

  14. Perception of auditory, visual, and egocentric spatial alignment adapts differently to changes in eye position.

    Science.gov (United States)

    Cui, Qi N; Razavi, Babak; O'Neill, William E; Paige, Gary D

    2010-02-01

    Vision and audition represent the outside world in spatial synergy that is crucial for guiding natural activities. Input conveying eye-in-head position is needed to maintain spatial congruence because the eyes move in the head while the ears remain head-fixed. Recently, we reported that the human perception of auditory space shifts with changes in eye position. In this study, we examined whether this phenomenon is 1) dependent on a visual fixation reference, 2) selective for frequency bands (high-pass and low-pass noise) related to specific auditory spatial channels, 3) matched by a shift in the perceived straight-ahead (PSA), and 4) accompanied by a spatial shift for visual and/or bimodal (visual and auditory) targets. Subjects were tested in a dark echo-attenuated chamber with their heads fixed facing a cylindrical screen, behind which a mobile speaker/LED presented targets across the frontal field. Subjects fixated alternating reference spots (0°, ±20°) horizontally or vertically while either localizing targets or indicating PSA using a laser pointer. Results showed that the spatial shift induced by ocular eccentricity is 1) preserved for auditory targets without a visual fixation reference, 2) generalized for all frequency bands, and thus all auditory spatial channels, 3) paralleled by a shift in PSA, and 4) restricted to auditory space. Findings are consistent with a set-point control strategy by which eye position governs multimodal spatial alignment. The phenomenon is robust for auditory space and egocentric perception, and highlights the importance of controlling for eye position in the examination of spatial perception and behavior. PMID:19846626

  15. Maintaining realism in auditory length-perception experiments

    DEFF Research Database (Denmark)

    Kirkwood, Brent Christopher

    Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length...

  16. Lifespan Differences in Cortical Dynamics of Auditory Perception

    Science.gov (United States)

    Muller, Viktor; Gruber, Walter; Klimesch, Wolfgang; Lindenberger, Ulman

    2009-01-01

    Using electroencephalographic recordings (EEG), we assessed differences in oscillatory cortical activity during auditory-oddball performance between children aged 9-13 years, younger adults, and older adults. From childhood to old age, phase synchronization increased within and between electrodes, whereas whole power and evoked power decreased. We…

  17. Biases in Visual, Auditory, and Audiovisual Perception of Space.

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2015-12-01

    Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only
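    The Bayesian Causal Inference framework invoked in this abstract can be illustrated with a small sketch. This is a toy model in the general Körding-style formulation, not the study's fitted model; the noise and prior parameters (sig_v, sig_a, sig_p, pc) are made-up values for illustration, and the prior is centered at 0° (straight ahead).

```python
import numpy as np

def causal_inference(x_v, x_a, sig_v=2.0, sig_a=8.0, sig_p=15.0, pc=0.5):
    """Toy Bayesian causal inference for one audiovisual trial.

    x_v, x_a : visual and auditory sensory estimates (deg).
    sig_v, sig_a : sensory noise SDs; sig_p : SD of a central (0 deg) prior.
    Returns P(common cause | x_v, x_a) and the model-averaged visual
    location estimate. All parameter values are illustrative assumptions.
    """
    vv, va, vp = sig_v**2, sig_a**2, sig_p**2
    # Likelihood of the cue pair under a single common cause ...
    den1 = vv * va + vv * vp + va * vp
    L1 = np.exp(-0.5 * ((x_v - x_a)**2 * vp + x_v**2 * va + x_a**2 * vv)
                / den1) / (2 * np.pi * np.sqrt(den1))
    # ... and under two independent causes.
    L2 = np.exp(-0.5 * (x_v**2 / (vv + vp) + x_a**2 / (va + vp))) \
         / (2 * np.pi * np.sqrt((vv + vp) * (va + vp)))
    p_c1 = L1 * pc / (L1 * pc + L2 * (1 - pc))
    # Precision-weighted location estimates under each causal structure
    # (the prior mean is 0, so its term drops out of the numerator).
    s_c1 = (x_v / vv + x_a / va) / (1 / vv + 1 / va + 1 / vp)
    s_c2 = (x_v / vv) / (1 / vv + 1 / vp)
    # Model averaging across the two causal structures.
    return p_c1, p_c1 * s_c1 + (1 - p_c1) * s_c2
```

    Coincident cues yield a high posterior probability of a common cause and near-complete fusion, while widely separated cues are attributed to separate sources; in both cases the central prior pulls estimates toward 0°, producing the kind of central bias the abstract discusses.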

  18. Leftward lateralization of auditory cortex underlies holistic sound perception in Williams syndrome.

    Directory of Open Access Journals (Sweden)

    Martina Wengenroth

    BACKGROUND: Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype, including a strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints, we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. METHODOLOGY/PRINCIPAL FINDINGS: Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. CONCLUSIONS/SIGNIFICANCE: There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.

  19. (A)musicality in Williams syndrome: examining relationships among auditory perception, musical skill, and emotional responsiveness to music

    Directory of Open Access Journals (Sweden)

    Miriam Lense

    2013-08-01

    Williams syndrome (WS), a genetic neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and typically developing individuals with and without amusia.

  20. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    Science.gov (United States)

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese. It also contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody. PMID:27519239

  1. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan Luo

    2012-05-01

    Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
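    Phase tracking of the kind reported here is commonly quantified as inter-trial phase coherence (ITC): band-pass the epoched signal, extract instantaneous phase, and measure how well phase aligns across trials at each time point. The sketch below shows the generic measure, not the authors' MEG pipeline; the band edges in the toy check are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itc(trials, fs, band):
    """Inter-trial phase coherence for one channel.

    trials : (n_trials, n_samples) array of epoched data.
    fs     : sampling rate in Hz.
    band   : (low_hz, high_hz), e.g. (4, 8) for theta or (25, 45) for low gamma.
    Returns an (n_samples,) array in [0, 1]; 1 = perfect phase locking.
    """
    # Zero-phase band-pass, then instantaneous phase via the analytic signal.
    b, a = butter(4, np.asarray(band, dtype=float) / (fs / 2), btype="band")
    phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
    # Resultant length of the unit phasors across trials, per time point.
    return np.abs(np.exp(1j * phases).mean(axis=0))
```

    Stimuli whose temporal structure matches a band should yield high ITC in that band (phase tracking), while non-matching structure leaves ITC near the chance floor, which scales roughly as 1/sqrt(n_trials).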

  2. Introduction to auditory perception in listeners with hearing losses

    Science.gov (United States)

    Florentine, Mary; Buus, Søren

    2003-04-01

    Listeners with hearing losses cannot hear low-level sounds. In addition, they often complain that audible sounds do not have a comfortable loudness, lack clarity, and are difficult to hear in the presence of other sounds. In particular, they have difficulty understanding speech in background noise. The mechanisms underlying these complaints are not completely understood, but hearing losses are known to alter many aspects of auditory processing. This presentation highlights alterations in audibility, loudness, pitch, spectral and temporal processes, and binaural hearing that may result from hearing losses. The changes in these auditory processes can vary widely across individuals with seemingly similar amounts of hearing loss. For example, two listeners with nearly identical thresholds can differ in their ability to process spectral and temporal features of sounds. Such individual differences make rehabilitation of hearing losses complex. [Work supported by NIH/NIDCD.]

  3. A loudspeaker-based room auralization system for auditory perception research

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Favrot, Sylvain Emmanuel

    2009-01-01

    system provides a flexible research platform for conducting auditory experiments with normal-hearing, hearing-impaired, and aided hearing-impaired listeners in a fully controlled and realistic environment. This includes measures of basic auditory function (e.g., signal detection, distance perception) and...... measures of speech intelligibility. A battery of objective tests (e.g., reverberation time, clarity, interaural correlation coefficient) and subjective tests (e.g., speech reception thresholds) is presented that demonstrates the applicability of the LoRA system....

  4. Using Cortical Auditory Evoked Potentials as a predictor of speech perception ability in Auditory Neuropathy Spectrum Disorder and conditions with ANSD-like clinical presentation

    OpenAIRE

    Stirling, Francesca

    2015-01-01

    Auditory Neuropathy Spectrum Disorder (ANSD) is diagnosed by the presence of outer hair cell function, and absence or severe abnormality of the auditory brainstem response (ABR). Within the spectrum of ANSD, level of severity varies greatly in two domains: hearing thresholds can range from normal levels to a profound hearing loss, and degree of speech perception impairment also varies. The latter gives a meaningful indication of severity in ANSD. As the ABR does not relate to functional perfo...

  5. Modeling auditory perception of individual hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

    showed that, in most cases, the reduced or absent cochlear compression, associated with outer hair-cell loss, quantitatively accounts for broadened auditory filters, while a combination of reduced compression and reduced inner hair-cell function accounts for decreased sensitivity and slower recovery from...... selectivity. Three groups of listeners were considered: (a) normal hearing listeners; (b) listeners with a mild-to-moderate sensorineural hearing loss; and (c) listeners with a severe sensorineural hearing loss. A fixed set of model parameters were derived for each hearing-impaired listener. The simulations...

  6. Auditory ERP Differences in the Perception of Anticipated Speech Sequences in 5 - 6 Years Old Children and Adults

    OpenAIRE

    Portnova Galina; Martynova Olga

    2014-01-01

    The perception of complex auditory information such as complete speech sequences develops during human ontogeny. In order to explore age differences in the auditory perception of predictable speech sequences we compared event-related potentials (ERPs) recorded in 5- to 6-year-old children (N = 15) and adults (N = 15) in response to anticipated speech sequences as successive and reverse digital series with randomly omitted digits. The ERPs obtained from the omitted digits significantly differe...

  7. Auditory Perceptual Learning for Speech Perception Can Be Enhanced by Audiovisual Training

    Directory of Open Access Journals (Sweden)

    Lynne E. Bernstein

    2013-03-01

    Speech perception under audiovisual conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how audiovisual training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures in a protocol with a fixed number of trials. In Experiment 1, paired-associates (PA) audiovisual (AV) training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early audiovisual speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.
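    Vocoding of the kind used in these experiments replaces the temporal fine structure in each frequency band with noise while preserving the band envelopes. Below is a minimal noise-vocoder sketch; the channel count, band edges, and envelope cutoff are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def noise_vocode(x, fs, n_bands=4, f_lo=100.0, f_hi=4000.0, env_cut=30.0):
    """Noise-vocode a speech signal.

    Splits x into n_bands logarithmically spaced analysis bands, extracts
    each band's amplitude envelope, and uses it to modulate band-limited
    noise. Intelligibility of the output rests on envelope cues alone.
    Requires fs > 2 * f_hi.
    """
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    env_b, env_a = butter(2, env_cut / (fs / 2), btype="low")
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, x)                                # analysis band
        env = filtfilt(env_b, env_a, np.abs(band))              # smoothed envelope
        carrier = filtfilt(b, a, rng.standard_normal(len(x)))   # noise carrier
        out += np.clip(env, 0.0, None) * carrier
    return out
```

    Fewer bands degrade the signal more severely, which is how studies like this one titrate the difficulty of the perceptual learning task.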

  8. Age differences in visual-auditory self-motion perception during a simulated driving task

    Directory of Open Access Journals (Sweden)

    Robert Ramkhalawansingh

    2016-04-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.

  9. Sources of variability in consonant perception and their auditory correlates

    DEFF Research Database (Denmark)

    Zaar, Johannes; Dau, Torsten

    2015-01-01

    Responses obtained in consonant perception experiments typically show a large variability across stimuli of the same phonetic identity. The present study investigated the influence of different potential sources of this response variability. It was distinguished between source-induced variability...... using consonant-vowel combinations (CVs) in white noise. The responses were analyzed with respect to the different sources of variability based on a measure of perceptual distance. The speech-induced variability across and within talkers and the across-listener variability were substantial and of...... perception was evaluated. © 2015 Acoustical Society of America...

  10. Perception of Auditory, Visual, and Egocentric Spatial Alignment Adapts Differently to Changes in Eye Position

    OpenAIRE

    Cui, Qi N; Razavi, Babak; O'Neill, William E.; Paige, Gary D.

    2009-01-01

    Vision and audition represent the outside world in spatial synergy that is crucial for guiding natural activities. Input conveying eye-in-head position is needed to maintain spatial congruence because the eyes move in the head while the ears remain head-fixed. Recently, we reported that the human perception of auditory space shifts with changes in eye position. In this study, we examined whether this phenomenon is 1) dependent on a visual fixation reference, 2) selective for frequency bands (...

  11. Auditory Sensitivity, Speech Perception, and Reading Development and Impairment

    Science.gov (United States)

    Zhang, Juan; McBride-Chang, Catherine

    2010-01-01

    While the importance of phonological sensitivity for understanding reading acquisition and impairment across orthographies is well documented, what underlies deficits in phonological sensitivity is not well understood. Some researchers have argued that speech perception underlies variability in phonological representations. Others have…

  12. Beat Gestures Modulate Auditory Integration in Speech Perception

    Science.gov (United States)

    Biau, Emmanuel; Soto-Faraco, Salvador

    2013-01-01

    Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words…

  13. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm.

    Science.gov (United States)

    Jenson, David; Harkrider, Ashley W; Thornton, David; Bowers, Andrew L; Saltuklaroglu, Tim

    2015-01-01

    Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < 0.05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique. PMID:26500519
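    The ERSP measure this study applies to its ICA components can be sketched as follows: average time-frequency power across trials and express it in dB relative to the pre-stimulus baseline, so that positive values mark event-related synchronization (ERS) and negative values desynchronization. This is a generic single-channel/component sketch with assumed window lengths; the ICA unmixing step itself is omitted.

```python
import numpy as np
from scipy.signal import spectrogram

def ersp(trials, fs, baseline_s=0.5):
    """Event-related spectral perturbation for one channel or ICA component.

    trials : (n_trials, n_samples) epoched data, each epoch starting at
             t = 0 with a pre-stimulus baseline of baseline_s seconds.
    Returns (freqs, times, ersp_db); positive dB = ERS, negative dB = ERD.
    """
    power = None
    for trial in trials:
        f, t, S = spectrogram(trial, fs=fs, nperseg=int(0.2 * fs),
                              noverlap=int(0.15 * fs))
        power = S if power is None else power + S
    power /= len(trials)
    # Mean power in the pre-stimulus window, per frequency bin.
    baseline = power[:, t < baseline_s].mean(axis=1, keepdims=True)
    return f, t, 10.0 * np.log10(power / baseline)
```

    An alpha ERS of the kind reported during speech production would show up as positive dB values in the ~8-12 Hz rows after movement onset.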

  14. Mechanisms Underlying Auditory Hallucinations—Understanding Perception without Stimulus

    Directory of Open Access Journals (Sweden)

    Sukhwinder S. Shergill

    2013-04-01

    Auditory verbal hallucinations (AVH) are a common phenomenon, occurring in the "healthy" population as well as in several mental illnesses, most notably schizophrenia. Current thinking supports a spectrum conceptualisation of AVH: several neurocognitive hypotheses of AVH have been proposed, including the "feed-forward" model of failure to provide appropriate information to somatosensory cortices so that stimuli appear unbidden, and an "aberrant memory model" implicating deficient memory processes. Neuroimaging and connectivity studies are in broad agreement with these, with a general dysconnectivity between frontotemporal regions involved in language, memory and salience properties. Disappointingly, many AVH remain resistant to standard treatments and persist for many years. There is a need to develop novel therapies to augment existing pharmacological and psychological therapies: transcranial magnetic stimulation has emerged as a potential treatment, though more recent clinical data have been less encouraging. Our understanding of AVH remains incomplete, though much progress has been made in recent years. We herein provide a broad overview and review of this field.

  15. How may the basal ganglia contribute to auditory categorization and speech perception?

    Directory of Open Access Journals (Sweden)

    Sung-Joo Lim

    2014-08-01

    Listeners must accomplish two complementary perceptual feats in extracting a message from speech. They must discriminate linguistically-relevant acoustic variability and generalize across irrelevant variability. Said another way, they must categorize speech. Since the mapping of acoustic variability is language-specific, these categories must be learned from experience. Thus, understanding how, in general, the auditory system acquires and represents categories can inform us about the toolbox of mechanisms available to speech perception. This perspective invites consideration of findings from cognitive neuroscience literatures outside of the speech domain as a means of constraining models of speech perception. Although neurobiological models of speech perception have mainly focused on cerebral cortex, research outside the speech domain is consistent with the possibility of significant subcortical contributions in category learning. Here, we review the functional role of one such structure, the basal ganglia. We examine research from animal electrophysiology, human neuroimaging, and behavior to consider characteristics of basal ganglia processing that may be advantageous for speech category learning. We also present emerging evidence for a direct role for basal ganglia in learning auditory categories in a complex, naturalistic task intended to model the incidental manner in which speech categories are acquired. To conclude, we highlight new research questions that arise in incorporating the broader neuroscience research literature in modeling speech perception, and suggest how understanding contributions of the basal ganglia can inform attempts to optimize training protocols for learning non-native speech categories in adulthood.

  16. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    Science.gov (United States)

    Burnham, Denis; Dodd, Barbara

    2004-12-01

    The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. PMID:15549685

  17. PERCEVAL: a Computer-Driven System for Experimentation on Auditory and Visual Perception

    CERN Document Server

    André, Carine; Cavé, Christian; Teston, Bernard

    2007-01-01

    Since perception tests are highly time-consuming, there is a need to automate as many operations as possible, such as stimulus generation, procedure control, perception testing, and data analysis. The computer-driven system we are presenting here meets these objectives. To achieve large flexibility, the tests are controlled by scripts. The system's core software resembles that of a lexical-syntactic analyzer, which reads and interprets script files sent to it. The execution sequence (trial) is modified in accordance with the commands and data received. This type of operation provides a great deal of flexibility and supports a wide variety of tests such as auditory-lexical decision making, phoneme monitoring, gating, phonetic categorization, word identification, voice quality, etc. To achieve good performance, we were careful about timing accuracy, which is the greatest problem in computerized perception tests.

  18. Auditory-Visual Perception of Prosodic Information: Inter-linguistic Analysis - Contrastive Focus in French and Japanese

    OpenAIRE

    Dohen, Marion; Hill, Harold; Wu, Chun-Huei

    2008-01-01

    Audition and vision are combined for the perception of speech segments and recent studies have shown that this is also the case for some types of supra-segmental information such as prosodic focus. The integration of vision and audition for the perception of speech segments however seems to be less important in Japanese. This study aims at comparing auditory-visual perception of prosodic contrastive focus in French and in Japanese. Two parallel focus identification tests were conducted for th...

  19. Voluntary movement affects simultaneous perception of auditory and tactile stimuli presented to a non-moving body part.

    Science.gov (United States)

    Hao, Qiao; Ora, Hiroki; Ogawa, Ken-Ichiro; Ogata, Taiki; Miyake, Yoshihiro

    2016-01-01

    The simultaneous perception of multimodal sensory information has a crucial role for effective reactions to the external environment. Voluntary movements are known to occasionally affect simultaneous perception of auditory and tactile stimuli presented to the moving body part. However, little is known about spatial limits on the effect of voluntary movements on simultaneous perception, especially when tactile stimuli are presented to a non-moving body part. We examined the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli presented to the non-moving body part. We considered the possible mechanism using a temporal order judgement task under three experimental conditions: voluntary movement, where participants voluntarily moved their right index finger and judged the temporal order of auditory and tactile stimuli presented to their non-moving left index finger; passive movement; and no movement. During voluntary movement, the auditory stimulus needed to be presented before the tactile stimulus so that they were perceived as occurring simultaneously. This subjective simultaneity differed significantly from the passive movement and no movement conditions. This finding indicates that the effect of voluntary movement on simultaneous perception of auditory and tactile stimuli extends to the non-moving body part. PMID:27622584
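    The shift in subjective simultaneity reported here is typically estimated by fitting a cumulative Gaussian psychometric function to temporal-order judgements; the point of subjective simultaneity (PSS) is the SOA at which "auditory first" responses reach 50%. The sketch below shows this generic fit, not the authors' analysis; the sign convention (positive SOA = auditory leads) and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_pss(soa_ms, p_auditory_first):
    """Fit P('auditory first') = Phi((SOA - pss) / sigma) to TOJ data.

    soa_ms : stimulus-onset asynchronies in ms; positive = auditory leads.
    p_auditory_first : proportion of 'auditory first' responses per SOA.
    Returns (pss, sigma). A negative PSS means the auditory stimulus must
    be presented before the tactile one to be perceived as simultaneous,
    as in the voluntary-movement condition described above.
    """
    model = lambda x, pss, sigma: norm.cdf(x, loc=pss, scale=sigma)
    (pss, sigma), _ = curve_fit(model, soa_ms, p_auditory_first,
                                p0=(0.0, 50.0))
    return pss, abs(sigma)
```

    The condition effect in a study like this is then the difference in fitted PSS between the voluntary-movement, passive-movement, and no-movement conditions.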

  20. Auditory Cortical Deactivation during Speech Production and following Speech Perception: An EEG investigation of the temporal dynamics of the auditory alpha rhythm

    Directory of Open Access Journals (Sweden)

    David E Jenson

    2015-10-01

    Sensorimotor integration within the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event-related spectral perturbation (ERSP) analysis of EEG data to describe anterior sensorimotor (e.g., premotor cortex; PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required ‘active’ discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68-channel EEG data from all tasks identified bilateral ‘auditory’ alpha (α) components in 15 of 29 participants, localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event-related synchronization (ERS; pFDR < .05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions also temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real time with the ICA/ERSP technique.
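The core of the ERSP measure used in this record is simple: trial-averaged spectral power at a frequency of interest (here the ~10 Hz alpha peak), expressed in dB relative to a pre-stimulus baseline, with positive values read as event-related synchronization (ERS) and negative values as desynchronization (ERD). The sketch below uses toy numbers, not study data.

```python
import math

# Sketch of the ERSP idea: power at one frequency over time, in dB
# relative to the mean power in a pre-stimulus baseline window.
# Positive dB = ERS, negative dB = ERD. Toy numbers only.
def ersp_db(power, baseline_idx):
    baseline = sum(power[i] for i in baseline_idx) / len(baseline_idx)
    return [10.0 * math.log10(p / baseline) for p in power]

# Alpha power doubling after stimulus offset shows up as ~+3 dB ERS.
curve = ersp_db([1.0, 1.0, 2.0, 4.0], baseline_idx=[0, 1])
```

In a full analysis the per-timepoint power values would come from a time-frequency decomposition (e.g. wavelets or a short-time Fourier transform) of the ICA component activations, averaged over trials, with significance assessed against a permutation distribution and corrected (as in the pFDR threshold quoted above).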

  1. Auditory distance perception in fixed and varying simulated acoustic environments

    Science.gov (United States)

    Schoolmaster, Matthew; Kopčo, Norbert; Shinn-Cunningham, Barbara G.

    2001-05-01

    Listeners must calibrate to the acoustic environment in order to judge source distance using reverberation. Results of a previous study [Schoolmaster, Kopčo, and Shinn-Cunningham, J. Acoust. Soc. Am. 113, 2285 (2003)] showed that distance perception is more accurate when the simulated acoustic environment is consistent rather than randomly chosen from trial to trial; however, the environments were very different (anechoic versus a large classroom). The study was replicated here using two rooms that were much more similar. Each subject completed two series of trials. In the fixed-room series, the room (large or small) was fixed within a trial block. In the mixed-room series, the room was randomly chosen from trial to trial. Half of the subjects performed the mixed and half performed the fixed series first. Differences between subject groups were smaller in the current study than when anechoic and reverberant trials were intermingled; performance in the two subject groups was similar for (1) fixed trials and (2) trials simulating the large room. However, there was an interaction between subject group and room. Results suggest that listeners calibrate distance judgments based on past experience, but that such calibration partially generalizes to similar environments. [Work supported by AFOSR and the NRC.]

  2. Perception and psychological evaluation for visual and auditory environment based on the correlation mechanisms

    Science.gov (United States)

    Fujii, Kenji

    2002-06-01

    In this dissertation, the correlation mechanism is introduced as a model of processing in visual perception. It has been well described that the correlation mechanism is effective for describing subjective attributes in auditory perception. The main result is that it is possible to apply the correlation mechanism to processing in temporal vision and spatial vision, as well as in audition. (1) A psychophysical experiment was performed on subjective flicker rates for complex waveforms. A remarkable result is that the phenomenon of the missing fundamental is found in temporal vision, analogous to auditory pitch perception. This implies the existence of a correlation mechanism in the visual system. (2) For spatial vision, autocorrelation analysis provides useful measures for describing three primary perceptual properties of visual texture: contrast, coarseness, and regularity. Another experiment showed that the degree of regularity is a salient cue for texture preference judgment. (3) In addition, the autocorrelation function (ACF) and inter-aural cross-correlation function (IACF) were applied to the analysis of the temporal and spatial properties of environmental noise. It was confirmed that the acoustical properties of aircraft noise and traffic noise are well described. These analyses provided useful parameters extracted from the ACF and IACF for assessing subjective annoyance of noise. Thesis advisor: Yoichi Ando. Copies of this thesis written in English can be obtained from Junko Atagi, 6813 Mosonou, Saijo-cho, Higashi-Hiroshima 739-0024, Japan. E-mail address: atagi@urban.ne.jp.
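The missing-fundamental result in this record is the classic prediction of autocorrelation pitch models: a signal containing only upper harmonics still has its main ACF peak at the period of the absent fundamental. A minimal sketch, with illustrative sample rate and frequencies chosen here (not taken from the thesis):

```python
import math

# Sketch of the autocorrelation (ACF) account of the missing fundamental:
# a signal containing only the 2nd-4th harmonics of a 200 Hz fundamental
# still has its main ACF peak at the 5 ms period of the absent 200 Hz
# component. fs and f0 are illustrative choices.
def autocorr(x, max_lag):
    return [sum(x[i] * x[i + lag] for i in range(len(x) - lag)) / (len(x) - lag)
            for lag in range(max_lag + 1)]

fs, f0 = 8000, 200
x = [sum(math.sin(2 * math.pi * k * f0 * t / fs) for k in (2, 3, 4))
     for t in range(2000)]

acf = autocorr(x, 60)
best_lag = max(range(20, 61), key=lambda lag: acf[lag])  # skip the lag-0 peak
period_ms = 1000.0 * best_lag / fs   # ~5 ms -> perceived 200 Hz pitch
```

The three harmonics (400, 600, 800 Hz) only align at multiples of the 5 ms fundamental period, so the ACF peaks there even though no 200 Hz energy is present; the dissertation's claim is that temporal vision shows the same signature.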

  3. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    Science.gov (United States)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  4. Auditory Cortical Deactivation during Speech Production and following Speech Perception: An EEG investigation of the temporal dynamics of the auditory alpha rhythm

    OpenAIRE

    David E Jenson; Bowers, Andrew L.

    2015-01-01

    Sensorimotor integration within the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of EEG data to describe anterior sensorimotor (e.g., premotor cortex; PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. ...

  5. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm

    OpenAIRE

    Jenson, David; Harkrider, Ashley W.; Thornton, David; Bowers, Andrew L.; Saltuklaroglu, Tim

    2015-01-01

    Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dors...

  6. Stable individual characteristics in the perception of multiple embedded patterns in multistable auditory stimuli.

    Science.gov (United States)

    Denham, Susan; Bõhm, Tamás M; Bendixen, Alexandra; Szalárdy, Orsolya; Kocsis, Zsuzsanna; Mill, Robert; Winkler, István

    2014-01-01

    The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the "ABA-" auditory streaming paradigm we trained listeners until they could reliably recognize all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which by competing for dominance give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in human perception. 

  7. Stable individual characteristics in the perception of multiple embedded patterns in multistable auditory stimuli

    Directory of Open Access Journals (Sweden)

    SusanDenham

    2014-02-01

    The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the ‘ABA-’ auditory streaming paradigm we trained listeners until they could reliably recognise all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e. the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which by competing for dominance give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in

  8. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. PMID:25726291

  9. Vocal development and auditory perception in CBA/CaJ mice

    Science.gov (United States)

    Radziwon, Kelly E.

    Mice are useful laboratory subjects because of their small size, their modest cost, and the fact that researchers have created many different strains to study a variety of disorders. In particular, researchers have found nearly 100 naturally occurring mouse mutations with hearing impairments. For these reasons, mice have become an important model for studies of human deafness. Although much is known about the genetic makeup and physiology of the laboratory mouse, far less is known about mouse auditory behavior. To fully understand the effects of genetic mutations on hearing, it is necessary to determine the hearing abilities of these mice. Two experiments here examined various aspects of mouse auditory perception using CBA/CaJ mice, a commonly used mouse strain. The frequency difference limens experiment tested the mouse's ability to discriminate one tone from another based solely on the frequency of the tone. The mice had similar thresholds as wild mice and gerbils but needed a larger change in frequency than humans and cats. The second psychoacoustic experiment sought to determine which cue, frequency or duration, was more salient when the mice had to identify various tones. In this identification task, the mice overwhelmingly classified the tones based on frequency instead of duration, suggesting that mice are using frequency when differentiating one mouse vocalization from another. The other two experiments were more naturalistic and involved both auditory perception and mouse vocal production. Interest in mouse vocalizations is growing because of the potential for mice to become a model of human speech disorders. These experiments traced mouse vocal development from infant to adult, and they tested the mouse's preference for various vocalizations. This was the first known study to analyze the vocalizations of individual mice across development. Results showed large variation in calling rates among the three cages of adult mice but results were highly
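Frequency difference limens of the kind measured in this record are typically estimated with an adaptive staircase, in which the frequency difference Δf is made harder after correct responses and easier after errors, and the threshold is read off from the reversal points. Below is a minimal 2-down/1-up sketch (which converges on ~70.7% correct); the deterministic "listener" is a hypothetical stand-in for a real psychophysical observer, and the starting value and step factor are illustrative.

```python
# Sketch of a 2-down/1-up adaptive staircase for a frequency difference
# limen: two correct responses in a row halve the tracked difference,
# one error doubles it, and the threshold is the mean of the reversals.
def staircase_threshold(respond, start=64.0, factor=0.5, n_trials=30):
    delta, streak, last_dir, reversals = start, 0, None, []
    for _ in range(n_trials):
        if respond(delta):            # correct response
            streak += 1
            if streak == 2:           # two in a row -> make it harder
                streak = 0
                if last_dir == "up":
                    reversals.append(delta)
                delta *= factor
                last_dir = "down"
        else:                         # error -> make it easier
            streak = 0
            if last_dir == "down":
                reversals.append(delta)
            delta /= factor
            last_dir = "up"
    return sum(reversals) / len(reversals)

# Hypothetical listener: reliably correct whenever the difference >= 8 Hz.
listener = lambda df: df >= 8.0
threshold = staircase_threshold(listener)   # converges between 4 and 8 Hz
```

With a real animal the response function is probabilistic, so more trials and a geometric mean of reversals are commonly used; the tracking logic is the same.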

  10. Open your eyes and listen carefully. Auditory and audiovisual speech perception and the McGurk effect in aphasia

    NARCIS (Netherlands)

    Klitsch, Julia Ulrike

    2008-01-01

    This dissertation investigates speech perception in three different groups of native adult speakers of Dutch; an aphasic and two age-varying control groups. By means of two different experiments it is examined if the availability of visual articulatory information is beneficial to the auditory speec

  11. Auditory Sensitivity, Speech Perception, L1 Chinese, and L2 English Reading Abilities in Hong Kong Chinese Children

    Science.gov (United States)

    Zhang, Juan; McBride-Chang, Catherine

    2014-01-01

    A 4-stage developmental model, in which auditory sensitivity is fully mediated by speech perception at both the segmental and suprasegmental levels, which are further related to word reading through their associations with phonological awareness, rapid automatized naming, verbal short-term memory and morphological awareness, was tested with…

  12. Mapping a lateralisation gradient within the ventral stream for auditory speech perception

    Directory of Open Access Journals (Sweden)

    Karsten Specht

    2013-10-01

    Recent models of speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend towards the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging (fMRI) studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere, with a particular focus on the temporal lobe and the ventral stream. As hypothesised, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory-phonetic to lexico-semantic processing and along the posterior-anterior axis, thus forming a “lateralisation” gradient. This increasing leftward lateralisation was particularly evident for the left superior temporal sulcus (STS) and more anterior parts of the temporal lobe.

  13. (A)musicality in Williams syndrome: examining relationships among auditory perception, musical skill, and emotional responsiveness to music

    Science.gov (United States)

    Lense, Miriam D.; Shivers, Carolyn M.; Dykens, Elisabeth M.

    2013-01-01

    Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia. PMID:23966965

  14. The influence of acoustic reflections from diffusive architectural surfaces on spatial auditory perception

    Science.gov (United States)

    Robinson, Philip W.

    This thesis addresses the effect of reflections from diffusive architectural surfaces on the perception of echoes and on auditory spatial resolution. Diffusive architectural surfaces play an important role in performance venue design for architectural expression and proper sound distribution. Extensive research has been devoted to the prediction and measurement of the spatial dispersion. However, previous psychoacoustic research on perception of reflections and the precedence effect has focused on specular reflections. This study compares the echo threshold of specular reflections against those for reflections from realistic architectural surfaces, and against synthesized reflections that isolate individual qualities of reflections from diffusive surfaces, namely temporal dispersion and spectral coloration. In particular, the activation of the precedence effect, as indicated by the echo threshold, is measured. Perceptual tests are conducted with direct sound, and simulated or measured reflections with varying temporal dispersion. The threshold for reflections from diffusive architectural surfaces is found to be comparable to that of a specular reflection of similar energy rather than similar amplitude. This is surprising because the amplitude of the dispersed reflection is highly attenuated, and onset cues are reduced. This effect indicates that the auditory system is integrating reflection response energy dispersed over many milliseconds into a single stream. Studies on the effect of a single diffuse reflection are then extended to a full architectural enclosure with various surface properties. This research utilizes auralizations from measured and simulated performance venues to investigate spatial discrimination of multiple acoustic sources in rooms. It is found that discriminating the lateral arrangement of two sources is possible at narrower separation angles when reflections come from flat rather than diffusive surfaces. Additionally, subjective impressions are

  15. The Effect of Short-Term Auditory Training on Speech in Noise Perception and Cortical Auditory Evoked Potentials in Adults with Cochlear Implants.

    Science.gov (United States)

    Barlow, Nathan; Purdy, Suzanne C; Sharma, Mridula; Giles, Ellen; Narne, Vijay

    2016-02-01

    This study investigated whether a short intensive psychophysical auditory training program is associated with speech perception benefits and changes in cortical auditory evoked potentials (CAEPs) in adult cochlear implant (CI) users. Ten adult implant recipients trained approximately 7 hours on psychophysical tasks (Gap-in-Noise Detection, Frequency Discrimination, Spectral Rippled Noise [SRN], Iterated Rippled Noise, Temporal Modulation). Speech performance was assessed before and after training using Lexical Neighborhood Test (LNT) words in quiet and in eight-speaker babble. CAEPs evoked by a natural speech stimulus /baba/ with varying syllable stress were assessed pre- and post-training, in quiet and in noise. SRN psychophysical thresholds showed a significant improvement (78% on average) over the training period, but performance on other psychophysical tasks did not change. LNT scores in noise improved significantly post-training by 11% on average compared with three pretraining baseline measures. N1P2 amplitude changed post-training for /baba/ in quiet (p = 0.005, visit 3 pretraining versus visit 4 post-training). CAEP changes did not correlate with behavioral measures. CI recipients' clinical records indicated a plateau in speech perception performance prior to participation in the study. A short period of intensive psychophysical training produced small but significant gains in speech perception in noise and spectral discrimination ability. There remain questions about the most appropriate type of training and the duration or dosage of training that provides the most robust outcomes for adults with CIs. PMID:27587925

  16. The effect of auditory perception training on reading performance of the 8-9-year old female students with dyslexia: A preliminary study

    Directory of Open Access Journals (Sweden)

    Nafiseh Vatandoost

    2014-01-01

    Background and Aim: Dyslexia is the most common learning disability. One of the main factors contributing to this disability is impaired auditory perception, which causes many problems in education. We aimed to study the effect of auditory perception training on the reading performance of female students with dyslexia in the third grade of elementary school. Methods: Thirty-eight female students in the third grade of elementary schools of Khomeinishahr City, Iran, were selected by multistage cluster random sampling. Of them, 20 students who were diagnosed as dyslexic by a reading test and the Wechsler test were randomly divided into two equal groups, experimental and control. The experimental group received auditory perception training over ten 45-minute sessions; no intervention was given to the control group. All participants were re-assessed with the reading test after the intervention (pre- and post-test method). Data were analyzed by analysis of covariance. Results: The effect of auditory perception training on reading performance (81%) was significant (p<0.0001) for all subtests except the separate compound word test. Conclusion: The findings of our study confirm the hypothesis that auditory perception training improves students' functional reading; auditory perception training therefore seems to be necessary for students with dyslexia.

  17. Towards a Perceptually–grounded Theory of Microtonality: issues in sonority, scale construction and auditory perception and cognition

    OpenAIRE

    Bridges, Brian

    2012-01-01

    This thesis engages with the topic of microtonal music through a discussion of relevant music theories and compositional practice alongside the investigation of theoretical perspectives drawn from psychology. Its aim is to advance a theory of microtonal music which is informed by current models of auditory perception and music cognition. In doing so, it treats a range of microtonal approaches and philosophies ranging from duplex subdivision of tempered scales to the generation ...

  18. Development of working memory, speech perception and auditory temporal resolution in children with attention deficit hyperactivity disorder and language impairment

    OpenAIRE

    Norrelgen, Fritjof

    2002-01-01

    Speech perception (SP), verbal working memory (WM) and auditory temporal resolution (ATR) have been studied in children with attention deficit hyperactivity disorder (ADHD) and language impairment (LI), as well as in reference groups of typically developed children. A computerised method was developed, in which discrimination of same or different pairs of stimuli was tested. In a functional Magnetic Resonance Imaging (fMRI) study a similar test was used to explore the neural...

  19. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    Directory of Open Access Journals (Sweden)

    Georg Berding

    Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.

  20. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    Science.gov (United States)

    Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M; Lenarz, Thomas; Lim, Hubert H

    2015-01-01

    Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus. PMID:26046763

  1. Effect of 24 Hours of Sleep Deprivation on Auditory and Linguistic Perception: A Comparison among Young Controls, Sleep-Deprived Participants, Dyslexic Readers, and Aging Adults

    Science.gov (United States)

    Fostick, Leah; Babkoff, Harvey; Zukerman, Gil

    2014-01-01

    Purpose: To test the effects of 24 hr of sleep deprivation on auditory and linguistic perception and to assess the magnitude of this effect by comparing such performance with that of aging adults on speech perception and with that of dyslexic readers on phonological awareness. Method: Fifty-five sleep-deprived young adults were compared with 29…

  2. Musical experience, auditory perception and reading-related skills in children.

    Directory of Open Access Journals (Sweden)

    Karen Banai

    Full Text Available BACKGROUND: The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically, we ask whether the pattern of correlations between auditory and reading-related skills differs between children with different amounts of musical experience. METHODOLOGY/PRINCIPAL FINDINGS: Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading-related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with, and accounted for up to 13% of the variance in, reading-related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading-related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. CONCLUSIONS/SIGNIFICANCE: Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading-related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading-related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and
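    The "accounted for up to 13% of the variance" figure is the squared correlation coefficient; a quick check (the r value here is back-computed for illustration, not reported in the record):

```python
# "Variance accounted for" is the squared correlation coefficient (r^2).
# A threshold/reading correlation of r ~= 0.36 (illustrative value)
# accounts for about 13% of the variance, the figure quoted above.
r = 0.36
variance_accounted = r ** 2
print(round(variance_accounted * 100))  # → 13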

  3. Auditory, Visual, and Auditory-Visual Speech Perception by Individuals with Cochlear Implants versus Individuals with Hearing Aids

    Science.gov (United States)

    Most, Tova; Rothem, Hilla; Luntz, Michal

    2009-01-01

    The researchers evaluated the contribution of cochlear implants (CIs) to speech perception by a sample of prelingually deaf individuals implanted after age 8 years. This group was compared with a group with profound hearing impairment (HA-P), and with a group with severe hearing impairment (HA-S), both of which used hearing aids. Words and…

  4. Change in Speech Perception and Auditory Evoked Potentials over Time after Unilateral Cochlear Implantation in Postlingually Deaf Adults.

    Science.gov (United States)

    Purdy, Suzanne C; Kelly, Andrea S

    2016-02-01

    Speech perception varies widely across cochlear implant (CI) users and typically improves over time after implantation. There is also some evidence for improved auditory evoked potentials (shorter latencies, larger amplitudes) after implantation, but few longitudinal studies have examined the relationship between behavioral and evoked potential measures after implantation in postlingually deaf adults. The relationship between speech perception and auditory evoked potentials was investigated in newly implanted cochlear implant users from the day of implant activation to 9 months postimplantation, on five occasions, in 10 adults aged 27 to 57 years who had been bilaterally profoundly deaf for 1 to 30 years prior to receiving a unilateral CI24 cochlear implant. Changes over time in middle latency response (MLR), mismatch negativity, and obligatory cortical auditory evoked potentials and word and sentence speech perception scores were examined. Speech perception improved significantly over the 9-month period. MLRs varied and showed no consistent change over time. Three participants aged in their 50s had absent MLRs. The pattern of change in N1 amplitudes over the five visits varied across participants. P2 area increased significantly for 1,000- and 4,000-Hz tones but not for 250 Hz. The greatest change in P2 area occurred after 6 months of implant experience. Although there was a trend for mismatch negativity peak latency to reduce and width to increase after 3 months of implant experience, there was considerable variability and these changes were not significant. Only 60% of participants had a detectable mismatch initially; this increased to 100% at 9 months. The continued change in P2 area over the period evaluated, with a trend for greater change for right hemisphere recordings, is consistent with the pattern of incremental change in speech perception scores over time. MLR, N1, and mismatch negativity changes were inconsistent, and hence P2 may be a more robust measure

  5. Auditory and Visual Differences in Time Perception? An Investigation from a Developmental Perspective with Neuropsychological Tests

    Science.gov (United States)

    Zelanti, Pierre S.; Droit-Volet, Sylvie

    2012-01-01

    Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…

  6. Underlying structure of auditory-visual consonant perception by hearing-impaired children and the influences of syllabic compression.

    Science.gov (United States)

    Busby, P A; Tong, Y C; Clark, G M

    1988-06-01

    The identification of consonants in (a)-C-(a) nonsense syllables, using a fourteen-alternative forced-choice procedure, was examined in 4 profoundly hearing-impaired children under five conditions: audition alone using hearing aids in free-field (A), vision alone (V), auditory-visual using hearing aids in free-field (AV1), auditory-visual with linear amplification (AV2), and auditory-visual with syllabic compression (AV3). In the AV2 and AV3 conditions, acoustic signals were binaurally presented by magnetic or acoustic coupling to the subjects' hearing aids. The syllabic compressor had a compression ratio of 10:1, and attack and release times were 1.2 ms and 60 ms. The confusion matrices were subjected to two analysis methods: hierarchical clustering and information transmission analysis using articulatory features. The same general conclusions were drawn on the basis of results obtained from either analysis method. The results indicated better performance in the V condition than in the A condition. In the three AV conditions, the subjects predominantly combined the acoustic parameter of voicing with the visual signal. No consistent differences were recorded across the three AV conditions. Syllabic compression did not, therefore, appear to have a significant influence on AV perception for these children. A high degree of subject variability was recorded for the A and three AV conditions, but not for the V condition. PMID:3398489
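    The information transmission analysis applied to the confusion matrices above is, in essence, a mutual-information measure over stimulus-response counts (in the tradition of Miller and Nicely); a minimal sketch, with a hypothetical 2×2 voicing matrix rather than data from the study:

```python
import math

def transmitted_information(confusion):
    """Mutual information T(x;y) in bits computed from a confusion matrix
    (rows = presented stimuli, columns = responses)."""
    n = sum(sum(row) for row in confusion)
    row_sums = [sum(row) for row in confusion]
    col_sums = [sum(col) for col in zip(*confusion)]
    T = 0.0
    for i, row in enumerate(confusion):
        for j, count in enumerate(row):
            if count == 0:
                continue  # zero cells contribute nothing
            p_ij = count / n
            p_i = row_sums[i] / n
            p_j = col_sums[j] / n
            T += p_ij * math.log2(p_ij / (p_i * p_j))
    return T

# Hypothetical voicing feature: perfect transmission of a binary feature = 1 bit
perfect = [[10, 0], [0, 10]]
print(transmitted_information(perfect))  # → 1.0
```

Dividing T by the stimulus entropy gives the relative transmission percentage usually reported for each articulatory feature.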

  7. Musical Experience, Auditory Perception and Reading-Related Skills in Children

    OpenAIRE

    Karen Banai; Merav Ahissar

    2013-01-01

    BACKGROUND: The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differ between children with different amounts of musical experience. METHODOLOGY/PRINCIPAL FINDINGS: Third grade children with various degrees of musical experience were teste...

  8. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen Stekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  9. Operator Auditory Perception and Spectral Quantification of Umbilical Artery Doppler Ultrasound Signals

    Science.gov (United States)

    Thuring, Ann; Brännström, K. Jonas; Ewerlöf, Maria; Hernandez-Andrade, Edgar; Ley, David; Lingman, Göran; Liuba, Karina; Maršál, Karel; Jansson, Tomas

    2013-01-01

    Objective: An experienced sonographer can, by listening to the Doppler audio signals, perceive various timbres that distinguish different types of umbilical artery flow despite an unchanged pulsatility index (PI). Our aim was to develop an objective measure of the Doppler audio signals recorded from the fetoplacental circulation in a sheep model. Methods: Various degrees of pathological flow velocity waveforms in the umbilical artery, similar to those in complicated human pregnancies, were induced by microsphere embolization of the placental bed (embolization model, 7 lamb fetuses, 370 Doppler recordings) or by fetal hemodilution (anemia model, 4 lamb fetuses, 184 recordings). A subjective 11-step operator auditory scale (OAS) was related to conventional Doppler parameters, PI and time average mean velocity (TAM), and to sound frequency analysis of the Doppler signals (the sound frequency with the maximum energy content [MAXpeak] and the frequency band at maximum level minus 15 dB [MAXpeak-15 dB] over several heart cycles). Results: We found a negative correlation between the OAS and PI: median Rho −0.73 (range −0.35 to −0.94) and −0.68 (range −0.57 to −0.78) in the two lamb models, respectively. There was a positive correlation between OAS and TAM in both models: median Rho 0.80 (range 0.58–0.95) and 0.90 (range 0.78–0.95), respectively. A strong correlation was found between TAM and the results of sound spectrum analysis; in the embolization model the median r was 0.91 (range 0.88–0.97) for MAXpeak and 0.91 (range 0.82–0.98) for MAXpeak-15 dB. In the anemia model, the corresponding values were 0.92 (range 0.78–0.96) and 0.96 (range 0.89–0.98), respectively. Conclusion: Audio-spectrum analysis reflects the subjective perception of Doppler sound signals in the umbilical artery and has a strong correlation with TAM velocity. This information might be of importance for the clinical management of complicated pregnancies as an addition to conventional Doppler parameters.
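    The two spectral measures (MAXpeak and the band within 15 dB of the peak level) can be sketched as below; the parabolic test spectrum is synthetic, and picking the outermost bins above threshold is a simplification of whatever windowing the authors actually used:

```python
import numpy as np

def doppler_spectrum_metrics(freqs, power_db):
    """From a Doppler audio power spectrum in dB, return the frequency with
    maximum energy (MAXpeak) and the frequency band whose level stays within
    15 dB of that peak (MAXpeak-15 dB)."""
    i_peak = int(np.argmax(power_db))
    max_peak = freqs[i_peak]
    above = np.where(power_db >= power_db[i_peak] - 15.0)[0]
    band = (freqs[above[0]], freqs[above[-1]])  # crude: outermost bins above threshold
    return max_peak, band

# Synthetic spectrum: 0..4 kHz in 100-Hz bins, parabolic peak at 1 kHz
freqs = np.linspace(0, 4000, 41)
power_db = -0.00002 * (freqs - 1000.0) ** 2
peak, band = doppler_spectrum_metrics(freqs, power_db)  # peak 1000 Hz, band 200–1800 Hz
```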

  10. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    A better understanding of how the human auditory system represents and analyzes sounds and how hearing impairment affects such processing is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication as well as for applications in hearing-instrument...... was shown that an accurate simulation of cochlear input-output functions, in addition to the audiogram, played a major role in accounting both for sensitivity and supra-threshold processing. Finally, the model was used as a front-end in a framework developed to predict consonant discrimination in a...

  11. Auditory and visual distance perception: The proximity-image effect revisited

    Science.gov (United States)

    Zahorik, Pavel

    2003-04-01

    The proximity-image effect [M. B. Gardner, J. Acoust. Soc. Am. 43, 163 (1968)] describes a phenomenon in which the apparent distance of an auditory target is determined by the distance of the nearest plausible visual target rather than by acoustic distance cues. Here this effect is examined using a single visual target (an un-energized loudspeaker) and invisible virtual sound sources. These sources were synthesized from binaural impulse-response measurements at distances ranging from 1 to 5 m (0.25-m steps) in the semi-reverberant room (7.7 m×4.2 m×2.7 m) in which the experiment was conducted. Listeners (n=11) were asked whether or not the auditory target appeared to be at the same distance as the visual target. Within a block of trials, the visual target was placed at a fixed distance of 1.5, 3, or 4.5 m, and the auditory target varied randomly from trial-to-trial over the sample of measurement distances. The resulting psychometric functions are consistent with the proximity-image effect, and can be predicted using a simple model of sensory integration and decision in which perceived auditory space is both compressed in distance and has lower resolution than perceived visual space. [Work supported by NIH-NEI.]
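    The sensory-integration-and-decision idea can be caricatured in a few lines: auditory distance is compressed and noisier than visual distance, and the listener says "same" when the perceived difference is likely below a criterion. The compression exponent, noise levels, and criterion below are arbitrary illustrations, not the paper's fitted parameters:

```python
import math

def p_same(d_aud, d_vis, compress=0.5, sigma_aud=1.0, sigma_vis=0.2, criterion=0.5):
    """Toy proximity-image decision model: P('same distance') for an auditory
    target at d_aud and a visual target at d_vis (meters).
    Auditory distance is power-law compressed and carries more sensory noise."""
    mu = d_aud ** compress - d_vis ** compress   # mean perceived difference
    sigma = math.hypot(sigma_aud, sigma_vis)     # combined sensory noise
    # P(|difference| < criterion) for a Gaussian difference signal
    cdf = lambda x: 0.5 * (1 + math.erf(x / (sigma * math.sqrt(2))))
    return cdf(criterion - mu) - cdf(-criterion - mu)
```

Sweeping d_aud for a fixed d_vis traces out a psychometric function that peaks near the visual target, qualitatively like the data described above.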

  12. How musical expertise shapes speech perception: Evidence from auditory classification images

    OpenAIRE

    Léo Varnet; Tianyun Wang; Chloe Peter; Fanny Meunier; Michel Hoen

    2015-01-01

    It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique to investigate the specific listening strategy involved in speech comprehension has made it difficult to determine how musicians’ higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions us...

  13. How may the basal ganglia contribute to auditory categorization and speech perception?

    OpenAIRE

    Lim, Sung-joo; Fiez, Julie A.; Holt, Lori L.

    2014-01-01

    Listeners must accomplish two complementary perceptual feats in extracting a message from speech. They must discriminate linguistically-relevant acoustic variability and generalize across irrelevant variability. Said another way, they must categorize speech. Since the mapping of acoustic variability is language-specific, these categories must be learned from experience. Thus, understanding how, in general, the auditory system acquires and represents categories can inform us about the toolbox ...

  14. Auditory Processing in Specific Language Impairment (SLI): Relations with the Perception of Lexical and Phrasal Stress

    Science.gov (United States)

    Richards, Susan; Goswami, Usha

    2015-01-01

    Purpose: We investigated whether impaired acoustic processing is a factor in developmental language disorders. The amplitude envelope of the speech signal is known to be important in language processing. We examined whether impaired perception of amplitude envelope rise time is related to impaired perception of lexical and phrasal stress in…

  15. Auditory Time-Interval Perception as Causal Inference on Sound Sources

    OpenAIRE

    Sawai, Ken-ichi; Sato, Yoshiyuki; Aihara, Kazuyuki

    2012-01-01

    Perception of a temporal pattern in a sub-second time scale is fundamental to conversation, music perception, and other kinds of sound communication. However, its mechanism is not fully understood. A simple example is hearing three successive sounds with short time intervals. The following misperception of the latter interval is known: underestimation of the latter interval when the former is a little shorter or much longer than the latter, and overestimation of the latter when the former is ...

  16. Auditory time-interval perception as causal inference on sound sources

    OpenAIRE

    Ken-ichi Sawai; Yoshiyuki Sato; Kazuyuki Aihara

    2012-01-01

    Perception of a temporal pattern in a sub-second time scale is fundamental to conversation, music perception, and other kinds of sound communication. However, its mechanism is not fully understood. A simple example is hearing three successive sounds with short time intervals. The following misperception of the latter interval is known: underestimation of the latter interval when the former is a little shorter or much longer than the latter, and overestimation of the latter when the former is ...

  17. The use of auditory and visual context in speech perception by listeners with normal hearing and listeners with cochlear implants

    Directory of Open Access Journals (Sweden)

    Matthew Winn

    2013-11-01

    Full Text Available There is a wide range of acoustic and visual variability across different talkers and different speaking contexts. Listeners with normal hearing accommodate that variability in ways that facilitate efficient perception, but it is not known whether listeners with cochlear implants can do the same. In this study, listeners with normal hearing (NH) and listeners with cochlear implants (CIs) were tested for accommodation to auditory and visual phonetic contexts created by gender-driven speech differences as well as vowel coarticulation and lip rounding in both consonants and vowels. Accommodation was measured as the shifting of perceptual boundaries between /s/ and /ʃ/ sounds in various contexts, as modeled by mixed-effects logistic regression. Owing to the spectral contrasts thought to underlie these context effects, CI listeners were predicted to perform poorly, but showed considerable success. Listeners with cochlear implants not only showed sensitivity to auditory cues to gender, they were also able to use visual cues to gender (i.e., faces) as a supplement or proxy for information in the acoustic domain, in a pattern that was not observed for listeners with normal hearing. Spectrally-degraded stimuli heard by listeners with normal hearing generally did not elicit strong context effects, underscoring the limitations of noise vocoders and/or the importance of experience with electric hearing. Visual cues for consonant lip rounding and vowel lip rounding were perceived in a manner consistent with coarticulation and were generally used more heavily by listeners with CIs. Results suggest that listeners with cochlear implants are able to accommodate various sources of acoustic variability either by attending to appropriate acoustic cues or by inferring them via the visual signal.
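    The boundary-shift logic behind such a logistic-regression analysis can be sketched with a plain logistic model over an /s/-/ʃ/ continuum; all coefficients below are made up for illustration and are not the study's fitted values:

```python
import math

def p_sh(step, context, b0=-6.0, b1=1.0, b_ctx=1.5):
    """Logistic model of /s/-/ʃ/ categorization along a continuum of steps.
    context = 1 adds a constant logit shift (e.g., a rounded-lip visual context);
    all coefficient values are hypothetical."""
    logit = b0 + b1 * step + b_ctx * context
    return 1.0 / (1.0 + math.exp(-logit))

def boundary(context, b0=-6.0, b1=1.0, b_ctx=1.5):
    """Continuum step where P(/ʃ/) = 0.5, i.e. where the logit crosses zero."""
    return -(b0 + b_ctx * context) / b1

# The perceptual boundary shifts by b_ctx / b1 continuum steps between contexts
shift = boundary(0) - boundary(1)
```

Comparing the context coefficient (and hence the boundary shift) across NH and CI groups is what quantifies "accommodation" in this design.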

  18. Auditory imagery: empirical findings.

    Science.gov (United States)

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear). PMID:20192565

  19. Compensation for Coarticulation: Disentangling Auditory and Gestural Theories of Perception of Coarticulatory Effects in Speech

    Science.gov (United States)

    Viswanathan, Navin; Magnuson, James S.; Fowler, Carol A.

    2010-01-01

    According to one approach to speech perception, listeners perceive speech by applying general pattern matching mechanisms to the acoustic signal (e.g., Diehl, Lotto, & Holt, 2004). An alternative is that listeners perceive the phonetic gestures that structured the acoustic signal (e.g., Fowler, 1986). The two accounts have offered different…

  20. Hearing Aid-Induced Plasticity in the Auditory System of Older Adults: Evidence from Speech Perception

    Science.gov (United States)

    Lavie, Limor; Banai, Karen; Karni, Avi; Attias, Joseph

    2015-01-01

    Purpose: We tested whether using hearing aids can improve unaided performance in speech perception tasks in older adults with hearing impairment. Method: Unaided performance was evaluated in dichotic listening and speech-­in-­noise tests in 47 older adults with hearing impairment; 36 participants in 3 study groups were tested before hearing aid…

  1. Perception of parents about the auditory attention skills of their child with cleft lip and palate: retrospective study

    Directory of Open Access Journals (Sweden)

    Mondelli, Maria Fernanda Capoani Garcia

    2012-01-01

    Full Text Available Introduction: Cognitive and neurophysiological mechanisms are necessary to process and decode acoustic stimuli. Auditory stimulation is influenced by higher-level cognitive factors such as memory, attention, and learning. The sensory deprivation caused by conductive hearing loss, frequent in the population with cleft lip and palate, can affect many cognitive functions, among them attention, besides harming school, linguistic, and interpersonal performance. Objective: To verify the perception of parents of children with cleft lip and palate regarding their children's auditory attention. Method: Retrospective study of infants with any type of cleft lip and palate, without any associated genetic syndrome, whose parents answered a questionnaire about auditory attention skills. Results: 44 children were male and 26 female; 35.71% of the answers were affirmative for hearing loss and 71.43% for otologic infections. Conclusion: Most of the interviewed parents pointed to at least one of the attention-related behaviors contained in the questionnaire, indicating that the presence of cleft lip and palate can be related to difficulties in auditory attention.

  2. Auditory Processing Disorders

    Science.gov (United States)

    Auditory Processing Disorders Auditory processing disorders (APDs) are referred to by many names: central auditory processing disorders, auditory perceptual disorders, and central auditory disorders. APDs ...

  3. [Central auditory prosthesis].

    Science.gov (United States)

    Lenarz, T; Lim, H; Joseph, G; Reuter, G; Lenarz, M

    2009-06-01

    Deaf patients with severe sensory hearing loss can benefit from a cochlear implant (CI), which stimulates the auditory nerve fibers. However, patients who do not have an intact auditory nerve cannot benefit from a CI. The majority of these patients are neurofibromatosis type 2 (NF2) patients who developed neural deafness due to the growth or surgical removal of bilateral acoustic neuromas. The only current solution is the auditory brainstem implant (ABI), which stimulates the surface of the cochlear nucleus in the brainstem. Although the ABI provides improvement in environmental awareness and lip-reading capabilities, only a few NF2 patients have achieved some limited open-set speech perception. In the search for alternative procedures, our research group, in collaboration with Cochlear Ltd. (Australia), developed a human prototype auditory midbrain implant (AMI), which is designed to electrically stimulate the inferior colliculus (IC). The IC has potential as a new target for an auditory prosthesis, as it provides access to neural projections necessary for speech perception as well as a systematic map of spectral information. In this paper, the present status of research and development in the field of central auditory prostheses is presented with respect to technology, surgical technique, and hearing results, as well as the background concepts of the ABI and AMI. PMID:19517084

  4. Communication, Listening, Cognitive and Speech Perception Skills in Children with Auditory Processing Disorder (APD) or Specific Language Impairment (SLI)

    Science.gov (United States)

    Ferguson, Melanie A.; Hall, Rebecca L.; Riley, Alison; Moore, David R.

    2011-01-01

    Purpose: Parental reports of communication, listening, and behavior in children receiving a clinical diagnosis of specific language impairment (SLI) or auditory processing disorder (APD) were compared with direct tests of intelligence, memory, language, phonology, literacy, and speech intelligibility. The primary aim was to identify whether there…

  5. Physiological activation of the human cerebral cortex during auditory perception and speech revealed by regional increases in cerebral blood flow

    DEFF Research Database (Denmark)

    Lassen, N A; Friberg, L

    1988-01-01

    methods that are based on the use of radioactive tracers can be applied in the same manner for mapping cortex activity. In particular, single photon emission computed tomography (SPECT) is readily applicable to clinical audiology, so that the cortical components of auditory processing can be more closely investigated....

  6. Expression of amyloid-β in mouse cochlear hair cells causes an early-onset auditory defect in high-frequency sound perception.

    Science.gov (United States)

    Omata, Yasuhiro; Tharasegaran, Suganya; Lim, Young-Mi; Yamasaki, Yasutoyo; Ishigaki, Yasuhito; Tatsuno, Takanori; Maruyama, Mitsuo; Tsuda, Leo

    2016-03-01

    Increasing evidence indicates that defects in the sensory system are highly correlated with age-related neurodegenerative diseases, including Alzheimer's disease (AD). This raises the possibility that sensory cells possess some commonalities with neurons and may provide a tool for studying AD. The sensory system, especially the auditory system, has the advantage that depression in function over time can easily be measured with electrophysiological methods. To establish a new mouse AD model that takes advantage of this benefit, we produced transgenic mice expressing amyloid-β (Aβ), a causative element for AD, in their auditory hair cells. Electrophysiological assessment indicated that these mice had hearing impairment, specifically in high-frequency sound perception (>32 kHz), at 4 months after birth. Furthermore, loss of hair cells in the basal region of the cochlea, which is known to be associated with age-related hearing loss, appeared to be involved in this hearing defect. Interestingly, overexpression of human microtubule-associated protein tau, another factor in AD development, synergistically enhanced the Aβ-induced hearing defects. These results suggest that our new system reflects some, if not all, aspects of AD progression and, therefore, could complement the traditional AD mouse model to monitor Aβ-induced neuronal dysfunction quantitatively over time. PMID:26959388

  7. Comparative study of three techniques of palatoplasty in patients with cleft lip and palate via instrumental and auditory-perceptual evaluations

    Directory of Open Access Journals (Sweden)

    Paniagua, Lauren Medeiros

    2010-03-01

    Full Text Available Introduction: Palatoplasty is a surgical procedure that aims at reconstruction of the soft and/or hard palate. Different techniques are currently available that seek greater lengthening of the soft palate toward the nasopharyngeal wall, contributing to proper function of the velopharyngeal sphincter; failure of its closure causes speech dysfunction. Objective: To compare auditory-perceptual evaluations and instrumental findings in patients with cleft lip and palate operated on with three distinct palatoplasty techniques. Method: Prospective cross-sectional study of a group of patients with complete unilateral cleft lip and palate. All took part in a randomized clinical trial of distinct palatoplasty techniques performed by a single surgeon over approximately 8 years. At the time of surgery, the patients were divided into three groups of 10 participants each. The present study evaluated 10 patients operated with the Furlow technique, 7 with the Veau-Wardill-Kilner+Braithwaite technique, and 9 with the Veau-Wardill-Kilner+Braithwaite+Zetaplasty technique, for a total sample of 26 individuals. All patients underwent auditory-perceptual evaluation through speech recordings, and instrumental evaluation through a video endoscopy exam. Results: Findings were satisfactory for all three techniques; that is, most individuals did not present hypernasality, compensatory articulatory disturbance, or audible nasal air emission. In addition, in the instrumental evaluation, most individuals in all three palatoplasty groups presented appropriate velopharyngeal function. Conclusion: No statistically significant difference was found between the palatoplasty techniques in either evaluation.

  8. Auditory Display

    DEFF Research Database (Denmark)

    volume. The conference's topics include auditory exploration of data via sonification and audification; real time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting

  9. Cranial pneumatization and auditory perceptions of the oviraptorid dinosaur Conchoraptor gracilis (Theropoda, Maniraptora) from the Late Cretaceous of Mongolia

    Czech Academy of Sciences Publication Activity Database

    Kundrát, M.; Janáček, Jiří

    2007-01-01

    Roč. 94, č. 9 (2007), s. 769-778. ISSN 0028-1042 Institutional research plan: CEZ:AV0Z50110509 Keywords : Conchoraptor * neurocranium * acoustic perception Subject RIV: EG - Zoology Impact factor: 1.955, year: 2007

  10. Dysfunctions of visual and auditory Gestalt perception (amusia) after stroke : Behavioral correlates and functional magnetic resonance imaging

    OpenAIRE

    Rosemann, Stephanie Heike

    2016-01-01

    Music is a special and unique part of human nature. Not only actively playing (making music in a group or alone) but also passively listening to music involves such a richness of processes that music is an ideal tool to investigate how the human brain works. Acquired amusia denotes the impaired perception of melodies and rhythms, and the associated inability to enjoy music, which can occur after a stroke. Many amusia patients also show deficits in visual perception, language, memory, and attention. He...

  11. Functional asymmetry and effective connectivity of the auditory system during speech perception is modulated by the place of articulation of the consonant- A 7T fMRI study

    Directory of Open Access Journals (Sweden)

    Karsten Specht

    2014-06-01

    Full Text Available To differentiate between stop consonants, the auditory system has to detect subtle place of articulation (PoA) and voice onset time (VOT) differences between them. How this differential processing is represented at the cortical level remains unclear. The present functional magnetic resonance imaging (fMRI) study takes advantage of the superior spatial resolution and high sensitivity of ultra-high-field 7T MRI. Subjects listened attentively to consonant-vowel syllables with an alveolar or bilabial stop consonant and either a short or long voice onset time. The results showed an overall bilateral activation pattern in the posterior temporal lobe during the processing of the consonant-vowel syllables. This was, however, modulated most strongly by place of articulation, such that syllables with an alveolar stop consonant showed more left-lateralized activation. In addition, analysis of the underlying functional and effective connectivity revealed an inhibitory effect of the left planum temporale on the right auditory cortex during the processing of alveolar consonant-vowel syllables. The connectivity results also indicated a directed information flow from the right to the left auditory cortex, and on to the left planum temporale, for all syllables. These results indicate that auditory speech perception relies on an interplay between the left and right auditory cortex, with the left planum temporale as modulator, and that the degree of functional asymmetry is determined by the acoustic properties of the consonant-vowel syllables.

  12. Perception and production of English vowels by Chilean learners of English: effect of auditory and visual modalities on phonetic training

    OpenAIRE

    Pereira Reyes, Y. I.

    2014-01-01

    The aim of this thesis was to examine the perception of English vowels by L2 learners of English with Spanish as L1 (Chilean-Spanish), and more specifically the degree to which they are able to take advantage of visual cues to vowel distinctions. Two main studies were conducted for this thesis. In study 1, data was collected from L2 beginners, L2 advanced learners and native speakers of Southern British English (ENS). Participants were tested on their perception of 11 English vowels in audio ...

  13. ERP correlates of auditory goal-directed behavior of younger and older adults in a dynamic speech perception task.

    Science.gov (United States)

    Getzmann, Stephan; Falkenstein, Michael; Wascher, Edmund

    2015-02-01

    The ability to understand speech under adverse listening conditions deteriorates with age. In addition to genuine hearing deficits, age-related declines in attentional and inhibitory control are assumed to contribute to these difficulties. Here, the impact of task-irrelevant distractors on speech perception was studied in 28 younger and 24 older participants in a simulated "cocktail party" scenario. In a two-alternative forced-choice word discrimination task, the participants responded to a rapid succession of short speech stimuli ("on" and "off") presented at a frequent standard location or a rare deviant location, in silence or with a concurrent distractor speaker. Behavioral responses and event-related potentials (mismatch negativity MMN, P3a, and reorienting negativity RON) were analyzed to study the interplay of distraction, orientation, and refocusing in the presence of changes in target location. While shifts in target location decreased the performance of both age groups, this effect was more pronounced in the older group. Especially in the distractor condition, the electrophysiological measures indicated delayed attention capture and delayed refocusing of attention toward the task-relevant stimulus feature in the older group relative to the younger group. In sum, the results suggest that a delay in the attention-switching mechanism contributes to age-related difficulties in speech perception in dynamic listening situations with multiple speakers. PMID:25447300

  14. Electrical stimulation of the auditory nerve: the coding of frequency, the perception of pitch and the development of cochlear implant speech processing strategies for profoundly deaf people.

    Science.gov (United States)

    Clark, G M

    1996-09-01

    1. The development of speech processing strategies for multiple-channel cochlear implants has depended on encoding sound frequencies and intensities as temporal and spatial patterns of electrical stimulation of the auditory nerve fibres, so that the speech information of most importance for intelligibility could be transmitted. 2. Initial physiological studies showed that rate encoding of electrical stimulation above 200 pulses/s could not reproduce the normal response patterns of auditory neurons to acoustic stimulation in the speech frequency range above 200 Hz, and suggested that place coding was appropriate for the higher frequencies. 3. Rate difference limens in the experimental animal were similar to those for sound only up to 200 Hz. 4. Rate difference limens in implant patients were similar to those obtained in the experimental animal. 5. Satisfactory rate discrimination could be made for durations of 50 and 100 ms, but not 25 ms. This made rate suitable for encoding longer-duration suprasegmental speech information, but not segmental information such as consonants. The rate of stimulation could also be perceived as pitch, discriminated at different electrode sites along the cochlea, and discriminated for stimuli across electrodes. 6. Place pitch could be scaled according to the site of stimulation in the cochlea so that a frequency scale was preserved; it also had a different quality from rate pitch and was described as tonality. Place pitch could also be discriminated at the shorter durations (25 ms) required for identifying consonants. 7. The inaugural speech processing strategy encoded the second formant frequencies (concentrations of frequency energy in the mid-frequency range of most importance for speech intelligibility) as place of stimulation, the voicing frequency as rate of stimulation, and the intensity as current level. Our further speech processing strategies have extracted additional frequency information and coded this as place of stimulation.
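
    The inaugural coding scheme described in point 7 can be sketched as a simple per-frame mapping. This is an illustrative reconstruction only, not the actual implant processing: the 22-electrode count, the assumed F2 range, and the logarithmic place mapping are all assumptions introduced here.

```python
import math

def encode_frame(f2_hz, f0_hz, intensity_db):
    """Sketch of the inaugural strategy: F2 -> electrode place,
    voicing frequency (F0) -> stimulation rate, intensity -> current level.
    All numeric ranges are illustrative assumptions."""
    n_electrodes = 22                      # Nucleus-style array (assumption)
    lo, hi = 800.0, 4000.0                 # assumed F2 range in Hz
    x = min(max(f2_hz, lo), hi)            # clamp F2 into the mapped range
    # Logarithmic frequency-to-place map, mirroring cochlear tonotopy.
    place = round((n_electrodes - 1) * math.log(x / lo) / math.log(hi / lo))
    rate = f0_hz                           # pulses per second follow voicing
    current = intensity_db                 # current level tracks intensity
    return place, rate, current
```

    For example, a low F2 maps to the first electrode and the top of the assumed range maps to the last, while rate and current pass the voicing frequency and intensity through unchanged.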

  15. Environment for Auditory Research Facility (EAR)

    Data.gov (United States)

    Federal Laboratory Consortium — EAR is an auditory perception and communication research center enabling state-of-the-art simulation of various indoor and outdoor acoustic environments. The heart...

  16. Auditory Neuropathy

    Science.gov (United States)

    ... field differ in their opinions about the potential benefits of hearing aids, cochlear implants, and other technologies for people with auditory neuropathy. Some professionals report that hearing aids and personal listening devices such as frequency modulation (FM) systems are ...

  17. Fingers phrase music differently: trial-to-trial variability in piano scale playing and auditory perception reveal motor chunking

    Directory of Open Access Journals (Sweden)

    Floris Tijmen Van Vugt

    2012-11-01

    Full Text Available We investigated how musical phrasing and motor sequencing interact to yield timing patterns in conservatory students' playing of piano scales. We propose a novel analysis method that compares the measured note onsets to an objectively regular scale fitted to the data. Subsequently, we segment the timing variability into (i) systematic deviations from objective evenness, which are perhaps residuals of expressive timing or of perceptual biases, and (ii) non-systematic deviations that can be interpreted as motor execution errors, perhaps due to noise in the nervous system. The former, systematic deviations, reveal that the two-octave scales are played as a single musical phrase. The latter, trial-to-trial variabilities, reveal that pianists' timing was less consistent at the boundaries between the octaves, providing evidence that the octave is represented as a single motor sequence. These effects cannot be explained by low-level properties of the motor task such as the thumb passage, and also did not show up in simulated scales with temporal jitter. Intriguingly, this instability in motor production around the octave boundary is mirrored by an impairment in the detection of timing deviations at those positions, suggesting that chunks overlap between perception and action. We conclude that the octave boundary instability in the scale playing motor program provides behavioural evidence that our brain chunks musical sequences into octave units that do not coincide with musical phrases. Our results indicate that trial-to-trial variability is a novel and meaningful indicator of this chunking. The procedure can readily be extended to a variety of tasks to help understand how movements are divided into units and what processing occurs at their boundaries.

  18. Fingers Phrase Music Differently: Trial-to-Trial Variability in Piano Scale Playing and Auditory Perception Reveal Motor Chunking.

    Science.gov (United States)

    van Vugt, Floris Tijmen; Jabusch, Hans-Christian; Altenmüller, Eckart

    2012-01-01

    We investigated how musical phrasing and motor sequencing interact to yield timing patterns in conservatory students' playing of piano scales. We propose a novel analysis method that compares the measured note onsets to an objectively regular scale fitted to the data. Subsequently, we segment the timing variability into (i) systematic deviations from objective evenness, which are perhaps residuals of expressive timing or of perceptual biases, and (ii) non-systematic deviations that can be interpreted as motor execution errors, perhaps due to noise in the nervous system. The former, systematic deviations, reveal that the two-octave scales are played as a single musical phrase. The latter, trial-to-trial variabilities, reveal that pianists' timing was less consistent at the boundaries between the octaves, providing evidence that the octave is represented as a single motor sequence. These effects cannot be explained by low-level properties of the motor task such as the thumb passage and also did not show up in simulated scales with temporal jitter. Intriguingly, this instability in motor production around the octave boundary is mirrored by an impairment in the detection of timing deviations at those positions, suggesting that chunks overlap between perception and action. We conclude that the octave boundary instability in the scale playing motor program provides behavioral evidence that our brain chunks musical sequences into octave units that do not coincide with musical phrases. Our results indicate that trial-to-trial variability is a novel and meaningful indicator of this chunking. The procedure can readily be extended to a variety of tasks to help understand how movements are divided into units and what processing occurs at their boundaries. PMID:23181040
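
    The fitting step described in this record (comparing measured note onsets to an objectively regular scale fitted to the data) can be sketched as an ordinary least-squares fit of onset time against note index, with the residuals then carrying the timing deviations of interest. The function below illustrates the idea; it is not the authors' analysis code.

```python
def fit_regular_scale(onsets):
    """Least-squares fit onsets[k] ~ a + b*k for note index k.
    Returns the fitted start time a, the fitted inter-onset interval b,
    and the per-note residuals (deviations from objective evenness)."""
    n = len(onsets)
    ks = range(n)
    kbar = sum(ks) / n
    tbar = sum(onsets) / n
    b = sum((k - kbar) * (t - tbar) for k, t in zip(ks, onsets)) / \
        sum((k - kbar) ** 2 for k in ks)
    a = tbar - b * kbar
    residuals = [t - (a + b * k) for k, t in zip(ks, onsets)]
    return a, b, residuals
```

    On perfectly even onsets the residuals vanish; averaging residuals across trials would isolate the systematic deviations, while their trial-to-trial spread corresponds to the non-systematic variability the study analyzes.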

  19. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David Pérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already show adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of stimuli. However, it is at higher levels in the auditory hierarchy that more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis, and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  20. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26541581

  1. Perspectives on the design of musical auditory interfaces

    OpenAIRE

    Leplatre, G.; Brewster, S.A.

    1998-01-01

    This paper addresses the issue of music as a communication medium in auditory human-computer interfaces. So far, psychoacoustics has had a great influence on the development of auditory interfaces, directly and through music cognition. We suggest that a better understanding of the processes involved in the perception of actual musical excerpts should allow musical auditory interface designers to exploit the communicative potential of music. In this respect, we argue that the real advantage of...

  2. Audiovisual Speech Perception in Infancy: The Influence of Vowel Identity and Infants' Productive Abilities on Sensitivity to (Mis)Matches between Auditory and Visual Speech Cues

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-01-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…

  3. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    Science.gov (United States)

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation. PMID:21413843

  4. Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina; Parbery-Clark, Alexandra; Ashley, Richard

    2010-03-01

    A growing body of research suggests that cognitive functions, such as attention and memory, drive perception by tuning sensory mechanisms to relevant acoustic features. Long-term musical experience also modulates lower-level auditory function, although the mechanisms by which this occurs remain uncertain. In order to tease apart the mechanisms that drive perceptual enhancements in musicians, we posed the question: do well-developed cognitive abilities fine-tune auditory perception in a top-down fashion? We administered a standardized battery of perceptual and cognitive tests to adult musicians and non-musicians, including tasks either more or less susceptible to cognitive control (e.g., backward versus simultaneous masking) and more or less dependent on auditory or visual processing (e.g., auditory versus visual attention). Outcomes indicate lower perceptual thresholds in musicians specifically for auditory tasks that relate with cognitive abilities, such as backward masking and auditory attention. These enhancements were observed in the absence of group differences for the simultaneous masking and visual attention tasks. Our results suggest that long-term musical practice strengthens cognitive functions and that these functions benefit auditory skills. Musical training bolsters higher-level mechanisms that, when impaired, relate to language and literacy deficits. Thus, musical training may serve to lessen the impact of these deficits by strengthening the corticofugal system for hearing. PMID:20018234

  5. Auditory motion capturing ambiguous visual motion

    Directory of Open Access Journals (Sweden)

    Arjen Alink

    2012-01-01

    Full Text Available In this study, it is demonstrated that moving sounds affect the direction in which one sees visual stimuli move. During the main experiment, sounds were presented consecutively at four speaker locations, inducing left- or rightward auditory apparent motion. On the path of the auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived ambiguous visual apparent motion stimuli (equally likely to be perceived as moving left- or rightward) more often as moving in the same direction as the auditory apparent motion than in the opposite direction. In the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when the visual motion direction is insufficiently determinate, without affecting eye movements.

  6. The role of vowel perceptual cues in compensatory responses to perturbations of speech auditory feedback

    OpenAIRE

    Reilly, Kevin J.; Dougherty, Kathleen E.

    2013-01-01

    The perturbation of acoustic features in a speaker's auditory feedback elicits rapid compensatory responses that demonstrate the importance of auditory feedback for control of speech output. The current study investigated whether responses to a perturbation of speech auditory feedback vary depending on the importance of the perturbed feature to perception of the vowel being produced. Auditory feedback of speakers' first formant frequency (F1) was shifted upward by 130 mels in randomly selecte...

  7. Dissociation of Detection and Discrimination of Pure Tones following Bilateral Lesions of Auditory Cortex

    OpenAIRE

    Andrew R Dykstra; Christine K Koh; Braida, Louis D.; Tramo, Mark Jude

    2012-01-01

    It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally-high perceptual...

  8. Effects of multitasking on operator performance using computational and auditory tasks.

    Science.gov (United States)

    Fasanya, Bankole K

    2016-09-01

    This study investigated the effects of multiple cognitive tasks on human performance. Twenty-four students at North Carolina A&T State University participated in the study. The primary task was auditory signal change perception and the secondary task was a computational task. Results showed that participants' performance on a single task was statistically significantly different from their performance on combined tasks: (a) algebra problems (algebra problems primary and auditory perception secondary); (b) auditory perception tasks (auditory perception primary and algebra problems secondary); and (c) mean false-alarm score in auditory perception (auditory detection primary and algebra problems secondary). Using signal detection theory (SDT), participants' sensitivity was calculated as -0.54 for the combined tasks with algebra problems as the primary task and -0.53 with auditory perception as the primary task. For the auditory perception task alone, sensitivity was 2.51. Performance was 83% on the single task compared with 17% on the combined tasks. PMID:26886505
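
    The sensitivity values reported in this record follow signal detection theory, where d' is the difference between the z-transformed hit rate and false-alarm rate; a negative d' means more false alarms than hits. A minimal sketch (the rates in the usage lines are illustrative, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(H) - z(FA) from signal detection theory.
    Rates must lie strictly between 0 and 1."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative rates only: a high-sensitivity single task vs. a dual task
# in which false alarms outnumber hits, yielding a negative d'.
single_task = d_prime(0.95, 0.10)
dual_task = d_prime(0.40, 0.60)
```

    With equal hit and false-alarm rates d' is zero, so the sign alone distinguishes above-chance from below-chance detection.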

  9. Effects of an Auditory Lateralization Training in Children Suspected to Central Auditory Processing Disorder

    Science.gov (United States)

    Lotfi, Yones; Moosavi, Abdollah; Bakhshi, Enayatollah; Sadjedi, Hamed

    2016-01-01

    Background and Objectives: Central auditory processing disorder [(C)APD] refers to a deficit in the processing of auditory stimuli in the nervous system that is not due to higher-order language or cognitive factors. One of the problems in children with (C)APD is spatial difficulty, which has been overlooked despite its significance. Localization is the auditory ability to detect sound sources in space and can help to differentiate desired speech from other simultaneous sound sources. The aim of this research was to investigate the effects of auditory lateralization training on speech perception in the presence of noise/competing signals in children suspected of (C)APD. Subjects and Methods: In this analytical interventional study, 60 children suspected of (C)APD were selected based on multiple auditory processing assessment subtests. They were randomly divided into two groups: a control group (mean age 9.07) and a training group (mean age 9.00). The training program consisted of detecting and pointing to sound sources delivered with interaural time differences under headphones for 12 formal sessions (6 weeks). The spatial word recognition score (WRS) and the monaural selective auditory attention test (mSAAT) were used to follow the effects of the auditory lateralization training. Results: This study showed that in the training group, the mSAAT score and the spatial WRS in noise improved significantly after the auditory lateralization training (p value ≤ 0.001). Conclusions: We used auditory lateralization training for 6 weeks and showed that it can significantly improve speech understanding in noise. Generalization of these results requires further research.
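
    The interaural time differences used to lateralize the training stimuli can be approximated with the classic Woodworth far-field model. The head radius and the formula choice below are textbook assumptions, not details taken from this study:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth far-field approximation of the interaural time difference:
    ITD ~ (a/c) * (theta + sin(theta)), with theta the azimuth in radians,
    a the head radius, and c the speed of sound. All values are textbook
    defaults, not parameters from the study."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))
```

    A source straight ahead yields zero ITD, and the delay grows toward roughly two-thirds of a millisecond at 90 degrees; delivering such delays between headphone channels is what shifts the perceived lateral position of the sound.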

  10. Computational characterization of visually-induced auditory spatial adaptation

    Directory of Open Access Journals (Sweden)

    David R Wozny

    2011-11-01

    Full Text Available Recent research investigating the principles governing human perception has provided increasing evidence for probabilistic inference in human perception. For example, human auditory and visual localization judgments closely resemble those of a Bayesian causal inference observer, in which the underlying causal structure of the stimuli is inferred based on both the available sensory evidence and prior knowledge. However, most previous studies have focused on characterizing perceptual inference within a static environment, and therefore little is known about how this inference process changes when observers are exposed to a new environment. In this study we aimed to computationally characterize the change in auditory spatial perception induced by repeated auditory-visual spatial conflict, known as the ventriloquist aftereffect. In theory, this change could reflect a shift in the auditory sensory representations (i.e., a shift in the auditory likelihood distribution), a decrease in the precision of the auditory estimates (i.e., an increase in the spread of the likelihood distribution), a shift in the auditory bias (i.e., a shift in the prior distribution), an increase or decrease in the strength of the auditory bias (i.e., a change in the spread of the prior distribution), or a combination of these. By quantitatively estimating the parameters of the perceptual process for each individual observer using a Bayesian causal inference model, we found that the shift in the perceived locations after exposure was associated with a shift in the mean of the auditory likelihood functions in the direction of the experienced visual offset. The results suggest that repeated exposure to a fixed auditory-visual discrepancy is attributed by the nervous system to sensory representation error and, as a result, the sensory map of space is recalibrated to correct the error.
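
    The model components contrasted in this record (a likelihood shift versus a prior shift) build on reliability-weighted Gaussian cue combination. Below is a minimal sketch of fusion plus a simple exposure-driven recalibration rule; the learning rate is illustrative and this is not the paper's fitted causal inference model:

```python
def fuse(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted (maximum-likelihood) fusion of two Gaussian cues:
    each cue is weighted by its inverse variance."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    return w_a * mu_a + (1 - w_a) * mu_v

def recalibrate(bias_a, mu_a, mu_v, rate=0.1):
    """Shift the auditory map a fraction of the remaining audiovisual
    discrepancy on each exposure (illustrative delta rule)."""
    return bias_a + rate * (mu_v - (mu_a + bias_a))
```

    With equally reliable cues, fusion lands midway between them; repeated recalibration drives the auditory bias toward the experienced visual offset, which is the likelihood-shift account the study's data favored.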

  11. Electrophysiological correlates of auditory change detection and change deafness in complex auditory scenes.

    Science.gov (United States)

    Puschmann, Sebastian; Sandmann, Pascale; Ahrens, Janina; Thorne, Jeremy; Weerda, Riklef; Klump, Georg; Debener, Stefan; Thiel, Christiane M

    2013-07-15

    Change deafness describes the failure to perceive even intense changes within complex auditory input, if the listener does not attend to the changing sound. Remarkably, previous psychophysical data provide evidence that this effect occurs independently of successful stimulus encoding, indicating that undetected changes are processed to some extent in auditory cortex. Here we investigated cortical representations of detected and undetected auditory changes using electroencephalographic (EEG) recordings and a change deafness paradigm. We applied a one-shot change detection task, in which participants listened successively to three complex auditory scenes, each of them consisting of six simultaneously presented auditory streams. Listeners had to decide whether all scenes were identical or whether the pitch of one stream was changed between the last two presentations. Our data show significantly increased middle-latency Nb responses for both detected and undetected changes as compared to no-change trials. In contrast, only successfully detected changes were associated with a later mismatch response in auditory cortex, followed by increased N2, P3a and P3b responses, originating from hierarchically higher non-sensory brain regions. These results strengthen the view that undetected changes are successfully encoded at sensory level in auditory cortex, but fail to trigger later change-related cortical responses that lead to conscious perception of change. PMID:23466938

  12. Tinnitus intensity dependent gamma oscillations of the contralateral auditory cortex.

    Directory of Open Access Journals (Sweden)

    Elsa van der Loo

    Full Text Available BACKGROUND: Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores. METHODS AND FINDINGS: In unilateral tinnitus patients (N = 15; 10 right, 5 left), source analysis of resting-state electroencephalographic gamma band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p<0.05). CONCLUSION: Auditory phantom percepts thus show sound-level-dependent activation of the contralateral auditory cortex similar to that observed in normal audition. In view of recent consciousness models and tinnitus network models, these results suggest that tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but might not, by itself, be responsible for tinnitus perception.

  13. Auditory learning: a developmental method.

    Science.gov (United States)

    Zhang, Yilu; Weng, Juyang; Hwang, Wey-Shiuan

    2005-05-01

    Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, the self-organizing, autonomous, incremental learner (SAIL), are presented with emphasis on its auditory perception and audition-related action generation. In particular, the SAIL robot conducts auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or the phoneme models. Nor are the actions that the robot is expected to perform available before learning starts. SAIL learns the auditory commands and the desired actions from physical contacts with the environment, including the trainers. PMID:15940990

  14. Entraînement auditif et musical chez l'enfant sourd profond : effets sur la perception auditive et effets de transferts

    OpenAIRE

    Rochette, Françoise

    2012-01-01

    This thesis focuses on the auditory training and auditory skill development of congenitally profoundly deaf children. It aims to evaluate, not only the effects of auditory training/auditory skill development on general auditory performances, but also the transfer effects on speech perception and development. An extended period of auditory deprivation leads to major difficulties in reception and speech production, cognitive difficulties, and also disrupts the maturation of central auditory pat...

  15. Emotional Faces Modulate Implicit Perception of Shorter SOA in Auditory Modality%情绪面孔对听觉通道短时距内隐知觉的影响

    Institute of Scientific and Technical Information of China (English)

    宣宾; 汪凯; 张达人

    2011-01-01

    Time perception is a fundamental ability of human beings. Daily experience indicates that time perception is easily affected by emotion. However, in previous studies the emotional influence was often accompanied by active attention and an explicit motor response. Here we address whether implicit time perception can be influenced by emotional faces. Observers actively completed a visual discrimination task on emotional faces (fearful, happy, and neutral faces) while they passively listened to a sequence of tones with 80% standard stimulus onset asynchronies (SOAs) (800 ms) and 20% deviant SOAs (400, 600, 1,000, and 1,200 ms). Event-related potentials (ERPs) were recorded for frequent standard SOAs and rare deviant SOAs. The two short deviant SOAs (400 and 600 ms) elicited two change-related ERP components: the mismatch negativity (MMN) and the P3a. The MMN amplitude, which indexes the early detection of irregular time changes, was modulated by facial emotion. For the shorter deviant conditions, fearful faces elicited a reduced MMN compared with happy and neutral faces, and happy faces elicited an enhanced P3a compared with fearful and neutral faces. This ERP study suggests that short-interval time perception in the auditory modality is affected by visual emotional faces, and that fearful faces decrease the accuracy of implicit time perception.
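
    The passive oddball timing design described in this abstract can be sketched as a stimulus-sequence generator. The even split across the four deviant SOAs is an assumption made for illustration; the abstract only states the 80%/20% proportions.

```python
import random

def make_soa_sequence(n_trials, seed=1):
    """SOA sequence (ms): 80% standards of 800 ms, 20% deviants split
    evenly across {400, 600, 1000, 1200} ms (assumed design detail)."""
    rng = random.Random(seed)
    n_dev = n_trials // 5                            # 20% deviant trials
    deviants = [400, 600, 1000, 1200]
    soas = [800] * (n_trials - n_dev)
    soas += [deviants[i % 4] for i in range(n_dev)]  # even deviant split
    rng.shuffle(soas)
    return soas

seq = make_soa_sequence(200)
print(sum(s == 800 for s in seq) / len(seq))  # 0.8
```

    Each tone onset would then be scheduled at the cumulative sum of the SOA list; ERP epochs are averaged separately for standard and deviant positions.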

  16. Applied research in auditory data representation

    Science.gov (United States)

    Frysinger, Steve P.

    1990-08-01

    A class of data displays, characterized generally as Auditory Data Representation, is described and motivated. This type of data representation takes advantage of the tremendous pattern recognition capability of the human auditory channel. Audible displays offer an alternative means of conveying quantitative data to the analyst to facilitate information extraction, and are successfully used alone and in conjunction with visual displays. The Auditory Data Representation literature is reviewed, along with elements of the allied fields of investigation, Psychoacoustics and Musical Perception. A methodology for applied research in this field, based upon the well-developed discipline of psychophysics, is elaborated using a recent experiment as a case study. This method permits objective evaluation of a data representation technique by comparing it with alternative displays for the pattern recognition task at hand. The psychophysical signal-to-noise threshold for constant pattern recognition performance is the measure of display effectiveness.
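
    Holding pattern-recognition performance constant while estimating a signal-to-noise threshold, as described above, is commonly done with an adaptive staircase. The 2-down/1-up rule and the simulated listener below are illustrative assumptions, not details taken from this study.

```python
import random
from math import exp

def staircase_snr(trial_correct, start_db=20.0, step_db=2.0, n_trials=80):
    """2-down/1-up adaptive staircase: tracks the SNR that yields
    ~70.7% correct responses on the pattern-recognition task."""
    snr, streak, last_dir = start_db, 0, 0
    reversals = []
    for _ in range(n_trials):
        if trial_correct(snr):
            streak += 1
            if streak == 2:                  # two correct in a row -> harder
                streak = 0
                if last_dir == +1:
                    reversals.append(snr)
                snr -= step_db
                last_dir = -1
        else:                                # any error -> easier
            streak = 0
            if last_dir == -1:
                reversals.append(snr)
            snr += step_db
            last_dir = +1
    tail = reversals[-6:]
    return sum(tail) / max(len(tail), 1)     # mean of the last reversals

# Simulated listener whose percent-correct rises with SNR around 8 dB.
rng = random.Random(42)
def listener(snr_db):
    return rng.random() < 1.0 / (1.0 + exp(-(snr_db - 8.0)))

threshold = staircase_snr(listener)
print(round(threshold, 1))
```

    A more effective display corresponds to a lower threshold SNR for the same fixed level of performance.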

  17. Statistical representation of sound textures in the impaired auditory system

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    Many challenges exist when it comes to understanding and compensating for hearing impairment. Traditional methods, such as pure tone audiometry and speech intelligibility tests, offer insight into the deficiencies of a hearing-impaired listener, but can only partially reveal the mechanisms that underlie the hearing loss. An alternative approach is to investigate the statistical representation of sounds for hearing-impaired listeners along the auditory pathway. Using models of the auditory periphery and sound synthesis, we aimed to probe hearing-impaired perception for sound textures – temporally homogenous sounds such as rain, birds, or fire. It has been suggested that sound texture perception is mediated by time-averaged statistics measured from early auditory representations (McDermott et al., 2013). Changes to early auditory processing, such as broader “peripheral” filters or reduced compression...

  18. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect, in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive, but recent reports have challenged this view. Here we study the effect of visual spatial attention on the McGurk effect. By presenting a movie of two faces symmetrically displaced to each side of a central fixation point and dubbed with a single auditory speech track, we were able to discern the influences from each of the faces and from the voice on the auditory speech percept. We found that directing visual spatial attention towards a face increased the influence of that face on auditory perception. However, the influence of the voice on auditory perception did not change, suggesting that audiovisual...

  19. Neurocognitive mechanisms of audiovisual speech perception

    OpenAIRE

    Ojanen, Ville

    2005-01-01

    Face-to-face communication involves both hearing and seeing speech. Heard and seen speech inputs interact during audiovisual speech perception. Specifically, seeing the speaker's mouth and lip movements improves identification of acoustic speech stimuli, especially in noisy conditions. In addition, visual speech may even change the auditory percept. This occurs when mismatching auditory speech is dubbed onto visual articulation. Research on the brain mechanisms of audiovisual perception a...

  20. Auditory Verbal Cues Alter the Perceived Flavor of Beverages and Ease of Swallowing: A Psychometric and Electrophysiological Analysis

    OpenAIRE

    Aya Nakamura; Satoshi Imaizumi

    2013-01-01

    We investigated the possible effects of auditory verbal cues on flavor perception and swallow physiology for younger and elder participants. Apple juice, aojiru (grass) juice, and water were ingested with or without auditory verbal cues. Flavor perception and ease of swallowing were measured using a visual analog scale and swallow physiology by surface electromyography and cervical auscultation. The auditory verbal cues had significant positive effects on flavor and ease of swallowing as well...

  1. Minimal effects of visual memory training on auditory performance of adult cochlear implant users

    Directory of Open Access Journals (Sweden)

    Sandra I. Oba, MS

    2013-02-01

    Full Text Available Auditory training has been shown to significantly improve cochlear implant (CI) users’ speech and music perception. However, it is unclear whether posttraining gains in performance were due to improved auditory perception or to generally improved attention, memory, and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory, were assessed in 10 CI users before, during, and after training with a nonauditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Posttraining gains were much smaller with the nonauditory VDS training than observed in previous auditory training studies with CI users. The results suggest that posttraining gains observed in previous studies were not solely attributable to improved attention or memory and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception.

  2. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Full Text Available Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
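
    The size of a compensatory response to a formant perturbation can be expressed as the fraction of the feedback shift that the speaker opposes. The function and trial values below are hypothetical, chosen only to echo the direction of the reported ~47% group difference, not its actual data.

```python
def compensation_ratio(baseline_f1, perturbed_feedback_f1, produced_f1):
    """Fraction of an F1 feedback shift that the speaker opposes:
    1.0 = full compensation, 0.0 = no response."""
    shift = perturbed_feedback_f1 - baseline_f1
    response = baseline_f1 - produced_f1     # production change opposing the shift
    return response / shift

# Hypothetical trial: baseline F1 of 600 Hz, feedback shifted up to 700 Hz.
# A control speaker lowers produced F1 to 585 Hz; a PWS only to 592 Hz.
control = compensation_ratio(600, 700, 585)
pws = compensation_ratio(600, 700, 592)
print(control, pws)
```

    With these invented numbers the PWS response is roughly half the control response; real analyses average such ratios over many trials and participants before comparing groups.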

  3. Music lessons improve auditory perceptual and cognitive performance in deaf children

    Directory of Open Access Journals (Sweden)

    Rochette, Françoise

    2014-07-01

    Full Text Available Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5 to 4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically-trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  4. Music Lessons Improve Auditory Perceptual and Cognitive Performance in Deaf Children

    Science.gov (United States)

    Rochette, Françoise; Moussard, Aline; Bigand, Emmanuel

    2014-01-01

    Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5–4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes. PMID:25071518

  6. Auditory plasticity and speech motor learning

    OpenAIRE

    Nasir, Sazzad M.; Ostry, David J.

    2009-01-01

    Is plasticity in sensory and motor systems linked? Here, in the context of speech motor learning and perception, we test the idea that sensory function is modified by motor learning and, in particular, that speech motor learning affects a speaker's auditory map. We assessed speech motor learning by using a robotic device that displaced the jaw and selectively altered somatosensory feedback during speech. We found that with practice speakers progressively corrected for the mechanical perturbation a...

  7. Adult age effects in auditory statistical learning

    OpenAIRE

    Neger, T.M.; Rietveld, A.C.M.; Janse, E.

    2015-01-01

    Statistical learning plays a key role in language processing, e.g., for speech segmentation. Older adults have been reported to show less statistical learning on the basis of visual input than younger adults. Given age-related changes in perception and cognition, we investigated whether statistical learning is also impaired in the auditory modality in older compared to younger adults and whether individual learning ability is associated with measures of perceptual (i.e., hearing sensitivity) ...

  8. A Transient Auditory Signal Shifts the Perceived Offset Position of a Moving Visual Object

    OpenAIRE

    Chien, Sung-en; Ono, Fuminori; Watanabe, Katsumi

    2013-01-01

    Information received from different sensory modalities profoundly influences human perception. For example, changes in the auditory flutter rate induce changes in the apparent flicker rate of a flashing light (Shipley, 1964). In the present study, we investigated whether auditory information would affect the perceived offset position of a moving object. In Experiment 1, a visual object moved toward the center of the computer screen and disappeared abruptly. A transient auditory signal was pre...

  9. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability.

    Science.gov (United States)

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-07-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than for the auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here, we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception, where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403

  10. Intact spectral but abnormal temporal processing of auditory stimuli in autism.

    NARCIS (Netherlands)

    Groen, W.B.; Orsouw, L. van; Huurne, N.; Swinkels, S.H.N.; Gaag, R.J. van der; Buitelaar, J.K.; Zwiers, M.P.

    2009-01-01

    The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. 23 children with h

  11. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features. PMID:22271265

  12. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki Ito

    2014-11-01

    Full Text Available Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation, the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity changes due to stimulus timing were seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing is dependent on the specific temporal order of sensory inputs in speech production.
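
    Multisensory interaction in ERP studies of this kind is often quantified with an additive model, comparing the bimodal response against the sum of the two unisensory responses. The helper and the amplitude values below are hypothetical illustrations, not data from the study.

```python
def additive_model_difference(erp_multi, erp_somato, erp_audio):
    """Point-by-point difference between the bimodal ERP and the sum of the
    two unimodal ERPs; nonzero values index a multisensory interaction."""
    return [m - (s + a) for m, s, a in zip(erp_multi, erp_somato, erp_audio)]

# Hypothetical mean amplitudes (microvolts) at a few latencies within the
# 160-220 ms window following somatosensory onset.
multi  = [1.0, 2.4, 3.1, 2.0]
somato = [0.4, 0.9, 1.0, 0.7]
audio  = [0.5, 1.0, 1.2, 0.8]

diffs = additive_model_difference(multi, somato, audio)
print(diffs)
```

    In practice this difference is computed per time point on grand-average waveforms, and its dependence on stimulus lag (synchronous vs. 90 ms lead/lag) is what indexes the timing effect reported above.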

  13. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. PMID:25044949

  14. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before-and-after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  15. Electrophysiologic Assessment of Auditory Training Benefits in Older Adults.

    Science.gov (United States)

    Anderson, Samira; Jenkins, Kimberly

    2015-11-01

    Older adults often exhibit speech perception deficits in difficult listening environments. At present, hearing aids or cochlear implants are the main options for therapeutic remediation; however, they only address audibility and do not compensate for central processing changes that may accompany aging and hearing loss or declines in cognitive function. It is unknown whether long-term hearing aid or cochlear implant use can restore changes in central encoding of temporal and spectral components of speech or improve cognitive function. Therefore, consideration should be given to auditory/cognitive training that targets auditory processing and cognitive declines, taking advantage of the plastic nature of the central auditory system. The demonstration of treatment efficacy is an important component of any training strategy. Electrophysiologic measures can be used to assess training-related benefits. This article will review the evidence for neuroplasticity in the auditory system and the use of evoked potentials to document treatment efficacy. PMID:27587912

  16. Listening to another sense: somatosensory integration in the auditory system.

    Science.gov (United States)

    Wu, Calvin; Stefanescu, Roxana A; Martel, David T; Shore, Susan E

    2015-07-01

    Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body and the auditory cortex. In this review, we explore the process of multisensory integration from (1) anatomical (inputs and connections), (2) physiological (cellular responses), (3) functional and (4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing and offers a multisensory perspective regarding the understanding of sensory disorders. PMID:25526698

  17. The auditory and non-auditory brain areas involved in tinnitus. An emergent property of multiple parallel overlapping subnetworks.

    Science.gov (United States)

    Vanneste, Sven; De Ridder, Dirk

    2012-01-01

    Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components such as the perceived loudness, the lateralization, the tinnitus type (pure tone, noise-like) and associated emotional components, such as distress and mood changes. Source localization of quantitative electroencephalography (qEEG) data demonstrate the involvement of auditory brain areas as well as several non-auditory brain areas such as the anterior cingulate cortex (dorsal and subgenual), auditory cortex (primary and secondary), dorsal lateral prefrontal cortex, insula, supplementary motor area, orbitofrontal cortex (including the inferior frontal gyrus), parahippocampus, posterior cingulate cortex and the precuneus, in different aspects of tinnitus. Explaining these non-auditory brain areas as constituents of separable subnetworks, each reflecting a specific aspect of the tinnitus percept increases the explanatory power of the non-auditory brain areas involvement in tinnitus. Thus, the unified percept of tinnitus can be considered an emergent property of multiple parallel dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature. PMID:22586375

  18. The auditory and non-auditory brain areas involved in tinnitus. An emergent property of multiple parallel overlapping subnetworks.

    Directory of Open Access Journals (Sweden)

    Sven Vanneste

    2012-05-01

    Full Text Available Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components such as the perceived loudness, the lateralization, the tinnitus type (pure tone, noise-like) and associated emotional components, such as distress and mood changes. Source localization of qEEG data demonstrate the involvement of auditory brain areas as well as several non-auditory brain areas such as the anterior cingulate cortex (dorsal and subgenual), auditory cortex (primary and secondary), dorsal lateral prefrontal cortex, insula, supplementary motor area, orbitofrontal cortex (including the inferior frontal gyrus), parahippocampus, posterior cingulate cortex and the precuneus, in different aspects of tinnitus. Explaining these non-auditory brain areas as constituents of separable subnetworks, each reflecting a specific aspect of the tinnitus percept increases the explanatory power of the non-auditory brain areas involvement in tinnitus. Thus, the unified percept of tinnitus can be considered an emergent property of multiple parallel dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature.

  19. Spatial processing in the auditory cortex of the macaque monkey

    Science.gov (United States)

    Recanzone, Gregg H.

    2000-10-01

    The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.

  20. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability

    OpenAIRE

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-01-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading v...

  1. Overriding auditory attentional capture

    OpenAIRE

    Dalton, Polly; Lavie, Nilli

    2007-01-01

    Attentional capture by color singletons during shape search can be eliminated when the target is not a feature singleton (Bacon & Egeth, 1994). This suggests that a "singleton detection" search strategy must be adopted for attentional capture to occur. Here we find similar effects on auditory attentional capture. Irrelevant high-intensity singletons interfered with an auditory search task when the target itself was also a feature singleton. However, singleton interference was eliminated when ...

  2. Influence of Syllable Structure on L2 Auditory Word Learning

    Science.gov (United States)

    Hamada, Megumi; Goya, Hideki

    2015-01-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a…

  3. Making and monitoring errors based on altered auditory feedback

    Directory of Open Access Journals (Sweden)

    Peter Pfordresher

    2014-08-01

    Previous research has demonstrated that altered auditory feedback (AAF) disrupts music performance and causes disruptions in both action planning and the perception of feedback events. It has been proposed that this disruption occurs because of interference within a shared representation for perception and action (Pfordresher, 2006). Studies reported here address this claim from the standpoint of error monitoring. In Experiment 1, participants performed short melodies on a keyboard while hearing no auditory feedback, normal auditory feedback, or alterations to feedback pitch on some subset of events. Participants overestimated error frequency when AAF was present, but not under normal feedback. Experiment 2 introduced a concurrent load task to determine whether error monitoring requires executive resources. Although the concurrent task enhanced the effect of AAF, it did not alter participants’ tendency to overestimate errors when AAF was present. A third, correlational study addressed whether effects of AAF are reduced for a subset of the population who may lack the kind of perception/action associations that lead to AAF disruption: poor-pitch singers. Effects of manipulations similar to those presented in Experiments 1 and 2 were reduced for these individuals. We propose that these results are consistent with the notion that AAF interference is based on associations between perception and action within a forward internal model of auditory-motor relationships.

  4. Discovering Structure in Auditory Input: Evidence from Williams Syndrome

    Science.gov (United States)

    Elsabbagh, Mayada; Cohen, Henri; Karmiloff-Smith, Annette

    2010-01-01

    We examined auditory perception in Williams syndrome by investigating strategies used in organizing sound patterns into coherent units. In Experiment 1, we investigated the streaming of sound sequences into perceptual units, on the basis of pitch cues, in a group of children and adults with Williams syndrome compared to typical controls. We showed…

  5. Tuned with a Tune: Talker Normalization via General Auditory Processes.

    Science.gov (United States)

    Laing, Erika J C; Liu, Ran; Lotto, Andrew J; Holt, Lori L

    2012-01-01

    Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker's speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS) of a talker's speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences' LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by non-speech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results, suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization. PMID:22737140
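The long-term average spectrum (LTAS) on which this record's account of talker normalization rests is straightforward to compute: average the short-time power spectra of an utterance over time. A minimal numerical sketch, not taken from the study itself (the function name, frame sizes, and toy signal are illustrative assumptions):

```python
import numpy as np

def ltas(signal, sr, frame_len=1024, hop=512):
    """Long-term average spectrum: power spectra averaged over all frames."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    return freqs, power.mean(axis=0)

# A talker-like toy signal: most energy near 500 Hz plus broadband noise
rng = np.random.default_rng(0)
sr = 16000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 500 * t) + 0.1 * rng.standard_normal(sr)
freqs, spectrum = ltas(sig, sr)
peak_hz = freqs[np.argmax(spectrum)]  # falls near 500 Hz
```

Two talkers with LTAS peaks in different frequency ranges would, on the study's account, shift subsequent phoneme categorization only when those ranges overlap the cues for the target contrast.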

  6. Tuned with a tune: Talker normalization via general auditory processes

    Directory of Open Access Journals (Sweden)

    Erika J C Laing

    2012-06-01

    Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker’s speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS) of a talker’s speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences’ LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by nonspeech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results, suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.

  7. Auditory object formation affects modulation perception

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2005-01-01

    Most sounds in our environment, including speech, music, animal vocalizations and environmental noise, have fluctuations in intensity that are often highly correlated across different frequency regions. Because across-frequency modulation is so common, the ability to process such information is t...

  8. Perception of Loudness Is Influenced by Emotion

    OpenAIRE

    Asutay, Erkin; Västfjäll, Daniel

    2012-01-01

    Loudness perception is thought to be a modular system that is unaffected by other brain systems. We tested the hypothesis that loudness perception can be influenced by negative affect using a conditioning paradigm, where some auditory stimuli were paired with aversive experiences while others were not. We found that the same auditory stimulus was reported as being louder, more negative and fear-inducing when it was conditioned with an aversive experience, compared to when it was used as a con...

  9. Auditory reafferences: The influence of real-time feedback on movement control

    Directory of Open Access Journals (Sweden)

    Christian Kennel

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person’s own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined whether step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white-noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with nonartificial auditory cues. Our results support the existing theoretical understanding of action–perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in motor learning processes.
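The delayed-auditory-feedback condition described above amounts to playing a performer's own sound back shifted later in time. A toy sketch of such a delay line (the 200 ms figure and function name are illustrative assumptions; the study's actual delay setting is not stated here):

```python
import numpy as np

def delay_feedback(signal, sr, delay_ms=200.0):
    """Return the signal delayed by delay_ms, padding the onset with silence."""
    n = int(sr * delay_ms / 1000.0)
    return np.concatenate([np.zeros(n), signal])[:len(signal)]

# A single click: under 200 ms of delayed feedback it is heard
# int(44100 * 0.2) = 8820 samples late
sr = 44100
click = np.zeros(sr)
click[0] = 1.0
delayed = delay_feedback(click, sr, delay_ms=200.0)
```

In a real-time system the same effect is obtained with a circular buffer of n samples rather than by shifting a pre-recorded array.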

  10. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists across 20 varied stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds, compared to one neutral baseline control, elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and in the left dorsomedial prefrontal cortex; the latter is consistent with a frontal decision-making process common in identification tasks. The negatively valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis.
    Both the psychophysical findings of Experiment 1 and the neural findings of Experiment 2 support the conclusion that consonance is an important dimension of sound that is processed in a manner that aids

  11. Vocal behavior during the menstrual cycle: perceptual-auditory, acoustic and self-perception analysis

    Directory of Open Access Journals (Sweden)

    Luciane C. de Figueiredo

    2004-06-01

    During the premenstrual period dysphonia is common, yet few women notice this voice variation across the menstrual cycle (Quinteiro, 1989). AIM: To verify whether the vocal pattern of women differs between the ovulation period and the first day of the menstrual cycle, using perceptual-auditory analysis, spectrography, and acoustic parameters, and, when a difference is present, whether the women themselves perceive it. STUDY DESIGN: Case-control. MATERIAL AND METHOD: The sample comprised 30 Speech-Language Pathology students, aged 18 to 25 years, non-smokers, with regular menstrual cycles and not using oral contraceptives. Voices were recorded on the first day of menstruation and on the thirteenth day after menstruation (ovulation) for later comparison. RESULTS: During the menstrual period the voices were mildly to moderately hoarse-breathy and unstable, without voice breaks, with adequate pitch and loudness and balanced resonance. Harmonics were less well defined, with more noise between them and a narrower range of upper harmonics. We found a higher f0, increased jitter and shimmer, and a decreased harmonics-to-noise ratio (PHR). CONCLUSION: During the menstrual period there are changes in vocal quality, in the behavior of the harmonics, and in vocal parameters (f0, jitter, shimmer, and PHR). Moreover, most of the students did not perceive the voice variation during the menstrual cycle.

  12. Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation

    Directory of Open Access Journals (Sweden)

    Baumann Simon

    2007-02-01

    Background: Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question of whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue, fifteen healthy participants took part in a "silent" sparse temporal event-related fMRI study. In the first (visual control habituation) phase they were presented with briefly flashing red visual stimuli. In the second (auditory control habituation) phase they heard brief telephone ringing. In the third (conditioning) phase we coincidently presented the visual stimulus (CS) paired with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no other task than attending to the stimuli and indicating the end of each trial by pressing a button. Results: During unpaired visual presentations (preceding and following the paired presentation) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus, whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli. Conclusion: These results demonstrate involvement of multisensory and auditory association areas in perception of unimodal visual stimulation, which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event. 
More importantly, we are able

  13. Neural responses to complex auditory rhythms: the role of attending

    Directory of Open Access Journals (Sweden)

    HeatherLChapin

    2010-12-01

    The aim of this study was to explore the role of attention in pulse and meter perception using complex rhythms. We used a selective attention paradigm in which participants attended to either a complex auditory rhythm or a visually presented word list. Performance on a reproduction task was used to gauge whether participants were attending to the appropriate stimulus. We hypothesized that attention to complex rhythms, which contain no energy at the pulse frequency, would lead to activations in motor areas involved in pulse perception. Moreover, because multiple repetitions of a complex rhythm are needed to perceive a pulse, activations in pulse-related areas would be seen only after sufficient time had elapsed for pulse perception to develop. Selective attention was also expected to modulate activity in sensory areas specific to the modality. We found that selective attention to rhythms led to increased BOLD responses in the basal ganglia, and basal ganglia activity was observed only after the rhythms had cycled enough times for a stable pulse percept to develop. These observations suggest that attention is needed to recruit motor activations associated with the perception of pulse in complex rhythms. Moreover, attention to the auditory stimulus enhanced activity in an attentional sensory network including primary auditory, insula, anterior cingulate, and prefrontal cortex, and suppressed activity in sensory areas associated with attending to the visual stimulus.

  14. Overriding auditory attentional capture.

    Science.gov (United States)

    Dalton, Polly; Lavie, Nilli

    2007-02-01

    Attentional capture by color singletons during shape search can be eliminated when the target is not a feature singleton (Bacon & Egeth, 1994). This suggests that a "singleton detection" search strategy must be adopted for attentional capture to occur. Here we find similar effects on auditory attentional capture. Irrelevant high-intensity singletons interfered with an auditory search task when the target itself was also a feature singleton. However, singleton interference was eliminated when the target was not a singleton (i.e., when nontargets were made heterogeneous, or when more than one target sound was presented). These results suggest that auditory attentional capture depends on the observer's attentional set, as does visual attentional capture. The suggestion that hearing might act as an early warning system that would always be tuned to unexpected unique stimuli must therefore be modified to accommodate these strategy-dependent capture effects. PMID:17557587

  15. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
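In Ando's correlation-based model, temporal sensations such as pitch are read off an internal autocorrelation representation. As a rough illustration of that idea only, pitch can be estimated from the lag of the largest autocorrelation peak; this is a generic textbook ACF method, not Ando's actual implementation, and the search range is an illustrative assumption:

```python
import numpy as np

def acf_pitch(signal, sr, fmin=80.0, fmax=1000.0):
    """Estimate pitch from the lag of the largest normalized ACF peak."""
    acf = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    acf = acf / acf[0]  # normalize so zero-lag value is 1
    lo, hi = int(sr / fmax), int(sr / fmin)  # lag range for fmin..fmax
    lag = lo + np.argmax(acf[lo:hi])
    return sr / lag

sr = 16000
t = np.arange(sr // 4) / sr
tone = np.sin(2 * np.pi * 220 * t)
pitch = acf_pitch(tone, sr)  # close to 220 Hz
```

The spatial sensations in the model would analogously come from the interaural cross-correlation of left- and right-ear signals, which the same `np.correlate` call computes when given two different inputs.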

  16. A unique cellular scaling rule in the avian auditory system.

    Science.gov (United States)

    Corfield, Jeremy R; Long, Brendan; Krilow, Justin M; Wylie, Douglas R; Iwaniuk, Andrew N

    2016-06-01

    Although it is clear that neural structures scale with body size, the mechanisms of this relationship are not well understood. Several recent studies have shown that the relationships between neuron numbers and brain (or brain region) size are not only different across mammalian orders, but also across auditory and visual regions within the same brains. Among birds, similar cellular scaling rules have not been examined in any detail. Here, we examine the scaling of auditory structures in birds and show that the scaling rules established in the mammalian auditory pathway do not necessarily apply to birds. In galliforms, neuronal densities decrease with increasing brain size, suggesting that auditory brainstem structures increase in size faster than neurons are added; smaller brains have relatively more neurons than larger brains. The cellular scaling rules that apply to auditory brainstem structures in galliforms are, therefore, different from those found in the primate auditory pathway. It is likely that the factors driving this difference are associated with the anatomical specializations required for sound perception in birds, although there is a decoupling of neuron numbers in brain structures and hair cell numbers in the basilar papilla. This study provides significant insight into the allometric scaling of neural structures in birds and improves our understanding of the rules that govern neural scaling across vertebrates. PMID:26002617

  17. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer’s work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory...... of sound as an active component in shaping urban environments. As urban conditions spread globally, new scales, shapes and forms of communities appear and call for new distinctions and models in the study and representation of sonic environments. Particularly so, since urban environments

  18. Inversion of Auditory Spectrograms, Traditional Spectrograms, and Other Envelope Representations

    DEFF Research Database (Denmark)

    Decorsière, Remi Julien Blaise; Søndergaard, Peter Lempel; MacDonald, Ewen;

    2015-01-01

    implementations of this framework are presented for auditory spectrograms, where the filterbank is based on the behavior of the basilar membrane and envelope extraction is modeled on the response of inner hair cells. One implementation is direct while the other is a two-stage approach that is computationally simpler. While both can accurately invert an auditory spectrogram, the two-stage approach performs better on time-domain metrics. The same framework is applied to traditional spectrograms based on the magnitude of the short-time Fourier transform. Inspired by human perception of loudness, a modification
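For the traditional STFT-magnitude case mentioned in this record, iterative phase retrieval in the style of Griffin and Lim is the standard baseline for inverting a spectrogram; the thesis framework is different, and the sketch below only illustrates the baseline problem. Window length, hop size, and iteration count are arbitrary illustrative choices:

```python
import numpy as np

def stft(x, n_fft=256, hop=64):
    """Complex STFT as a stack of Hann-windowed FFT frames."""
    w = np.hanning(n_fft)
    return np.array([np.fft.rfft(w * x[i:i + n_fft])
                     for i in range(0, len(x) - n_fft + 1, hop)])

def istft(S, n_fft=256, hop=64):
    """Weighted overlap-add inverse of the STFT above."""
    w = np.hanning(n_fft)
    out = np.zeros(hop * (len(S) - 1) + n_fft)
    norm = np.zeros_like(out)
    for k, frame in enumerate(S):
        out[k * hop:k * hop + n_fft] += w * np.fft.irfft(frame, n=n_fft)
        norm[k * hop:k * hop + n_fft] += w ** 2
    return out / np.maximum(norm, 1e-8)

def griffin_lim(mag, n_iter=50, n_fft=256, hop=64):
    """Iteratively estimate a phase consistent with the given magnitude."""
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))
    for _ in range(n_iter):
        x = istft(mag * phase, n_fft, hop)
        S = stft(x, n_fft, hop)
        phase = S / np.maximum(np.abs(S), 1e-8)
    return istft(mag * phase, n_fft, hop)

# Recover a 440 Hz tone from its STFT magnitude alone
sr = 8000
t = np.arange(2048) / sr
target = np.sin(2 * np.pi * 440 * t)
mag = np.abs(stft(target))
recon = griffin_lim(mag)
```

Auditory spectrograms are harder to invert than this because the gammatone-like filterbank and hair-cell envelope extraction discard phase nonuniformly across channels, which is why dedicated inversion frameworks are needed.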

  19. A loudspeaker-based room auralization system for auditory research

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel

    systematically study the signal processing of realistic sounds by normal-hearing and hearing-impaired listeners, a flexible, reproducible and fully controllable auditory environment is needed. A loudspeaker-based room auralization (LoRA) system was developed in this thesis to provide virtual auditory...... investigated the perception of distance in VAEs generated by the LoRA system. These results showed that the distance of far field sources are similarly perceived in these VAEs as in real environments. For close sources (<1 m), a comprehensive study about the near field compensated HOA method was presented and...

  20. Pure word deafness with auditory object agnosia after bilateral lesion of the superior temporal sulcus.

    Science.gov (United States)

    Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies

    2015-12-01

    Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. PMID:26343343

  1. A lateralized auditory evoked potential elicited when auditory objects are defined by spatial motion.

    Science.gov (United States)

    Butcher, Andrew; Govenlock, Stanley W; Tata, Matthew S

    2011-02-01

    Scene analysis involves the process of segmenting a field of overlapping objects from each other and from the background. It is a fundamental stage of perception in both vision and hearing. The auditory system encodes complex cues that allow listeners to find boundaries between sequential objects, even when no gap of silence exists between them. In this sense, object perception in hearing is similar to perceiving visual objects defined by isoluminant color, motion or binocular disparity. Motion is one such cue: when a moving sound abruptly disappears from one location and instantly reappears somewhere else, the listener perceives two sequential auditory objects. Smooth reversals of motion direction do not produce this segmentation. We investigated the brain electrical responses evoked by this spatial segmentation cue and compared them to the familiar auditory evoked potential elicited by sound onsets. Segmentation events evoke a pattern of negative and positive deflections that are unlike those evoked by onsets. We identified a negative component in the waveform, the Lateralized Object-Related Negativity, generated by the hemisphere contralateral to the side on which the new sound appears. The relationship between this component and similar components found in related paradigms is considered. PMID:21056097

  2. Quadri-stability of a spatially ambiguous auditory illusion

    Directory of Open Access Journals (Sweden)

    Constance May Bainbridge

    2015-01-01

    In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front of versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound’s motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. 
This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual

  3. Auditory Discrimination Learning: Role of Working Memory.

    Science.gov (United States)

    Zhang, Yu-Xuan; Moore, David R; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal

    2016-01-01

    Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretical framework for interactions between cognitive and sensory plasticity during perceptual experience. PMID:26799068
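In the tone n-back task this record uses to measure working memory, the listener reports whether the current tone matches the one presented n trials earlier. Identifying which trials are targets is simple to express in code (the stimulus values below are arbitrary illustrative frequencies, not from the study):

```python
def nback_targets(sequence, n=2):
    """Indices of trials whose stimulus matches the one n trials back."""
    return [i for i in range(n, len(sequence)) if sequence[i] == sequence[i - n]]

# Tone frequencies (Hz) for six trials of a 2-back task:
# trial 2 matches trial 0, and trial 4 matches trial 2
seq = [440, 494, 440, 523, 440, 440]
targets = nback_targets(seq, n=2)  # → [2, 4]
```

Scoring a participant then reduces to comparing their "match" responses against this target list, which is what makes the task a continuously updating memory load.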

  4. Central auditory development in children with cochlear implants: clinical implications.

    Science.gov (United States)

    Sharma, Anu; Dorman, Michael F

    2006-01-01

    A common finding in developmental neurobiology is that stimulation must be delivered to a sensory system within a narrow window of time (a sensitive period) during development in order for that sensory system to develop normally. Experiments with congenitally deaf children have allowed us to establish the existence and time limits of a sensitive period for the development of central auditory pathways in humans. Using the latency of cortical auditory evoked potentials (CAEPs) as a measure, we have found that central auditory pathways are maximally plastic for a period of about 3.5 years. If the stimulation is delivered within that period, CAEP latencies reach age-normal values within 3-6 months after stimulation. However, if stimulation is withheld for more than 7 years, CAEP latencies decrease significantly over a period of approximately 1 month following the onset of stimulation. They then remain constant or change very slowly over months or years. The lack of development of the central auditory system in congenitally deaf children implanted after 7 years is correlated with relatively poor development of speech and language skills [Geers, this vol, pp 50-65]. Animal models suggest that the primary auditory cortex may be functionally decoupled from higher order auditory cortex due to restricted development of inter- and intracortical connections in late-implanted children [Kral and Tillein, this vol, pp 89-108]. Another aspect of plasticity that works against late-implanted children is the reorganization of higher order cortex by other sensory modalities (e.g. vision). The hypothesis of decoupling of primary auditory cortex from higher order auditory cortex in children deprived of sound for a long time may explain the speech perception and oral language learning difficulties of children who receive an implant after the end of the sensitive period. PMID:16891837

  5. Functional studies of the human auditory cortex, auditory memory and musical hallucinations

    International Nuclear Information System (INIS)

    Objectives: 1. To determine which areas of the cerebral cortex are activated by stimulating the left ear with pure tones, and what type of response occurs (e.g. excitatory or inhibitory) in these different areas. 2. To use this information as an initial step toward a normal functional database for future studies. 3. To determine whether there is a biological substrate for the process of recalling previous auditory perceptions and, if possible, to suggest a locus for auditory memory. Method: Brain perfusion single photon emission computerized tomography (SPECT) evaluation was conducted: 1-2) using auditory stimulation with pure tones in 4 volunteers with normal hearing; 3) in a patient with bilateral profound hearing loss who had auditory perception of previous musical experiences, and who was injected with Tc-99m HMPAO while she was having the sensation of hearing a well-known melody. Results: Both in the patient with auditory hallucinations and in the normal controls stimulated with pure tones, there was a statistically significant increase in perfusion in Brodmann's area 39, more intense on the right side (right vs left, p < 0.05). With lesser intensity there was activation in the adjacent area 40, and there was also intense activation in the executive frontal cortex, Brodmann areas 6, 8, 9, and 10. There was also activation of Brodmann area 7, an audio-visual association area, more marked on the right side in both the patient and the stimulated normal controls. In the subcortical structures of the patient with hallucinations there was also marked activation in both lentiform nuclei, thalamus, and caudate nuclei, again more intense in the right hemisphere (5, 4.7, and 4.2 S.D. above the mean, respectively, versus 5, 3.3, and 3 S.D. above the normal mean in the left hemisphere). Similar findings were observed in normal controls. Conclusions: After auditory stimulation with pure tones in the left ear of normal female volunteers, there is bilateral activation of area 39

  6. Sonic morphology: Aesthetic dimensional auditory spatial awareness

    Science.gov (United States)

    Whitehouse, Martha M.

    The sound and ceramic sculpture installation, "Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrète, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.

  7. A Transient Auditory Signal Shifts the Perceived Offset Position of a Moving Visual Object

    Directory of Open Access Journals (Sweden)

    Sung-En Chien

    2013-02-01

    Full Text Available. Information received from different sensory modalities profoundly influences human perception. For example, changes in the auditory flutter rate induce changes in the apparent flicker rate of a flashing light (Shipley, 1964). In the present study, we investigated whether auditory information would affect the perceived offset position of a moving object. In Experiment 1, a visual object moved toward the center of the computer screen and disappeared abruptly. A transient auditory signal was presented at different times relative to the moment when the object disappeared. The results showed that if the auditory signal was presented before the abrupt offset of the moving object, the perceived final position was shifted backward, implying that the perceived offset position was affected by the transient auditory information. In Experiment 2, we presented the transient auditory signal to either the left or the right ear. The results showed that the perceived offset shifted backward more strongly when the auditory signal was presented to the same side from which the moving object originated. In Experiment 3, we found that the perceived timing of the visual offset was not affected by the spatial relation between the auditory signal and the visual offset. The present results are interpreted as indicating that an auditory signal may influence the offset position of a moving object through both spatial and temporal processes.

  8. A transient auditory signal shifts the perceived offset position of a moving visual object.

    Science.gov (United States)

    Chien, Sung-En; Ono, Fuminori; Watanabe, Katsumi

    2013-01-01

    Information received from different sensory modalities profoundly influences human perception. For example, changes in the auditory flutter rate induce changes in the apparent flicker rate of a flashing light (Shipley, 1964). In the present study, we investigated whether auditory information would affect the perceived offset position of a moving object. In Experiment 1, a visual object moved toward the center of the computer screen and disappeared abruptly. A transient auditory signal was presented at different times relative to the moment when the object disappeared. The results showed that if the auditory signal was presented before the abrupt offset of the moving object, the perceived final position was shifted backward, implying that the perceived visual offset position was affected by the transient auditory information. In Experiment 2, we presented the transient auditory signal to either the left or the right ear. The results showed that the perceived visual offset shifted backward more strongly when the auditory signal was presented to the same side from which the moving object originated. In Experiment 3, we found that the perceived timing of the visual offset was not affected by the spatial relation between the auditory signal and the visual offset. The present results are interpreted as indicating that an auditory signal may influence the offset position of a moving object through both spatial and temporal processes. PMID:23439729

  9. Robust speech features representation based on computational auditory model

    Institute of Scientific and Technical Information of China (English)

    LU Xugang; JIA Chuan; DANG Jianwu

    2004-01-01

    A speech signal processing and feature extraction method based on a computational auditory model is proposed. The model is built on psychological and physiological knowledge together with digital signal processing methods: each stage of the hearing perception system has a corresponding computational model that simulates its function, and features at different levels of representation are extracted at each stage. A further processing step for the primary auditory spectrum, based on lateral inhibition, is proposed to extract much more robust speech features. All these features can be regarded as internal representations of the speech stimulus in the hearing system. Robust speech recognition experiments were conducted to test the robustness of the features. Results show that the representations based on the proposed computational auditory model are robust representations of speech signals.
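
    The lateral-inhibition step can be illustrated with a minimal sketch (not the authors' implementation; the surround width and inhibition strength are assumed parameters): each channel of a primary auditory spectrum is suppressed by a fraction of the local average of its neighbors, which flattens broadband energy and sharpens spectral peaks.

```python
import numpy as np

def lateral_inhibition(spectrum, surround_width=3, inhibition_strength=0.5):
    """Sharpen a spectral frame by subtracting a smoothed (surround) copy.

    spectrum: 1-D array of channel energies (e.g. a primary auditory spectrum).
    The surround is a moving average over neighboring channels; subtracting a
    fraction of it suppresses broadband energy and emphasizes spectral peaks.
    """
    kernel = np.ones(2 * surround_width + 1)
    kernel /= kernel.sum()
    surround = np.convolve(spectrum, kernel, mode="same")
    return np.maximum(spectrum - inhibition_strength * surround, 0.0)

# A flat spectrum with one peak: inhibition suppresses the uniform
# background more than the peak, increasing peak-to-background contrast.
frame = np.full(32, 1.0)
frame[16] = 4.0
sharpened = lateral_inhibition(frame)
```

    The half-wave rectification (`np.maximum(..., 0.0)`) mirrors the fact that neural firing rates cannot go negative; whether the original model rectifies at this stage is an assumption here.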

  10. The Effects of Auditory Contrast Tuning upon Speech Intelligibility

    Science.gov (United States)

    Killian, Nathan J.; Watkins, Paul V.; Davidson, Lisa S.; Barbour, Dennis L.

    2016-01-01

    We have previously identified neurons tuned to spectral contrast of wideband sounds in auditory cortex of awake marmoset monkeys. Because additive noise alters the spectral contrast of speech, contrast-tuned neurons, if present in human auditory cortex, may aid in extracting speech from noise. Given that this cortical function may be underdeveloped in individuals with sensorineural hearing loss, incorporating biologically-inspired algorithms into external signal processing devices could provide speech enhancement benefits to cochlear implantees. In this study we first constructed a computational signal processing algorithm to mimic auditory cortex contrast tuning. We then manipulated the shape of contrast channels and evaluated the intelligibility of reconstructed noisy speech using a metric to predict cochlear implant user perception. Candidate speech enhancement strategies were then tested in cochlear implantees with a hearing-in-noise test. Accentuation of intermediate contrast values or all contrast values improved computed intelligibility. Cochlear implant subjects showed significant improvement in noisy speech intelligibility with a contrast shaping procedure.
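
    As a generic illustration of contrast shaping (a sketch under stated assumptions, not the authors' algorithm; the `gain` parameter is hypothetical), spectral contrast can be accentuated frame by frame by scaling each frame's deviation from its own mean level:

```python
import numpy as np

def shape_contrast(log_spectrogram, gain=1.5):
    """Expand spectral contrast frame by frame.

    log_spectrogram: 2-D array (frames x frequency channels) of log energies.
    Each frame's deviation from its own mean level is scaled by `gain`;
    gain > 1 accentuates peaks and valleys (higher spectral contrast),
    gain < 1 flattens them. The frame mean (overall level) is preserved.
    """
    frame_mean = log_spectrogram.mean(axis=1, keepdims=True)
    return frame_mean + gain * (log_spectrogram - frame_mean)
```

    For example, a frame with log energies [0, 2, 4] (mean 2) becomes [-1, 2, 5] at gain 1.5: the spectral peaks stand further above the noise floor while the overall level is unchanged.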

  11. Altered intrinsic connectivity of the auditory cortex in congenital amusia.

    Science.gov (United States)

    Leveque, Yohana; Fauvel, Baptiste; Groussard, Mathilde; Caclin, Anne; Albouy, Philippe; Platel, Hervé; Tillmann, Barbara

    2016-07-01

    Congenital amusia, a neurodevelopmental disorder of music perception and production, has been associated with abnormal anatomical and functional connectivity in a right frontotemporal pathway. To investigate whether spontaneous connectivity in brain networks involving the auditory cortex is altered in the amusic brain, we ran a seed-based connectivity analysis, contrasting at-rest functional MRI data of amusic and matched control participants. Our results reveal reduced frontotemporal connectivity in amusia during resting state, as well as an overconnectivity between the auditory cortex and the default mode network (DMN). The findings suggest that the auditory cortex is intrinsically more engaged toward internal processes and less available to external stimuli in amusics compared with controls. Beyond amusia, our findings provide new evidence for the link between cognitive deficits in pathology and abnormalities in the connectivity between sensory areas and the DMN at rest. PMID:27009161

  12. Silicon modeling of pitch perception.

    OpenAIRE

    Lazzaro, J.; Mead, C

    1989-01-01

    We have designed and tested an integrated circuit that models human pitch perception. The chip receives as input a time-varying voltage corresponding to sound pressure at the ear and produces as output a map of perceived pitch. The chip is a physiological model; subcircuits on the chip correspond to known and proposed structures in the auditory system. Chip output approximates human performance in response to a variety of classical pitch-perception stimuli. The 125,000-transistor chip compute...

  13. Development of visuo-auditory integration in space and time

    Directory of Open Access Journals (Sweden)

    Monica Gori

    2012-09-01

    Full Text Available. Adults integrate multisensory information optimally (e.g. Ernst & Banks, 2002), while children are unable to integrate multisensory visual-haptic cues until 8-10 years of age (e.g. Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for visual-haptic judgments of size and orientation, perhaps reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflicting or non-conflicting) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task in both perceived time and precision thresholds. In contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory comparison against vision in the spatial visuo-auditory task and against audition in the temporal visuo-auditory task.
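
    The Bayesian predictions referred to in abstracts like this one reduce, under the standard Gaussian maximum-likelihood model, to reliability-weighted averaging of the unimodal estimates. A minimal sketch (illustrative values only, not data from the study):

```python
import numpy as np

def fuse_cues(est_a, sigma_a, est_v, sigma_v):
    """Maximum-likelihood fusion of an auditory and a visual estimate.

    Each unimodal estimate is weighted by its reliability (inverse variance).
    Returns the predicted bimodal estimate and bimodal standard deviation;
    the fused sigma is never larger than the better unimodal sigma, which
    is the "optimal integration" benchmark thresholds are compared against.
    """
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)
    w_v = 1.0 - w_a
    fused = w_a * est_a + w_v * est_v
    fused_sigma = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))
    return fused, fused_sigma
```

    With a precise auditory cue (sigma 1) and an imprecise visual cue (sigma 3), the fused estimate sits 90% of the way toward the auditory one, matching the auditory dominance observed in temporal tasks; swapping the sigmas reproduces visual dominance in spatial tasks.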

  14. Auditory Learning. Dimensions in Early Learning Series.

    Science.gov (United States)

    Zigmond, Naomi K.; Cicci, Regina

    The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…

  15. Visual–auditory spatial processing in auditory cortical neurons

    OpenAIRE

    Bizley, Jennifer K.; King, Andrew J

    2008-01-01

    Neurons responsive to visual stimulation have now been described in the auditory cortex of various species, but their functions are largely unknown. Here we investigate the auditory and visual spatial sensitivity of neurons recorded in 5 different primary and non-primary auditory cortical areas of the ferret. We quantified the spatial tuning of neurons by measuring the responses to stimuli presented across a range of azimuthal positions and calculating the mutual information (MI) between the ...
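
    The mutual information (MI) computation described above can be sketched with a simple plug-in estimator over a joint stimulus-response histogram (illustrative only; the stimulus coding, spike binning, and bias corrections used in the actual study are not reproduced here):

```python
import numpy as np

def mutual_information(stimulus_ids, spike_counts):
    """Plug-in MI estimate (in bits) between stimulus and response.

    Builds the joint probability table from paired observations, then
    applies I(S;R) = sum p(s,r) * log2[ p(s,r) / (p(s) * p(r)) ].
    """
    stims = np.unique(stimulus_ids)
    resps = np.unique(spike_counts)
    joint = np.zeros((len(stims), len(resps)))
    for s, r in zip(stimulus_ids, spike_counts):
        joint[np.searchsorted(stims, s), np.searchsorted(resps, r)] += 1
    joint /= joint.sum()
    p_s = joint.sum(axis=1, keepdims=True)  # marginal over responses
    p_r = joint.sum(axis=0, keepdims=True)  # marginal over stimuli
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (p_s @ p_r)[nz])).sum())
```

    A neuron whose spike count perfectly distinguishes two equiprobable azimuths carries 1 bit; identical response distributions across azimuths carry 0 bits. Note that plug-in estimates are upward-biased for small samples, which spatial-tuning studies typically correct for.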

  16. Pitch-induced responses in the right auditory cortex correlate with musical ability in normal listeners.

    Science.gov (United States)

    Puschmann, Sebastian; Özyurt, Jale; Uppenkamp, Stefan; Thiel, Christiane M

    2013-10-23

    Previous work compellingly shows the existence of functional and structural differences in human auditory cortex related to superior musical abilities observed in professional musicians. In this study, we investigated the relationship between musical abilities and auditory cortex activity in normal listeners who had not received a professional musical education. We used functional MRI to measure auditory cortex responses related to auditory stimulation per se and the processing of pitch and pitch changes, which represents a prerequisite for the perception of musical sequences. Pitch-evoked responses in the right lateral portion of Heschl's gyrus were correlated positively with the listeners' musical abilities, which were assessed using a musical aptitude test. In contrast, no significant relationship was found for noise stimuli, lacking any musical information, and for responses induced by pitch changes. Our results suggest that superior musical abilities in normal listeners are reflected by enhanced neural encoding of pitch information in the auditory system. PMID:23995293

  17. Sensory Responses during Sleep in Primate Primary and Secondary Auditory Cortex

    OpenAIRE

    Issa, Elias B.; Wang, Xiaoqin

    2008-01-01

    Most sensory stimuli do not reach conscious perception during sleep. It has been thought that the thalamus prevents the relay of sensory information to cortex during sleep, but the consequences for cortical responses to sensory signals in this physiological state remain unclear. We recorded from two auditory cortical areas downstream of the thalamus in naturally sleeping marmoset monkeys. Single neurons in primary auditory cortex either increased or decreased their responses during sleep comp...

  18. Discrimination of Communication Vocalizations by Single Neurons and Groups of Neurons in the Auditory Midbrain

    OpenAIRE

    Schneider, David M.; Woolley, Sarah M. N.

    2010-01-01

    Many social animals including songbirds use communication vocalizations for individual recognition. The perception of vocalizations depends on the encoding of complex sounds by neurons in the ascending auditory system, each of which is tuned to a particular subset of acoustic features. Here, we examined how well the responses of single auditory neurons could be used to discriminate among bird songs and we compared discriminability to spectrotemporal tuning. We then used biologically realistic...

  19. Auditory priming effects on the production of second language speech sounds

    OpenAIRE

    Leong, Lindsay Ann

    2013-01-01

    Research shows that speech perception and production are connected, however, the extent to which auditory speech stimuli can affect second language production has been less thoroughly explored. The current study presents Mandarin learners of English with an English vowel as an auditory prime (/i/, /ɪ/, /u/, /ʊ/) followed by an English target word containing either a tensity congruent (e.g. prime: /i/ - target: “peach”) or incongruent (e.g. prime: /i/ - target: “pitch”) vowel. Pronunciation of...

  20. The auditory characteristics of children with inner auditory canal stenosis.

    Science.gov (United States)

    Ai, Yu; Xu, Lei; Li, Li; Li, Jianfeng; Luo, Jianfen; Wang, Mingming; Fan, Zhaomin; Wang, Haibo

    2016-07-01

    Conclusions: This study shows that the prevalence of auditory neuropathy spectrum disorder (ANSD) in children with inner auditory canal (IAC) stenosis is much higher than in those without IAC stenosis, regardless of whether they have other inner ear anomalies. In addition, the auditory characteristics of ANSD with IAC stenosis are significantly different from those of ANSD without any middle or inner ear malformations. Objectives: To describe the auditory characteristics of children with IAC stenosis and to examine whether a narrow inner auditory canal is associated with ANSD. Method: A total of 21 children with inner auditory canal stenosis participated in this study, and a series of auditory tests was administered. A comparative analysis of the auditory characteristics of ANSD was then conducted according to whether the children had isolated IAC stenosis. Results: Wave V of the auditory brainstem response (ABR) was not observed in any of the patients, while a cochlear microphonic (CM) response was detected in 81.1% of ears with stenotic IAC. Sixteen of 19 (84.2%) ears with isolated IAC stenosis had a CM response present on the ABR waveforms. There was no significant difference in ANSD characteristics between the children with and without isolated IAC stenosis. PMID:26981851

  1. A virtual auditory environment for investigating the auditory signal processing of realistic sounds

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2008-01-01

    …reverberation. The environment is based on the ODEON room acoustic simulation software to render the acoustical scene. ODEON outputs are processed using a combination of different-order Ambisonic techniques to calculate multichannel room impulse responses (mRIR). Auralization is then obtained by convolution. … Throughout the VAE development, special care was taken in order to achieve a realistic auditory percept and to avoid "artifacts" such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29-loudspeaker setup using both objective and subjective measurement techniques.
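
    The auralization step described here, convolving a dry source signal with each channel of a multichannel room impulse response, can be sketched as follows (a minimal illustration, not the authors' implementation):

```python
import numpy as np

def auralize(anechoic, mrir):
    """Render a dry (anechoic) signal through a multichannel RIR.

    anechoic: 1-D array, the dry source signal.
    mrir: 2-D array (channels x taps), one room impulse response per
    loudspeaker channel. Each output channel is the linear convolution
    of the source with that channel's impulse response.
    """
    return np.stack([np.convolve(anechoic, ir) for ir in mrir])
```

    For long impulse responses, an FFT-based convolution (e.g. `scipy.signal.fftconvolve`) or a partitioned-convolution scheme would be used in practice for efficiency; the direct `np.convolve` above keeps the sketch dependency-free.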

  2. Reduced object related negativity response indicates impaired auditory scene analysis in adults with autistic spectrum disorder

    Directory of Open Access Journals (Sweden)

    Veema Lodhia

    2014-02-01

    Full Text Available. Auditory scene analysis provides a useful framework for understanding atypical auditory perception in autism. Specifically, a failure to segregate the incoming acoustic energy into distinct auditory objects might explain the aversive reaction autistic individuals have to certain auditory stimuli or environments. Previous research with non-autistic participants has demonstrated the presence of an object-related negativity (ORN) in the auditory event-related potential that indexes pre-attentive processes associated with auditory scene analysis. Also evident is a later P400 component that is attention-dependent and thought to be related to decision-making about auditory objects. We sought to determine whether there are differences between individuals with and without autism in the levels of processing indexed by these components. Electroencephalography (EEG) was used to measure brain responses from a group of 16 autistic adults and 16 age- and verbal-IQ-matched typically-developing adults. Auditory responses were elicited using lateralized dichotic pitch stimuli in which inter-aural timing differences create the illusory perception of a pitch that is spatially separated from a carrier noise stimulus. As in previous studies, control participants produced an ORN in response to the pitch stimuli. However, this component was significantly reduced in the participants with autism. In contrast, processing differences were not observed between the groups at the attention-dependent level (P400). These findings suggest that autistic individuals have difficulty segregating auditory stimuli into distinct auditory objects, and that this difficulty arises at an early pre-attentive level of processing.

  3. Auditory hallucinations: A review of the ERC "VOICE" project.

    Science.gov (United States)

    Hugdahl, Kenneth

    2015-06-22

    In this invited review I provide a selective overview of recent research on brain mechanisms and cognitive processes involved in auditory hallucinations. The review is focused on research carried out in the "VOICE" ERC Advanced Grant Project, funded by the European Research Council, but I also review and discuss the literature in general. Auditory hallucinations are suggested to be perceptual phenomena, with a neuronal origin in the speech perception areas of the temporal lobe. The phenomenology of auditory hallucinations is conceptualized along three domains, or dimensions: a perceptual dimension, experienced as someone speaking to the patient; a cognitive dimension, experienced as an inability to inhibit or ignore the voices; and an emotional dimension, experienced as the "voices" having a primarily negative, or sinister, emotional tone. I review cognitive, imaging, and neurochemistry data related to these dimensions, primarily the first two. The reviewed data are summarized in a model that sees auditory hallucinations as initiated from temporal lobe neuronal hyper-activation that draws attentional focus inward, and which is not inhibited due to frontal lobe hypo-activation. It is further suggested that this state is maintained through abnormal glutamate and possibly gamma-aminobutyric acid (GABA) transmitter mediation, which could point towards new pathways for pharmacological treatment. A final section discusses new methods of acquiring quantitative data on the phenomenology and subjective experience of auditory hallucinations that go beyond standard interview questionnaires, by suggesting an iPhone/iPod app. PMID:26110121

  4. Modulating Human Auditory Processing by Transcranial Electrical Stimulation.

    Science.gov (United States)

    Heimrath, Kai; Fiene, Marina; Rufener, Katharina S; Zaehle, Tino

    2016-01-01

    Transcranial electrical stimulation (tES) has become a valuable research tool for the investigation of neurophysiological processes underlying human action and cognition. In recent years, striking evidence for the neuromodulatory effects of transcranial direct current stimulation, transcranial alternating current stimulation, and transcranial random noise stimulation has emerged. While a wealth of knowledge has been gained about tES in the motor domain and, to a lesser extent, about its ability to modulate human cognition, surprisingly little is known about its impact on perceptual processing, particularly in the auditory domain. Moreover, while only a few studies have systematically investigated the impact of auditory tES, it has already been applied in a large number of clinical trials, leading to a remarkable imbalance between basic and clinical research on auditory tES. Here, we review the state of the art of tES application in the auditory domain, focussing on the impact of neuromodulation on acoustic perception and its potential for clinical application in the treatment of auditory-related disorders. PMID:27013969

  5. Measuring the dynamics of neural responses in primary auditory cortex

    OpenAIRE

    Depireux, Didier A; Simon, Jonathan Z.; Shamma, Shihab A.

    1998-01-01

    We review recent developments in the measurement of the dynamics of the response properties of auditory cortical neurons to broadband sounds, which is closely related to the perception of timbre. The emphasis is on a method that characterizes the spectro-temporal properties of single neurons to dynamic, broadband sounds, akin to the drifting gratings used in vision. The method treats the spectral and temporal aspects of the response on an equal footing.

  6. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    Science.gov (United States)

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  7. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mec

  8. Acoustic Shadows: An Auditory Exploration of the Sense of Space

    Directory of Open Access Journals (Sweden)

    Frank Dufour

    2011-12-01

    Full Text Available. This paper examines the question of auditory detection of the movements of silent objects in noisy environments. The approach to studying and exploring this phenomenon is primarily based on the framework of the ecology of perception defined by James Gibson (Gibson, 1979), in the sense that it focuses on the direct auditory perception of events, or "structured energy that specifies properties of the environment" (Michaels & Carello, 1981, p. 157). The goal of this study is threefold. Theoretical: for various reasons, this kind of acoustic situation has not been extensively studied by traditional acoustics and psychoacoustics; this project therefore demonstrates and supports the pertinence of the ecology of perception for the description and explanation of such complex phenomena. Practical: like echolocation, the perception of acoustic shadows can be improved by practice; this project intends to contribute to the acknowledgment of this way of listening and to help individuals placed in noisy environments without the support of vision acquire a detailed detection of the movements occurring in those environments. Artistic: this project explores a new artistic expression based on the creation and exploration of complex multisensory environments. Acoustic Shadows, a multimedia interactive composition, is being developed on the premises of the ecological approach to perception. The last dimension of this project is meant to be a contribution to the sonic representation of space in films and in computer-generated virtual environments by producing simulations of acoustic shadows.

  9. The Role of Visual Spatial Attention in Audiovisual Speech Perception

    OpenAIRE

    Andersen, Tobias; Tiippana, K.; Laarni, J.; Kojo, I.; Sams, M.

    2008-01-01

    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive but recent reports have challenged this view. Here we study the effect of visual spatial attention on the McGurk effect. By presenting a movie of two faces symmetrically displaced to each side of a cen...

  10. Aero-tactile integration in speech perception

    OpenAIRE

    Gick, Bryan; Derrick, Donald

    2009-01-01

    Visual information from a speaker’s face can enhance1 or interfere with2 accurate auditory perception. This integration of information across auditory and visual streams has been observed in functional imaging studies3,4, and has typically been attributed to the frequency and robustness with which perceivers jointly encounter event-specific information from these two modalities5. Adding the tactile modality has long been considered a crucial next step in understanding multisensory integration...

  11. Hypermnesia using auditory input.

    Science.gov (United States)

    Allen, J

    1992-07-01

    The author investigated whether hypermnesia would occur with auditory input. In addition, the author examined the effects of subjects' knowledge that they would later be asked to recall the stimuli. Two groups of 26 subjects each were given three successive recall trials after they listened to an audiotape of 59 high-imagery nouns. The subjects in the uninformed group were not told that they would later be asked to remember the words; those in the informed group were. Hypermnesia was evident, but only in the uninformed group. PMID:1447564

  12. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  13. Auditory Temporal Structure Processing in Dyslexia: Processing of Prosodic Phrase Boundaries Is Not Impaired in Children with Dyslexia

    Science.gov (United States)

    Geiser, Eveline; Kjelgaard, Margaret; Christodoulou, Joanna A.; Cyr, Abigail; Gabrieli, John D. E.

    2014-01-01

    Reading disability in children with dyslexia has been proposed to reflect impairment in auditory timing perception. We investigated one aspect of timing perception--"temporal grouping"--as present in prosodic phrase boundaries of natural speech, in age-matched groups of children, ages 6-8 years, with and without dyslexia. Prosodic phrase…

  14. Modelling the emergence and dynamics of perceptual organisation in auditory streaming.

    Science.gov (United States)

    Mill, Robert W; Bőhm, Tamás M; Bendixen, Alexandra; Winkler, István; Denham, Susan L

    2013-01-01

    Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives, a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the dynamics
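The two-stage account described above (pattern discovery followed by competition between incompatible patterns) can be illustrated with a deliberately minimal sketch. This is a hypothetical toy, not the authors' published model: it scores candidate periodicities over a symbol sequence by their predictive success and lets the strongest one dominate, standing in for the competition between pattern representations.

```python
def streaming_sketch(sequence, decay=0.9, boost=1.0):
    """Toy two-stage sketch (hypothetical, not the published model):
    (1) candidate patterns = periodicities of 1-4 events, scored by
        how well "events repeat at this period" predicts the input;
    (2) competition -> the strongest candidate wins the organisation."""
    strengths = {period: 0.0 for period in (1, 2, 3, 4)}
    for i, event in enumerate(sequence):
        for period in strengths:
            if i >= period:
                hit = sequence[i - period] == event  # did the prediction succeed?
                strengths[period] = decay * strengths[period] + (boost if hit else 0.0)
    return max(strengths, key=strengths.get)  # winner-take-all competition


# A classic ABA_ streaming sequence is best explained by a 4-event cycle,
# while a uniform sequence is best explained by simple repetition.
print(streaming_sketch("ABA-ABA-ABA-"))  # 4
print(streaming_sketch("AAAAAA"))        # 1
```

A real implementation would create and prune pattern representations on the fly and would exhibit the bistable switching the paper analyses; the sketch only shows the scoring-plus-competition core.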

  15. Modelling the emergence and dynamics of perceptual organisation in auditory streaming.

    Directory of Open Access Journals (Sweden)

    Robert W Mill

    Full Text Available Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives, a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the dynamics

  16. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  17. Adaptation Aftereffects in Vocal Emotion Perception Elicited by Expressive Faces and Voices

    OpenAIRE

    Verena G Skuk; Schweinberger, Stefan R.

    2013-01-01

    The perception of emotions is often suggested to be multimodal in nature, and bimodal as compared to unimodal (auditory or visual) presentation of emotional stimuli can lead to superior emotion recognition. In previous studies, contrastive aftereffects in emotion perception caused by perceptual adaptation have been shown for faces and for auditory affective vocalization, when adaptors were of the same modality. By contrast, crossmodal aftereffects in the perception of emotional vocalizations ...

  18. Auditory performance in an open sound field

    Science.gov (United States)

    Fluitt, Kim F.; Letowski, Tomasz; Mermagen, Timothy

    2003-04-01

    Detection and recognition of acoustic sources in an open field are important elements of situational awareness on the battlefield. They are affected by many technical and environmental conditions such as type of sound, distance to a sound source, terrain configuration, meteorological conditions, hearing capabilities of the listener, level of background noise, and the listener's familiarity with the sound source. A limited body of knowledge about auditory perception of sources located over long distances makes it difficult to develop models predicting auditory behavior on the battlefield. The purpose of the present study was to determine the listener's abilities to detect, recognize, localize, and estimate distances to sound sources from 25 to 800 m from the listening position. Data were also collected for meteorological conditions (wind direction and strength, temperature, atmospheric pressure, humidity) and background noise level for each experimental trial. Forty subjects (men and women, ages 18 to 25) participated in the study. Nine types of sounds were presented from six loudspeakers in random order; each series was presented four times. Partial results indicate that both detection and recognition declined at distances greater than approximately 200 m and that distances were grossly underestimated by listeners. Specific results will be presented.
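As a rough back-of-the-envelope companion to the distances discussed above (25 to 800 m), free-field level at the listener can be sketched as spherical spreading plus a simple linear absorption term. This is a generic acoustics approximation, not the study's model; the 0.5 dB per 100 m default absorption is an illustrative placeholder, since real values depend on frequency, temperature, and humidity.

```python
import math

def spl_at_distance(spl_ref_db, ref_m, dist_m, absorption_db_per_100m=0.5):
    """Illustrative free-field estimate (not the study's model):
    spherical spreading (-6 dB per doubling of distance) plus a
    crude linear atmospheric-absorption term."""
    spreading_loss = 20 * math.log10(dist_m / ref_m)                 # inverse-square law in dB
    absorption_loss = absorption_db_per_100m * (dist_m - ref_m) / 100
    return spl_ref_db - spreading_loss - absorption_loss


# A 90 dB source level measured at 25 m drops by ~30 dB of spreading
# loss alone by 800 m, which helps explain the decline in detection
# and recognition the study reports beyond ~200 m.
print(round(spl_at_distance(90, 25, 800), 1))
```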

  19. Music, rhythm, rise time perception and developmental dyslexia: Perception of musical meter predicts reading and phonology

    OpenAIRE

    M. Huss; Verney, J.P.; Fosker, Tim; Mead, N.; Goswami, U.

    2011-01-01

    Introduction: Rhythm organises musical events into patterns and forms, and rhythm perception in music is usually studied by using metrical tasks. Metrical structure also plays an organisational function in the phonology of language, via speech prosody, and there is evidence for rhythmic perceptual difficulties in developmental dyslexia. Here we investigate the hypothesis that the accurate perception of musical metrical structure is related to basic auditory perception of rise time, and also t...

  20. Multiple benefits of personal FM system use by children with auditory processing disorder (APD).

    Science.gov (United States)

    Johnston, Kristin N; John, Andrew B; Kreisman, Nicole V; Hall, James W; Crandell, Carl C

    2009-01-01

    Children with auditory processing disorders (APD) were fitted with Phonak EduLink FM devices for home and classroom use. Baseline measures of the children with APD, prior to FM use, documented significantly lower speech-perception scores, evidence of decreased academic performance, and psychosocial problems in comparison to an age- and gender-matched control group. Repeated measures during the school year demonstrated speech-perception improvement in noisy classroom environments as well as significant academic and psychosocial benefits. Compared with the control group, the children with APD showed greater speech-perception advantage with FM technology. Notably, after prolonged FM use, even unaided (no FM device) speech-perception performance was improved in the children with APD, suggesting the possibility of fundamentally enhanced auditory system function. PMID:19925345

  1. Auditory perspective taking.

    Science.gov (United States)

    Martinson, Eric; Brock, Derek

    2013-06-01

    Effective communication with a mobile robot using speech is a difficult problem even when you can control the auditory scene. Robot self-noise or ego noise, echoes and reverberation, and human interference are all common sources of decreased intelligibility. Moreover, in real-world settings, these problems are routinely aggravated by a variety of sources of background noise. Military scenarios can be punctuated by high decibel noise from materiel and weaponry that would easily overwhelm a robot's normal speaking volume. Moreover, in nonmilitary settings, fans, computers, alarms, and transportation noise can cause enough interference to make a traditional speech interface unusable. This work presents and evaluates a prototype robotic interface that uses perspective taking to estimate the effectiveness of its own speech presentation and takes steps to improve intelligibility for human listeners. PMID:23096077

  2. Tactile feedback improves auditory spatial localization

    OpenAIRE

    Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds b...

  3. Tactile feedback improves auditory spatial localization

    OpenAIRE

    Monica eGori; Tiziana eVercillo; Giulio eSandini; David eBurr

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds b...

  4. Auditory and non-auditory effects of noise on health

    OpenAIRE

    Basner, Mathias; Babisch, Wolfgang; Davis, Adrian; Brink, Mark; Clark, Charlotte; Janssen, Sabine; Stansfeld, Stephen

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms involved in noise-induced hair-cell and nerve damage has substantially increased, and preventive and therapeutic drugs will probably become available within 10 years. Evidence of the non-aud...

  5. Auditory Neuropathy in Two Patients with Generalized Neuropathic Disorder: A Case Report

    Directory of Open Access Journals (Sweden)

    P Ahmadi

    2008-06-01

    Full Text Available Background: Although it is not a new disorder, in recent times we have attained a greater understanding of auditory neuropathy (AN). In this type of hearing impairment, cochlear hair cells function, but AN patients suffer from disordered neural transmission in the auditory pathway. An auditory neuropathy profile often occurs as part of a generalized neuropathic disorder, as seen in approximately 30-40% of all reported auditory neuropathy/auditory dys-synchrony (AN/AD) cases, with approximately 80% of patients reporting symptom onset over the age of 15 years. In the present report, the results of audiologic tests (behavioral, physiologic and evoked potentials) on two young patients with generalized neuropathy are discussed. Case report: Two brothers, 26 and 17 years old, presented with impaired speech perception and movement difficulties that started at 12 years of age and progressed over time. At their last examination, there was a moderate-to-severe flat audiogram in the older patient and mild low-tone loss in the younger one. The major difficulty of the patients was severe speech perception impairment that was not compatible with their hearing thresholds. Paresthesia, sural muscle contraction and pain, and balance disorder were the first symptoms of the older brother. He can now move only with crutches, and his finger muscle tonicity has decreased remarkably, with marked fatigue after a short period of walking. Increasing movement difficulties were noted at his last visit. Visual neuropathy had been reported in repeated visual-system examinations of the older brother, with similar, albeit less severe, symptoms in the younger brother. In the present study of these patients, behavioral investigations included pure-tone audiometry and speech discrimination scoring. Physiologic studies consisted of Transient Evoked Otoacoustic Emissions (TEOAE) and acoustic reflexes. Electrophysiologic auditory tests were also performed to determine

  6. Synchronization and phonological skills: precise auditory timing hypothesis (PATH)

    Directory of Open Access Journals (Sweden)

    Adam eTierney

    2014-11-01

    Full Text Available Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The precise auditory timing hypothesis predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.

  7. Experience-dependent learning of auditory temporal resolution: evidence from Carnatic-trained musicians.

    Science.gov (United States)

    Mishra, Srikanta K; Panda, Manasa R

    2014-01-22

    Musical training and experience greatly enhance the cortical and subcortical processing of sounds, which may translate to superior auditory perceptual acuity. Auditory temporal resolution is a fundamental perceptual aspect that is critical for speech understanding in noise in listeners with normal hearing, auditory disorders, cochlear implants, and language disorders, yet very few studies have focused on music-induced learning of temporal resolution. This report demonstrates that Carnatic musical training and experience have a significant impact on temporal resolution assayed by gap detection thresholds. This experience-dependent learning in Carnatic-trained musicians exhibits the universal aspects of human perception and plasticity. The present work adds the perceptual component to a growing body of neurophysiological and imaging studies that suggest plasticity of the peripheral auditory system at the level of the brainstem. The present work may be intriguing to researchers and clinicians alike interested in devising cross-cultural training regimens to alleviate listening-in-noise difficulties. PMID:24264076
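Gap-detection thresholds of the kind assayed here are commonly measured with adaptive staircase procedures. The sketch below is a generic 2-down-1-up rule (which converges toward the 70.7%-correct point), not the authors' actual procedure; for determinism it uses a simulated listener who detects any gap at or above a fixed true threshold.

```python
def staircase_threshold(true_threshold_ms, start_ms=20.0, step_ms=1.0,
                        n_reversals=8):
    """Generic 2-down-1-up staircase sketch for gap detection
    (illustrative, not the authors' procedure). The simulated
    listener deterministically detects any gap >= true_threshold_ms."""
    gap = start_ms
    correct_run = 0
    direction = 0          # -1 descending, +1 ascending, 0 at start
    reversals = []
    while len(reversals) < n_reversals:
        detected = gap >= true_threshold_ms   # simulated trial response
        if detected:
            correct_run += 1
            if correct_run == 2:              # two correct in a row -> make it harder
                correct_run = 0
                if direction == +1:
                    reversals.append(gap)     # turning point: up -> down
                direction = -1
                gap = max(step_ms, gap - step_ms)
        else:
            correct_run = 0                   # one wrong -> make it easier
            if direction == -1:
                reversals.append(gap)         # turning point: down -> up
            direction = +1
            gap += step_ms
    return sum(reversals) / len(reversals)    # threshold = mean of reversal gaps

print(staircase_threshold(5.0))  # oscillates between 4 and 5 ms -> 4.5
```

Real procedures add randomized catch trials and a probabilistic listener; the deterministic observer here just makes the convergence behaviour easy to inspect.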

  8. Auditory Processing Disorder in Children

    Science.gov (United States)


  9. Auditory Processing Disorder (For Parents)

    Science.gov (United States)


  10. Asymmetry in primary auditory cortex activity in tinnitus patients and controls

    NARCIS (Netherlands)

    Geven, L. I.; de Kleine, E.; Willemsen, A. T. M.; van Dijk, P.

    2014-01-01

    Tinnitus is a bothersome phantom sound percept and its neural correlates are not yet disentangled. Previously published papers, using [(18)F]-fluoro-deoxyglucose positron emission tomography (FDG-PET), have suggested an increased metabolism in the left primary auditory cortex in tinnitus patients. T…

  11. An EMG Study of the Lip Muscles during Covert Auditory Verbal Hallucinations in Schizophrenia

    Science.gov (United States)

    Rapin, Lucile; Dohen, Marion; Polosan, Mircea; Perrier, Pascal; Loevenbruck, Hélène

    2013-01-01

    Purpose: "Auditory verbal hallucinations" (AVHs) are speech perceptions in the absence of external stimulation. According to an influential theoretical account of AVHs in schizophrenia, a deficit in inner-speech monitoring may cause the patients' verbal thoughts to be perceived as external voices. The account is based on a…

  12. Can Spectro-Temporal Complexity Explain the Autistic Pattern of Performance on Auditory Tasks?

    Science.gov (United States)

    Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter

    2006-01-01

    To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, available behavioural, ERP, and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism were reviewed. Tasks involving simple material…

  13. Expectation and Attention in Hierarchical Auditory Prediction

    Science.gov (United States)

    Noreika, Valdas; Gueorguiev, David; Blenkmann, Alejandro; Kochen, Silvia; Ibáñez, Agustín; Owen, Adrian M.; Bekinschtein, Tristan A.

    2013-01-01

    Hierarchical predictive coding suggests that attention in humans emerges from increased precision in probabilistic inference, whereas expectation biases attention in favor of contextually anticipated stimuli. We test these notions within auditory perception by independently manipulating top-down expectation and attentional precision alongside bottom-up stimulus predictability. Our findings support an integrative interpretation of commonly observed electrophysiological signatures of neurodynamics, namely mismatch negativity (MMN), P300, and contingent negative variation (CNV), as manifestations along successive levels of predictive complexity. Early first-level processing indexed by the MMN was sensitive to stimulus predictability: here, attentional precision enhanced early responses, but explicit top-down expectation diminished it. This pattern was in contrast to later, second-level processing indexed by the P300: although sensitive to the degree of predictability, responses at this level were contingent on attentional engagement and in fact sharpened by top-down expectation. At the highest level, the drift of the CNV was a fine-grained marker of top-down expectation itself. Source reconstruction of high-density EEG, supported by intracranial recordings, implicated temporal and frontal regions differentially active at early and late levels. The cortical generators of the CNV suggested that it might be involved in facilitating the consolidation of context-salient stimuli into conscious perception. These results provide convergent empirical support to promising recent accounts of attention and expectation in predictive coding. PMID:23825422

  14. Natural auditory scene statistics shapes human spatial hearing.

    Science.gov (United States)

    Parise, Cesare V; Knorre, Katharina; Ernst, Marc O

    2014-04-22

    Human perception, cognition, and action are laced with seemingly arbitrary mappings. In particular, sound has a strong spatial connotation: Sounds are high and low, melodies rise and fall, and pitch systematically biases perceived sound elevation. The origins of such mappings are unknown. Are they the result of physiological constraints, do they reflect natural environmental statistics, or are they truly arbitrary? We recorded natural sounds from the environment, analyzed the elevation-dependent filtering of the outer ear, and measured frequency-dependent biases in human sound localization. We find that auditory scene statistics reveals a clear mapping between frequency and elevation. Perhaps more interestingly, this natural statistical mapping is tightly mirrored in both ear-filtering properties and in perceived sound location. This suggests that both sound localization behavior and ear anatomy are fine-tuned to the statistics of natural auditory scenes, likely providing the basis for the spatial connotation of human hearing. PMID:24711409
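Quantifying a frequency-elevation mapping of the kind reported above ultimately comes down to correlating the two variables across recorded scenes. A plain stdlib Pearson correlation (a generic tool, not the paper's analysis pipeline; the example values are made-up illustration data) would look like:

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient (stdlib only), of the kind
    one could use to relate dominant sound frequency to source elevation.
    Illustrative; not the paper's actual analysis."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical data: higher-frequency sources tend to sit higher in space.
freqs_khz = [0.5, 1.0, 2.0, 4.0, 8.0]
elevations_deg = [-20, -5, 5, 15, 30]
print(round(pearson(freqs_khz, elevations_deg), 2))
```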

  15. Auditory size-deviant detection in adults and newborn infants

    Science.gov (United States)

    Vestergaard, Martin D.; Háden, Gábor P.; Shtyrov, Yury; Patterson, Roy D.; Pulvermüller, Friedemann; Denham, Sue L.; Sziller, István; Winkler, István

    2010-01-01

    Auditory size perception refers to the ability to make accurate judgements of the size of a sound source based solely upon the sound emitted from the source. Electro-physiological and behavioural data were collected to test whether sound-source size parameters are detected from task-irrelevant sequences in adults and newborn infants. The mismatch negativity (MMN) obtained from adults indexed automatic detection of changes in size for voices, musical instruments and animal calls, regardless of whether the acoustic change indicated larger or smaller sources. Neonates detected changes in the size of a musical instrument. The data are consistent with the notion that auditory size-deviant detection in humans is an innate automatic process. This conclusion is compatible with the theory that the ability to assess the size of sound sources evolved because it provided selective advantage of being able to detect larger (more competent) suitors and larger (more dangerous) predators. PMID:19596043

  16. Short-term visual deprivation improves the perception of harmonicity.

    Science.gov (United States)

    Landry, Simon P; Shiller, Douglas M; Champoux, François

    2013-12-01

    Neuroimaging studies have shown that the perception of auditory stimuli involves occipital cortical regions traditionally associated with visual processing, even in the absence of any overt visual component to the task. Analogous behavioral evidence of an interaction between visual and auditory processing during purely auditory tasks comes from studies of short-term visual deprivation on the perception of auditory cues; however, the results of such studies remain equivocal. Although some data have suggested that visual deprivation significantly improves loudness and pitch discrimination and reduces spatial localization inaccuracies, it is still unclear whether such improvement extends to the perception of spectrally complex cues, such as those involved in speech and music perception. We present data demonstrating that a 90-min period of visual deprivation causes a transient improvement in the perception of harmonicity: a spectrally complex cue that plays a key role in music and speech perception. The results provide clear behavioral evidence supporting a role for the visual system in the processing of complex auditory stimuli, even in the absence of any visual component to the task. PMID:23957309

  17. Auditory Verbal Cues Alter the Perceived Flavor of Beverages and Ease of Swallowing: A Psychometric and Electrophysiological Analysis

    Directory of Open Access Journals (Sweden)

    Aya Nakamura

    2013-01-01

    Full Text Available We investigated the possible effects of auditory verbal cues on flavor perception and swallow physiology for younger and elderly participants. Apple juice, aojiru (grass juice), and water were ingested with or without auditory verbal cues. Flavor perception and ease of swallowing were measured using a visual analog scale, and swallow physiology by surface electromyography and cervical auscultation. The auditory verbal cues had significant positive effects on flavor and ease of swallowing as well as on swallow physiology. The taste score and the ease of swallowing score significantly increased when the participant’s anticipation was primed by accurate auditory verbal cues. There was no significant effect of auditory verbal cues on distaste score. Regardless of age, the maximum suprahyoid muscle activity significantly decreased when a beverage was ingested without auditory verbal cues. The interval between the onset of swallowing sounds and the peak timing point of the infrahyoid muscle activity significantly shortened when the anticipation induced by the cue was contradicted in the elderly participant group. These results suggest that auditory verbal cues can improve the perceived flavor of beverages and swallow physiology.

  18. Non-auditory factors affecting urban soundscape evaluation.

    Science.gov (United States)

    Jeon, Jin Yong; Lee, Pyoung Jik; Hong, Joo Young; Cabrera, Densil

    2011-12-01

    The aim of this study is to characterize urban spaces, which combine landscape, acoustics, and lighting, and to investigate people's perceptions of urban soundscapes through quantitative and qualitative analyses. A general questionnaire survey and soundwalk were performed to investigate soundscape perception in urban spaces. Non-auditory factors (visual image, day lighting, and olfactory perceptions), as well as acoustic comfort, were selected as the main contexts that affect soundscape perception, and context preferences and overall impressions were evaluated using an 11-point numerical scale. For qualitative analysis, a semantic differential test was performed in the form of a social survey, and subjects were also asked to describe their impressions during a soundwalk. The results showed that urban soundscapes can be characterized by soundmarks, and soundscape perceptions are dominated by acoustic comfort, visual images, and day lighting, whereas reverberance in urban spaces does not yield consistent preference judgments. It is posited that the subjective evaluation of reverberance can be replaced by physical measurements. The categories extracted from the qualitative analysis revealed that spatial impressions such as openness and density emerged as some of the contexts of soundscape perception. PMID:22225033

  19. Neural basis of the time window for subjective motor-auditory integration

    Directory of Open Access Journals (Sweden)

    Koichi Toida

    2016-01-01

    Full Text Available Temporal contiguity between an action and the corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and an N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with conscious detection of the delay (the 50% detection point was around 200 ms), and hence with a reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤ 200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components.
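The 50% detection point quoted in this abstract is the kind of quantity read off a psychometric function. A minimal stand-in is linear interpolation between measured detection proportions rather than a full sigmoid fit; the delays and proportions below are made-up illustration data, not the study's results.

```python
def detection_point(delays_ms, p_detect, criterion=0.5):
    """Estimate the delay at which detection probability crosses
    `criterion` by linear interpolation between measured points.
    A simple stand-in for a full psychometric-function fit."""
    points = list(zip(delays_ms, p_detect))
    for (d0, p0), (d1, p1) in zip(points, points[1:]):
        if p0 <= criterion <= p1:
            # linear interpolation within the bracketing interval
            return d0 + (criterion - p0) * (d1 - d0) / (p1 - p0)
    raise ValueError("criterion not bracketed by the data")


# Hypothetical delay-detection data: 50% point falls between 200 and 300 ms.
delays = [50, 100, 200, 300, 400]
proportions = [0.05, 0.20, 0.40, 0.80, 0.95]
print(round(detection_point(delays, proportions), 1))
```

A real analysis would fit a logistic or cumulative-Gaussian function to trial-level responses; interpolation merely shows where the estimate comes from.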

  20. Rise Time Perception and Detection of Syllable Stress in Adults with Developmental Dyslexia

    Science.gov (United States)

    Leong, Victoria; Hamalainen, Jarmo; Soltesz, Fruzsina; Goswami, Usha

    2011-01-01

    Introduction: The perception of syllable stress has not been widely studied in developmental dyslexia, despite strong evidence for auditory rhythmic perceptual difficulties. Here we investigate the hypothesis that perception of sound rise time is related to the perception of syllable stress in adults with developmental dyslexia. Methods: A…

  1. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters pyramidal cell spike timing and rates of sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), and greater suppression at 20 ms pairing-intervals for single unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience

  2. Visibility of speech articulation enhances auditory phonetic convergence.

    Science.gov (United States)

    Dias, James W; Rosenblum, Lawrence D

    2016-01-01

    Talkers automatically imitate aspects of perceived speech, a phenomenon known as phonetic convergence. Talkers have previously been found to converge to auditory and visual speech information. Furthermore, talkers converge more to the speech of a conversational partner who is seen and heard, relative to one who is just heard (Dias & Rosenblum Perception, 40, 1457-1466, 2011). A question raised by this finding is what visual information facilitates the enhancement effect. In the following experiments, we investigated the possible contributions of visible speech articulation to visual enhancement of phonetic convergence within the noninteractive context of a shadowing task. In Experiment 1, we examined the influence of the visibility of a talker on phonetic convergence when shadowing auditory speech either in the clear or in low-level auditory noise. The results suggest that visual speech can compensate for convergence that is reduced by auditory noise masking. Experiment 2 further established the visibility of articulatory mouth movements as being important to the visual enhancement of phonetic convergence. Furthermore, the word frequency and phonological neighborhood density characteristics of the words shadowed were found to significantly predict phonetic convergence in both experiments. Consistent with previous findings (e.g., Goldinger Psychological Review, 105, 251-279, 1998), phonetic convergence was greater when shadowing low-frequency words. Convergence was also found to be greater for low-density words, contrasting with previous predictions of the effect of phonological neighborhood density on auditory phonetic convergence (e.g., Pardo, Jordan, Mallari, Scanlon, & Lewandowski Journal of Memory and Language, 69, 183-195, 2013). Implications of the results for a gestural account of phonetic convergence are discussed. PMID:26358471

  3. Auditory dominance in motor-sensory temporal recalibration.

    Science.gov (United States)

    Sugano, Yoshimori; Keetels, Mirjam; Vroomen, Jean

    2016-05-01

    Perception of synchrony between one's own action (e.g. a finger tap) and the sensory feedback thereof (e.g. a flash or click) can be shifted after exposure to an induced delay (temporal recalibration effect, TRE). It remains elusive, however, whether the same mechanism underlies motor-visual (MV) and motor-auditory (MA) TRE. We examined this by measuring crosstalk between MV- and MA-delayed feedbacks. During an exposure phase, participants pressed a mouse at a constant pace while receiving visual or auditory feedback that was either delayed (+150 ms) or subjectively synchronous (+50 ms). During a post-test, participants then tried to tap in sync with visual or auditory pacers. TRE manifested itself as a compensatory shift in the tap-pacer asynchrony (a larger anticipation error after exposure to delayed feedback). In experiment 1, MA and MV feedback were either both synchronous (MV-sync and MA-sync) or both delayed (MV-delay and MA-delay), whereas in experiment 2, different delays were mixed across alternating trials (MV-sync and MA-delay or MV-delay and MA-sync). Exposure to consistent delays induced equally large TREs for auditory and visual pacers with similar build-up courses. However, with mixed delays, we found that synchronized sounds erased MV-TRE, but synchronized flashes did not erase MA-TRE. These results suggest that similar mechanisms underlie MA- and MV-TRE, but that auditory feedback is more potent than visual feedback to induce a rearrangement of motor-sensory timing. PMID:26610349

  4. Plasticity in the Human Speech Motor System Drives Changes in Speech Perception

    OpenAIRE

    Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.; Ostry, David J.

    2014-01-01

    Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory percep...

  5. Auditory and non-auditory effects of noise on health.

    Science.gov (United States)

    Basner, Mathias; Babisch, Wolfgang; Davis, Adrian; Brink, Mark; Clark, Charlotte; Janssen, Sabine; Stansfeld, Stephen

    2014-04-12

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms involved in noise-induced hair-cell and nerve damage has substantially increased, and preventive and therapeutic drugs will probably become available within 10 years. Evidence of the non-auditory effects of environmental noise exposure on public health is growing. Observational and experimental studies have shown that noise exposure leads to annoyance, disturbs sleep and causes daytime sleepiness, affects patient outcomes and staff performance in hospitals, increases the occurrence of hypertension and cardiovascular disease, and impairs cognitive performance in schoolchildren. In this Review, we stress the importance of adequate noise prevention and mitigation strategies for public health. PMID:24183105

  6. Individual differences in auditory abilities.

    Science.gov (United States)

    Kidd, Gary R; Watson, Charles S; Gygi, Brian

    2007-07-01

    Performance on 19 auditory discrimination and identification tasks was measured for 340 listeners with normal hearing. Test stimuli included single tones, sequences of tones, amplitude-modulated and rippled noise, temporal gaps, speech, and environmental sounds. Principal components analysis and structural equation modeling of the data support the existence of a general auditory ability and four specific auditory abilities. The specific abilities are (1) loudness and duration (overall energy) discrimination; (2) sensitivity to temporal envelope variation; (3) identification of highly familiar sounds (speech and nonspeech); and (4) discrimination of unfamiliar simple and complex spectral and temporal patterns. Examination of Scholastic Aptitude Test (SAT) scores for a large subset of the population revealed little or no association between general or specific auditory abilities and general intellectual ability. The findings provide a basis for research to further specify the nature of the auditory abilities. Of particular interest are results suggestive of a familiar sound recognition (FSR) ability, apparently specialized for sound recognition on the basis of limited or distorted information. This FSR ability is independent of normal variation both in spectral-temporal acuity and in general intellectual ability. PMID:17614500

  7. Neural and behavioral investigations into timbre perception.

    Science.gov (United States)

    Town, Stephen M; Bizley, Jennifer K

    2013-01-01

    Timbre is the attribute that distinguishes sounds of equal pitch, loudness and duration. It contributes to our perception and discrimination of different vowels and consonants in speech, instruments in music and environmental sounds. Here we begin by reviewing human timbre perception and the spectral and temporal acoustic features that give rise to timbre in speech, musical and environmental sounds. We also consider the perception of timbre by animals, both in the case of human vowels and non-human vocalizations. We then explore the neural representation of timbre, first within the peripheral auditory system and later at the level of the auditory cortex. We examine the neural networks that are implicated in timbre perception and the computations that may be performed in auditory cortex to enable listeners to extract information about timbre. We consider whether single neurons in auditory cortex are capable of representing spectral timbre independently of changes in other perceptual attributes and the mechanisms that may shape neural sensitivity to timbre. Finally, we conclude by outlining some of the questions that remain about the role of neural mechanisms in behavior and consider some potentially fruitful avenues for future research. PMID:24312021

  8. Neural and behavioral investigations into timbre perception

    Directory of Open Access Journals (Sweden)

    Stephen Michael Town

    2013-11-01

    Full Text Available Timbre is the attribute that distinguishes sounds of equal pitch, loudness and duration. It contributes to our perception and discrimination of different vowels and consonants in speech, instruments in music and environmental sounds. Here we begin by reviewing human timbre perception and the spectral and temporal acoustic features that give rise to timbre in speech, musical and environmental sounds. We also consider the perception of timbre by animals, both in the case of human vowels and non-human vocalizations. We then explore the neural representation of timbre, first within the peripheral auditory system and later at the level of the auditory cortex. We examine the neural networks that are implicated in timbre perception and the computations that may be performed in auditory cortex to enable listeners to extract information about timbre. We consider whether single neurons in auditory cortex are capable of representing spectral timbre independently of changes in other perceptual attributes and the mechanisms that may shape neural sensitivity to timbre. Finally, we conclude by outlining some of the questions that remain about the role of neural mechanisms in behavior and consider some potentially fruitful avenues for future research.

  9. The Neural Substrates of Infant Speech Perception

    Science.gov (United States)

    Homae, Fumitaka; Watanabe, Hama; Taga, Gentaro

    2014-01-01

    Infants often pay special attention to speech sounds, and they appear to detect key features of these sounds. To investigate the neural foundation of speech perception in infants, we measured cortical activation using near-infrared spectroscopy. We presented the following three types of auditory stimuli while 3-month-old infants watched a silent…

  10. Silicon modeling of pitch perception.

    Science.gov (United States)

    Lazzaro, J; Mead, C

    1989-12-01

    We have designed and tested an integrated circuit that models human pitch perception. The chip receives as input a time-varying voltage corresponding to sound pressure at the ear and produces as output a map of perceived pitch. The chip is a physiological model; subcircuits on the chip correspond to known and proposed structures in the auditory system. Chip output approximates human performance in response to a variety of classical pitch-perception stimuli. The 125,000-transistor chip computes all outputs in real time by using analog continuous-time processing. PMID:2594787

  11. Predictive uncertainty in auditory sequence processing

    Directory of Open Access Journals (Sweden)

    Niels Chr. Hansen

    2014-09-01

    Full Text Available Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty - a property of listeners’ prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners’ perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.

  12. Predictive uncertainty in auditory sequence processing.

    Science.gov (United States)

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music. PMID:25295018
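The entropy measure at the heart of both versions of this record is simple to compute. The Python sketch below illustrates it with hypothetical next-note distributions; the probabilities are invented stand-ins for the study's variable-order Markov model estimates:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical next-note distributions over four candidate continuations:
low_uncertainty = [0.85, 0.05, 0.05, 0.05]   # one continuation dominates
high_uncertainty = [0.25, 0.25, 0.25, 0.25]  # all continuations equally likely

print(shannon_entropy(low_uncertainty))   # ~0.85 bits
print(shannon_entropy(high_uncertainty))  # 2.0 bits, the maximum for 4 outcomes
```

A listener in the high-entropy context should report greater predictive uncertainty, which is the pattern the study observed.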

  13. Neural Correlates of an Auditory Afterimage in Primary Auditory Cortex

    OpenAIRE

    Noreña, A. J.; Eggermont, J. J.

    2003-01-01

    The Zwicker tone (ZT) is defined as an auditory negative afterimage, perceived after the presentation of an appropriate inducer. Typically, a notched noise (NN) with a notch width of 1/2 octave induces a ZT with a pitch falling in the frequency range of the notch. The aim of the present study was to find potential neural correlates of the ZT in the primary auditory cortex of ketamine-anesthetized cats. Responses of multiunits were recorded simultaneously with two 8-electrode arrays during 1 s...

  14. Electrophysiological assessment of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Dau, Torsten

    Speech perception integrates signal from ear and eye. This is witnessed by a wide range of audiovisual integration effects, such as ventriloquism and the McGurk illusion. Some behavioral evidence suggests that audiovisual integration of specific aspects is special for speech perception. However, our… mismatch negativity response (MMN). MMN has the property of being evoked when an acoustic stimulus deviates from a learned pattern of stimuli. In three experimental studies, this effect is utilized to track when a coinciding visual signal alters auditory speech perception. Visual speech emanates from the… auditory speech percept? In two experiments, which both combine behavioral and neurophysiological measures, an uncovering of the relation between perception of faces and of audiovisual integration is attempted. Behavioral findings suggest a strong effect of face perception, whereas the MMN results are less…

  15. Proceedings of the 2009 international conference on auditory display

    DEFF Research Database (Denmark)

      I am pleased to present the 15th International Conference on Auditory Display (ICAD), which takes place in Copenhagen, Denmark, May 18-21, 2009. The ICAD 2009 theme is Timeless Sound, including the universal aspect of sounds as well as the influence of time in the perception of sounds. ICAD 2009… with the re-new festival. The conference addresses all aspects related to the design of sounds, either conceptual or technical. Besides the traditional topics addressed by ICAD, I would like to take the opportunity of ICAD being organized by re-new to highlight the ICAD 2009 theme Timeless Sound, and the…

  16. Toward a sustainable faucet design: effects of sound and vision on perception of running water

    NARCIS (Netherlands)

    Golan, Aidai; Fenko, Anna

    2015-01-01

    People form their judgments of everyday phenomena based on multisensory information. This study investigates the relative impact of visual and auditory information on the perception of running tap water. Two visual and two auditory stimuli were combined to create four different combinations of high

  17. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke. They are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed in the period 1996–2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have post-cortical-stroke auditory hallucinations. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  18. Adaptation in the auditory system: an overview

    OpenAIRE

    David Pérez-González; Malmierca, Manuel S.

    2014-01-01

    The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the s...

  19. Dissociation of detection and discrimination of pure tones following bilateral lesions of auditory cortex.

    Directory of Open Access Journals (Sweden)

    Andrew R Dykstra

    Full Text Available It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as in pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and in a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5 ± 2.1 dB in the left ear and 6.5 ± 1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6 ± 0.22 dB; right ear: 1.7 ± 0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old-World monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed.
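To put the reported thresholds in perspective, a level difference in dB maps to an intensity ratio via 10^(dB/10). The short Python sketch below applies this standard conversion to the just-noticeable differences quoted in the abstract:

```python
def db_to_intensity_ratio(db):
    """Intensity ratio corresponding to a level difference in decibels."""
    return 10 ** (db / 10)

# JND of the lesion patient vs. the control mean (values from the abstract):
print(round(db_to_intensity_ratio(6.5), 2))  # 4.47 -> a ~4.5x intensity step needed
print(round(db_to_intensity_ratio(1.6), 2))  # 1.45 -> controls detect a ~1.4x step
```

In other words, the roughly 5 dB elevation of the patient's discrimination threshold corresponds to needing about a three times larger relative intensity change than controls.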

  20. Using auditory steady state responses to outline the functional connectivity in the tinnitus brain.

    Directory of Open Access Journals (Sweden)

    Winfried Schlee

    Full Text Available BACKGROUND: Tinnitus is an auditory phantom perception that is most likely generated in the central nervous system. Most tinnitus research has concentrated on the auditory system. However, it was suggested recently that non-auditory structures are also involved in a global network that encodes subjective tinnitus. We tested this assumption using auditory steady state responses to entrain the tinnitus network and investigated long-range functional connectivity across various non-auditory brain regions. METHODS AND FINDINGS: Using whole-head magnetoencephalography we investigated cortical connectivity by means of phase synchronization in tinnitus subjects and healthy controls. We found evidence for a deviating pattern of long-range functional connectivity in tinnitus that was strongly correlated with individual ratings of the tinnitus percept. Phase couplings between the anterior cingulum and the right frontal lobe and phase couplings between the anterior cingulum and the right parietal lobe showed significant condition × group interactions and were correlated with the individual tinnitus distress ratings only in the tinnitus condition and not in the control conditions. CONCLUSIONS: To the best of our knowledge this is the first study that demonstrates the existence of a global tinnitus network of long-range cortical connections outside the central auditory system. This result extends the current knowledge of how tinnitus is generated in the brain. We propose that this global extent of the tinnitus network is crucial for the continuous perception of the tinnitus tone, and that a therapeutic intervention able to change this network should result in relief of tinnitus.
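Phase synchronization between two recording sites is commonly quantified with a phase-locking value (PLV): the magnitude of the mean unit vector at the phase difference across samples or trials. The sketch below is a generic illustration of that idea, not the exact connectivity measure used in this study:

```python
import cmath
import math
import random

def phase_locking_value(phases_a, phases_b):
    """Magnitude of the mean unit vector at the phase difference.
    1.0 = perfectly consistent phase relation; values near 0 = none."""
    n = len(phases_a)
    return abs(sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b)) / n)

# Perfectly locked: a constant phase lag of pi/4 on every sample
locked_a = [0.1 * k for k in range(100)]
locked_b = [p - math.pi / 4 for p in locked_a]
print(phase_locking_value(locked_a, locked_b))  # ~1.0

# Unlocked: phase differences spread uniformly around the circle
random.seed(0)
rand_a = [random.uniform(-math.pi, math.pi) for _ in range(1000)]
rand_b = [random.uniform(-math.pi, math.pi) for _ in range(1000)]
print(phase_locking_value(rand_a, rand_b))  # near 0
```

Group differences in such couplings between region pairs (e.g. anterior cingulum and right frontal lobe) are what analyses like the one above would then test statistically.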

  1. Animal models of spontaneous activity in the healthy and impaired auditory system

    Directory of Open Access Journals (Sweden)

    Jos J Eggermont

    2015-04-01

    Full Text Available Spontaneous neural activity in the auditory nerve fibers and in auditory cortex in healthy animals is discussed with respect to the question: is spontaneous activity noise or an information carrier? The studies reviewed strongly suggest that spontaneous activity is a carrier of information. Subsequently, I review the numerous findings in the impaired auditory system, particularly with reference to noise trauma and tinnitus. Here the common assumption is that tinnitus reflects increased noise in the auditory system that, among other things, affects temporal processing and interferes with the gap-startle reflex, which is frequently used as a behavioral assay for tinnitus. It is, however, more likely that the increased spontaneous activity in tinnitus, firing rate as well as neural synchrony, carries information that shapes the activity of downstream structures, including non-auditory ones, leading to the tinnitus percept. The main drivers of that process are bursting and synchronous firing, which facilitate the transfer of activity across synapses and allow the formation of auditory objects, such as tinnitus.

  2. Hearing aid fitting results in a case of a patient with auditory neuropathy

    Directory of Open Access Journals (Sweden)

    Dell'Aringa, Ana Helena Bannwart

    2009-03-01

    Full Text Available Introduction: Auditory neuropathy has recently been described as a hearing loss characterized by the preservation of outer hair cells and the absence of auditory brainstem responses. Objective: To present a case report of hearing aid fitting in a patient with auditory neuropathy. Case Report: S.A.P., male, 32 years old, sought the Otorhinolaryngology Service five years after Guillain-Barré syndrome, complaining of tinnitus and progressive bilateral hearing loss. The audiological evaluation showed: severe sensorineural hearing loss with bilateral irregular configuration; a speech recognition rate of 0% and a speech detection threshold of 35 dB in both ears; type A tympanometric curves with absent ipsilateral and contralateral reflexes bilaterally; absence of waves and presence of cochlear microphonics in both ears in the auditory evoked potential; and present distortion product-evoked otoacoustic emissions bilaterally. The speech perception test was performed with polysyllabic words and lip reading, with 44% correct responses with the hearing aid and 12% without it. Final Comments: Despite the differences in the process of hearing aid habilitation and rehabilitation, we conclude that sound amplification brought benefits to this patient with auditory neuropathy.

  3. Relating auditory attributes of multichannel sound to preference and to physical parameters

    DEFF Research Database (Denmark)

    Choisel, Sylvain; Wickelmaier, Florian Maria

    2006-01-01

    Sound reproduced by multichannel systems is affected by many factors giving rise to various sensations, or auditory attributes. Relating specific attributes to overall preference and to physical measures of the sound field provides valuable information for a better understanding of the parameters...... within and between musical program materials, allowing for a careful generalization regarding the perception of spatial audio reproduction. Finally, a set of objective measures is derived from analysis of the sound field at the listening position in an attempt to predict the auditory attributes....

  4. Auditory Neural Prostheses – A Window to the Future

    Directory of Open Access Journals (Sweden)

    Mohan Kameshwaran

    2015-06-01

    Full Text Available Hearing loss is one of the commonest congenital anomalies to affect children world-over. The incidence of congenital hearing loss is more pronounced in developing countries like the Indian sub-continent, especially with the problems of consanguinity. Hearing loss is a double tragedy, as it leads not only to deafness but also to language deprivation. However, hearing loss is the only truly remediable handicap, due to remarkable advances in biomedical engineering and surgical techniques. Auditory neural prostheses help to augment or restore hearing by integration of an external circuitry with the peripheral hearing apparatus and the central circuitry of the brain. A cochlear implant (CI) is a surgically implantable device that helps restore hearing in patients with severe-profound hearing loss, unresponsive to amplification by conventional hearing aids. CIs are electronic devices designed to detect mechanical sound energy and convert it into electrical signals that can be delivered to the cochlear nerve, bypassing the damaged hair cells of the cochlea. The only true prerequisite is an intact auditory nerve. The emphasis is on implantation as early as possible to maximize speech understanding and perception. Bilateral CI has significant benefits which include improved speech perception in noisy environments and improved sound localization. Presently, the indications for CI have widened, and these expanded indications for implantation are related to age, additional handicaps, residual hearing, and special etiologies of deafness. Combined electric and acoustic stimulation (EAS)/hybrid devices are designed for individuals with binaural low-frequency hearing and severe-to-profound high-frequency hearing loss. Auditory brainstem implantation (ABI) is a safe and effective means of hearing rehabilitation in patients with retrocochlear disorders, such as neurofibromatosis type 2 (NF2) or congenital cochlear nerve aplasia, wherein the cochlear nerve is damaged.

  5. The Effects of Auditory Contrast Tuning upon Speech Intelligibility.

    Science.gov (United States)

    Killian, Nathan J; Watkins, Paul V; Davidson, Lisa S; Barbour, Dennis L

    2016-01-01

    We have previously identified neurons tuned to spectral contrast of wideband sounds in auditory cortex of awake marmoset monkeys. Because additive noise alters the spectral contrast of speech, contrast-tuned neurons, if present in human auditory cortex, may aid in extracting speech from noise. Given that this cortical function may be underdeveloped in individuals with sensorineural hearing loss, incorporating biologically-inspired algorithms into external signal processing devices could provide speech enhancement benefits to cochlear implantees. In this study we first constructed a computational signal processing algorithm to mimic auditory cortex contrast tuning. We then manipulated the shape of contrast channels and evaluated the intelligibility of reconstructed noisy speech using a metric to predict cochlear implant user perception. Candidate speech enhancement strategies were then tested in cochlear implantees with a hearing-in-noise test. Accentuation of intermediate contrast values or all contrast values improved computed intelligibility. Cochlear implant subjects showed significant improvement in noisy speech intelligibility with a contrast shaping procedure. PMID:27555826
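    The contrast-manipulation idea can be pictured with a toy calculation. The function below is a hypothetical sketch, not the authors' algorithm: it accentuates spectral contrast by amplifying each frequency channel's deviation from a local average across frequency, the kind of shaping a contrast-accentuation strategy might apply before signal delivery.

    ```python
    import numpy as np

    def enhance_spectral_contrast(log_spectrum, gain=1.5, width=5):
        """Accentuate spectral contrast by amplifying each channel's
        deviation from a local average across frequency (illustrative only)."""
        kernel = np.ones(width) / width
        local_mean = np.convolve(log_spectrum, kernel, mode="same")
        return local_mean + gain * (log_spectrum - local_mean)

    # A lone spectral peak becomes more prominent after contrast shaping.
    spectrum = np.zeros(11)
    spectrum[5] = 1.0
    enhanced = enhance_spectral_contrast(spectrum)
    ```

    With gain greater than 1 peaks are exaggerated; a gain below 1 would instead flatten contrast, so the same sketch covers both directions of the manipulation described above.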

  6. What you see isn't always what you get: Auditory word signals trump consciously perceived words in lexical access.

    Science.gov (United States)

    Ostrand, Rachel; Blumstein, Sheila E; Ferreira, Victor S; Morgan, James L

    2016-06-01

    Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all. Subjects were presented with McGurk or Congruent primes and performed a lexical decision task on related or unrelated targets. Although subjects perceived the McGurk illusion, McGurk and Congruent primes with matching real-word auditory signals equivalently primed targets that were semantically related to the auditory signal, but not targets related to the McGurk percept. We conclude that the time course of auditory-visual integration is dependent on the lexicality of the auditory and visual input signals, and that listeners can lexically access one word and yet consciously perceive another. PMID:27011021

  7. Auditory scene analysis: The sweet music of ambiguity

    Directory of Open Access Journals (Sweden)

    Daniel ePressnitzer

    2011-12-01

    Full Text Available In this review paper aimed at the non-specialist, we explore the use that neuroscientists and musicians have made of perceptual illusions based on ambiguity. The pivotal issue is auditory scene analysis, or what enables us to make sense of complex acoustic mixtures in order to follow, for instance, a single melody in the midst of an orchestra. In general, auditory scene analysis uncovers the most likely physical causes that account for the waveform collected at the ears. However, the acoustical problem is ill-posed and it must be solved from noisy sensory input. Recently, the neural mechanisms implicated in the transformation of ambiguous sensory information into coherent auditory scenes have been investigated using so-called bistability illusions (where an unchanging ambiguous stimulus evokes a succession of distinct percepts in the mind of the listener). After reviewing some of those studies, we turn to music, which arguably provides some of the most complex acoustic scenes that a human listener will ever encounter. Interestingly, musicians will not always aim to make each physical source intelligible, but rather to express one or more melodic lines with a small or large number of instruments. By means of a few musical illustrations and by using a computational model inspired by neurophysiological principles, we suggest that this relies on a detailed (if perhaps implicit) knowledge of the rules of auditory scene analysis and of its inherent ambiguity. We then put forward the opinion that some degree of perceptual ambiguity may participate in our appreciation of music.

  8. Proximal vocal threat recruits the right voice-sensitive auditory cortex.

    Science.gov (United States)

    Ceravolo, Leonardo; Frühholz, Sascha; Grandjean, Didier

    2016-05-01

    The accurate estimation of the proximity of threat is important for biological survival and to assess relevant events of everyday life. We addressed the question of whether proximal as compared with distal vocal threat would lead to a perceptual advantage for the perceiver. Accordingly, we sought to highlight the neural mechanisms underlying the perception of proximal vs distal threatening vocal signals by the use of functional magnetic resonance imaging. Although we found that the inferior parietal and superior temporal cortex of human listeners generally decoded the spatial proximity of auditory vocalizations, activity in the right voice-sensitive auditory cortex was specifically enhanced for proximal aggressive relative to distal aggressive voices as compared with neutral voices. Our results shed new light on the processing of imminent danger signaled by proximal vocal threat and show the crucial involvement of the right mid voice-sensitive auditory cortex in such processing. PMID:26746180

  9. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Kaisa Tiippana

    2014-07-01

    Full Text Available Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e. a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  10. Conceptual priming for realistic auditory scenes and for auditory words.

    Science.gov (United States)

    Frey, Aline; Aramaki, Mitsuko; Besson, Mireille

    2014-02-01

    Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times and/or error rates were longer/higher and the N400 component was larger to ambiguous targets than to conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building-up of virtual environments that need to convey meaning without words. PMID:24378910

  11. Neural entrainment to rhythmically-presented auditory, visual and audio-visual speech in children

    Directory of Open Access Journals (Sweden)

    Alan James Power

    2012-07-01

    Full Text Available Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal ‘samples’ of information from the speech stream at different rates, phase-resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (‘phase locking’). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase-locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically-developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable ba, presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a talking head). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the ba stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a ba in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling.
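    Phase locking of the kind described above is commonly quantified with the phase-locking value (PLV): the magnitude of the average unit phasor of the phase difference between stimulus and response. A minimal sketch with synthetic phases (not the study's actual EEG pipeline):

    ```python
    import numpy as np

    def phase_locking_value(phase_a, phase_b):
        """PLV: 1.0 for a constant phase relation, near 0 for random phases."""
        return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

    t = np.linspace(0.0, 10.0, 2000)
    stim = 2 * np.pi * 2.0 * t          # 2 Hz (delta-rate) stimulus phase
    entrained = stim + 0.3              # constant lag -> strong phase locking
    random_phase = np.random.default_rng(1).uniform(0, 2 * np.pi, t.size)
    ```

    An entrained response with a fixed phase lag yields a PLV near 1, while an unrelated response yields a PLV near 0; this is the contrast behind "preferred phase" analyses such as the theta-band result above.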

  12. The effect of auditory memory load on intensity resolution in individuals with Parkinson's disease

    Science.gov (United States)

    Richardson, Kelly C.

    Purpose: The purpose of the current study was to investigate the effect of auditory memory load on intensity resolution in individuals with Parkinson's disease (PD) as compared to two groups of listeners without PD. Methods: Nineteen individuals with Parkinson's disease, ten healthy age- and hearing-matched adults, and ten healthy young adults were studied. All listeners participated in two intensity discrimination tasks differing in auditory memory load: a lower-memory-load 4IAX task and a higher-memory-load ABX task. Intensity discrimination performance was assessed using a bias-free measure of signal detectability known as d' (d-prime). Listeners further participated in a continuous loudness scaling task in which they were instructed to rate the loudness level of each signal intensity using a computerized 150 mm visual analogue scale. Results: Group discrimination functions indicated significantly lower intensity discrimination sensitivity (d') across tasks for the individuals with PD, as compared to the older and younger controls. No significant effect of aging on intensity discrimination was observed for either task. All three listener groups demonstrated significantly lower intensity discrimination sensitivity for the higher-memory-load ABX task than for the lower-memory-load 4IAX task. Furthermore, a significant effect of aging was identified for the loudness scaling condition. The younger controls rated most stimuli along the continuum as significantly louder than did the older controls and the individuals with PD. Conclusions: The individuals with PD showed evidence of impaired auditory perception of intensity information, as compared to the older and younger controls. The significant effect of aging on loudness perception may indicate peripheral and/or central auditory involvement.
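    The d' statistic mentioned above has a standard closed form: the difference between the inverse-normal-transformed hit and false-alarm rates. A minimal standard-library sketch:

    ```python
    from statistics import NormalDist

    def d_prime(hit_rate, false_alarm_rate):
        """Signal-detection sensitivity index, free of response bias:
        d' = z(hit rate) - z(false-alarm rate)."""
        z = NormalDist().inv_cdf  # inverse of the standard normal CDF
        return z(hit_rate) - z(false_alarm_rate)

    # A listener with more hits and fewer false alarms is more sensitive.
    print(round(d_prime(0.9, 0.1), 3))   # larger d'
    print(round(d_prime(0.6, 0.4), 3))   # smaller d'
    ```

    Because hits and false alarms enter symmetrically, a listener who simply says "different" more often raises both rates and leaves d' unchanged, which is what makes the measure bias-free.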

  13. Effects of Auditory-Verbal Therapy for School-Aged Children with Hearing Loss: An Exploratory Study

    Science.gov (United States)

    Fairgray, Elizabeth; Purdy, Suzanne C.; Smart, Jennifer L.

    2010-01-01

    With modern improvements to hearing aids and cochlear implant (CI) technology, and consequently improved access to speech, there has been greater emphasis on listening-based therapies for children with hearing loss, such as auditory-verbal therapy (AVT). Speech and language, speech perception in noise, and reading were evaluated before and after…

  14. Psychophysiological responses to auditory change.

    Science.gov (United States)

    Chuen, Lorraine; Sears, David; McAdams, Stephen

    2016-06-01

    A comprehensive characterization of autonomic and somatic responding within the auditory domain is currently lacking. We studied whether simple types of auditory change that occur frequently during music listening could elicit measurable changes in heart rate, skin conductance, respiration rate, and facial motor activity. Participants heard a rhythmically isochronous sequence consisting of a repeated standard tone, followed by a repeated target tone that changed in pitch, timbre, duration, intensity, or tempo, or that deviated momentarily from rhythmic isochrony. Changes in all parameters produced increases in heart rate. Skin conductance response magnitude was affected by changes in timbre, intensity, and tempo. Respiratory rate was sensitive to deviations from isochrony. Our findings suggest that music researchers interpreting physiological responses as emotional indices should consider acoustic factors that may influence physiology in the absence of induced emotions. PMID:26927928

  15. Sound and Music in Narrative Multimedia : A macroscopic discussion of audiovisual relations and auditory narrative functions in film, television and video games

    OpenAIRE

    2012-01-01

    This thesis examines how we perceive an audiovisual narrative - here defined as film, television and video games - and seeks to establish a descriptive framework for auditory stimuli and their narrative functions in this regard. I initially adopt the viewpoint of cognitive psychology and account for basic information processing operations. I then discuss audiovisual perception in terms of the effects of sensory integration between the visual and auditory modalities on the construction of meani...

  16. INDIVIDUAL DIFFERENCES IN AUDITORY PROCESSING IN SPECIFIC LANGUAGE IMPAIRMENT: A FOLLOW-UP STUDY USING EVENT-RELATED POTENTIALS AND BEHAVIOURAL THRESHOLDS

    OpenAIRE

    Bishop, Dorothy V. M.; McArthur, Genevieve M.

    2005-01-01

    It has frequently been claimed that children with specific language impairment (SLI) have impaired auditory perception, but there is much controversy about the role of such deficits in causing their language problems, and it has been difficult to establish solid, replicable findings in this area. Discrepancies in this field may arise because (a) a focus on mean results obscures the heterogeneity in the population and (b) insufficient attention has been paid to maturational aspects of auditory...

  17. Basic to Applied Research: The Benefits of Audio-Visual Speech Perception Research in Teaching Foreign Languages

    Science.gov (United States)

    Erdener, Dogu

    2016-01-01

    Traditionally, second language (L2) instruction has emphasised auditory-based instruction methods. However, this approach is restrictive in the sense that speech perception by humans is not just an auditory phenomenon but a multimodal one, and specifically, a visual one as well. In the past decade, experimental studies have shown that the…

  18. Auditory distraction and serial memory

    OpenAIRE

    Jones, D M; Hughes, Rob; Macken, W.J.

    2010-01-01

    One mental activity that is very vulnerable to auditory distraction is serial recall. This review of the contemporary findings relating to serial recall charts the key determinants of distraction. It is evident that there is one form of distraction that is a joint product of the cognitive characteristics of the task and of the obligatory cognitive processing of the sound. For sequences of sound, distraction appears to be an ineluctable product of similarity-of-process, specifically, the seria...

  19. Auditory sequence analysis and phonological skill.

    Science.gov (United States)

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E; Turton, Stuart; Griffiths, Timothy D

    2012-11-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  20. Auditory Cortical Maturation in a Child with Cochlear Implant: Analysis of Electrophysiological and Behavioral Measures

    Science.gov (United States)

    Silva, Liliane Aparecida Fagundes; Couto, Maria Inês Vieira; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; de Carvalho, Ana Claudia Martinho; Matas, Carla Gentile

    2015-01-01

    The purpose of this study was to longitudinally assess the behavioral and electrophysiological hearing changes of a girl enrolled in a CI program, who had bilateral profound sensorineural hearing loss and underwent cochlear implantation with electrode activation at 21 months of age. She was evaluated using the P1 component of the Long Latency Auditory Evoked Potential (LLAEP); speech perception tests of the Glendonald Auditory Screening Procedure (GASP); the Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS); and the Meaningful Use of Speech Scales (MUSS). The study was conducted prior to activation and after three, nine, and 18 months of cochlear implant activation. The results of the LLAEP were compared with data from a hearing child matched by gender and chronological age. The results of the LLAEP of the child with cochlear implant showed a gradual decrease in latency of the P1 component after auditory stimulation (172 ms–134 ms). In the GASP, IT-MAIS, and MUSS, gradual development of listening skills and oral language was observed. The values of the LLAEP of the hearing child were as expected for chronological age (132 ms–128 ms). The use of different clinical instruments allows a better understanding of the auditory habilitation and rehabilitation process via CI. PMID:26881163

  1. Listen to Yourself: The Medial Prefrontal Cortex Modulates Auditory Alpha Power During Speech Preparation.

    Science.gov (United States)

    Müller, Nadia; Leske, Sabine; Hartmann, Thomas; Szebényi, Szabolcs; Weisz, Nathan

    2015-11-01

    How do we process stimuli that stem from the external world and stimuli that are self-generated? In the case of voice perception it has been shown that evoked activity elicited by self-generated sounds is suppressed compared with the same sounds played back externally. We here wanted to reveal whether neural excitability of the auditory cortex, putatively reflected in local alpha band power, is modulated already prior to speech onset, and which brain regions may mediate such a top-down preparatory response. In the left auditory cortex we show that the typical alpha suppression found when participants prepare to listen disappears when participants expect a self-spoken sound. This suggests an inhibitory adjustment of auditory cortical activity already before sound onset. As a second main finding we demonstrate that the medial prefrontal cortex, a region known for self-referential processes, mediates these condition-specific alpha power modulations. This provides crucial insights into how higher-order regions prepare the auditory cortex for the processing of self-generated sounds. Furthermore, the mechanism outlined could provide further explanations to self-referential phenomena, such as "tickling yourself". Finally, it has implications for the so far unsolved question of how auditory alpha power is mediated by higher-order regions in a more general sense. PMID:24904068

  2. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane, recorded chronically in three animals using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials elicited by conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway. PMID:24672012
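    The decoding step can be pictured as follows. This is a hypothetical stand-in for the paper's regularized multivariate classifier (the actual pipeline, features, and regularizer are not specified here): a ridge-regularized least-squares classifier trained on synthetic "evoked potential" trials that have class-specific mean responses.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic trials: 3 "vocalization" classes x 40 trials x 20 features.
    n_trials, n_feat, n_class = 40, 20, 3
    X = rng.normal(size=(n_trials * n_class, n_feat))
    y = np.repeat(np.arange(n_class), n_trials)
    for c in range(n_class):                 # add a class-specific mean response
        X[y == c] += rng.normal(scale=2.0, size=n_feat)

    def fit_ridge_classifier(X, y, lam=10.0):
        """One-hot least squares with a ridge penalty: W = (X'X + lam*I)^-1 X'Y."""
        Y = np.eye(y.max() + 1)[y]           # one-hot labels
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

    def predict(W, X):
        return np.argmax(X @ W, axis=1)

    W = fit_ridge_classifier(X, y)
    accuracy = float(np.mean(predict(W, X) == y))
    ```

    The regularization term keeps the weight estimate stable when the number of recording channels approaches or exceeds the number of trials, which is the usual motivation in high-dimensional neural datasets; a real analysis would of course score held-out trials rather than the training set used here for brevity.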

  3. Evaluation of embryonic alcoholism from auditory event-related potential in fetal rats

    Institute of Scientific and Technical Information of China (English)

    梁勇; 王正敏; 屈卫东

    2004-01-01

    Auditory event-related potential (AERP) is a kind of electroencephalographic measure that captures the responses of perception, memory and judgement to specific acoustic stimulation in the auditory cortex. AERP can be recorded in both active and passive modes, and both recording modes have shown possible application in animals.1,2 Alcohol is a substance that can markedly affect conscious reaction in humans. Recently, AERP has been applied to study the effects of alcohol on the auditory centers of the brain. Some reports have shown dose-dependent differences in latency, amplitude, responsiveness and waveform of AERP between persons who have and have not consumed alcohol.3,4 Epidemiological investigations show that central nervous function in the offspring of alcohol users might also be affected.5,6 Because clinical research is limited by certain factors, several animal models have been used to examine the influence of alcohol on consciousness with AERP. In the present study, young rats were exposed to alcohol during fetal development and AERP was recorded as an indicator of central auditory function; the mechanisms and characteristics of the effects of fetal alcoholism on auditory center function in rats were analyzed and discussed.

  4. Assembly of the auditory circuitry by a Hox genetic network in the mouse brainstem.

    Directory of Open Access Journals (Sweden)

    Maria Di Bonito

    Full Text Available Rhombomeres (r) contribute to brainstem auditory nuclei during development. Hox genes are determinants of rhombomere-derived fate and neuronal connectivity. Little is known about the contribution of individual rhombomeres and their associated Hox codes to auditory sensorimotor circuitry. Here, we show that r4 contributes to functionally linked sensory and motor components, including the ventral nucleus of the lateral lemniscus, posterior ventral cochlear nuclei (VCN), and motor olivocochlear neurons. Assembly of the r4-derived auditory components is involved in sound perception and depends on regulatory interactions between Hoxb1 and Hoxb2. Indeed, in Hoxb1 and Hoxb2 mutant mice the transmission of low-level auditory stimuli is lost, resulting in hearing impairments. On the other hand, Hoxa2 regulates the Rig1 axon guidance receptor and controls contralateral projections from the anterior VCN to the medial nucleus of the trapezoid body, a circuit involved in sound localization. Thus, individual rhombomeres and their associated Hox codes control the assembly of distinct functionally segregated sub-circuits in the developing auditory brainstem.

  5. Development of Receiver Stimulator for Auditory Prosthesis

    OpenAIRE

    K. Raja Kumar; P. Seetha Ramaiah

    2010-01-01

    The Auditory Prosthesis (AP) is an electronic device that can provide hearing sensations to people who are profoundly deaf by stimulating the auditory nerve with an electric current via an array of electrodes, allowing them to understand speech. The AP system consists of two hardware functional units: a Body-Worn Speech Processor (BWSP) and a Receiver Stimulator. The prototype model of the Receiver Stimulator for Auditory Prosthesis (RSAP) consists of a Speech Data Decoder, DAC, ADC, constant...

  6. Auditory stimulation and cardiac autonomic regulation

    OpenAIRE

    Vitor E Valenti; Guida, Heraldo L.; Frizzo, Ana C F; Cardoso, Ana C. V.; Vanderlei, Luiz Carlos M; Luiz Carlos de Abreu

    2012-01-01

    Previous studies have already demonstrated that auditory stimulation with music influences the cardiovascular system. In this study, we described the relationship between musical auditory stimulation and heart rate variability. Searches were performed with the Medline, SciELO, Lilacs and Cochrane databases using the following keywords: "auditory stimulation", "autonomic nervous system", "music" and "heart rate variability". The selected studies indicated that there is a strong correlation bet...

  7. Behavioural and neural correlates of auditory attention

    OpenAIRE

    Roberts, Katherine Leonie

    2005-01-01

    The auditory attention skills of alerting, orienting, and executive control were assessed using behavioural and neuroimaging techniques. Initially, an auditory analogue of the visual attention network test (ANT; Fan, McCandliss, Sommer, Raz, & Posner, 2002) was created and tested alongside the visual ANT in a group of 40 healthy subjects. The results from this study showed similarities between auditory and visual spatial orienting. An fMRI study was conducted to investigate whether the simil...

  8. Psychophysical and neural correlates of noise-induced tinnitus in animals: Intra- and inter-auditory and non-auditory brain structure studies.

    Science.gov (United States)

    Zhang, Jinsheng; Luo, Hao; Pace, Edward; Li, Liang; Liu, Bin

    2016-04-01

    Tinnitus, a ringing in the ear or head without an external sound source, is a prevalent health problem. It is often associated with a number of limbic-associated disorders such as anxiety, sleep disturbance, and emotional distress. Thus, to investigate tinnitus, it is important to consider both auditory and non-auditory brain structures. This paper summarizes the psychophysical, immunocytochemical and electrophysiological evidence found in rats or hamsters with behavioral evidence of tinnitus. Behaviorally, we tested for tinnitus using a conditioned suppression/avoidance paradigm, gap detection acoustic reflex behavioral paradigm, and our newly developed conditioned licking suppression paradigm. Our new tinnitus behavioral paradigm requires relatively short baseline training, examines frequency specification of tinnitus perception, and achieves sensitive tinnitus testing at an individual level. To test for tinnitus-related anxiety and cognitive impairment, we used the elevated plus maze and Morris water maze. Our results showed that not all animals with tinnitus demonstrate anxiety and cognitive impairment. Immunocytochemically, we found that animals with tinnitus manifested increased Fos-like immunoreactivity (FLI) in both auditory and non-auditory structures. The manner in which FLI appeared suggests that lower brainstem structures may be involved in acute tinnitus whereas the midbrain and cortex are involved in more chronic tinnitus. Meanwhile, animals with tinnitus also manifested increased FLI in non-auditory brain structures that are involved in autonomic reactions, stress, arousal and attention. Electrophysiologically, we found that rats with tinnitus developed increased spontaneous firing in the auditory cortex (AC) and amygdala (AMG), as well as intra- and inter-AC and AMG neurosynchrony, which demonstrate that tinnitus may be actively produced and maintained by the interactions between the AC and AMG. PMID:26299842

  9. Effects of Methylphenidate (Ritalin) on Auditory Performance in Children with Attention and Auditory Processing Disorders.

    Science.gov (United States)

    Tillery, Kim L.; Katz, Jack; Keller, Warren D.

    2000-01-01

    A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…

  10. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Full Text Available Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

  11. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Science.gov (United States)

    Mossbridge, Julia A; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  12. Corticofugal modulation of peripheral auditory responses

    Directory of Open Access Journals (Sweden)

    Paul Hinckley Delano

    2015-09-01

Full Text Available The auditory efferent system originates in the auditory cortex and projects to the medial geniculate body, inferior colliculus, cochlear nucleus and superior olivary complex, reaching the cochlea through olivocochlear fibers. This unique neuronal network is organized in several afferent-efferent feedback loops including: the (i) colliculo-thalamic-cortico-collicular, (ii) cortico-(collicular)-olivocochlear and (iii) cortico-(collicular)-cochlear nucleus pathways. Recent experiments demonstrate that blocking ongoing auditory-cortex activity with pharmacological and physical methods modulates the amplitude of cochlear potentials. In addition, auditory-cortex microstimulation independently modulates cochlear sensitivity and the strength of the olivocochlear reflex. In this mini-review, anatomical and physiological evidence supporting the presence of a functional efferent network from the auditory cortex to the cochlear receptor is presented. Special emphasis is given to the corticofugal effects on initial auditory processing, that is, on cochlear nucleus, auditory nerve and cochlear responses. A working model of three parallel pathways from the auditory cortex to the cochlea and auditory nerve is proposed.

  13. Corticofugal modulation of peripheral auditory responses.

    Science.gov (United States)

    Terreros, Gonzalo; Delano, Paul H

    2015-01-01

    The auditory efferent system originates in the auditory cortex and projects to the medial geniculate body (MGB), inferior colliculus (IC), cochlear nucleus (CN) and superior olivary complex (SOC) reaching the cochlea through olivocochlear (OC) fibers. This unique neuronal network is organized in several afferent-efferent feedback loops including: the (i) colliculo-thalamic-cortico-collicular; (ii) cortico-(collicular)-OC; and (iii) cortico-(collicular)-CN pathways. Recent experiments demonstrate that blocking ongoing auditory-cortex activity with pharmacological and physical methods modulates the amplitude of cochlear potentials. In addition, auditory-cortex microstimulation independently modulates cochlear sensitivity and the strength of the OC reflex. In this mini-review, anatomical and physiological evidence supporting the presence of a functional efferent network from the auditory cortex to the cochlear receptor is presented. Special emphasis is given to the corticofugal effects on initial auditory processing, that is, on CN, auditory nerve and cochlear responses. A working model of three parallel pathways from the auditory cortex to the cochlea and auditory nerve is proposed. PMID:26483647

  14. Assessment of visual, auditory, and kinesthetic learning style among undergraduate nursing students

    OpenAIRE

    Radhwan Hussein Ibrahim; Dhiaa Al-rahman Hussein

    2015-01-01

Background: Learning styles refer to the ability of a learner to perceive and process information in learning situations. Understanding students' learning styles can improve educational outcomes. The VAK (visual, auditory, kinesthetic) model is one such learning style, in which students use three sensory modalities to receive information. Teachers can incorporate these learning styles in their classroom activities so that students are competent to be successful in thei...

  15. Tracking cortical entrainment in neural activity: Auditory processes in human temporal cortex

    OpenAIRE

    Thwaites, Andrew; Nimmo-Smith, Ian; Fonteneau, Elisabeth; Patterson, Roy D.; Buttery, Paula; Marslen-Wilson, William D.

    2015-01-01

    A primary objective for cognitive neuroscience is to identify how features of the sensory environment are encoded in neural activity. Current auditory models of loudness perception can be used to make detailed predictions about the neural activity of the cortex as an individual listens to speech. We used two such models (loudness-sones and loudness-phons), varying in their psychophysiological realism, to predict the instantaneous loudness contours produced by 480 isolated words. These two set...

  16. Did you hear that? The role of stimulus similarity and uncertainty in auditory change deafness

    OpenAIRE

    Dickerson, Kelly; Gaston, Jeremy R.

    2014-01-01

Change deafness, the auditory analog to change blindness, occurs when salient and behaviorally relevant changes to sound sources are missed. Missing significant changes in the environment can have serious consequences; however, this effect has remained little more than a lab phenomenon and a party trick. It is only recently that researchers have begun to explore the nature of these profound errors in change perception. Despite a wealth of examples of the change blindness phenomenon, work on...

  17. Lis1 mediates planar polarity of auditory hair cells through regulation of microtubule organization

    OpenAIRE

    Sipe, Conor W.; Liu, Lixia; Lee, Jianyi; Grimsley-Myers, Cynthia; Lu, Xiaowei

    2013-01-01

    The V-shaped hair bundles atop auditory hair cells and their uniform orientation are manifestations of epithelial planar cell polarity (PCP) required for proper perception of sound. PCP is regulated at the tissue level by a conserved core Wnt/PCP pathway. However, the hair cell-intrinsic polarity machinery is poorly understood. Recent findings implicate hair cell microtubules in planar polarization of hair cells. To elucidate the microtubule-mediated polarity pathway, we analyzed Lis1 functio...

  18. Lateralization of Music Processing with Noises in the Auditory Cortex: An fNIRS Study

    OpenAIRE

Hendrik Santosa; Melissa Jiyoun Hong; Keum-Shik Hong

    2014-01-01

The present study aims to determine the effects of background noise on hemispheric lateralization in music processing by exposing fourteen subjects to four different auditory environments: music segments only, noise segments only, music+noise segments, and the entire music interfered by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distingui...

  19. Lateralization of music processing with noises in the auditory cortex: an fNIRS study

    OpenAIRE

    Santosa, Hendrik; Hong, Melissa Jiyoun; Hong, Keum-Shik

    2014-01-01

The present study aims to determine the effects of background noise on hemispheric lateralization in music processing by exposing 14 subjects to four different auditory environments: music segments only, noise segments only, music + noise segments, and the entire music interfered by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish s...

  20. Auditory and speech processing in specific language impairment (SLI) and dyslexia

    OpenAIRE

    Tuomainen, O. T.

    2009-01-01

    This thesis investigates auditory and speech processing in Specific Language Impairment (SLI) and dyslexia. One influential theory of SLI and dyslexia postulates that both SLI and dyslexia stem from similar underlying sensory deficit that impacts speech perception and phonological development leading to oral language and literacy deficits. Previous studies, however, have shown that these underlying sensory deficits exist in only a subgroup of language impaired individuals, and ...

  1. Auditory Multi-Stability: Idiosyncratic Perceptual Switching Patterns, Executive Functions and Personality Traits

    Science.gov (United States)

    Farkas, Dávid; Denham, Susan L.; Bendixen, Alexandra; Tóth, Dénes; Kondo, Hirohito M.; Winkler, István

    2016-01-01

    Multi-stability refers to the phenomenon of perception stochastically switching between possible interpretations of an unchanging stimulus. Despite considerable variability, individuals show stable idiosyncratic patterns of switching between alternative perceptions in the auditory streaming paradigm. We explored correlates of the individual switching patterns with executive functions, personality traits, and creativity. The main dimensions on which individual switching patterns differed from each other were identified using multidimensional scaling. Individuals with high scores on the dimension explaining the largest portion of the inter-individual variance switched more often between the alternative perceptions than those with low scores. They also perceived the most unusual interpretation more often, and experienced all perceptual alternatives with a shorter delay from stimulus onset. The ego-resiliency personality trait, which reflects a tendency for adaptive flexibility and experience seeking, was significantly positively related to this dimension. Taking these results together we suggest that this dimension may reflect the individual’s tendency for exploring the auditory environment. Executive functions were significantly related to some of the variables describing global properties of the switching patterns, such as the average number of switches. Thus individual patterns of perceptual switching in the auditory streaming paradigm are related to some personality traits and executive functions. PMID:27135945
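The dimensional analysis described above rests on multidimensional scaling (MDS) of dissimilarities between individual switching patterns. As a rough sketch of the technique only (not the authors' pipeline), classical Torgerson MDS can be implemented directly in NumPy; the dissimilarity matrix below is a made-up toy standing in for pairwise differences between listeners' switching patterns.

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed n points in k dimensions from an
    n-by-n dissimilarity matrix d (treated as Euclidean distances)."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centered Gram matrix
    w, v = np.linalg.eigh(b)                   # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]              # take the top-k eigenpairs
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# hypothetical dissimilarities between 4 listeners' switching patterns
d = np.array([[0, 1, 4, 5],
              [1, 0, 3, 4],
              [4, 3, 0, 1],
              [5, 4, 1, 0]], dtype=float)
coords = classical_mds(d, k=2)                 # one 2-D point per listener
```

Listeners' scores on the first recovered dimension (first column of `coords`) would then be correlated with switch rate, personality measures, and executive-function scores, as in the study.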

  2. Sources of variability in consonant perception and their auditory correlates

    DEFF Research Database (Denmark)

    Zaar, Johannes; Dau, Torsten

can be demonstrated that any physical change in the stimuli had a measurable effect. This holds even for slight time-shifts in the steady-state masking-noise waveform. Furthermore, responses obtained with identical stimuli differed substantially across different normal-hearing listeners, while individual...

  3. Loudspeaker-based room auralization in auditory perception research

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Favrot, Sylvain Emmanuel

    2010-01-01

    Recently a loudspeaker-based room auralisation (LoRA) system has been developed at CAHR, which combines modern room acoustic modeling techniques with high-order Ambisonics auralisation. The environment provides: (i) a flexible research tool to study the signal processing of the normal, impaired, ...

  4. Relating binaural pitch perception to the individual listener's auditory profile

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Dau, Torsten

    2012-01-01

    The ability of eight normal-hearing listeners and fourteen listeners with sensorineural hearing loss to detect and identify pitch contours was measured for binaural-pitch stimuli and salience-matched monaurally detectable pitches. In an effort to determine whether impaired binaural pitch percepti...

  5. Auditory hallucinations suppressed by etizolam in a patient with schizophrenia.

    Science.gov (United States)

    Benazzi, F; Mazzoli, M; Rossi, E

    1993-10-01

    A patient presented with a 15 year history of schizophrenia with auditory hallucinations. Though unresponsive to prolonged trials of neuroleptics, the auditory hallucinations disappeared with etizolam. PMID:7902201

  6. Prediction and constraint in audiovisual speech perception.

    Science.gov (United States)

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration

  7. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  8. Narrow, duplicated internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, T. [Servico de Neurorradiologia, Hospital Garcia de Orta, Avenida Torrado da Silva, 2801-951, Almada (Portugal); Shayestehfar, B. [Department of Radiology, UCLA Oliveview School of Medicine, Los Angeles, California (United States); Lufkin, R. [Department of Radiology, UCLA School of Medicine, Los Angeles, California (United States)

    2003-05-01

    A narrow internal auditory canal (IAC) constitutes a relative contraindication to cochlear implantation because it is associated with aplasia or hypoplasia of the vestibulocochlear nerve or its cochlear branch. We report an unusual case of a narrow, duplicated IAC, divided by a bony septum into a superior relatively large portion and an inferior stenotic portion, in which we could identify only the facial nerve. This case adds support to the association between a narrow IAC and aplasia or hypoplasia of the vestibulocochlear nerve. The normal facial nerve argues against the hypothesis that the narrow IAC is the result of a primary bony defect which inhibits the growth of the vestibulocochlear nerve. (orig.)

  9. Auditory brainstem response in dolphins.

    OpenAIRE

    Ridgway, S. H.; Bullock, T H; Carder, D.A.; Seeley, R L; Woods, D.; Galambos, R

    1981-01-01

    We recorded the auditory brainstem response (ABR) in four dolphins (Tursiops truncatus and Delphinus delphis). The ABR evoked by clicks consists of seven waves within 10 msec; two waves often contain dual peaks. The main waves can be identified with those of humans and laboratory mammals; in spite of a much longer path, the latencies of the peaks are almost identical to those of the rat. The dolphin ABR waves increase in latency as the intensity of a sound decreases by only 4 microseconds/dec...

  10. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  11. STUDY ON PHASE PERCEPTION IN SPEECH

    Institute of Scientific and Technical Information of China (English)

    Tong Ming; Bian Zhengzhong; Li Xiaohui; Dai Qijun; Chen Yanpu

    2003-01-01

The perceptual effect of the phase information in speech has been studied by auditory subjective tests. On the condition that the phase spectrum in speech is changed while the amplitude spectrum is unchanged, the tests show that: (1) If the envelope of the reconstructed speech signal is unchanged, there is no perceptible difference between the original speech and the reconstructed speech; (2) The auditory perception effect of the reconstructed speech mainly lies in the amplitude of the derivative of the additive phase; (3) td is the maximum relative time shift between different frequency components of the reconstructed speech signal. The speech quality is excellent while td < 10 ms; good while 10 ms < td < 20 ms; fair while 20 ms < td < 35 ms; and poor while td > 35 ms.
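The manipulation described above (changing the phase spectrum while leaving the amplitude spectrum untouched, with a relative time shift td between frequency components) can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' procedure: the two-tone "speech" signal, the 300 Hz split point, and the 3 ms shift are all invented for demonstration.

```python
import numpy as np

fs = 16000                                  # sampling rate (Hz), hypothetical
t = np.arange(0, 0.5, 1 / fs)
# toy stand-in for a speech signal: two harmonic components
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# additive phase that (circularly) time-shifts components above 300 Hz by td,
# leaving the amplitude spectrum exactly as it was
td = 0.003                                  # 3 ms: inside the "excellent" td < 10 ms range
phase_shift = np.where(freqs > 300, -2 * np.pi * freqs * td, 0.0)
Y = np.abs(X) * np.exp(1j * (np.angle(X) + phase_shift))

y = np.fft.irfft(Y, n=len(x))               # reconstructed speech
# sanity check: amplitude spectrum unchanged, waveform altered
assert np.allclose(np.abs(np.fft.rfft(y)), np.abs(X), atol=1e-6)
```

Listening tests of the kind reported would then compare `x` and `y` while sweeping td across the quality boundaries (10, 20, 35 ms).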

  12. A percepção auditiva nos pacientes em estado de coma: uma revisão bibliográfica La percepción auditiva en los pacientes en estado de coma: una revisión bibliográfica The auditory perception in the patients in state of coma: a bibliographical revision

    Directory of Open Access Journals (Sweden)

    Ana Cláudia Giesbrecht Puggina

    2005-09-01

Much remains to be discovered by neuroscience about consciousness, cognitive processes, and auditory perception in coma. This study is a bibliographic review of the auditory perception of patients in coma, conducted to identify evidence and to point out the gaps, consistencies, and inconsistencies in the publications found. The survey was based on the LILACS and PUBMED databases. Ten articles on the subject were selected, none of them national publications. Most of the selected studies were published in the period 1971-1995 (80%), in the United States (70%), as longitudinal research (50%) or case studies (30%); the samples generally varied between 1 and 5 subjects (70%); most used music (80%); EEG (60%) and behavioral observation (50%) were the most frequently used response measures. The procedures varied widely; nevertheless, most of the studies point to the existence of auditory perception in patients in a state of coma.

  13. Neurophysiological Influence of Musical Training on Speech Perception

    OpenAIRE

    Shahin, Antoine J.

    2011-01-01

    Does musical training affect our perception of speech? For example, does learning to play a musical instrument modify the neural circuitry for auditory processing in a way that improves one’s ability to perceive speech more clearly in noisy environments? If so, can speech perception in individuals with hearing loss, who struggle in noisy situations, benefit from musical training? While music and speech exhibit some specialization in neural processing, there is evidence suggesting that skill...

  14. Timbre perception and object separation with normal and impaired hearing

    OpenAIRE

    Emiroglu, Suzan Selma

    2007-01-01

    Timbre is a combination of all auditory object attributes other than pitch, loudness and duration. A timbre distortion caused by a sensorineural hearing loss not only affects music perception, but may also influence object recognition in general. In order to quantify differences in object segregation and timbre discrimination between normal-hearing and hearing-impaired listeners with a sensorineural hearing loss, a new method for studying timbre perception was developed, which uses cross-fade...

  15. Tactile feedback improves auditory spatial localization.

    Science.gov (United States)

    Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality. PMID:25368587
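The spatial-bisection task above (judge whether the second of three sounds is closer to the first or the third) lends itself to a small simulation. The sketch below is an illustrative simplification, not the study's method: it models a listener whose perceived azimuths carry Gaussian noise, with outer sounds fixed at ±10° and all parameter values invented.

```python
import random

def simulate_listener(offset_deg, sigma_deg, n_trials=200, rng=None):
    """Proportion correct in a spatial-bisection task for a simulated
    listener whose perceived azimuths have Gaussian noise (sigma_deg).
    Sounds are presented at -10, offset_deg, and +10 degrees."""
    rng = rng or random.Random(0)
    correct = 0
    for _ in range(n_trials):
        # perceived positions of the three sounds on this trial
        p1 = rng.gauss(-10.0, sigma_deg)
        p2 = rng.gauss(offset_deg, sigma_deg)
        p3 = rng.gauss(10.0, sigma_deg)
        answer = "first" if abs(p2 - p1) < abs(p2 - p3) else "third"
        truth = "first" if abs(offset_deg + 10) < abs(offset_deg - 10) else "third"
        correct += (answer == truth)
    return correct / n_trials

easy = simulate_listener(-8.0, 2.0)   # second sound far off-center: near-perfect
hard = simulate_listener(0.0, 2.0)    # second sound at midpoint: near chance
```

A bisection threshold would then be estimated by sweeping `offset_deg` toward zero until accuracy drops to a criterion level; training (e.g., with tactile feedback) would be modeled as a reduction in `sigma_deg`.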

  16. Tactile feedback improves auditory spatial localization

    Directory of Open Access Journals (Sweden)

Monica Gori

    2014-10-01

Full Text Available Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile-feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal-feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially coherent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.

17. THE EFFECTS OF SALICYLATE ON AUDITORY EVOKED POTENTIAL AMPLITUDE FROM THE AUDITORY CORTEX AND AUDITORY BRAINSTEM

    Institute of Scientific and Technical Information of China (English)

    Brian Sawka; SUN Wei

    2014-01-01

Tinnitus has often been studied using salicylate in animal models as it is capable of inducing temporary hearing loss and tinnitus. Studies have recently observed enhancement of auditory evoked responses of the auditory cortex (AC) post salicylate treatment, which is also shown to be related to tinnitus-like behavior in rats. The aim of this study was to observe whether the enhancements of the AC post salicylate treatment are also present at structures in the brainstem. Four male Sprague Dawley rats with AC-implanted electrodes were tested for both AC and auditory brainstem response (ABR) recordings pre and post 250 mg/kg intraperitoneal injections of salicylate. The responses were recorded as the peak-to-trough amplitudes of P1-N1 (AC), ABR wave V, and ABR wave II. AC responses resulted in statistically significant enhancement of amplitude at 2 hours post salicylate with 90 dB stimuli tone bursts of 4, 8, 12, and 20 kHz. Wave V of ABR responses at 90 dB resulted in a statistically significant reduction of amplitude 2 hours post salicylate and a mean decrease of amplitude of 31% for 16 kHz. Wave II amplitudes at 2 hours post treatment were significantly reduced for 4, 12, and 20 kHz stimuli at 90 dB SPL. Our results suggest that the enhancement changes of the AC related to salicylate-induced tinnitus are generated superior to the level of the inferior colliculus and may originate in the AC.
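The amplitude measure used throughout this study is the peak-to-trough value of an averaged evoked waveform. A toy NumPy sketch of that measure follows; the waveform, sampling rate, and the uniform 31% scaling are all invented stand-ins, not the paper's data or analysis code.

```python
import numpy as np

fs = 20000                                   # sampling rate (Hz), hypothetical
t = np.arange(0, 0.01, 1 / fs)               # one 10 ms averaged epoch
# invented stand-in for an AC P1-N1 complex: positive peak at 3 ms, trough at 5 ms
resp = (5.0 * np.exp(-((t - 0.003) / 0.0005) ** 2)
        - 3.0 * np.exp(-((t - 0.005) / 0.0005) ** 2))

def peak_to_trough(x):
    """Peak-to-trough amplitude, as used for P1-N1 and ABR wave measures."""
    return x.max() - x.min()

baseline = peak_to_trough(resp)              # ~8 in these arbitrary units
post = peak_to_trough(0.69 * resp)           # a uniform 31% reduction scales it by 0.69
```

In practice the peaks are picked within component-specific latency windows rather than over the whole epoch, but the amplitude arithmetic is the same.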

  18. [Physiognomy-accompanying auditory hallucinations in schizophrenia: psychopathological investigation of 10 patients].

    Science.gov (United States)

    Nagashima, Hideaki; Kobayashi, Toshiyuki

    2010-01-01

We previously reported two schizophrenic patients with characteristic hallucinations consisting of auditory hallucinations accompanied by visual hallucinations of the speaker's face. The patient sees the face of the hallucinatory speaker in his/her mind and hears the voice talking inwardly. We termed these experiences physiognomy-accompanying auditory hallucinations. In this report, we present 10 patients with schizophrenia showing physiognomy-accompanying auditory hallucinations and evaluate the characteristics of these clinical symptoms. Moreover, we consider what the symptoms mean for patients and the metabasis from structural aspects. Lastly, we consider how we can treat these patients living autistic lives with the symptoms. During physiognomy-accompanying auditory hallucinations, the realistic face moves its mouth and talks to the patient expressively. In early-onset cases, the faces of various real people appear talking about ordinary things, while in late-onset cases the faces can be imaginary but are mainly real people talking about ordinary or delusional things. We suppose that these characteristics of the symptoms unify the schizophrenic world overwhelmed by "a force of non-sense" to "the sense field". "The force of non-sense" is a substantial power but cannot be reduced to the real meaning. And we suppose that not visual reality but the intensity of auditory hallucinations of the face brings about the overwhelming intensity of symptoms, and the substantiality of this intensity depends on the states of excessive fullness of "the force of non-sense". With these symptoms, patients see the narration of auditory hallucinations through the facial image, and the content of auditory hallucinations is compressed into the movement of visual hallucinations of the speaker's face. The form of the symptoms is realistic but the speaker's face and voice are beyond ordinary time and space. The symptoms are essentially different from ordinary perception. The visual

  19. The Neuromagnetic Dynamics of Time Perception

    OpenAIRE

    Carver, Frederick W.; Elvevåg, Brita; Altamura, Mario; Weinberger, Daniel R.; Coppola, Richard

    2012-01-01

Examining real-time cortical dynamics is crucial for understanding time perception. Using magnetoencephalography, we studied auditory duration discrimination of short (0.5 s) versus a pitch control. Time-frequency analysis of event-related fields showed widespread beta-band (13–30 Hz) desynchronization during all tone presentations. Synthetic aperture magnetometry indicated automatic, primarily sensorimotor responses in the short and pitch conditions, with activation specific to timing in bilateral ...

  20. Speech perception as an active cognitive process

    OpenAIRE

    Howard Charles Nusbaum

    2014-01-01

One view of speech perception is that acoustic signals are transformed into representations for pattern matching to determine linguistic structure. This process can be taken as a statistical pattern-matching problem, assuming relatively stable linguistic categories are characterized by neural representations related to auditory properties of speech that can be compared to speech input. This kind of pattern matching can be termed a passive process, which implies rigidity of processing with few...

  1. Lecture recording system in anatomy: possible benefit to auditory learners.

    Science.gov (United States)

    Bacro, Thierry R H; Gebregziabher, Mulugeta; Ariail, Jennie

    2013-01-01

    The literature reports that Lecture Recording Systems (LRS) are usually well received by students but that the pedagogical value of LRS in academic settings remains somewhat unclear. The primary aim of the current study is to document students' perceptions, actual patterns of usage, and the impact of LRS use on students' grades in a dental gross and neuroanatomy course. Other aims are to determine whether students' learning preferences correlated with final grades and whether other factors such as gender, age, overall academic score on the Dental Aptitude Test (DAT), lecture level of difficulty, type of lecture, category of lecture, or teaching faculty could explain the impact, if any, of LRS use on the course final grade. No significant correlation was detected between the final grades and the variables studied, except for a significant but modest correlation between final grades and the number of times the students accessed the lecture recordings (r=0.33 with P=0.01). Also, after adjusting for gender, age, learning style, and academic DAT, a significant interaction between auditory learning style and average usage time was found for final grade (P=0.03). Students who classified themselves as auditory and who used the LRS on average for fewer than 10 minutes per access scored an average final grade 16.43% higher than the nonauditory students using the LRS for the same amount of time per access. Based on these findings, implications for teaching are discussed and recommendations for use of LRS are proposed. PMID:23508921

  2. Auditory and language skills of children using hearing aids

    Directory of Open Access Journals (Sweden)

    Leticia Macedo Penna

    2015-04-01

    Full Text Available INTRODUCTION: Hearing loss may impair the development of a child. The rehabilitation process for individuals with hearing loss depends on effective interventions. OBJECTIVE: To describe the linguistic profile and hearing skills of children using hearing aids, to characterize the rehabilitation process, and to analyze its association with the children's degree of hearing loss. METHODS: Cross-sectional study with a non-probabilistic sample of 110 children using hearing aids (6-10 years of age) for mild to profound hearing loss. Tests of language, speech perception, phonemic discrimination, and school performance were performed. The associations were verified by the following tests: chi-squared for linear trend and Kruskal-Wallis. RESULTS: About 65% of the children had altered vocabulary, whereas 89% and 94% had altered phonology and inferior school performance, respectively. The degree of hearing loss was associated with differences in the median age of diagnosis; the age at which the hearing aids were fitted and at which speech therapy was started; and the performance on auditory tests and the type of communication used. CONCLUSION: The diagnosis of hearing loss and the clinical interventions occurred late, contributing to impairments in auditory and language development.

  3. Multisensory Speech Perception in Children with Autism Spectrum Disorders

    Science.gov (United States)

    Woynaroski, Tiffany G.; Kwakye, Leslie D.; Foss-Feig, Jennifer H.; Stevenson, Ryan A.; Stone, Wendy L.; Wallace, Mark T.

    2013-01-01

    This study examined unisensory and multisensory speech perception in 8-17 year old children with autism spectrum disorders (ASD) and typically developing controls matched on chronological age, sex, and IQ. Consonant-vowel syllables were presented in visual only, auditory only, matched audiovisual, and mismatched audiovisual ("McGurk")…

  4. To What Extent Do Gestalt Grouping Principles Influence Tactile Perception?

    Science.gov (United States)

    Gallace, Alberto; Spence, Charles

    2011-01-01

    Since their formulation by the Gestalt movement more than a century ago, the principles of perceptual grouping have primarily been investigated in the visual modality and, to a lesser extent, in the auditory modality. The present review addresses the question of whether the same grouping principles also affect the perception of tactile stimuli.…

  5. Crossmodal and incremental perception of audiovisual cues to emotional speech

    NARCIS (Netherlands)

    Barkhuysen, Pashiera; Krahmer, E.J.; Swerts, M.G.J.

    2010-01-01

    In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? B...

  6. Crossmodal and Incremental Perception of Audiovisual Cues to Emotional Speech

    Science.gov (United States)

    Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc

    2010-01-01

    In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests…

  7. Speech Perception Ability in Individuals with Friedreich Ataxia

    Science.gov (United States)

    Rance, Gary; Fava, Rosanne; Baldock, Heath; Chong, April; Barker, Elizabeth; Corben, Louise; Delatycki

    2008-01-01

    The aim of this study was to investigate auditory pathway function and speech perception ability in individuals with Friedreich ataxia (FRDA). Ten subjects confirmed by genetic testing as being homozygous for a GAA expansion in intron 1 of the FXN gene were included. While each of the subjects demonstrated normal, or near normal sound detection, 3…

  8. Behavioral evidence for the role of cortical theta oscillations in determining auditory channel capacity for speech

    Directory of Open Access Journals (Sweden)

    Oded Ghitza

    2014-07-01

    Full Text Available Studies on the intelligibility of time-compressed speech have shown flawless performance for moderate compression factors, a sharp deterioration for compression factors above three, and improved performance as a result of "repackaging" – a process of dividing the time-compressed waveform into fragments, called packets, and delivering the packets at a prescribed rate. This intricate pattern of performance reflects the reliability of the auditory system in processing speech streams with different information transfer rates; the knee-point of performance defines the auditory channel capacity. This study is concerned with the cortical computation principle that determines channel capacity. Oscillation-based models of speech perception hypothesize that the speech decoding process is guided by a cascade of oscillations with θ as "master," capable of tracking the input rhythm, with the θ cycles aligned with the intervocalic speech fragments termed θ-syllables; intelligibility remains high as long as θ is in sync with the input, and it sharply deteriorates once θ is out of sync. In the study described here, the hypothesized role of θ was examined by measuring the auditory channel capacity of time-compressed speech that had undergone repackaging. For all compression factors tested (up to eight), the packaging rate at capacity equals 9 packets/sec – aligned with the upper limit of cortical θ, θmax (about 9 Hz) – and the packet duration equals the duration of one uncompressed θ-syllable divided by the compression factor. The alignment of both the packaging rate and the packet duration with properties of cortical θ suggests that the auditory channel capacity is determined by θ. Irrespective of speech speed, the maximum information transfer rate through the auditory channel is the information in one speech fragment the length of an uncompressed θ-syllable per θmax cycle. Equivalently, the auditory channel capacity is 9 θ-syllables/sec.
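
    The capacity arithmetic in this abstract (packet duration scales inversely with the compression factor, while the packet rate at capacity stays pinned to θmax) can be sketched numerically. This is an illustrative calculation only, not code from the study; the 110 ms uncompressed θ-syllable duration is an assumed example value (roughly 1/θmax), not a stimulus parameter reported above.

    ```python
    # Sketch of the auditory channel-capacity arithmetic described in the
    # abstract. Assumption: one uncompressed theta-syllable lasts ~110 ms
    # (an example value, roughly 1 / theta_max).
    THETA_MAX_HZ = 9.0      # upper limit of cortical theta (~9 Hz)
    SYLLABLE_S = 0.110      # assumed duration of one uncompressed theta-syllable

    def packet_duration(compression_factor: float,
                        syllable_s: float = SYLLABLE_S) -> float:
        """Packet duration at capacity: one uncompressed theta-syllable
        divided by the compression factor."""
        return syllable_s / compression_factor

    def info_rate_at_capacity() -> float:
        """Maximum transfer rate at capacity: 9 packets (theta-syllables)
        per second, independent of the compression factor."""
        return THETA_MAX_HZ

    for k in (2, 4, 8):
        print(f"compression x{k}: packet = {packet_duration(k) * 1000:.2f} ms, "
              f"rate = {info_rate_at_capacity():.0f} packets/s")
    ```

    The point of the sketch is that only the packet duration changes with speech speed; the packet rate at capacity is fixed by θmax.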

  9. Is the auditory evoked P2 response a biomarker of learning?

    Science.gov (United States)

    Tremblay, Kelly L; Ross, Bernhard; Inoue, Kayo; McClannahan, Katrina; Collet, Gregory

    2014-01-01

    Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography (EEG) and magnetoencephalography (MEG) have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEP), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences, including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared it to participants, matched in time and stimulus exposure, who did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What's more, these effects were retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN) wave 600-900 ms post-stimulus onset, post-training, exclusively for the group that learned to identify the pre-voiced contrast.

  10. Is the auditory evoked P2 response a biomarker of learning?

    Directory of Open Access Journals (Sweden)

    Kelly Tremblay

    2014-02-01

    Full Text Available Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography and magnetoencephalography have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEP), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences, including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared it to participants, matched in time and stimulus exposure, who did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What's more, these effects were retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN) wave 600-900 ms post-stimulus onset, post-training, exclusively for the group that learned to identify the pre...

  11. The role of the auditory brainstem in processing musically-relevant pitch

    Directory of Open Access Journals (Sweden)

    Gavin M. Bidelman

    2013-05-01

    Full Text Available Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically-relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners' perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant-sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.

  12. Influence of age, spatial memory, and ocular fixation on localization of auditory, visual, and bimodal targets by human subjects.

    Science.gov (United States)

    Dobreva, Marina S; O'Neill, William E; Paige, Gary D

    2012-12-01

    A common complaint of the elderly is difficulty identifying and localizing auditory and visual sources, particularly in competing background noise. Spatial errors in the elderly may pose challenges and even threats to self and others during everyday activities, such as localizing sounds in a crowded room or driving in traffic. In this study, we investigated the influence of aging, spatial memory, and ocular fixation on the localization of auditory, visual, and combined auditory-visual (bimodal) targets. Head-restrained young and elderly subjects localized targets in a dark, echo-attenuated room using a manual laser pointer. Localization accuracy and precision (repeatability) were quantified for both ongoing and transient (remembered) targets at response delays up to 10 s. Because eye movements bias auditory spatial perception, localization was assessed under target fixation (eyes free, pointer guided by foveal vision) and central fixation (eyes fixed straight ahead, pointer guided by peripheral vision) conditions. Spatial localization across the frontal field in young adults demonstrated (1) horizontal overshoot and vertical undershoot for ongoing auditory targets under target fixation conditions, but near-ideal horizontal localization with central fixation; (2) accurate and precise localization of ongoing visual targets guided by foveal vision under target fixation that degraded when guided by peripheral vision during central fixation; (3) overestimation in horizontal central space (±10°) of remembered auditory, visual, and bimodal targets with increasing response delay. In comparison with young adults, elderly subjects showed (1) worse precision in most paradigms, especially when localizing with peripheral vision under central fixation; (2) greatly impaired vertical localization of auditory and bimodal targets; (3) increased horizontal overshoot in the central field for remembered visual and bimodal targets across response delays; (4) greater vulnerability to ...

  13. Ordinal models of audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2011-01-01

    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion, in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe...... ordinal models that can account for the McGurk illusion. We compare this type of model to the Fuzzy Logical Model of Perception (FLMP), in which the response categories are not ordered. While the FLMP generally fit the data better than the ordinal model, it also employs more free parameters in complex...... experiments when the number of response categories is high, as it is for speech perception in general. Testing the predictive power of the models using a form of cross-validation, we found that ordinal models perform better than the FLMP. Based on these findings we suggest that ordinal models generally have...

  14. Categorical Perception

    OpenAIRE

    Harnad, Stevan

    2003-01-01

    Differences can be perceived as gradual and quantitative, as with different shades of gray, or they can be perceived as more abrupt and qualitative, as with different colors. The first is called continuous perception and the second categorical perception. Categorical perception (CP) can be inborn or can be induced by learning. Formerly thought to be peculiar to speech and color perception, CP turns out to be far more general, and may be related to how the neural networks in our brains detect ...

  15. Auditory Efferent System Modulates Mosquito Hearing.

    Science.gov (United States)

    Andrés, Marta; Seifert, Marvin; Spalthoff, Christian; Warren, Ben; Weiss, Lukas; Giraldo, Diego; Winkler, Margret; Pauls, Stephanie; Göpfert, Martin C

    2016-08-01

    The performance of vertebrate ears is controlled by auditory efferents that originate in the brain and innervate the ear, synapsing onto hair cell somata and auditory afferent fibers [1-3]. Efferent activity can provide protection from noise and facilitate the detection and discrimination of sound by modulating mechanical amplification by hair cells and transmitter release as well as auditory afferent action potential firing [1-3]. Insect auditory organs are thought to lack efferent control [4-7], but when we inspected mosquito ears, we obtained evidence for its existence. Antibodies against synaptic proteins recognized rows of bouton-like puncta running along the dendrites and axons of mosquito auditory sensory neurons. Electron microscopy identified synaptic and non-synaptic sites of vesicle release, and some of the innervating fibers co-labeled with somata in the CNS. Octopamine, GABA, and serotonin were identified as efferent neurotransmitters or neuromodulators that affect auditory frequency tuning, mechanical amplification, and sound-evoked potentials. Mosquito brains thus modulate mosquito ears, extending the use of auditory efferent systems from vertebrates to invertebrates and adding new levels of complexity to mosquito sound detection and communication. PMID:27476597

  16. Spatial auditory processing in pinnipeds

    Science.gov (United States)

    Holt, Marla M.

    Given the biological importance of sound for a variety of activities, pinnipeds must be able to obtain spatial information about their surroundings through acoustic input in the absence of other sensory cues. The three chapters of this dissertation address the spatial auditory processing capabilities of pinnipeds in air, given that these amphibious animals use acoustic signals for reproduction and survival on land. Two chapters are comparative lab-based studies that utilized psychophysical approaches conducted in an acoustic chamber. Chapter 1 addressed the frequency-dependent sound localization abilities in azimuth of three pinniped species (the harbor seal, Phoca vitulina, the California sea lion, Zalophus californianus, and the northern elephant seal, Mirounga angustirostris). While the performances of the sea lion and harbor seal were consistent with the duplex theory of sound localization, the elephant seal, a low-frequency hearing specialist, showed a decreased ability to localize the highest frequencies tested. In Chapter 2, spatial release from masking (SRM), which occurs when a signal and masker are spatially separated, resulting in improvement in signal detectability relative to conditions in which they are co-located, was determined in a harbor seal and a sea lion. Absolute and masked thresholds were measured at three frequencies and azimuths to determine the detection advantages afforded by this type of spatial auditory processing. Results showed that hearing sensitivity was enhanced by up to 19 and 12 dB in the harbor seal and sea lion, respectively, when the signal and masker were spatially separated. Chapter 3 was a field-based study that quantified both sender and receiver variables of the directional properties of male northern elephant seal calls produced within a communication system that serves to delineate dominance status. This included measuring call directivity patterns, observing male-male vocally-mediated interactions, and an acoustic playback study

  17. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany Plakke

    2014-07-01

    Full Text Available The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections, as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information, with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli, or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  18. Speaker-independent factors affecting the perception of foreign accent in a second language

    OpenAIRE

    Levi, Susannah V.; Winters, Stephen J.; Pisoni, David B.

    2007-01-01

    Previous research on foreign accent perception has largely focused on speaker-dependent factors such as age of learning and length of residence. Factors that are independent of a speaker’s language learning history have also been shown to affect perception of second language speech. The present study examined the effects of two such factors—listening context and lexical frequency—on the perception of foreign-accented speech. Listeners rated foreign accent in two listening contexts: auditory-o...

  19. Cortical-Evoked Potentials Reflect Speech-in-Noise Perception in Children

    OpenAIRE

    Anderson, Samira; Chandrasekaran, Bharath; Yi, Han-Gyol; Kraus, Nina

    2010-01-01

    Children are known to be particularly vulnerable to the effects of noise on speech perception, and it is commonly acknowledged that failure of central auditory processes can lead to these difficulties with speech-in-noise (SIN) perception. Still, little is known about the mechanistic relationship between central processes and the perception of speech in noise. Our aims were two-fold: to examine the effects of noise on the central encoding of speech through measurement of cortical event-relate...

  20. Voices to reckon with: perceptions of voice identity in clinical and non-clinical voice hearers

    OpenAIRE

    Badcock, Johanna C.; Chhabra, Saruchi

    2013-01-01

    The current review focuses on the perception of voice identity in clinical and non-clinical voice hearers. Identity perception in auditory verbal hallucinations (AVH) is grounded in the mechanisms of human (i.e., real, external) voice perception, and shapes the emotional (distress) and behavioral (help-seeking) response to the experience. Yet, the phenomenological assessment of voice identity is often limited, for example to the gender of the voice, and has failed to take advantage of recent ...

  1. Functional Neurochemistry of the Auditory System

    Directory of Open Access Journals (Sweden)

    Nourollah Agha Ebrahimi

    1993-03-01

    Full Text Available Functional neurochemistry is one of the fields of study of the auditory system that has developed remarkably in recent years. Many of the findings in this field have led not only basic auditory researchers but also clinicians to new points of view in audiology. Here, we discuss the latest investigations in the functional neurochemistry of the auditory system, focusing this review mainly on research that raises hope for future clinical studies.

  2. Auditory Neuropathy/Dyssynchrony in Biotinidase Deficiency

    Science.gov (United States)

    Yaghini, Omid

    2016-01-01

    Biotinidase deficiency is an autosomal recessively inherited disorder showing evidence of hearing loss and optic atrophy in addition to seizures, hypotonia, and ataxia. In the present study, a 2-year-old boy with biotinidase deficiency is presented whose clinical symptoms included auditory neuropathy/auditory dyssynchrony (AN/AD). In this case, transient-evoked otoacoustic emissions showed bilaterally normal responses, representing normal function of the outer hair cells. In contrast, the acoustic reflex test showed absent reflexes bilaterally, and visual reinforcement audiometry and auditory brainstem responses indicated severe to profound hearing loss in both ears. These results suggest AN/AD in patients with biotinidase deficiency. PMID:27144235

  3. Functional Neurochemistry of the Auditory System

    OpenAIRE

    Nourollah Agha Ebrahimi

    1993-01-01

    Functional neurochemistry is one of the fields of study of the auditory system that has developed remarkably in recent years. Many of the findings in this field have led not only basic auditory researchers but also clinicians to new points of view in audiology. Here, we discuss the latest investigations in the functional neurochemistry of the auditory system, focusing this review mainly on research that raises flashes of hope f...

  4. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2014-01-01

    Conclusion: Based on the obtained results, a significant reduction in auditory memory was seen in the aged group, and the Persian version of the dichotic auditory-verbal memory test, like many other auditory-verbal memory tests, showed the effect of aging on auditory-verbal memory performance.

  5. Physiological Measures of Auditory Function

    Science.gov (United States)

    Kollmeier, Birger; Riedel, Helmut; Mauermann, Manfred; Uppenkamp, Stefan

    When acoustic signals enter the ears, they pass several processing stages of various complexities before they will be perceived. The auditory pathway can be separated into structures dealing with sound transmission in air (i.e. the outer ear, ear canal, and the vibration of tympanic membrane), structures dealing with the transformation of sound pressure waves into mechanical vibrations of the inner ear fluids (i.e. the tympanic membrane, ossicular chain, and the oval window), structures carrying mechanical vibrations in the fluid-filled inner ear (i.e. the cochlea with basilar membrane, tectorial membrane, and hair cells), structures that transform mechanical oscillations into a neural code, and finally several stages of neural processing in the brain along the pathway from the brainstem to the cortex.

  6. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    OpenAIRE

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it i...

  7. AUDITORY CORTICAL PLASTICITY: DOES IT PROVIDE EVIDENCE FOR COGNITIVE PROCESSING IN THE AUDITORY CORTEX?

    OpenAIRE

    Irvine, Dexter R. F.

    2007-01-01

    The past 20 years have seen substantial changes in our view of the nature of the processing carried out in auditory cortex. Some processing of a cognitive nature, previously attributed to higher order “association” areas, is now considered to take place in auditory cortex itself. One argument adduced in support of this view is the evidence indicating a remarkable degree of plasticity in the auditory cortex of adult animals. Such plasticity has been demonstrated in a wide range of paradigms, i...

  8. Assessment of visual, auditory, and kinesthetic learning style among undergraduate nursing students

    Directory of Open Access Journals (Sweden)

    Radhwan Hussein Ibrahim

    2015-12-01

    Full Text Available Background: Learning styles refer to the ability of learners to perceive and process information in learning situations. Understanding students' learning styles can improve educational outcomes. VAK (visual, auditory, kinesthetic) is a learning-style model in which students use three channels of sensory perception to receive information. Teachers can incorporate these learning styles into their classroom activities so that students can be successful in their courses. The purpose of this study is to assess visual, auditory, and kinesthetic learning styles among undergraduate nursing students. Methods: A descriptive study was carried out during the period from 3 November 2013 to 15 March 2014 in two nursing colleges at the Universities of Mosul and Kirkuk. Stratified random sampling was used for data collection. The target population was undergraduate nursing students (210 students: 60 male and 150 female). The Statistical Package for the Social Sciences (SPSS), chi-square, frequencies, and percentages were used for data analysis. Results: The findings reveal that the visual, auditory, and kinesthetic learning styles of the study sample were 40.0%, 29.5%, and 30.5%, respectively. Females preferred the auditory learning style (30.3%) more than males (27.3%), while males preferred the kinesthetic learning style (32.3%) more than females (29.8%). Recommendation: The researchers recommend that nurse educators be aware of the learning styles of their students and match their teaching style to the students' learning styles.

  9. Auditory stimulus timing influences perceived duration of co-occurring visual stimuli

    Directory of Open Access Journals (Sweden)

    Vincenzo eRomei

    2011-09-01

    Full Text Available There is increasing interest in multisensory influences upon sensory-specific judgements, such as when auditory stimuli affect visual perception. Here we studied whether the duration of an auditory event can objectively affect the perceived duration of a co-occurring visual event. On each trial, participants were presented with a pair of successive flashes and had to judge whether the first or second was longer. Two beeps were presented with the flashes. The order of short and long stimuli could be the same across audition and vision (audiovisual congruent), or reversed, so that the longer flash was accompanied by the shorter beep and vice versa (audiovisual incongruent); or the two beeps could have the same duration as each other. Beeps and flashes could onset synchronously or asynchronously. In a further control experiment, the beep durations were much longer (tripled) than the flashes. Results showed that visual duration-discrimination sensitivity (d') was significantly higher for congruent (and significantly lower for incongruent) audiovisual synchronous combinations, relative to the visual-only presentation. This effect was abolished when auditory and visual stimuli were presented asynchronously, or when sound durations tripled those of the flashes. We conclude that the temporal properties of co-occurring auditory stimuli influence the perceived duration of visual stimuli, and that this can reflect genuine changes in visual sensitivity rather than mere response bias.
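The sensitivity measure d' reported above comes from standard equal-variance signal detection theory: d' = z(hit rate) − z(false-alarm rate), where z is the inverse of the standard normal CDF. A minimal sketch of that computation; the rates below are made-up illustrations, not the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Equal-variance SDT sensitivity: d' = z(H) - z(FA)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for a "congruent" vs. an "incongruent" condition
print(round(d_prime(0.80, 0.25), 2))  # 1.52  (higher d' -> better discrimination)
print(round(d_prime(0.60, 0.45), 2))  # 0.38
```

A higher d' for the congruent condition, as in the study, reflects a genuine sensitivity change rather than a shifted response criterion.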

  10. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex

    Science.gov (United States)

    Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Abstract Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas.
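The harmonic structure described above (components at integer multiples of a common F0) is easy to mimic numerically. A toy sketch of two concurrent harmonic complexes with different F0s; it deliberately ignores formant weighting (the spectral envelope that gives each vowel its identity), and all values are illustrative:

```python
import math

def harmonic_complex(f0, n_harmonics, sr=16000, dur=0.1):
    """Sum of equal-amplitude sinusoids at integer multiples of f0."""
    n_samples = int(sr * dur)
    return [sum(math.sin(2 * math.pi * f0 * k * t / sr)
                for k in range(1, n_harmonics + 1))
            for t in range(n_samples)]

# Two concurrent "vowel skeletons" with distinct F0s (hence distinct pitches)
a = harmonic_complex(100.0, 10)   # F0 = 100 Hz
b = harmonic_complex(126.0, 10)   # F0 = 126 Hz, roughly 4 semitones higher
mix = [x + y for x, y in zip(a, b)]
```

In the mixture, the lower harmonics of each complex fall at distinct frequencies, which is what a rate-place code in A1 can in principle resolve.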

  11. Auditory fMRI of Sound Intensity and Loudness for Unilateral Stimulation.

    Science.gov (United States)

    Behler, Oliver; Uppenkamp, Stefan

    2016-01-01

    We report a systematic exploration of the interrelation of sound intensity, ear of entry, individual loudness judgments, and brain activity across hemispheres, using auditory functional magnetic resonance imaging (fMRI). The stimuli employed were 4-kHz bandpass-filtered noise stimuli, presented monaurally to each ear at levels from 37 to 97 dB SPL. One diotic condition and a silence condition were included as control conditions. Normal-hearing listeners completed a categorical loudness scaling procedure with similar stimuli before auditory fMRI was performed. The relationships between brain activity, as inferred from blood oxygenation level dependent (BOLD) contrasts, and both sound intensity and loudness estimates were analyzed by means of linear mixed-effects models for various anatomically defined regions of interest in the ascending auditory pathway and in the cortex. The results indicate distinct functional differences between midbrain and cortical areas as well as between specific regions within auditory cortex, suggesting a systematic hierarchy in terms of lateralization and the representation of sensory stimulation and perception. PMID:27080657

  12. Improving the reading skills of Jordanian students with auditory discrimination problems

    Directory of Open Access Journals (Sweden)

    Afaf Abdullah Mukdadi

    2016-01-01

    Full Text Available Background: Some of the developmental problems facing students with learning difficulties are those related to auditory perception, which, in turn, can negatively affect the individual's learning process. Aim: To evaluate a training program designed to develop the auditory discrimination skills of students who suffer from auditory discrimination problems. Design: A quasi-experimental research design was used in this study. The study sample was divided into two equal groups: experimental and control. Participants and setting: The population of the study consists of students with learning difficulties from the second, third and fourth grades, aged between 7 and 9 years, enrolled in the resource rooms affiliated with the Jordanian Ministry of Education in Irbid province, amounting to 120 boy and girl students for the 2013-2014 school year. Results: The study showed a significant difference in favor of the experimental group, indicating that the training program was effective: it helped the students with auditory discrimination problems to improve their reading skills. The results also showed a significant difference in favor of the older students in the experimental group, while there were no significant differences in reading performance changes with respect to gender.

  13. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    Science.gov (United States)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. Sound has considerable potential both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. The results, although not conclusive, indicate that the studied auditory cues can influence audience judgements of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
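The volume-attenuation cue studied above follows, for a point source in a free field, the inverse-distance law: the level drops by 20·log10(r/r_ref) dB, i.e. about 6 dB per doubling of distance. A quick sketch of that relation (a free-field idealization; real rooms add reverberation and the article's cinema context is far less clean):

```python
import math

def level_drop_db(r_ref: float, r: float) -> float:
    """Free-field attenuation of a point source at distance r,
    relative to its level at reference distance r_ref."""
    return 20 * math.log10(r / r_ref)

for r in (1, 2, 4, 8):
    print(f"{r} m: {level_drop_db(1, r):.1f} dB")  # ~6 dB per doubling
```

High-frequency loss, the other cue, comes from air absorption and is frequency dependent, so it cannot be captured by a single scalar like this.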

  14. Effects of spatially correlated acoustic-tactile information on judgments of auditory circular direction

    Science.gov (United States)

    Cohen, Annabel J.; Lamothe, M. J. Reina; Toms, Ian D.; Fleming, Richard A. G.

    2002-05-01

    Cohen, Lamothe, Fleming, MacIsaac, and Lamoureux [J. Acoust. Soc. Am. 109, 2460 (2001)] reported that proximity governed circular direction judgments (clockwise/counterclockwise) of two successive tones emanating from all pairs of 12 speakers located at 30-degree intervals around a listener's head (cranium). Many listeners appeared to experience systematic front-back confusion. Diametrically opposed locations (180 degrees; a theoretically ambiguous direction) produced a direction bias pattern resembling Deutsch's tritone paradox [Deutsch, Kuyper, and Fisher, Music Percept. 5, 79-92 (1987)]. In Experiment 1 of the present study, the circular direction task was conducted in the tactile domain using 12 circumcranial points of vibration. For all 5 participants, proximity governed direction (without front-back confusion) and a simple clockwise bias was shown for 180-degree pairs. Experiment 2 tested 9 new participants in one unimodal auditory condition and two bimodal auditory-tactile conditions (spatially correlated/spatially uncorrelated). Correlated auditory-tactile information eliminated front-back confusion for 8 participants and replaced the ``paradoxical'' bias for 180-degree pairs with the clockwise bias. Thus, spatially correlated audio-tactile location information improves the veridical representation of 360-degree acoustic space, and modality-specific principles are implicated by the unique circular direction bias patterns for 180-degree pairs in the separate auditory and tactile modalities. [Work supported by NSERC.]

  15. Comparison of Auditory Event-Related Potential P300 in Sighted and Early Blind Individuals

    Directory of Open Access Journals (Sweden)

    Fatemeh Heidari

    2010-06-01

    Full Text Available Background and Aim: Following early visual deprivation, the neural network involved in processing auditory spatial information undergoes a profound reorganization. To investigate this process, event-related potentials provide accurate information about the time course of neural activation as well as perceptual and cognitive processes. In this study, the latency and amplitude of the auditory P300 were compared between sighted and early blind individuals aged 18-25 years. Methods: In this cross-sectional study, the auditory P300 potential was measured in a conventional oddball paradigm using two tone-burst stimuli (1000 and 2000 Hz) in 40 sighted subjects and 19 early blind subjects with a mean age of 20.94 years. Results: The mean latency of P300 in early blind subjects was significantly shorter than in sighted subjects (p=0.00). There was no significant difference in amplitude between the two groups (p>0.05). Conclusion: The reduced latency of P300 in early blind subjects compared to sighted subjects probably indicates that automatic processing and information categorization are faster in early blind subjects because of sensory compensation. It seems that neural plasticity increases the rate of auditory processing and attention in early blind subjects.

  16. Frequency band-importance functions for auditory and auditory-visual speech recognition

    Science.gov (United States)

    Grant, Ken W.

    2005-04-01

    In many everyday listening environments, speech communication involves the integration of both acoustic and visual speech cues. This is especially true in noisy and reverberant environments where the speech signal is highly degraded, or when the listener has a hearing impairment. Understanding the mechanisms involved in auditory-visual integration is a primary interest of this work. Of particular interest is whether listeners are able to allocate their attention to various frequency regions of the speech signal differently under auditory-visual conditions and auditory-alone conditions. For auditory speech recognition, the most important frequency regions tend to be around 1500-3000 Hz, corresponding roughly to important acoustic cues for place of articulation. The purpose of this study is to determine the most important frequency region under auditory-visual speech conditions. Frequency band-importance functions for auditory and auditory-visual conditions were obtained by having subjects identify speech tokens under conditions where the speech-to-noise ratio of different parts of the speech spectrum is independently and randomly varied on every trial. Point biserial correlations were computed for each separate spectral region and the normalized correlations are interpreted as weights indicating the importance of each region. Relations among frequency-importance functions for auditory and auditory-visual conditions will be discussed.
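The weighting step described above can be sketched as a point-biserial correlation, which is simply the Pearson correlation between a binary variable (was the trial correct?) and a continuous one (the band's signal-to-noise ratio on that trial). The trial data below are invented for illustration and are not from the study:

```python
def point_biserial(correct, snr):
    """Pearson r between a 0/1 outcome and a continuous predictor;
    with a binary variable this equals the point-biserial correlation."""
    n = len(correct)
    mc = sum(correct) / n
    ms = sum(snr) / n
    cov = sum((c - mc) * (s - ms) for c, s in zip(correct, snr))
    var_c = sum((c - mc) ** 2 for c in correct)
    var_s = sum((s - ms) ** 2 for s in snr)
    return cov / (var_c * var_s) ** 0.5

# Hypothetical trials: correctness vs. the randomized SNR (dB) of one band
correct = [1, 1, 0, 1, 0, 0, 1, 0]
snr     = [6, 4, -2, 5, 0, -4, 3, -1]
print(round(point_biserial(correct, snr), 2))  # large r -> important band
```

Repeating this per spectral band and normalizing the correlations yields a band-importance profile of the kind the study compares across auditory and auditory-visual conditions.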

  17. Speech perception as an active cognitive process

    Directory of Open Access Journals (Sweden)

    Shannon eHeald

    2014-03-01

    Full Text Available One view of speech perception is that acoustic signals are transformed into representations for pattern matching to determine linguistic structure. This process can be taken as a statistical pattern-matching problem, assuming relatively stable linguistic categories are characterized by neural representations related to auditory properties of speech that can be compared to speech input. This kind of pattern matching can be termed a passive process, which implies rigid processing with few demands on cognitive resources. An alternative view is that speech recognition, even in early stages, is an active process in which speech analysis is attentionally guided. Note that this does not mean consciously guided, but that information-contingent changes in early auditory encoding can occur as a function of context and experience. Active processing assumes that attention, plasticity, and listening goals are important in considering how listeners cope with adverse circumstances that impair hearing, whether masking noise in the environment or hearing loss. Although theories of speech perception have begun to incorporate some active processing, they seldom treat early speech encoding as plastic and attentionally guided. Recent research has suggested that speech perception is the product of both feedforward and feedback interactions between a number of brain regions that include descending projections perhaps as far downstream as the cochlea. It is important to understand how the ambiguity of the speech signal and constraints of context dynamically determine the cognitive resources recruited during perception, including focused attention, learning, and working memory. Theories of speech perception need to go beyond the current corticocentric approach in order to account for the intrinsic dynamics of the auditory encoding of speech. In doing so, this may provide new insights into ways in which hearing disorders and loss may be treated either through augmentation or ...

  18. In search of an auditory engram

    Science.gov (United States)

    Fritz, Jonathan; Mishkin, Mortimer; Saunders, Richard C.

    2005-01-01

    Monkeys trained preoperatively on a task designed to assess auditory recognition memory were impaired after removal of either the rostral superior temporal gyrus or the medial temporal lobe but were unaffected by lesions of the rhinal cortex. Behavioral analysis indicated that this result occurred because the monkeys did not or could not use long-term auditory recognition, and so depended instead on short-term working memory, which is unaffected by rhinal lesions. The findings suggest that monkeys may be unable to place representations of auditory stimuli into a long-term store, and thus question whether the monkey's cerebral memory mechanisms in audition are intrinsically different from those in other sensory modalities. Furthermore, they raise the possibility that language is unique to humans not only because it depends on speech but also because it requires long-term auditory memory. PMID:15967995

  19. Auditory stimulation and cardiac autonomic regulation

    Directory of Open Access Journals (Sweden)

    Vitor E. Valenti

    2012-08-01

    Full Text Available Previous studies have already demonstrated that auditory stimulation with music influences the cardiovascular system. In this study, we described the relationship between musical auditory stimulation and heart rate variability. Searches were performed with the Medline, SciELO, Lilacs and Cochrane databases using the following keywords: "auditory stimulation", "autonomic nervous system", "music" and "heart rate variability". The selected studies indicated that there is a strong correlation between noise intensity and vagal-sympathetic balance. Additionally, it was reported that music therapy improved heart rate variability in anthracycline-treated breast cancer patients. It was hypothesized that dopamine release in the striatal system induced by pleasurable songs is involved in cardiac autonomic regulation. Musical auditory stimulation influences heart rate variability through a neural mechanism that is not well understood. Further studies are necessary to develop new therapies to treat cardiovascular disorders.

  20. Auditory filters at low-frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Møller, Henrik

    2009-01-01

    Prediction and assessment of low-frequency noise problems requires information about the auditory filter characteristics at low frequencies. Unfortunately, data at low frequencies are scarce, and practically no results have been published for frequencies below 100 Hz. Extrapolation of ERB results ... from previous studies suggests the filter bandwidth keeps decreasing below 100 Hz, although at a relatively lower rate than at higher frequencies. The main characteristics of the auditory filter were studied from below 100 Hz up to 1000 Hz. Center frequencies evaluated were 50, 63, 125, 250, 500, and 1000 ... (...-ear transfer function), the asymmetry of the auditory filter changed from steeper high-frequency slopes at 1000 Hz to steeper low-frequency slopes below 100 Hz. The increasing steepness at low frequencies of the middle-ear high-pass filter is thought to cause this effect. The dynamic range of the auditory filter ...
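For reference, the standard ERB fit of Glasberg and Moore (1990) is ERB(f) = 24.7·(4.37·f/1000 + 1) Hz. That formula was fitted to data mostly above ~100 Hz, which is precisely why extrapolating it downward, as the abstract notes, is uncertain. A quick sketch evaluating it at the study's center frequencies:

```python
def erb(f_hz: float) -> float:
    """Glasberg & Moore (1990) ERB fit, in Hz, for center frequency f_hz.
    Fitted to data above ~100 Hz; values below that are extrapolations."""
    return 24.7 * (4.37 * f_hz / 1000 + 1)

for fc in (50, 63, 125, 250, 500, 1000):  # the study's center frequencies
    print(f"{fc:5d} Hz -> ERB {erb(fc):6.1f} Hz")
```

The fit predicts an ERB of about 132.6 Hz at 1000 Hz and about 30.1 Hz at 50 Hz, so the bandwidth indeed keeps shrinking toward low frequencies, but the formula alone says nothing about the filter-asymmetry reversal the study reports.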

  1. Effect of omega-3 on auditory system

    Directory of Open Access Journals (Sweden)

    Vida Rahimi

    2014-01-01

    Full Text Available Background and Aim: Omega-3 fatty acids have structural and biological roles in the body's various systems, and numerous studies have investigated them. The auditory system is affected as well. The aim of this article was to review research on the effect of omega-3 on the auditory system. Methods: We searched the Medline, Google Scholar, PubMed, Cochrane Library and SID search engines with the keywords "auditory" and "omega-3", and read textbooks on this subject published between 1970 and 2013. Conclusion: Both excess and deficient amounts of dietary omega-3 fatty acids can harm fetal and infant growth and the development of the brain and central nervous system, especially the auditory system. It is important to determine the adequate dosage of omega-3.

  2. Interaction of sight and sound in the perception and experience of musical performance

    OpenAIRE

    Vuoskoski, Jonna K.; Thompson, Marc; Spence, Charles; Clarke, Eric F.

    2016-01-01

    Recently, Vuoskoski, Thompson, Clarke, and Spence (2014) demonstrated that visual kinematic performance cues may be more important than auditory performance cues in terms of observers’ ratings of expressivity perceived in audiovisual excerpts of piano playing, and that visual kinematic performance cues had crossmodal effects on the perception of auditory expressivity. The present study was designed to extend these findings, and to provide additional information about the roles of sight and so...

  3. Prefrontal Cortex Based Sex Differences in Tinnitus Perception: Same Tinnitus Intensity, Same Tinnitus Distress, Different Mood

    OpenAIRE

    Sven Vanneste; Kathleen Joos; Dirk De Ridder

    2012-01-01

    BACKGROUND: Tinnitus refers to an auditory phantom sensation. It is estimated that for 2% of the population this auditory phantom percept severely affects the quality of life, due to tinnitus-related distress. Although overall distress levels in tinnitus do not differ between the sexes, females are more influenced by distress than males. Typically, pain, sleep problems, and depression are perceived as significantly more severe by female tinnitus patients. Studies on gender differences in emotional regula...

  4. Corticofugal modulation of peripheral auditory responses

    OpenAIRE

    Terreros, Gonzalo; Delano, Paul H.

    2015-01-01

    The auditory efferent system originates in the auditory cortex and projects to the medial geniculate body (MGB), inferior colliculus (IC), cochlear nucleus (CN) and superior olivary complex (SOC) reaching the cochlea through olivocochlear (OC) fibers. This unique neuronal network is organized in several afferent-efferent feedback loops including: the (i) colliculo-thalamic-cortico-collicular; (ii) cortico-(collicular)-OC; and (iii) cortico-(collicular)-CN pathways. Recent experiments demonstr...

  5. Corticofugal modulation of peripheral auditory responses

    OpenAIRE

    Paul Hinckley Delano

    2015-01-01

    The auditory efferent system originates in the auditory cortex and projects to the medial geniculate body, inferior colliculus, cochlear nucleus and superior olivary complex reaching the cochlea through olivocochlear fibers. This unique neuronal network is organized in several afferent-efferent feedback loops including: the (i) colliculo-thalamic-cortico-collicular, (ii) cortico-(collicular)-olivocochlear and (iii) cortico-(collicular)-cochlear nucleus pathways. Recent experiments demonstrate...

  6. Auditory memory function in expert chess players

    OpenAIRE

    Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona

    2015-01-01

    Background: Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Auditory memory, like other behavioral skills, may be strengthened by long-term chess playing because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert...

  7. Auditory brain-stem responses in syphilis.

    OpenAIRE

    Rosenhall, U; Roupe, G

    1981-01-01

    Analysis of auditory brain-stem electrical responses (BSER) provides an effective means of detecting lesions in the auditory pathways. In the present study the wave patterns were analysed in 11 patients with secondary or latent syphilis with no clinical symptoms referable to the central nervous system and in two patients with congenital syphilis and general paralysis. Decreased amplitudes and prolonged latencies occurred frequently in patients with secondary and with advanced syphilis. This ...

  8. Auditory sequence analysis and phonological skill

    OpenAIRE

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E.; Turton, Stuart; Griffiths, Timothy D

    2012-01-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between ...

  9. On the Physical Development and Plasticity of the Auditory Nervous System

    Institute of Scientific and Technical Information of China (English)

    王晓宇; 黄玲玲

    2011-01-01

    A number of important functions of the human brain (such as perception, learning, memory, language, consciousness, emotion, and motor control) are closely related to auditory experience. Humans have a highly evolved auditory system, capable of comprehensively detecting, rapidly processing, and experiencing biologically significant acoustic signals (language) and of guiding specific behaviors (such as speech recognition and communication). The neurophysiological mechanisms of auditory experience play a pivotal role in auditory plasticity, and the development of the auditory nervous system is an important route for rebuilding auditory experience.

  10. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have a specific auditory perceptual dysfunction. Objectives: To determine whether brainstem responses to speech stimuli differ between PDS subjects and normally fluent speakers. Methods: Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results: There were significant group differences for the onset and offset transient peaks: subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions: Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and this abnormal timing may underlie their disfluency.

  11. Time course of auditory streaming: Do CI users differ from normal-hearing listeners?

    Directory of Open Access Journals (Sweden)

    Martin Böckmann-Barthel

    2014-07-01

    Full Text Available In a complex acoustical environment with multiple sound sources, the auditory system uses streaming as a tool to organize the incoming sounds into one or more streams, depending on the stimulus parameters. Streaming is commonly studied with alternating sequences of signals, often tones with different frequencies. The present study investigates stream segregation in cochlear implant (CI) users, in whom hearing is restored by electrical stimulation of the auditory nerve. CI users listened to 30-s sequences of alternating A and B harmonic complexes at four fundamental frequency separations, ranging from 2 to 14 semitones. They had to indicate, as promptly as possible after sequence onset, whether they perceived one stream or two streams, and in addition any changes of the percept throughout the rest of the sequence. The conventional view is that the initial percept is always that of a single stream, which may after some time change to a percept of two streams. This general build-up hypothesis has recently been challenged on the basis of a reanalysis of data from normal-hearing listeners, which showed a build-up response only for an intermediate frequency separation. Using the same experimental paradigm and analysis, the present study found that the results of CI users agree with those of normal-hearing listeners: (i) the probability of the first decision being a one-stream percept decreased, and that of a two-stream percept increased, as Δf increased; and (ii) a build-up was found only for 6 semitones. Only the time elapsed before listeners made their first decision about the percept was prolonged compared with normal-hearing listeners. The similarity between the data of the CI users and the normal-hearing listeners indicates that the quality of stream formation is similar in these groups of listeners.
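The semitone separations above translate to frequency ratios via the equal-temperament relation f_B = f_A · 2^(n/12). A small sketch; the 200 Hz base F0 is an arbitrary illustration, not the study's actual stimulus parameter:

```python
def semitone_ratio(n: float) -> float:
    """Frequency ratio spanned by n equal-tempered semitones."""
    return 2 ** (n / 12)

f_a = 200.0  # hypothetical F0 of the A complex, in Hz
for n in (2, 6, 14):  # separations in the range the study used
    print(f"{n:2d} semitones -> B complex at {f_a * semitone_ratio(n):.1f} Hz")
```

So the smallest separation (2 semitones) is a ratio of about 1.12, while 14 semitones is more than an octave (ratio > 2), which frames why segregation probability rises with Δf.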

  12. Action-based effects on music perception

    Directory of Open Access Journals (Sweden)

    Pieter-Jan eMaes

    2014-01-01

    Full Text Available The classical, disembodied approach to music cognition conceptualizes action and perception as separate, peripheral phenomena. In contrast, embodied accounts of music cognition emphasize the central role of the close coupling of action and perception. It is a commonly established fact that perception spurs action tendencies. We present a theoretical framework capturing the ways that the human motor system, and the actions it produces, can reciprocally influence the perception of music. The cornerstone of this framework is the common coding theory, postulating a representational overlap in the brain between the planning, the execution, and the perception of movement. The integration of action and perception in so-called internal models is explained as a result of associative learning processes. Characteristic of internal models is that they allow intended or perceived sensory states to be transferred into corresponding motor commands (inverse modelling), and vice versa, to predict the sensory outcomes of planned actions (forward modelling). Embodied accounts typically adhere to inverse modelling to explain action effects on music perception (Leman, 2007). We extend this account by pinpointing forward modelling as an alternative mechanism by which action can modulate perception. We provide an extensive overview of recent empirical evidence in support of this idea. Additionally, we demonstrate that motor dysfunctions can cause perceptual disabilities, supporting the main idea of the paper that the human motor system plays a functional role in auditory perception. The finding that music perception is shaped by the human motor system, and the action it produces, suggests that the musical mind is highly embodied. However, we advocate for a more radical approach to embodied (music) cognition in the sense that it needs to be considered as a dynamic process, in which aspects of action, perception, introspection, and social interaction are of crucial ...

  13. Action-based effects on music perception.

    Science.gov (United States)

    Maes, Pieter-Jan; Leman, Marc; Palmer, Caroline; Wanderley, Marcelo M

    2014-01-01

    The classical, disembodied approach to music cognition conceptualizes action and perception as separate, peripheral processes. In contrast, embodied accounts of music cognition emphasize the central role of the close coupling of action and perception. It is a commonly established fact that perception spurs action tendencies. We present a theoretical framework that captures the ways in which the human motor system and its actions can reciprocally influence the perception of music. The cornerstone of this framework is the common coding theory, postulating a representational overlap in the brain between the planning, the execution, and the perception of movement. The integration of action and perception in so-called internal models is explained as a result of associative learning processes. Characteristic of internal models is that they allow intended or perceived sensory states to be transferred into corresponding motor commands (inverse modeling), and vice versa, to predict the sensory outcomes of planned actions (forward modeling). Embodied accounts typically refer to inverse modeling to explain action effects on music perception (Leman, 2007). We extend this account by pinpointing forward modeling as an alternative mechanism by which action can modulate perception. We provide an extensive overview of recent empirical evidence in support of this idea. Additionally, we demonstrate that motor dysfunctions can cause perceptual disabilities, supporting the main idea of the paper that the human motor system plays a functional role in auditory perception. The finding that music perception is shaped by the human motor system and its actions suggests that the musical mind is highly embodied. However, we advocate for a more radical approach to embodied (music) cognition in the sense that it needs to be considered as a dynamical process, in which aspects of action, perception, introspection, and social interaction are of crucial importance. PMID:24454299

  14. Combined mirror visual and auditory feedback therapy for upper limb phantom pain: a case report

    Directory of Open Access Journals (Sweden)

    Yan Kun

    2011-01-01

Introduction: Phantom limb sensation and phantom limb pain are very common after amputation. In recent years, accumulating data have implicated 'mirror visual feedback' or 'mirror therapy' as helpful in the treatment of phantom limb sensation and phantom limb pain. Case presentation: We present the case of a 24-year-old Caucasian man, a left upper limb amputee, treated with mirror visual feedback combined with auditory feedback, with improved pain relief. Conclusion: This case suggests that auditory feedback might enhance the effectiveness of mirror visual feedback and serve as a valuable addition to the complex multi-sensory processing of body perception in patients who are amputees.

  15. Frequency Selectivity Behaviour in the Auditory Midbrain: Implications of Model Study

    Institute of Scientific and Technical Information of China (English)

    KUANG Shen-Bing; WANG Jia-Fu; ZENG Ting

    2006-01-01

By numerically simulating the frequency dependence of the spiking threshold, i.e. the critical amplitude of a periodic stimulus required for a neuron to fire, we find that bushy cells in the cochlear nucleus exhibit frequency selectivity. However, the selective frequency band of a bushy cell is far from the preferred spectral range of human and mammalian auditory perception. The mechanism underlying this neural activity is also discussed. Further studies show that ion channel densities have little impact on the selective frequency band of bushy cells. These findings suggest that the frequency selectivity of bushy cells, at both the single-cell and population levels, may not be functionally relevant to frequency discrimination. Our results may offer a neural hint for reconsidering the functional role of bushy cells in the auditory processing of sound frequency.
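The core computation described in this abstract, finding the minimal periodic-stimulus amplitude at which a model neuron fires for each frequency, can be sketched with a generic leaky integrate-and-fire unit. This is an illustrative stand-in, not the bushy-cell model simulated in the paper, and all parameter values are invented:

```python
import math

def min_spiking_amplitude(freq_hz, tau=0.01, v_thresh=1.0, dt=1e-5, t_max=0.05):
    """Smallest sinusoidal amplitude that drives a leaky integrate-and-fire
    unit to threshold, found by bisection. A generic sketch, not the
    bushy-cell model used in the study."""
    def fires(amp):
        v = 0.0
        for i in range(int(t_max / dt)):
            drive = amp * math.sin(2 * math.pi * freq_hz * i * dt)
            v += dt * (drive - v) / tau      # low-pass membrane dynamics
            if v >= v_thresh:
                return True
        return False

    lo, hi = 0.0, 1000.0
    for _ in range(40):                      # bisection on the threshold amplitude
        mid = 0.5 * (lo + hi)
        if fires(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Sweeping frequency traces out a "tuning curve"; the frequency band with
# the lowest threshold is the model's selective band.
curve = {f: min_spiking_amplitude(f) for f in (50, 200, 800)}
```

For this simple low-pass membrane the threshold rises monotonically with frequency; the paper's point is that for a detailed bushy-cell model the resulting selective band does not coincide with the perceptually preferred spectral range.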

  16. Auditory-prefrontal axonal connectivity in the macaque cortex: quantitative assessment of processing streams.

    Science.gov (United States)

    Bezgin, Gleb; Rybacki, Konrad; van Opstal, A John; Bakker, Rembrandt; Shen, Kelly; Vakorin, Vasily A; McIntosh, Anthony R; Kötter, Rolf

    2014-08-01

Primate sensory systems subserve complex neurocomputational functions. Consequently, these systems are organised anatomically in a distributed fashion, commonly linking areas to form specialised processing streams. Each stream is related to a specific function, as evidenced by studies of the visual cortex, which features rather prominent segregation into spatial and non-spatial domains. It has been hypothesised that other sensory systems, including the auditory system, are organised in a similar way at the cortical level. Recent studies offer rich qualitative evidence for the dual-stream hypothesis. Here we provide a new paradigm to quantitatively uncover these patterns in the auditory system, based on an analysis of multiple anatomical studies using multivariate techniques. As a test case, we also apply our assessment techniques to the more extensively explored visual system. Importantly, the introduced framework opens the possibility for these techniques to be applied to other neural systems featuring a dichotomised organisation, such as language or music perception. PMID:24980416

  17. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by the McGurk effect in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or be applied to all stimuli in general. To investigate this...... audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task and this could increase...... noise were measured for naïve and informed participants. We found that the threshold for detecting speech in audiovisual stimuli was lower than for auditory-only stimuli. But there was no detection advantage for observers informed of the speech nature of the auditory signal. This may indicate that...

  18. Improvement of auditory hallucinations and reduction of primary auditory area's activation following TMS

    International Nuclear Information System (INIS)

    Background: In the present case study, improvement of auditory hallucinations following transcranial magnetic stimulation (TMS) therapy was investigated with respect to activation changes of the auditory cortices. Methods: Using functional magnetic resonance imaging (fMRI), activation of the auditory cortices was assessed prior to and after a 4-week TMS series of the left superior temporal gyrus in a schizophrenic patient with medication-resistant auditory hallucinations. Results: Hallucinations decreased slightly after the third and profoundly after the fourth week of TMS. Activation in the primary auditory area decreased, whereas activation in the operculum and insula remained stable. Conclusions: Combination of TMS and repetitive fMRI is promising to elucidate the physiological changes induced by TMS.

  19. External auditory canal carcinoma treatment

    International Nuclear Information System (INIS)

External auditory canal (EAC) carcinomas are relatively rare and lack an established treatment strategy. We analyzed treatment modalities and outcomes in 32 cases of EAC squamous cell carcinoma treated between 1980 and 2008. The subjects, 17 men and 15 women ranging from 33 to 92 years old (average: 66), were classified by Arriaga's tumor staging into 12 T1, 5 T2, 6 T3, and 9 T4. Survival was calculated by the Kaplan-Meier method. Disease-specific 5-year survival was 100% for T1 and T2, 44% for T3, and 33% for T4. In contrast to the 100% 5-year survival for T1+T2 cancer, the 5-year survival for T3+T4 cancer was 37%, with frequent recurrence due to positive surgical margins. During the first 22 of the 29 years surveyed, we mainly performed surgery, and irradiation or chemotherapy was selected for early disease or, as postoperative therapy, for cases with positive surgical margins. During those 22 years, 5-year survival with T3+T4 cancer was 20%. After we introduced superselective intra-arterial (IA) rapid infusion chemotherapy combined with radiotherapy in 2003, we achieved negative surgical margins in advanced disease, and 5-year survival for T3+T4 cancer rose to 80%. (author)
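The survival figures above come from the Kaplan-Meier product-limit estimator, which can be sketched in a few lines. The cohort below is hypothetical, invented for illustration; it is not the study's patient data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. `times` are follow-up times and
    `events` are 1 for a disease-specific death, 0 for censoring.
    Returns (time, survival) pairs at each event time."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    s = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = sum(1 for tt, e in pairs if tt == t and e == 1)
        total_at_t = sum(1 for tt, _ in pairs if tt == t)
        if deaths:
            s *= 1.0 - deaths / n_at_risk    # product-limit step
            curve.append((t, s))
        n_at_risk -= total_at_t              # deaths and censored leave the risk set
        i += total_at_t
    return curve

# Hypothetical T3+T4 cohort: follow-up in months, 1 = died of disease.
times  = [6, 12, 12, 24, 36, 60, 60, 60]
events = [1,  1,  0,  1,  1,  0,  0,  0]
km = kaplan_meier(times, events)
```

Reading off the last step of the curve gives the estimated 5-year (60-month) disease-specific survival for this toy cohort.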

  20. Neural correlates of auditory-cognitive processing in older adult cochlear implant recipients.

    Science.gov (United States)

    Henkin, Yael; Yaar-Soffer, Yifat; Steinberg, Meidan; Muchnik, Chava

    2014-01-01

    With the growing number of older adults receiving cochlear implants (CI), there is general agreement that substantial benefits can be gained. Nonetheless, variability in speech perception performance is high, and the relative contribution and interactions among peripheral, central-auditory, and cognitive factors are not fully understood. The goal of the present study was to compare auditory-cognitive processing in older-adult CI recipients with that of older normal-hearing (NH) listeners by means of behavioral and electrophysiologic manifestations of a high-load cognitive task. Auditory event-related potentials (AERPs) were recorded from 9 older postlingually deafened adults with CI (age at CI >60) and 10 age-matched listeners with NH, while performing an auditory Stroop task. Participants were required to classify the speaker's gender (male/female) that produced the words 'mother' or 'father' while ignoring the irrelevant congruent or incongruent word meaning. Older CI and NH listeners exhibited comparable reaction time, performance accuracy, and initial sensory-perceptual processing (i.e. N1 potential). Nonetheless, older CI recipients showed substantially prolonged and less efficient perceptual processing (i.e. P3 potential). Congruency effects manifested in longer reaction time (i.e. Stroop effect), execution time, and P3 latency to incongruent versus congruent stimuli in both groups in a similar fashion; however, markedly prolonged P3 and shortened execution time were evident in older CI recipients. Collectively, older adults (CI and NH) employed a combined perceptual and postperceptual conflict processing strategy; nonetheless, the relative allotment of perceptual resources was substantially enhanced to maintain adequate performance in CI recipients. In sum, the recording of AERPs together with the simultaneously obtained behavioral measures during a Stroop task exposed a differential time course of auditory-cognitive processing in older CI recipients that

  1. The music of your emotions: neural substrates involved in detection of emotional correspondence between auditory and visual music actions

    OpenAIRE

    Petrini, K.; Crabbe, F.; Sheridan, C; Pollick, F. E.

    2011-01-01

    In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (music...

  2. Steroid-dependent auditory plasticity for the enhancement of acoustic communication: recent insights from a vocal teleost fish

    OpenAIRE

    Sisneros, Joseph A.

    2009-01-01

    The vocal plainfin midshipman fish (Porichthys notatus) has become an excellent model for identifying neural mechanisms of auditory perception that may be shared by all vertebrates. Recent neuroethological studies of the midshipman fish have yielded strong evidence for the steroid-dependent modulation of hearing sensitivity that leads to enhanced coupling of sender and receiver in this vocal-acoustic communication system. Previous work shows that non-reproductive females treated with either t...

  3. Efficacy of auditory training in elderly subjects

    Directory of Open Access Journals (Sweden)

    Aline Albuquerque Morais

    2015-05-01

Auditory training (AT) has been used for auditory rehabilitation in elderly individuals and is an effective tool for optimizing speech processing in this population. However, it is necessary to distinguish training-related improvements from placebo and test-retest effects. Thus, we investigated the efficacy of short-term auditory training (acoustically controlled auditory training, ACAT) in elderly subjects through behavioral measures and the P300. Sixteen elderly individuals with APD received an initial evaluation (evaluation 1, E1) consisting of behavioral and electrophysiological tests (P300 evoked by tone bursts and speech sounds) to evaluate their auditory processing. The individuals were divided into two groups: the Active Control Group (ACG, n=8) underwent placebo training, and the Passive Control Group (PCG, n=8) did not receive any intervention. After 12 weeks, the subjects were reevaluated (evaluation 2, E2). Then all of the subjects underwent ACAT. After another 12 weeks (8 training sessions), they underwent the final evaluation (evaluation 3, E3). There was no significant difference between E1 and E2 in the behavioral tests [F(9,6)=0.6, p=0.92, Wilks' λ=0.65] or the P300 [F(8,7)=2.11, p=0.17, Wilks' λ=0.29], ruling out placebo and test-retest effects. A significant improvement was observed between the pre- and post-ACAT conditions (E2 and E3) for all auditory skills according to the behavioral measures [F(4,27)=0.18, p=0.94, Wilks' λ=0.97]. However, the same result was not observed for the P300 in any condition, and there was no significant difference between P300 stimuli. The ACAT improved the behavioral performance of the elderly for all auditory skills and was an effective method for hearing rehabilitation.

  4. Musical experience and the aging auditory system: implications for cognitive abilities and hearing speech in noise.

    Directory of Open Access Journals (Sweden)

    Alexandra Parbery-Clark

Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline.

  5. Musical experience and the aging auditory system: implications for cognitive abilities and hearing speech in noise.

    Science.gov (United States)

    Parbery-Clark, Alexandra; Strait, Dana L; Anderson, Samira; Hittner, Emily; Kraus, Nina

    2011-01-01

    Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline. PMID:21589653

  6. Auditory color constancy: calibration to reliable spectral properties across nonspeech context and targets.

    Science.gov (United States)

    Stilp, Christian E; Alexander, Joshua M; Kiefte, Michael; Kluender, Keith R

    2010-02-01

Brief experience with reliable spectral characteristics of a listening context can markedly alter perception of subsequent speech sounds, and parallels have been drawn between auditory compensation for listening context and visual color constancy. In order to better evaluate such an analogy, the generality of acoustic context effects for sounds with spectral-temporal compositions distinct from speech was investigated. Listeners identified nonspeech sounds (extensively edited samples produced by a French horn and a tenor saxophone) following either resynthesized speech or a short passage of music. Preceding contexts were "colored" by spectral envelope difference filters, which were created to emphasize differences between French horn and saxophone spectra. Listeners were more likely to report hearing a saxophone when the stimulus followed a context filtered to emphasize spectral characteristics of the French horn, and vice versa. Despite clear changes in apparent acoustic source, the auditory system calibrated to relatively predictable spectral characteristics of filtered context, differentially affecting perception of subsequent target nonspeech sounds. This calibration to listening context and relative indifference to acoustic sources operates much like visual color constancy, for which reliable properties of the spectrum of illumination are factored out of perception of color. PMID:20139460
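The "colored" contexts were produced with spectral envelope difference filters. A loose sketch of building such a filter from two instrument spectra follows; the exact filter construction in the study may differ, and the toy spectra here are invented:

```python
import numpy as np

def difference_filter(spec_a, spec_b, gain_db=10.0):
    """FIR filter whose magnitude response boosts frequencies where
    spectrum A exceeds spectrum B (in dB) and attenuates the reverse,
    scaled to +/- gain_db. A loose sketch of a 'spectral envelope
    difference filter', not the study's construction."""
    diff_db = 20 * np.log10(spec_a / spec_b)
    diff_db = gain_db * diff_db / np.max(np.abs(diff_db))  # normalise to +/- gain_db
    mag = 10 ** (diff_db / 20)                             # target |H(f)| per bin
    # Zero-phase design: inverse FFT of the full (Hermitian) spectrum.
    full = np.concatenate([mag, mag[-2:0:-1]])
    h = np.fft.ifft(full).real
    return np.fft.fftshift(h)                              # centre the impulse response

# Toy spectra on 9 frequency bins: instrument A has more low-frequency energy.
f_bins = 9
spec_a = np.linspace(2.0, 1.0, f_bins)
spec_b = np.linspace(1.0, 2.0, f_bins)
h = difference_filter(spec_a, spec_b)
```

Convolving a context sound with `h` emphasizes A-like spectral regions, so a subsequent ambiguous target tends to be heard as the other source, which is the calibration effect the abstract describes.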

  7. Assessing the validity of subjective reports in the auditory streaming paradigm.

    Science.gov (United States)

    Farkas, Dávid; Denham, Susan L; Bendixen, Alexandra; Winkler, István

    2016-04-01

    While subjective reports provide a direct measure of perception, their validity is not self-evident. Here, the authors tested three possible biasing effects on perceptual reports in the auditory streaming paradigm: errors due to imperfect understanding of the instructions, voluntary perceptual biasing, and susceptibility to implicit expectations. (1) Analysis of the responses to catch trials separately promoting each of the possible percepts allowed the authors to exclude participants who likely have not fully understood the instructions. (2) Explicit biasing instructions led to markedly different behavior than the conventional neutral-instruction condition, suggesting that listeners did not voluntarily bias their perception in a systematic way under the neutral instructions. Comparison with a random response condition further supported this conclusion. (3) No significant relationship was found between social desirability, a scale-based measure of susceptibility to implicit social expectations, and any of the perceptual measures extracted from the subjective reports. This suggests that listeners did not significantly bias their perceptual reports due to possible implicit expectations present in the experimental context. In sum, these results suggest that valid perceptual data can be obtained from subjective reports in the auditory streaming paradigm. PMID:27106324

  8. Auditory and motor imagery modulate learning in music performance

    Science.gov (United States)

    Brown, Rachel M.; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  9. Auditory and motor imagery modulate learning in music performance.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of
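The two dependent measures defined in the abstract, pitch accuracy (percentage of correct pitches) and temporal regularity (variability of interonset intervals), can be sketched as simple functions. The scoring details and the melody below are illustrative, not the study's materials:

```python
def pitch_accuracy(produced, target):
    """Percentage of correct pitches, scored position by position.
    A simplified stand-in for the scoring used in the study."""
    hits = sum(p == t for p, t in zip(produced, target))
    return 100.0 * hits / len(target)

def temporal_cv(onsets):
    """Coefficient of variation of interonset intervals (IOIs);
    lower values mean more regular timing."""
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    mean = sum(iois) / len(iois)
    var = sum((x - mean) ** 2 for x in iois) / len(iois)
    return (var ** 0.5) / mean

# Hypothetical recall of an 8-note melody (MIDI numbers) and onset times (s):
target   = [60, 62, 64, 65, 67, 65, 64, 62]
produced = [60, 62, 64, 65, 67, 64, 64, 62]
acc = pitch_accuracy(produced, target)   # 7 of 8 pitches correct
cv = temporal_cv([0.0, 0.52, 1.01, 1.49, 2.02, 2.50, 3.01, 3.50])
```

Note the study measured variability of quarter-note IOIs specifically; the sketch treats all intervals alike for simplicity.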

  10. Changing Perceptions

    Science.gov (United States)

    Mallett, Susanne; Wren, Steve; Dawes, Mark; Blinco, Amy; Haines, Brett; Everton, Jenny; Morgan, Ellen; Barton, Craig; Breen, Debbie; Ellison, Geraldine; Burgess, Danny; Stavrou, Jim; Carre, Catherine; Watson, Fran; Cherry, David; Hawkins, Chris; Stapenhill-Hunt, Maria; Gilderdale, Charlie; Kiddle, Alison; Piggott, Jennifer

    2009-01-01

    A group of teachers involved in embedding NRICH tasks (http://nrich.maths.org) into their everyday practice were keen to challenge common perceptions of mathematics, and of the teaching and learning of mathematics. In this article, the teachers share what they are doing to change these perceptions in their schools.

  11. Auditory Model Identification Using REVCOR Method

    Directory of Open Access Journals (Sweden)

    Lamia Bouafif

    2014-08-01

Auditory models are very useful in many applications such as speech coding and compression, cochlear prostheses, and audio watermarking. In this paper we develop a new auditory model based on the REVCOR method. This technique rests on estimating the impulse response of a suitable filter characterizing the auditory neuron and the cochlea. The first step of our study focuses on the development of a mathematical model based on the gammachirp system. This model is then programmed, implemented, and simulated under Matlab. The obtained results are compared with the experimental values (REVCOR experiments) to validate and better optimize the model parameters. Two objective criteria are used to optimize the estimated auditory model: the SNR (signal-to-noise ratio) and the MQE (mean quadratic error). The simulation results demonstrate that for the auditory model only a reduced number of channels are excited (from 3 to 6). This result is very interesting for auditory implants because only the significant channels need to be stimulated; besides, this simplifies the electronic implementation and the medical intervention.
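A gammachirp impulse response of the kind this model is built on can be sketched directly from its analytic form (a gamma envelope times a frequency-chirping carrier). The parameter values below are common defaults from the literature, not the ones fitted in the paper:

```python
import math

def erb(fc_hz):
    """Equivalent rectangular bandwidth (Glasberg & Moore approximation)."""
    return 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

def gammachirp(fc_hz, fs=16000, n=4, b=1.019, c=-3.0, dur=0.025):
    """Gammachirp impulse response (Irino & Patterson form); with c = 0
    it reduces to the gammatone. Parameter values are common defaults,
    not those fitted in the paper."""
    out = []
    for i in range(1, int(dur * fs)):        # start at t > 0 (log term)
        t = i / fs
        env = t ** (n - 1) * math.exp(-2 * math.pi * b * erb(fc_hz) * t)
        out.append(env * math.cos(2 * math.pi * fc_hz * t + c * math.log(t)))
    return out

h = gammachirp(2000.0)                       # one 2 kHz channel of a filterbank
```

A bank of such filters at different centre frequencies gives the channels whose excitation the paper analyses.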

  12. Effects of Caffeine on Auditory Brainstem Response

    Directory of Open Access Journals (Sweden)

    Saleheh Soleimanian

    2008-06-01

Background and Aim: Blocking of adenosine receptors in the central nervous system by caffeine can increase the level of neurotransmitters such as glutamate. As adenosine receptors are present in almost all brain areas, including the central auditory pathway, caffeine may alter conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2, and 3 mg/kg body weight of caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed with the Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared with the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after consumption of 2 and 3 mg/kg caffeine. Wave I latency decreased significantly after consumption of 3 mg/kg caffeine (p<0.01). Conclusion: The increase in glutamate level resulting from adenosine receptor blocking brings about changes in conduction in the central auditory pathway.
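The Friedman test used here compares the same subjects across the three caffeine doses by ranking each subject's latencies. A stdlib-only sketch of the test statistic follows; the latency values are made up for illustration, not the study's recordings:

```python
def friedman_chi2(data):
    """Friedman chi-square for repeated measures. `data` is a list of
    per-subject measurement lists, one value per condition (e.g. wave V
    latency at 0, 2 and 3 mg/kg caffeine). No tie correction in this sketch."""
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        order = sorted(range(k), key=lambda j: row[j])  # ranks within subject
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

# Hypothetical wave V latencies (ms) for 5 subjects at 0 / 2 / 3 mg/kg:
latencies = [
    [5.70, 5.62, 5.55],
    [5.80, 5.71, 5.66],
    [5.64, 5.60, 5.52],
    [5.75, 5.69, 5.61],
    [5.68, 5.57, 5.50],
]
chi2 = friedman_chi2(latencies)  # compare against chi-square with df = k - 1
```

A significant Friedman result is then typically followed by pairwise Wilcoxon signed-rank tests between doses, as in the study.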

  13. Auralization of CFD Vorticity Using an Auditory Illusion

    Science.gov (United States)

    Volpe, C. R.

    2005-12-01

One way in which scientists and engineers interpret large quantities of data is through a process called visualization, i.e. generating graphical images that capture essential characteristics and highlight interesting relationships. Another approach, which has received far less attention, is to present complex information with sound. This approach, called "auralization" or "sonification", is the auditory analog of visualization. Early work in data auralization frequently involved directly mapping some variable in the data to a sound parameter, such as pitch or volume. Multi-variate data could be auralized by mapping several variables to several sound parameters simultaneously. A clear drawback of this approach is the limited practical range of sound parameters that can be presented to human listeners without exceeding their range of perception or comfort. A software auralization system built upon an existing visualization system is briefly described. This system incorporates an aural presentation synchronously and interactively with an animated scientific visualization, so that alternate auralization techniques can be investigated. One such alternate technique involves auditory illusions: sounds which trick the listener into perceiving something other than what is actually being presented. This software system will be used to present an auditory illusion, known for decades among cognitive psychologists, which produces a sound that seems to ascend or descend endlessly in pitch. The applicability of this illusion for presenting Computational Fluid Dynamics data will be demonstrated. CFD data is frequently visualized with thin stream-lines, but thicker stream-ribbons and stream-tubes can also be used, which rotate to convey fluid vorticity. But a purely graphical presentation can yield drawbacks of its own. Thicker stream-tubes can be self-obscuring, and can obscure other scene elements as well, thus motivating a different approach, such as using sound. Naturally
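The endlessly ascending illusion referred to is commonly known as the Shepard tone: octave-spaced partials under a fixed spectral envelope, so that stepping the pitch class upward cycles forever without ever "arriving". A minimal sketch follows; the sampling rate, envelope shape, and step mapping are illustrative choices, not the actual design of the auralization system described:

```python
import math

def shepard_tone(step, n_steps=12, n_octaves=6, base_hz=55.0, fs=8000, dur=0.25):
    """One step of a Shepard scale: octave-spaced partials weighted by a
    fixed Gaussian envelope over log-frequency. Incrementing `step`
    (mod n_steps) ascends endlessly in perceived pitch."""
    centre = math.log2(base_hz) + n_octaves / 2.0   # envelope peak (log2 Hz)
    samples = [0.0] * int(fs * dur)
    for octv in range(n_octaves):
        # partial frequency for this octave at the current scale step
        f = base_hz * 2.0 ** (octv + step / n_steps)
        amp = math.exp(-((math.log2(f) - centre) ** 2) / 2.0)
        for i in range(len(samples)):
            samples[i] += amp * math.sin(2 * math.pi * f * i / fs)
    return samples

tone = shepard_tone(step=0)
```

After `n_steps` increments the partial set has shifted exactly one octave, so the fixed envelope makes step 12 sound essentially like step 0: an auditory analog that could, for instance, be driven by vorticity magnitude.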

  14. Auditory sensory deficits in developmental dyslexia: a longitudinal ERP study.

    Science.gov (United States)

    Stefanics, Gabor; Fosker, Tim; Huss, Martina; Mead, Natasha; Szucs, Denes; Goswami, Usha

    2011-08-01

    The core difficulty in developmental dyslexia across languages is a "phonological deficit", a specific difficulty with the neural representation of the sound structure of words. Recent data across languages suggest that this phonological deficit arises in part from inefficient auditory processing of the rate of change of the amplitude envelope at syllable onset (inefficient sensory processing of rise time). Rise time is a complex percept that also involves changes in duration and perceived intensity. Understanding the neural mechanisms that give rise to the phonological deficit in dyslexia is important for optimising educational interventions. In a three-deviant passive 'oddball' paradigm and a corresponding blocked 'deviant-alone' control condition we recorded ERPs to tones varying in rise time, duration and intensity in children with dyslexia and typically developing children longitudinally. We report here results from test Phases 1 and 2, when participants were aged 8-10 years. We found an MMN to duration, but not to rise time nor intensity deviants, at both time points for both groups. For rise time, duration and intensity we found group effects in both the Oddball and Blocked conditions. There was a slower fronto-central P1 response in the dyslexic group compared to controls. The amplitude of the P1 fronto-centrally to tones with slower rise times and lower intensity was smaller compared to tones with sharper rise times and higher intensity in the Oddball condition, for children with dyslexia only. The latency of this ERP component for all three stimuli was shorter on the right compared to the left hemisphere, only for the dyslexic group in the Blocked condition. Furthermore, we found decreased N1c amplitude to tones with slower rise times compared to tones with sharper rise times for children with dyslexia, only in the Oddball condition. Several other effects of stimulus type, age and laterality were also observed. Our data suggest that neuronal responses

  15. The effect of background music in auditory health persuasion

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    2013-01-01

    In auditory health persuasion, threatening information regarding health is communicated by voice only. One relevant context of auditory persuasion is the addition of background music. There are different mechanisms through which background music might influence persuasion, for example through mood (

  16. What determines auditory distraction? On the roles of local auditory changes and expectation violations.

    Directory of Open Access Journals (Sweden)

    Jan P Röer

Both the acoustic variability of a distractor sequence and the degree to which it violates expectations are important determinants of auditory distraction. In four experiments we examined the relative contribution of local auditory changes on the one hand and expectation violations on the other hand in the disruption of serial recall by irrelevant sound. We present evidence for a greater disruption by auditory sequences ending in unexpected steady state distractor repetitions compared to auditory sequences with expected changing state endings even though the former contained fewer local changes. This effect was demonstrated with piano melodies (Experiment 1) and speech distractors (Experiment 2). Furthermore, it was replicated when the expectation violation occurred after the encoding of the target items (Experiment 3), indicating that the items' maintenance in short-term memory was disrupted by attentional capture and not their encoding. This seems to be primarily due to the violation of a model of the specific auditory distractor sequences because the effect vanishes and even reverses when the experiment provides no opportunity to build up a specific neural model about the distractor sequence (Experiment 4). Nevertheless, the violation of abstract long-term knowledge about auditory regularities seems to cause a small and transient capture effect: Disruption decreased markedly over the course of the experiments indicating that participants habituated to the unexpected distractor repetitions across trials. The overall pattern of results adds to the growing literature that the degree to which auditory distractors violate situation-specific expectations is a more important determinant of auditory distraction than the degree to which a distractor sequence contains local auditory changes.

  17. What Determines Auditory Distraction? On the Roles of Local Auditory Changes and Expectation Violations

    Science.gov (United States)

    Röer, Jan P.; Bell, Raoul; Buchner, Axel

    2014-01-01

    Both the acoustic variability of a distractor sequence and the degree to which it violates expectations are important determinants of auditory distraction. In four experiments we examined the relative contribution of local auditory changes on the one hand and expectation violations on the other hand in the disruption of serial recall by irrelevant sound. We present evidence for a greater disruption by auditory sequences ending in unexpected steady state distractor repetitions compared to auditory sequences with expected changing state endings even though the former contained fewer local changes. This effect was demonstrated with piano melodies (Experiment 1) and speech distractors (Experiment 2). Furthermore, it was replicated when the expectation violation occurred after the encoding of the target items (Experiment 3), indicating that the items' maintenance in short-term memory was disrupted by attentional capture and not their encoding. This seems to be primarily due to the violation of a model of the specific auditory distractor sequences because the effect vanishes and even reverses when the experiment provides no opportunity to build up a specific neural model about the distractor sequence (Experiment 4). Nevertheless, the violation of abstract long-term knowledge about auditory regularities seems to cause a small and transient capture effect: Disruption decreased markedly over the course of the experiments indicating that participants habituated to the unexpected distractor repetitions across trials. The overall pattern of results adds to the growing literature that the degree to which auditory distractors violate situation-specific expectations is a more important determinant of auditory distraction than the degree to which a distractor sequence contains local auditory changes. PMID:24400081

  18. Cooperative dynamics in auditory brain response

    CERN Document Server

    Kwapien, J; Liu, L C; Ioannides, A A

    1998-01-01

    Simultaneous estimates of the activity in the left and right auditory cortex of five normal human subjects were extracted from multichannel magnetoencephalography recordings. Left, right and binaural stimulation were used, in separate runs, for each subject. The resulting time-series of left and right auditory cortex activity were analysed using the concept of mutual information. The analysis constitutes an objective method to address the nature of inter-hemispheric correlations in response to auditory stimulation. The results provide clear evidence for the occurrence of such correlations mediated by direct information transport, with clear laterality effects: as a rule, the contralateral hemisphere leads by 10-20 ms, as can be seen in the average signal. The strength of the inter-hemispheric coupling, which cannot be extracted from the average data, is found to be highly variable from subject to subject, but remarkably stable for each subject.
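
    The time-lagged mutual-information analysis described in this record can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the histogram estimator, bin count, and the synthetic "delayed noisy copy" signals standing in for the two cortical time-series are all assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information (in bits) between two signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal distribution of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal distribution of y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def lagged_mi(x, y, max_lag):
    """MI between x and y for each shift of y relative to x (in samples)."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            out[lag] = mutual_information(x[:-lag], y[lag:])
        elif lag < 0:
            out[lag] = mutual_information(x[-lag:], y[:lag])
        else:
            out[lag] = mutual_information(x, y)
    return out

# Toy data: y is a noisy copy of x delayed by 5 samples, mimicking one
# hemisphere's activity leading the other's.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = np.roll(x, 5) + 0.3 * rng.standard_normal(5000)

mi = lagged_mi(x, y, max_lag=10)
best_lag = max(mi, key=mi.get)
print(best_lag)
```

    The lag at which mutual information peaks recovers the lead of one signal over the other, which is the kind of inter-hemispheric delay estimate the record refers to.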

  19. A Circuit for Motor Cortical Modulation of Auditory Cortical Activity

    OpenAIRE

    Nelson, Anders; Schneider, David M.; Takatoh, Jun; Sakurai, Katsuyasu; Wang, Fan; Mooney, Richard

    2013-01-01

    Normal hearing depends on the ability to distinguish self-generated sounds from other sounds, and this ability is thought to involve neural circuits that convey copies of motor command signals to various levels of the auditory system. Although such interactions at the cortical level are believed to facilitate auditory comprehension during movements and drive auditory hallucinations in pathological states, the synaptic organization and function of circuitry linking the motor and auditory corti...

  20. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians’ encoding (during Learning, as they practiced novel melodies) and retrieval (during Recall) of those melodies. Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists’ pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the

  1. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease

    OpenAIRE

    Golden, Hannah L.; Jennifer L. Agustus; Johanna C. Goll; Downey, Laura E; Mummery, Catherine J.; Jonathan M Schott; Crutch, Sebastian J.; Jason D Warren

    2015-01-01

    Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known ‘cocktail party effect’ as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory ‘foreground’ and ‘back...

  2. Behavioral correlates of auditory streaming in rhesus macaques

    OpenAIRE

    Christison-Lagay, Kate L.; Cohen, Yale E.

    2013-01-01

    Perceptual representations of auditory stimuli (i.e., sounds) are derived from the auditory system’s ability to segregate and group the spectral, temporal, and spatial features of auditory stimuli—a process called “auditory scene analysis”. Psychophysical studies have identified several of the principles and mechanisms that underlie a listener’s ability to segregate and group acoustic stimuli. One important psychophysical task that has illuminated many of these principles and mechanisms is th...

  3. Auditory ERP response to successive stimuli in infancy

    OpenAIRE

    Chen, Ao; Peter, Varghese; Burnham, Denis

    2016-01-01

    Background. Auditory Event-Related Potentials (ERPs) are useful for understanding early auditory development among infants, as it allows the collection of a relatively large amount of data in a short time. So far, studies that have investigated development in auditory ERPs in infancy have mainly used single sounds as stimuli. Yet in real life, infants must decode successive rather than single acoustic events. In the present study, we tested 4-, 8-, and 12-month-old infants’ auditory ERPs to m...

  4. Auditory Neuropathy Spectrum Disorder Masquerading as Social Anxiety

    OpenAIRE

    Behere, Rishikesh V.; Rao, Mukund G.; Mishra, Shree; Varambally, Shivarama; Nagarajarao, Shivashankar; Bangalore N Gangadhar

    2015-01-01

    The authors report a case of a 47-year-old man who presented with treatment-resistant anxiety disorder. Behavioral observation raised clinical suspicion of auditory neuropathy spectrum disorder. The presence of auditory neuropathy spectrum disorder was confirmed on audiological investigations. The patient was experiencing extreme symptoms of anxiety, which initially masked the underlying diagnosis of auditory neuropathy spectrum disorder. Challenges in diagnosis and treatment of auditory neur...

  5. Auditory Brainstem Response Improvements in Hyperbilirubinemic Infants

    Science.gov (United States)

    Abdollahi, Farzaneh Zamiri; Manchaiah, Vinaya; Lotfi, Yones

    2016-01-01

    Background and Objectives Hyperbilirubinemia in infants has been associated with neuronal damage, including damage to the auditory system. Some researchers have suggested that bilirubin-induced auditory neuronal damage may be temporary and reversible. This study aimed to investigate auditory neuropathy and the reversibility of auditory abnormalities in hyperbilirubinemic infants. Subjects and Methods The study participants included 41 full-term hyperbilirubinemic infants (mean age 39.24 days) with normal birth weight (3,200-3,700 grams) who were admitted to hospital for hyperbilirubinemia, and 39 normal infants (mean age 35.54 days) without hyperbilirubinemia or other hearing loss risk factors, included to rule out maturational changes. All infants in the hyperbilirubinemic group had serum bilirubin levels above 20 milligrams per deciliter and had undergone one blood exchange transfusion. Hearing was evaluated twice for each infant: first after hyperbilirubinemia treatment and before leaving hospital, and again three months after the first evaluation. Hearing evaluations included transient evoked otoacoustic emission (TEOAE) screening and auditory brainstem response (ABR) threshold tracing. Results The TEOAE and ABR results of the control group and the TEOAE results of the hyperbilirubinemic group did not change significantly from the first to the second evaluation. However, the ABR results of the hyperbilirubinemic group improved significantly from the first to the second assessment (p=0.025). Conclusions The results suggest that bilirubin-induced auditory neuronal damage can be reversible over time, so we suggest that infants with hyperbilirubinemia who fail the first hearing tests should be reevaluated after 3 months of treatment. PMID:27144228

  6. A virtual auditory environment for investigating the auditory signal processing of realistic sounds

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel

    A loudspeaker-based virtual auditory environment (VAE) has been developed to provide a realistic versatile research environment for investigating the auditory signal processing in real environments, i.e., considering multiple sound sources and room reverberation. The VAE allows a full control of...... the acoustic scenario in order to systematically study the auditory processing of reverberant sounds. It is based on the ODEON software, which is state-of-the-art software for room acoustic simulations developed at Acoustic Technology, DTU. First, a MATLAB interface to the ODEON software has been...

  7. Auditory Hypersensitivity in Children with Autism Spectrum Disorders

    Science.gov (United States)

    Lucker, Jay R.

    2013-01-01

    A review of records was completed to determine whether children with auditory hypersensitivities have difficulty tolerating loud sounds due to auditory-system factors or some other factors not directly involving the auditory system. Records of 150 children identified as not meeting autism spectrum disorders (ASD) criteria and another 50 meeting…

  8. AN EVALUATION OF AUDITORY LEARNING IN FILIAL IMPRINTING

    NARCIS (Netherlands)

    BOLHUIS, JJ; VANKAMPEN, HS

    1992-01-01

    The characteristics of auditory learning in filial imprinting in precocial birds are reviewed. Numerous studies have demonstrated that the addition of an auditory stimulus improves following of a visual stimulus. This paper evaluates whether there is genuine auditory imprinting, i.e. the formation o

  9. Auditory Stream Biasing in Children with Reading Impairments

    Science.gov (United States)

    Ouimet, Tialee; Balaban, Evan

    2010-01-01

    Reading impairments have previously been associated with auditory processing differences. We examined "auditory stream biasing", a global aspect of auditory temporal processing. Children with reading impairments, control children and adults heard a 10 s long stream-bias-inducing sound sequence (a repeating 1000 Hz tone) and a test sequence (eight…

  10. On the Role of Crossmodal Prediction in Audiovisual Emotion Perception

    Directory of Open Access Journals (Sweden)

    Sarah Jessen

    2013-07-01

    Full Text Available Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency by which others’ emotions are recognized. But how and when exactly do the different modalities interact? One aspect in multisensory perception that has received increasing interest in recent years is the concept of crossmodal prediction. In emotion perception, as in most other settings, visual information precedes the auditory information. This visual lead can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, it has not been addressed so far in audiovisual emotion perception. Based on the current state of the art in (a) crossmodal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow for a more reliable prediction of auditory information compared to non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 response in the EEG and the duration of visual emotional but not non-emotional information. If the assumption that emotional content allows for more reliable predictions can be corroborated in future studies, crossmodal prediction is a crucial factor in our understanding of multisensory emotion perception.

  11. Auditory issues in handheld land mine detectors

    Science.gov (United States)

    Vause, Nancy L.; Letowski, Tomasz R.; Ferguson, Larry G.; Mermagen, Timothy J.

    1999-08-01

    Most handheld landmine detection systems use tones or other simple acoustic signals to convey detector information to the operator. Such signals are not necessarily the best carriers of information about the characteristics of hidden objects. To be effective, the auditory signals must present the information in a manner that the operator can comfortably and efficiently interpret under stress and high mental load. The signals must also preserve their audibility and specific properties in various adverse acoustic environments. This paper presents several issues in optimizing the audio display interface between the operator and the machine.

  12. Binaural processing by the gecko auditory periphery

    DEFF Research Database (Denmark)

    Christensen-Dalsgaard, Jakob; Tang, Ye Zhong; Carr, Catherine E

    2011-01-01

    Tokay gecko with neurophysiological recordings from the auditory nerve. Laser vibrometry shows that their ear is a two-input system with approximately unity interaural transmission gain at the peak frequency (around 1.6 kHz). Median interaural delays are 260 μs, almost three times larger than predicted...... from gecko head size, suggesting interaural transmission may be boosted by resonances in the large, open mouth cavity (Vossen et al., 2010). Auditory nerve recordings are sensitive to both interaural time differences (ITD) and interaural level differences (ILD), reflecting the acoustical interactions...
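
    The mismatch this record reports between measured and head-size-predicted interaural delays can be checked with back-of-envelope arithmetic: the maximal acoustic ITD available from head size alone is roughly the interaural distance divided by the speed of sound. The 3 cm interaural distance is an assumed, illustrative value for a Tokay gecko, not a figure from the study.

```python
# Compare the reported median interaural delay against the simple
# path-length prediction from head size (assumed 3 cm interaural distance).
SPEED_OF_SOUND = 343.0          # m/s, air at ~20 degrees C
head_width_m = 0.03             # assumed interaural distance (~3 cm)

predicted_itd_us = head_width_m / SPEED_OF_SOUND * 1e6   # microseconds
measured_itd_us = 260.0                                  # reported median

ratio = measured_itd_us / predicted_itd_us
print(f"predicted {predicted_itd_us:.0f} us; measured/predicted = {ratio:.1f}x")
```

    Under this assumed head width, the prediction comes out near 90 μs, so a 260 μs measured delay is indeed almost three times larger, consistent with the suggestion that mouth-cavity resonances boost interaural transmission.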

  13. Evaluating auditory stream segregation of SAM tone sequences by subjective and objective psychoacoustical tasks, and brain activity

    Directory of Open Access Journals (Sweden)

    Lena-Vanessa Dollezal

    2014-06-01

    Full Text Available Auditory stream segregation refers to a segregated percept of signal streams with different acoustic features. Different approaches have been pursued in studies of stream segregation. In psychoacoustics, stream segregation has mostly been investigated with a subjective task asking the subjects to report their percept. Few studies have applied an objective task in which stream segregation is evaluated indirectly by determining thresholds for a percept that depends on whether auditory streams are segregated or not. Furthermore, both perceptual measures and physiological measures of brain activity have been employed, but only little is known about their relation. How the results from different tasks and measures are related is evaluated in the present study using examples relying on the ABA- stimulation paradigm that apply the same stimuli. We presented A and B signals that were sinusoidally amplitude modulated (SAM) tones providing purely temporal, spectral or both types of cues to evaluate perceptual stream segregation and its physiological correlate. Which types of cues are most prominent was determined by the choice of carrier and modulation frequencies (fmod) of the signals. In the subjective task subjects reported their percept and in the objective task we measured their sensitivity for detecting time-shifts of B signals in an ABA- sequence. As a further measure of processes underlying stream segregation we employed functional magnetic resonance imaging (fMRI). SAM tone parameters were chosen to evoke an integrated (1-stream), a segregated (2-stream) or an ambiguous percept by adjusting the fmod difference between A and B tones (∆fmod). The results of both psychoacoustical tasks are significantly correlated. BOLD responses in fMRI depend on ∆fmod between A and B SAM tones. The effect of ∆fmod, however, differs between auditory cortex and frontal regions suggesting differences in representation related to the degree of perceptual ambiguity of
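
    The ABA- stimulus construction this record relies on can be sketched as follows: A and B are SAM tones sharing a carrier but differing in modulation frequency, arranged into A-B-A-silence triplets. The sampling rate, 100 ms tone duration, 1 kHz carrier, and the particular modulation frequencies are illustrative assumptions, not the study's parameters.

```python
import numpy as np

FS = 16000  # sampling rate (Hz), an assumed value

def sam_tone(f_carrier, f_mod, dur=0.1, depth=1.0):
    """One SAM tone: a carrier multiplied by a raised sinusoidal envelope."""
    t = np.arange(int(dur * FS)) / FS
    envelope = 1.0 + depth * np.sin(2 * np.pi * f_mod * t)
    return envelope * np.sin(2 * np.pi * f_carrier * t)

def aba_sequence(f_mod_a, f_mod_b, f_carrier=1000.0, n_triplets=5):
    """Concatenate A-B-A-(silence) triplets; a larger delta-fmod between the
    A and B tones favors a segregated (2-stream) percept."""
    a = sam_tone(f_carrier, f_mod_a)
    b = sam_tone(f_carrier, f_mod_b)
    silence = np.zeros_like(a)
    triplet = np.concatenate([a, b, a, silence])
    return np.tile(triplet, n_triplets)

seq = aba_sequence(f_mod_a=100.0, f_mod_b=300.0)  # delta-fmod = 200 Hz
print(seq.shape)
```

    Sweeping the B tone's modulation frequency toward or away from the A tone's is the knob the paradigm uses to move listeners between integrated, ambiguous, and segregated percepts.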

  14. To modulate and be modulated: estrogenic influences on auditory processing of communication signals within a socio-neuro-endocrine framework.

    Science.gov (United States)

    Yoder, Kathleen M; Vicario, David S

    2012-02-01

    Gonadal hormones modulate behavioral responses to sexual stimuli, and communication signals can also modulate circulating hormone levels. In several species, these combined effects appear to underlie a two-way interaction between circulating gonadal hormones and behavioral responses to socially salient stimuli. Recent work in songbirds has shown that manipulating local estradiol levels in the auditory forebrain produces physiological changes that affect discrimination of conspecific vocalizations and can affect behavior. These studies provide new evidence that estrogens can directly alter auditory processing and indirectly alter the behavioral response to a stimulus. These studies show that: 1) Local estradiol action within an auditory area is necessary for socially relevant sounds to induce normal physiological responses in the brains of both sexes; 2) These physiological effects occur much more quickly than predicted by the classical time-frame for genomic effects; 3) Estradiol action within the auditory forebrain enables behavioral discrimination among socially relevant sounds in males; and 4) Estradiol is produced locally in the male brain during exposure to particular social interactions. The accumulating evidence suggests a socio-neuro-endocrinology framework in which estradiol is essential to auditory processing, is increased by a socially relevant stimulus, acts rapidly to shape perception of subsequent stimuli experienced during social interactions, and modulates behavioral responses to these stimuli. Brain estrogens are likely to function similarly in both songbird sexes because aromatase and estrogen receptors are present in both male and female forebrain. Estrogenic modulation of perception in songbirds and perhaps other animals could fine-tune male advertising signals and female ability to discriminate them, facilitating mate selection by modulating behaviors. PMID:22201281

  15. Poor synchronization to the beat may result from deficient auditory-motor mapping.

    Science.gov (United States)

    Sowiński, Jakub; Dalla Bella, Simone

    2013-08-01

    Moving to the beat of music is natural and spontaneous for humans. Yet some individuals, so-called 'beat deaf', may differ from the majority by being unable to synchronize their movements to a musical beat. This condition was recently described in Mathieu, a beat-deaf individual showing inaccurate motor synchronization to the beat accompanied by poor beat perception, with spared pitch processing (Phillips-Silver et al., 2011, Neuropsychologia, 49, 961-969). It has been suggested that beat deafness is the outcome of impoverished beat perception. Deficient synchronization to the beat, however, may also result from inaccurate mapping of the perceived beat to movement. To test this possibility, we asked 99 non-musicians to synchronize with musical and non-musical stimuli via hand tapping. Ten among them who revealed particularly poor synchronization were submitted to a thorough assessment of motor synchronization to various pacing stimuli and of beat perception. Four participants showed poor synchronization in the absence of poor pitch perception; moreover, among them, two individuals were unable to synchronize to music, in spite of unimpaired detection of small durational deviations in musical and non-musical sequences, and normal rhythm discrimination. This mismatch of perception and action points toward disrupted auditory-motor mapping as the key impairment accounting for poor synchronization to the beat. PMID:23838002

  16. Perception of acoustically complex phonological features in vowels is reflected in the induced brain-magnetic activity

    OpenAIRE

    Obleser Jonas; Eulitz Carsten

    2007-01-01

    Abstract A central issue in speech recognition is which basic units of speech are extracted by the auditory system and used for lexical access. One suggestion is that complex acoustic-phonetic information is mapped onto abstract phonological representations of speech and that a finite set of phonological features is used to guide speech perception. Previous studies analyzing the N1m component of the auditory evoked field have shown that this holds for the acoustically simple feature place of ...

  17. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment

    Directory of Open Access Journals (Sweden)

    Christo Pantev

    2012-06-01

    Full Text Available Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for three hours inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus: tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies > 8 kHz, and that training should be employed over the long term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy are planned. A goal is to transfer this novel, completely non-invasive, and low-cost treatment approach for tonal tinnitus into routine clinical practice.
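
    The "notching" step at the heart of the training described in this record can be sketched with a simple FFT band-stop filter. This is a minimal illustration only: white noise stands in for music, and the 4 kHz tinnitus frequency and one-octave notch width are assumed values, not the study's clinical procedure.

```python
import numpy as np

FS = 44100  # sampling rate (Hz)

def notch(signal, center_hz, half_octave=0.5):
    """Zero out FFT components within +/- half_octave octaves of center_hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    lo = center_hz * 2.0 ** (-half_octave)
    hi = center_hz * 2.0 ** half_octave
    spectrum[(freqs >= lo) & (freqs <= hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

rng = np.random.default_rng(1)
music = rng.standard_normal(FS)           # 1 s of noise standing in for music
notched = notch(music, center_hz=4000.0)  # one-octave notch, ~2.8-5.7 kHz

# Verify: energy inside the notch band is (numerically) gone after filtering.
spec = np.abs(np.fft.rfft(notched))
freqs = np.fft.rfftfreq(len(notched), d=1.0 / FS)
in_band = spec[(freqs >= 2900) & (freqs <= 5600)].max()
print(in_band)
```

    Energy at and around the chosen center frequency is removed while the rest of the spectrum is untouched, which is the spectral manipulation intended to attract lateral inhibition onto the tinnitus-frequency neurons.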

  18. Asymmetries in the representation of space in the human auditory cortex depend on the global stimulus context.

    Science.gov (United States)

    Briley, Paul M; Goman, Adele M; Summerfield, A Quentin

    2016-03-01

    Studies on humans and other mammals have provided evidence for a two-channel or three-channel representation of horizontal space in the auditory system, with one channel maximally responsive to each of the left hemispace, the right hemispace and, possibly, the midline. Mammalian studies have suggested that the contralateral channel is larger in both cortices, but human studies have found this contralateral preference in only one of the cortices. However, human studies are in conflict as to whether the contralateral preference is in the left or the right auditory cortex, and there are a number of methodological differences that this conflict could be attributed to. A key difference between studies is the duration of the silent interval preceding each stimulus and any perception of sound-source movement that the absence of a silent interval creates. We presented auditory noises that alternated between -90° (left) and +90° (right) and recorded neural responses (event-related potentials) using electroencephalography. We randomly varied the duration of the silent interval preceding each stimulus to create a condition with an immediate (local) stimulus context similar to that used in a study reporting contralateral preference in the left auditory cortex, a condition with a local context similar to that in a study reporting contralateral preference in the right auditory cortex, and an intermediate condition. Surprisingly, we found that both auditory cortices exhibited a similarly strong contralateral preference under all conditions, with responses 27% greater, on average, to the contralateral than the ipsilateral space. This suggests that both the cortices can exhibit a contralateral preference, but whether these preferences manifest depends on the global, rather than the local, stimulus context. PMID:26730514

  19. Role of bimodal stimulation for auditory-perceptual skills development in children with a unilateral cochlear implant.

    Science.gov (United States)

    Marsella, P; Giannantonio, S; Scorpecci, A; Pianesi, F; Micardi, M; Resca, A

    2015-12-01

    This is a prospective randomised study that evaluated the differences arising from a bimodal stimulation compared to a monaural electrical stimulation in deaf children, particularly in terms of auditory-perceptual skills development. We enrolled 39 children aged 12 to 36 months, suffering from severe-to-profound bilateral sensorineural hearing loss with residual hearing on at least one side. All were unilaterally implanted: 21 wore only the cochlear implant (CI) (unilateral CI group), while the other 18 used the CI and a contralateral hearing aid at the same time (bimodal group). They were assessed with a test battery designed to appraise preverbal and verbal auditory-perceptual skills immediately before and 6 and 12 months after implantation. No statistically significant differences were observed between groups at time 0, while at 6 and 12 months children in the bimodal group had better scores in each test than peers in the unilateral CI group. Therefore, although unilateral deafness/hearing does not undermine hearing acuity in normal listening, the simultaneous use of a CI and a contralateral hearing aid (binaural hearing through a bimodal stimulation) provides an advantage in terms of acquisition of auditory-perceptual skills, allowing children to achieve the basic milestones of auditory perception faster and in greater number than children with only one CI. Thus, "keeping awake" the contralateral auditory pathway, albeit not crucial in determining auditory acuity, guarantees benefits compared with the use of the implant alone. These findings provide initial evidence to establish shared guidelines for better rehabilitation of patients undergoing unilateral cochlear implantation, and add more evidence regarding the correct indications for bilateral cochlear implantation. PMID:26900251

  20. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    Full Text Available BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal motor sensation and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific delay times of DAF (0, 66, 133 ms) for three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by averaging delay times in the adaptation phase. CONCLUSIONS: These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.
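
    The lag-adaptation logic this record describes can be sketched numerically: estimate the point of subjective simultaneity (PSS) between a motor act and its auditory feedback before and after adaptation, and look for a shift toward the adapted delay. Everything here is a toy illustration: the Gaussian-shaped response curves, the probed delays, and the weighted-mean PSS estimator are assumptions, and no real judgment data are used.

```python
import numpy as np

delays_ms = np.arange(0, 267, 33)  # probed feedback delays (ms), assumed grid

def simultaneity_curve(pss_ms, width_ms=80.0):
    """Synthetic proportion of 'simultaneous' judgments at each probed delay."""
    return np.exp(-((delays_ms - pss_ms) ** 2) / (2 * width_ms ** 2))

def estimate_pss(props):
    """PSS estimate: response-weighted mean of the probed delays."""
    return float(np.sum(delays_ms * props) / np.sum(props))

pre = simultaneity_curve(pss_ms=0.0)     # before adaptation
post = simultaneity_curve(pss_ms=100.0)  # after adapting to a 133 ms lag

shift = estimate_pss(post) - estimate_pss(pre)
print(f"PSS shift: {shift:.1f} ms toward the adapted delay")
```

    A positive shift of the estimated PSS toward the adapted delay is the signature of temporal recalibration the study reports.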

  1. Large cross-sectional study of presbycusis reveals rapid progressive decline in auditory temporal acuity.

    Science.gov (United States)

    Ozmeral, Erol J; Eddins, Ann C; Frisina, D Robert; Eddins, David A

    2016-07-01

    The auditory system relies on extraordinarily precise timing cues for the accurate perception of speech, music, and object identification. Epidemiological research has documented the age-related progressive decline in hearing sensitivity that is known to be a major health concern for the elderly. Although smaller investigations indicate that auditory temporal processing also declines with age, such measures have not been included in larger studies. Temporal gap detection thresholds (TGDTs; an index of auditory temporal resolution) measured in 1071 listeners (aged 18-98 years) were shown to decline at a minimum rate of 1.05 ms (15%) per decade. Age was a significant predictor of TGDT when controlling for audibility (partial correlation) and when restricting analyses to persons with normal-hearing sensitivity (n = 434). The TGDTs were significantly better for males (3.5 ms; 51%) than females when averaged across the life span. These results highlight the need for indices of temporal processing in diagnostics, as treatment targets, and as factors in models of aging. PMID:27255816
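
    The reported decline rate can be turned into a simple linear projection. Note the ~7 ms baseline used here is only inferred from the record's own figures (1.05 ms per decade being described as 15% implies an average threshold near 7 ms) and is an illustrative assumption, not a value from the study.

```python
# Linear projection of temporal gap detection thresholds (TGDTs) across the
# life span, using the reported minimum decline rate of 1.05 ms per decade.
baseline_age = 18          # youngest listeners in the sample
baseline_tgdt_ms = 7.0     # assumed baseline (1.05 ms ~ 15% implies ~7 ms)
rate_per_decade_ms = 1.05  # reported minimum decline rate

def projected_tgdt(age):
    """Projected TGDT (ms) at a given age under the reported linear rate."""
    decades = (age - baseline_age) / 10.0
    return baseline_tgdt_ms + rate_per_decade_ms * decades

for age in (18, 48, 98):
    print(age, round(projected_tgdt(age), 2))
```

    Even under this minimum rate, a listener at the top of the sampled age range (98 years) would be projected to need gaps roughly twice as long as a young adult, which illustrates why the authors argue for including temporal measures in diagnostics.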

  2. Auditory Cortex tACS and tRNS for Tinnitus: Single versus Multiple Sessions

    Directory of Open Access Journals (Sweden)

    Laura Claes

    2014-01-01

    Full Text Available Tinnitus is the perception of a sound in the absence of an external acoustic source, and it often exerts a significant impact on quality of life. Current evidence indicates that neuroplastic changes in auditory and non-auditory neural pathways are involved in the generation and maintenance of tinnitus, and neuromodulation has been suggested to interfere with these neuroplastic alterations. In this study we compared the effects of two emerging forms of transcranial electrical neuromodulation, transcranial alternating current stimulation (tACS) and transcranial random noise stimulation (tRNS), both applied over the auditory cortex. A database of 228 patients with chronic tinnitus who underwent noninvasive neuromodulation was retrospectively analyzed. The results show that a single session of tRNS, in contrast to tACS, induces a significant suppressive effect on tinnitus loudness and distress. Multiple sessions of tRNS augment the suppressive effect on tinnitus loudness but have no effect on tinnitus distress. In conclusion, this preliminary study shows a possibly beneficial effect of tRNS on tinnitus and motivates future randomized, placebo-controlled clinical studies of auditory tRNS for tinnitus. Auditory alpha-modulated tACS does not appear to contribute to the treatment of tinnitus.

  3. The auditory N1 suppression rebounds as prediction persists over time.

    Science.gov (United States)

    Hsu, Yi-Fang; Hämäläinen, Jarmo A; Waszak, Florian

    2016-04-01

    The predictive coding model of perception proposes that neuronal responses reflect prediction errors. Repeated as well as predicted stimuli trigger suppressed neuronal responses because they are associated with reduced prediction errors. However, many predictable events in our environment are not isolated but sequential, yet there is little empirical evidence documenting how suppressed neuronal responses reflecting reduced prediction errors change over the course of a predictable sequence of events. Here we designed an auditory electroencephalography (EEG) experiment in which prediction persists over series of four tones, allowing us to delineate the dynamics of the suppressed neuronal responses. It is possible that neuronal responses decrease for the initial predictable stimuli and stay at the same level across the rest of the sequence, suggesting that they reflect the predictability of the stimuli in terms of mere probability. Alternatively, neuronal responses might decrease for the initial predictable stimuli and gradually recover across the rest of the sequence, suggesting that factors other than mere probability have to be considered in order to account for the way prediction is implemented in the brain. We found that initial presentation of the predictable stimuli was associated with suppression of the auditory N1. Further presentation of the predictable stimuli was associated with a rebound of the component's amplitude. Moreover, this pattern was independent of attention. The findings suggest that auditory N1 suppression reflecting reduced prediction errors is a transient phenomenon that can be modulated by multiple factors. PMID:26921479

  5. Shaping prestimulus neural activity with auditory rhythmic stimulation improves the temporal allocation of attention.

    Science.gov (United States)

    Ronconi, Luca; Pincham, Hannah L; Cristoforetti, Giulia; Facoetti, Andrea; Szűcs, Dénes

    2016-05-01

    Human attention fluctuates across time, and even when stimuli have identical physical characteristics and the task demands are the same, relevant information is sometimes consciously perceived and at other times not. A typical example of this phenomenon is the attentional blink, where participants show a robust deficit in reporting the second of two targets (T2) in a rapid serial visual presentation (RSVP) stream. Previous electroencephalographical (EEG) studies showed that neural correlates of correct T2 report are not limited to the RSVP period, but extend before visual stimulation begins. In particular, reduced oscillatory neural activity in the alpha band (8-12 Hz) before the onset of the RSVP has been linked to lower T2 accuracy. We therefore examined whether auditory rhythmic stimuli presented at a rate of 10 Hz (within the alpha band) could increase oscillatory alpha-band activity and improve T2 performance in the attentional blink time window. Behaviourally, the auditory rhythmic stimulation worked to enhance T2 accuracy. This enhanced perception was associated with increases in the posterior T2-evoked N2 component of the event-related potentials and this effect was observed selectively at lag 3. Frontal and posterior oscillatory alpha-band activity was also enhanced during auditory stimulation in the pre-RSVP period and positively correlated with T2 accuracy. These findings suggest that ongoing fluctuations can be shaped by sensorial events to improve the allocation of attention in time. PMID:26986506
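
The prestimulus stimulation described above amounts to presenting brief sounds at an alpha-band rate before the RSVP begins. A minimal sketch of such a 10 Hz tone-pip train; the pip duration, carrier frequency, overall duration, and sample rate are illustrative choices, not values reported by the study:

```python
import numpy as np

def rhythmic_pip_train(rate_hz=10.0, pip_ms=10.0, dur_s=2.0,
                       carrier_hz=1000.0, fs=44100):
    """Train of short Hann-windowed tone pips at `rate_hz`; 10 Hz places the
    presentation rate inside the alpha band (8-12 Hz)."""
    stim = np.zeros(int(fs * dur_s))
    pip_n = int(fs * pip_ms / 1000)
    pip = np.sin(2 * np.pi * carrier_hz * np.arange(pip_n) / fs) * np.hanning(pip_n)
    period = int(fs / rate_hz)
    for onset in range(0, len(stim) - pip_n, period):
        stim[onset:onset + pip_n] = pip   # one pip per 100 ms cycle
    return stim

alpha_train = rhythmic_pip_train()        # ~2 s of 10 Hz auditory stimulation
```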

  6. Brand perception

    OpenAIRE

    NOSKOVÁ, Simona

    2014-01-01

    The aim of this bachelor thesis was to analyze the perception of selected brands from the customer's perspective and from the perspective of the brand's founder, the Regional Agrarian Chamber of the South Bohemian Region. To determine customers' attitudes toward brand perception, I developed a questionnaire survey using a semantic differential. The first hypothesis, that the regional mark plays an important role in food selection for at least half of the respondents, was refut...

  7. The behavioral impact of auditory and visual oddball distracters in visual and auditory categorization tasks

    OpenAIRE

    Alicia Leiva

    2011-01-01

    Past cross-modal oddball studies have shown that participants respond slower to visual targets following the presentation of an unexpected change in a stream of auditory distracters. In the present study we examined the extent to which this novelty distraction may transcend the sensory distinction between distracter and target. In separate blocks of trials, participants categorized digits presented auditorily or visually in the face of visual or auditory standard and oddball distracters. The ...

  8. Reading and Auditory-Visual Equivalences

    Science.gov (United States)

    Sidman, Murray

    1971-01-01

    A retarded boy, unable to read orally or with comprehension, was taught to match spoken to printed words and was then capable of reading comprehension (matching printed words to pictures) and oral reading (naming printed words aloud), demonstrating that certain learned auditory-visual equivalences are sufficient prerequisites for reading…

  9. The Goldilocks Effect in Infant Auditory Attention

    Science.gov (United States)

    Kidd, Celeste; Piantadosi, Steven T.; Aslin, Richard N.

    2014-01-01

    Infants must learn about many cognitive domains (e.g., language, music) from auditory statistics, yet capacity limits on their cognitive resources restrict the quantity that they can encode. Previous research has established that infants can attend to only a subset of available acoustic input. Yet few previous studies have directly examined infant…

  10. Development of Receiver Stimulator for Auditory Prosthesis

    Directory of Open Access Journals (Sweden)

    K. Raja Kumar

    2010-05-01

    Full Text Available The Auditory Prosthesis (AP) is an electronic device that can provide hearing sensations to people who are profoundly deaf by stimulating the auditory nerve with electric current via an array of electrodes, allowing them to understand speech. The AP system consists of two hardware functional units: the Body Worn Speech Processor (BWSP) and the Receiver Stimulator. The prototype model of the Receiver Stimulator for Auditory Prosthesis (RSAP) consists of a speech data decoder, DAC, ADC, constant current generator, electrode selection logic, switch matrix, and a simulated electrode resistance array. The laboratory model of the speech processor implements the Continuous Interleaved Sampling (CIS) speech processing algorithm, which generates the information required for electrode stimulation from the speech/audio data. The speech data decoder receives the encoded speech data from the speech processor via an inductive RF transcutaneous link. A twelve-channel auditory prosthesis with eight selectable electrodes for stimulation of the simulated electrode resistance array was used for testing. The RSAP was validated using test data generated by the laboratory prototype of the speech processor. Experimental results were obtained from specific speech/sound tests using a high-speed data acquisition system and found satisfactory.
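
The CIS strategy named above can be summarized as: split the audio into frequency bands, extract each band's envelope, and sample the envelopes with pulse trains staggered across electrodes so that no two channels stimulate at the same instant. A simplified NumPy sketch under those assumptions; the FFT-mask filterbank, rectify-and-smooth envelope, band edges, and pulse rate are stand-ins, since the abstract does not specify the actual filter design:

```python
import numpy as np

def cis_envelopes(audio, fs, edges):
    """Per-channel envelopes: band-split via FFT masking, then full-wave
    rectify and smooth. (Stand-in for the processor's actual filterbank.)"""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), 1.0 / fs)
    smooth_n = int(fs * 0.004)                      # ~4 ms envelope smoother
    kernel = np.ones(smooth_n) / smooth_n
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        masked = np.where((freqs >= lo) & (freqs < hi), spectrum, 0)
        band = np.fft.irfft(masked, n=len(audio))
        envelopes.append(np.convolve(np.abs(band), kernel, mode="same"))
    return np.array(envelopes)

def interleaved_pulses(envelopes, fs, rate=1000):
    """Sample each channel's envelope at `rate` pulses/s with a per-channel
    offset so no two electrodes fire simultaneously ('interleaved')."""
    n_ch, n = envelopes.shape
    period = int(fs / rate)
    out = np.zeros_like(envelopes)
    for ch in range(n_ch):
        idx = np.arange(ch * period // n_ch, n, period)
        out[ch, idx] = envelopes[ch, idx]
    return out
```

In the prototype described above there would be twelve such channels mapped onto the selectable electrodes; three bands are used here only to keep the sketch short.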

  11. Preferred levels of auditory danger signals.

    Science.gov (United States)

    Zera, J; Nagórski, A

    2000-01-01

    An important issue at the design stage of an auditory danger signal for a safety system is the signal's audibility under various conditions of background noise. The auditory danger signal should be clearly audible, but it should not be so loud as to cause fright, startle effects, or nuisance complaints. Criteria for designing auditory danger signals are the subject of the international standard ISO 7731 (International Organization for Standardization [ISO], 1986) and the European standard EN 457 (European Committee for Standardization [CEN], 1992), which require the A-weighted sound pressure level of the auditory danger signal to exceed the background noise by 15 dB. In this paper we report the results of an experiment in which listeners adjusted the most preferred levels of 3 danger signals (tone, sweep, complex sound) in the presence of background noise (pink noise and industrial noise). Measurements were made at 60-, 70-, 80-, and 90-dB A-weighted noise levels. Results show that at a 60-dB noise level the most preferred level of the danger signal is 10 to 20 dB above the noise level; at a 90-dB noise level, however, listeners selected a danger-signal level equal to the noise level. These results imply that the criterion in the existing standards is conservative, as it requires the level of the danger signal to exceed the level of noise regardless of the noise level. PMID:10828157
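
The standards' criterion and the study's finding can be contrasted in a few lines. The ISO rule below is taken directly from the abstract; the `preferred_signal_level` function is a hypothetical linear interpolation between the two reported endpoints (roughly +15 dB, the midpoint of the 10-20 dB range, at 60 dB(A) noise, and +0 dB at 90 dB(A) noise) and is not a model proposed by the authors:

```python
def iso_min_signal_level(noise_dba):
    """Criterion cited from ISO 7731 / EN 457: the danger signal's A-weighted
    level must exceed the background noise by 15 dB, at any noise level."""
    return noise_dba + 15.0

def preferred_signal_level(noise_dba):
    """Illustrative straight line between the study's reported endpoints:
    ~+15 dB margin at 60 dB(A) noise shrinking to +0 dB at 90 dB(A)."""
    margin = 15.0 * (90.0 - noise_dba) / 30.0
    return noise_dba + max(0.0, min(15.0, margin))
```

At 90 dB(A) of noise the standard demands a 105 dB(A) signal while listeners preferred about 90 dB(A), which is the sense in which the abstract calls the criterion conservative.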

  12. Lateralization of auditory-cortex functions.

    Science.gov (United States)

    Tervaniemi, Mari; Hugdahl, Kenneth

    2003-12-01

    In the present review, we summarize the most recent findings and current views on the structural and functional basis of human brain lateralization in the auditory modality. The main emphasis is on hemodynamic and electromagnetic data from healthy adult participants with regard to music- vs. speech-sound encoding. Moreover, a selective set of behavioral dichotic-listening (DL) results and clinical findings (e.g., schizophrenia, dyslexia) are included. It is shown that the human brain has a strong predisposition to process speech sounds in the left and music sounds in the right auditory cortex of the temporal lobe. To a great extent, an auditory area located at the posterior end of the temporal lobe (the planum temporale [PT]) underlies this functional asymmetry. However, the predisposition is bound not to the informational content of sound but to rapid temporal information, which is more common in speech than in music. Finally, we present evidence for the vulnerability of this functional specialization of sound processing: lateralization can be altered by top-down and bottom-up effects both between and within individuals. In other words, relatively small changes in acoustic sound features or in their familiarity may modify the degree to which the left vs. right auditory areas contribute to sound encoding. PMID:14629926

  13. Self-affirmation in auditory persuasion

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    2011-01-01

    Persuasive health information can be presented through an auditory channel. Curiously enough, the effect of voice cues in health persuasion has hardly been studied. Research concerning visual persuasive messages has shown that self-affirmation results in a more open-minded reaction to threatening information…

  14. Late Maturation of Auditory Perceptual Learning

    Science.gov (United States)

    Huyck, Julia Jones; Wright, Beverly A.

    2011-01-01

    Adults can improve their performance on many perceptual tasks with training, but when does the response to training become mature? To investigate this question, we trained 11-year-olds, 14-year-olds and adults on a basic auditory task (temporal-interval discrimination) using a multiple-session training regimen known to be effective for adults. The…

  15. Affective priming with auditory speech stimuli

    NARCIS (Netherlands)

    J. Degner

    2011-01-01

    Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones, similar to a dichotic listening procedure. In…

  16. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound-change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing in which the auditory scene is rapidly organized into distinct streams and the integration of sequential elements into perceptual units takes place on the already formed streams. This allows for the flexibility required to identify changing within-stream sound patterns, which is needed to appreciate music or comprehend speech.

  17. Auditory confrontation naming in Alzheimer's disease.

    Science.gov (United States)

    Brandt, Jason; Bakker, Arnold; Maroof, David Aaron

    2010-11-01

    Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer's disease (AD). We developed an auditory naming task requiring identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles), with multiple-choice recognition of those not identified. In two separate studies, mild-to-moderate AD patients performed more poorly than cognitively normal elderly adults on the auditory naming task. This task was also more difficult than two versions of a comparable visual naming task, and correlated more highly with Mini-Mental State Exam scores. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal participants. Nonetheless, our auditory naming task may prove useful in research and clinical practice, especially with visually impaired patients. PMID:20981630

  18. Auditory risk estimates for youth target shooting

    Science.gov (United States)

    Meinke, Deanna K.; Murphy, William J.; Finan, Donald S.; Lankford, James E.; Flamme, Gregory A.; Stewart, Michael; Soendergaard, Jacob; Jerome, Trevor W.

    2015-01-01

    Objective To characterize the impulse noise exposure and auditory risk for youth recreational firearm users engaged in outdoor target shooting events. The youth shooting positions are typically standing or sitting at a table, which places the firearm closer to the ground or a reflective surface than for adult shooters. Design Acoustic characteristics were examined and auditory risk estimates were evaluated using contemporary damage-risk criteria for unprotected adult listeners and the 120-dB peak limit suggested by the World Health Organization (1999) for children. Study sample Impulses were generated by 26 firearm/ammunition configurations representing rifles, shotguns, and pistols used by youth. Measurements were obtained relative to a youth shooter’s left ear. Results All firearms generated peak levels that exceeded the 120-dB peak limit suggested by the WHO for children. In general, shooting from the seated position over a tabletop increases the peak levels and LAeq8, and reduces the unprotected maximum permissible exposures (MPEs) for both rifles and pistols. Pistols pose the greatest auditory risk when fired over a tabletop. Conclusion Youth should use smaller-caliber weapons, preferably from the standing position, and always wear hearing protection when engaging in shooting activities to reduce the risk of auditory damage. PMID:24564688

  19. 40 Hz auditory steady state response to linguistic features of stimuli during auditory hallucinations.

    Science.gov (United States)

    Ying, Jun; Yan, Zheng; Gao, Xiao-rong

    2013-10-01

    The auditory steady-state response (ASSR) may reflect activity from different regions of the brain, depending on the modulation frequency used. In general, responses elicited at low rates (≤40 Hz) emanate mostly from central structures of the brain, whereas responses at high rates (≥80 Hz) emanate mostly from the peripheral auditory nerve or brainstem structures. Moreover, the gamma-band ASSR (30-90 Hz) has been reported to play an important role in working memory, speech understanding, and recognition. This paper investigated the 40 Hz ASSR evoked by modulated speech and reversed speech. The speech stimulus was a Chinese phrase; a noise-like control was obtained by temporally reversing the speech. Both auditory stimuli were amplitude-modulated at 40 Hz. Ten healthy subjects and 5 patients with auditory hallucinations participated in the experiment. Results showed a reduction in the left auditory cortex response when healthy subjects listened to the reversed speech compared with the speech. In contrast, when patients who experienced auditory hallucinations listened to the reversed speech, the left-hemisphere auditory cortex responded more actively. The ASSR results were consistent with the behavioral results of the patients. The gamma-band ASSR is therefore expected to be helpful for rapid and objective diagnosis of hallucination in the clinic. PMID:24142731
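
The stimulus construction described above, speech and time-reversed speech both modulated at 40 Hz, can be sketched as sinusoidal amplitude modulation. A minimal NumPy illustration with a synthetic tone standing in for the Chinese-phrase recording; the modulation depth and normalization are assumptions, not details from the paper:

```python
import numpy as np

def am_modulate(signal, fs, fm=40.0, depth=1.0):
    """Sinusoidally amplitude-modulate `signal` at `fm` Hz, the rate that
    drives the 40 Hz ASSR; normalization keeps the output within [-1, 1]."""
    t = np.arange(len(signal)) / fs
    modulator = (1.0 + depth * np.sin(2 * np.pi * fm * t)) / (1.0 + depth)
    return signal * modulator

fs = 16000
t = np.arange(fs) / fs                      # 1 s of signal
speech = np.sin(2 * np.pi * 300 * t)        # stand-in for the speech carrier
reversed_speech = speech[::-1]              # time-reversal control stimulus
stim = am_modulate(speech, fs)
ctrl = am_modulate(reversed_speech, fs)
```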

  20. Spectrotemporal resolution tradeoff in auditory processing as revealed by human auditory brainstem responses and psychophysical indices.

    Science.gov (United States)

    Bidelman, Gavin M; Syed Khaja, Ameenuddin

    2014-06-20

    Auditory filter theory dictates a physiological compromise between frequency and temporal resolution of cochlear signal processing. We examined neurophysiological correlates of these spectrotemporal tradeoffs in the human auditory system using auditory evoked brain potentials and psychophysical responses. Temporal resolution was assessed using scalp-recorded auditory brainstem responses (ABRs) elicited by paired clicks. The inter-click interval (ICI) between successive pulses was parameterized from 0.7 to 25 ms to map ABR amplitude recovery as a function of stimulus spacing. Behavioral frequency difference limens (FDLs) and auditory filter selectivity (Q10 of psychophysical tuning curves) were obtained to assess relations between behavioral spectral acuity and electrophysiological estimates of temporal resolvability. Neural responses increased monotonically in amplitude with increasing ICI, ranging from total suppression (0.7 ms) to full recovery (25 ms) with a temporal resolution of ∼3-4 ms. ABR temporal thresholds were correlated with behavioral Q10 (frequency selectivity) but not FDLs (frequency discrimination); no correspondence was observed between Q10 and FDLs. Results suggest that finer frequency selectivity, but not discrimination, is associated with poorer temporal resolution. The inverse relation between ABR recovery and perceptual frequency tuning demonstrates a time-frequency tradeoff between the temporal and spectral resolving power of the human auditory system. PMID:24793771
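
The paired-click paradigm above can be sketched directly: two brief rectangular clicks whose separation (the ICI) is swept over the study's 0.7-25 ms range. The click duration, overall stimulus duration, and sample rate below are illustrative choices, not values reported in the abstract:

```python
import numpy as np

def paired_clicks(ici_ms, fs=48000, click_ms=0.1, total_ms=30.0):
    """Two rectangular clicks separated by `ici_ms` (the inter-click
    interval), the stimulus family used to map ABR amplitude recovery."""
    n = int(fs * total_ms / 1000)
    click_n = max(1, int(fs * click_ms / 1000))
    stim = np.zeros(n)
    stim[:click_n] = 1.0                      # first click at t = 0
    onset2 = int(fs * ici_ms / 1000)          # second click after the ICI
    stim[onset2:onset2 + click_n] = 1.0
    return stim

# ICIs spanning the reported range: total suppression (~0.7 ms)
# through full recovery (25 ms).
stimuli = {ici: paired_clicks(ici) for ici in (0.7, 2.0, 4.0, 8.0, 16.0, 25.0)}
```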

  1. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina

    2014-02-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians' subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model in which to study mechanisms of experience-dependent changes in human auditory function. PMID:23988583

  2. Characterization of auditory synaptic inputs to gerbil perirhinal cortex

    Directory of Open Access Journals (Sweden)

    Vibhakar C Kotak

    2015-08-01

    Full Text Available The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Our goal was therefore to assess whether PRh receives auditory inputs from the auditory thalamus (medial geniculate, MG) and from ACx in an auditory thalamocortical brain slice preparation, and to characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from PRh neurons. Blockade of GABA-A receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when Fluoro-Ruby was injected into ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx, and that auditory-driven inhibition dominates postsynaptic responses in this non-sensory cortical region downstream from the auditory cortex.

  3. The Essential Complexity of Auditory Receptive Fields.

    Science.gov (United States)

    Thorson, Ivar L; Liénard, Jean; David, Stephen V

    2015-12-01

    Encoding properties of sensory neurons are commonly modeled using linear finite impulse response (FIR) filters. For the auditory system, the FIR filter is instantiated in the spectro-temporal receptive field (STRF), often in the framework of the generalized linear model. Despite widespread use of the FIR STRF, numerous formulations for linear filters are possible that require many fewer parameters, potentially permitting more efficient and accurate model estimates. To explore these alternative STRF architectures, we recorded single-unit neural activity from the auditory cortex of awake ferrets during presentation of natural sound stimuli. We compared the performance of >1000 linear STRF architectures, evaluating their ability to predict neural responses to a novel natural stimulus. Many were able to outperform the FIR filter. Two basic constraints on the architecture lead to the improved performance: (1) factorization of the STRF matrix into a small number of spectral and temporal filters and (2) low-dimensional parameterization of the factorized filters. The best parameterized model outperformed the full FIR filter in both primary and secondary auditory cortex, despite requiring fewer than 30 parameters, about 10% of the number required by the FIR filter. After accounting for noise from finite data sampling, these STRFs were able to explain an average of 40% of A1 response variance. The simpler models permitted more straightforward interpretation of sensory tuning properties. They also showed greater benefit from incorporating nonlinear terms, such as short-term plasticity, that provide theoretical advances over the linear model. Architectures that minimize parameter count while maintaining maximum predictive power provide insight into the essential degrees of freedom governing auditory cortical function. They also maximize the statistical power available for characterizing additional nonlinear properties that limit current auditory models. PMID:26683490
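
The first constraint identified above, factorizing the STRF matrix into a few spectral and temporal filters, can be illustrated with a truncated SVD, which yields the best low-rank approximation of a full FIR STRF and makes the parameter savings explicit. This is a generic sketch, not the authors' estimation procedure (they fit parameterized architectures directly rather than factorizing a fitted FIR filter):

```python
import numpy as np

def factorize_strf(strf, rank=2):
    """Approximate a full FIR STRF (frequency x lag matrix) by `rank`
    separable spectral/temporal filter pairs via truncated SVD."""
    u, s, vt = np.linalg.svd(strf, full_matrices=False)
    spectral = u[:, :rank] * s[:rank]      # scaled spectral filters (F x K)
    temporal = vt[:rank]                   # temporal filters (K x T)
    return spectral, temporal

def predict(strf, spectrogram):
    """Linear STRF response: each frequency channel convolved with its lag
    kernel and summed across frequency."""
    n_f, n_lag = strf.shape
    n_t = spectrogram.shape[1]
    r = np.zeros(n_t)
    for lag in range(n_lag):
        r[lag:] += strf[:, lag] @ spectrogram[:, :n_t - lag]
    return r

# Parameter budgets for a hypothetical 30-frequency x 20-lag STRF:
n_f, n_lag, rank = 30, 20, 2
full_params = n_f * n_lag                  # 600 free parameters (full FIR)
fact_params = rank * (n_f + n_lag)         # 100 for the rank-2 factorization
```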

  4. Tactile perception during action observation.

    Science.gov (United States)

    Vastano, Roberta; Inuggi, Alberto; Vargas, Claudia D; Baud-Bovy, Gabriel; Jacono, Marco; Pozzo, Thierry

    2016-09-01

    It has been suggested that tactile perception becomes less acute during movement to optimize motor control and to prevent an overload of afferent information generated during action. This empirical phenomenon, known as the "tactile gating effect," has been associated with mechanisms of sensory feedback prediction. However, less attention has been given to tactile attenuation during the observation of an action. The aim of this study was to investigate whether and how the observation of a goal-directed action influences tactile perception as it does during overt action. In a first experiment, we recorded vocal reaction times (RTs) of participants to tactile stimulations during the observation of a reach-to-grasp action. The stimulations were delivered to different body parts that could be either congruent or incongruent with the observed effector (the right hand and the right leg, respectively). The tactile stimulation was contrasted with a non-body-related stimulation (an auditory beep). We found increased RTs for congruent tactile stimuli compared to both incongruent tactile and auditory stimuli. This effect was observed only during the reaching phase, whereas RTs were not modulated during the grasping phase. A tactile two-alternative forced-choice (2AFC) discrimination task was then conducted in order to quantify changes in tactile sensitivity during the observation of the same goal-directed actions. In agreement with the first experiment, the perceived tactile intensity was reduced only during the reaching phase. These results suggest that tactile processing during action observation relies on a process similar to that occurring during action execution. PMID:27161552

  5. Being Moved by the Self and Others: Influence of Empathy on Self-Motion Perception

    OpenAIRE

    Lopez, Christophe; Falconer, Caroline J.; Mast, Fred W.; Avenanti, Alessio

    2013-01-01

    BACKGROUND: The observation of conspecifics influences our bodily perceptions and actions: Contagious yawning, contagious itching, or empathy for pain, are all examples of mechanisms based on resonance between our own body and others. While there is evidence for the involvement of the mirror neuron system in the processing of motor, auditory and tactile information, it has not yet been associated with the perception of self-motion. METHODOLOGY/PRINCIPAL FINDINGS: We investigated wh...

  6. The simultaneous perception of auditory–tactile stimuli in voluntary movement

    OpenAIRE

    Hao, Qiao; Ogata, Taiki; Ogawa, Ken-ichiro; Kwon, Jinhwan; Miyake, Yoshihiro

    2015-01-01

    The simultaneous perception of multimodal information in the environment during voluntary movement is very important for effective reactions to the environment. Previous studies have found that voluntary movement affects the simultaneous perception of auditory and tactile stimuli. However, the results of these experiments are not completely consistent, and the differences may be attributable to methodological differences in the previous studies. In this study, we investigated the effect of vo...

  7. An interactive Hebbian account of lexically guided tuning of speech perception

    OpenAIRE

    Mirman, Daniel; McClelland, James L; Holt, Lori L.

    2006-01-01

    We describe an account of lexically guided tuning of speech perception based on interactive processing and Hebbian learning. Interactive feedback provides lexical information to prelexical levels, and Hebbian learning uses that information to retune the mapping from auditory input to prelexical representations of speech. Simulations of an extension of the TRACE model of speech perception are presented that demonstrate the efficacy of this mechanism. Further simulations show that acoustic simi...
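
The mechanism described, lexical feedback reaching prelexical units while a Hebbian rule retunes the auditory-to-prelexical mapping, can be caricatured in a few lines. The toy vectors, gains, and update rule below are illustrative only, not the actual equations of the TRACE extension:

```python
import numpy as np

def prelexical_activation(w, pre, lexical_feedback, feedback_gain=0.5):
    """Bottom-up drive plus top-down lexical feedback reaching the
    prelexical layer (the interactive part of the account)."""
    return w @ pre + feedback_gain * lexical_feedback

def hebbian_update(w, pre, post, lr=0.05):
    """Hebbian retuning: strengthen auditory-to-prelexical connections
    between co-active units."""
    return w + lr * np.outer(post, pre)

# Toy patterns: an ambiguous fricative halfway between /s/ and /sh/.
s_sound = np.array([1.0, 0.0])
sh_sound = np.array([0.0, 1.0])
ambiguous = 0.5 * (s_sound + sh_sound)

w = 0.5 * np.eye(2)                      # rows: prelexical /s/ and /sh/ units
lexical_says_s = np.array([1.0, 0.0])    # word context disambiguates to /s/

for _ in range(20):
    post = prelexical_activation(w, ambiguous, lexical_says_s)
    w = hebbian_update(w, ambiguous, post)
# The ambiguous input now drives the /s/ unit more strongly than the /sh/ unit.
```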

  8. IMPAIRED PROCESSING IN THE PRIMARY AUDITORY CORTEX OF AN ANIMAL MODEL OF AUTISM

    Directory of Open Access Journals (Sweden)

    Renata eAnomal

    2015-11-01

    Full Text Available Autism is a neurodevelopmental disorder clinically characterized by deficits in communication, lack of social interaction, and repetitive behaviors with restricted interests. A number of studies have reported that sensory perception abnormalities are common in autistic individuals and might contribute to the complex behavioral symptoms of the disorder. In this context, abnormal hearing is particularly prevalent. Considering that some of this abnormal processing might stem from an imbalance of inhibitory and excitatory drives in brain circuitry, we used an animal model of autism induced by valproic acid (VPA) exposure during pregnancy to investigate the tonotopic organization of the primary auditory cortex (AI) and its local inhibitory circuitry. Our results show that VPA rats have distorted primary auditory maps with over-representation of high frequencies, broadly tuned receptive fields, and higher sound intensity thresholds compared to controls. However, we did not detect differences in the number of parvalbumin-positive interneurons in AI between VPA and control rats. Altogether, our findings show that neurophysiological impairments of hearing perception in this autism model occur independently of alterations in the number of parvalbumin-expressing interneurons. These data support the notion that fine circuit alterations, rather than gross cellular modification, could lead to neurophysiological changes in the autistic brain.

  9. Multimodal Hierarchical Dirichlet Process-based Active Perception

    OpenAIRE

    Taniguchi, Tadahiro; Takano, Toshiaki; Yoshino, Ryo

    2015-01-01

    In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine t...
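The action-selection problem this record describes (choosing which actions to perform on an object under a time budget) is commonly framed as greedy information-gain maximization. The sketch below assumes a toy discrete observation model in place of the MHDP, so it illustrates the general idea rather than the authors' algorithm; the action names and likelihoods are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a learned multimodal model: for each action (modality)
# and true category, a distribution over discrete observation outcomes.
n_categories = 3
actions = {"look": 0, "grasp": 1, "shake": 2}
likelihoods = rng.dirichlet(np.ones(n_categories), size=(len(actions), n_categories))

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def expected_posterior_entropy(prior, a):
    """Expected entropy of the category posterior after performing action a."""
    h = 0.0
    for outcome in range(n_categories):
        p_outcome = float(np.sum(prior * likelihoods[a][:, outcome]))
        if p_outcome == 0.0:
            continue
        posterior = prior * likelihoods[a][:, outcome] / p_outcome
        h += p_outcome * entropy(posterior)
    return h

def select_actions(prior, budget=2):
    """Greedily pick the actions expected to reduce uncertainty the most."""
    chosen, remaining = [], dict(actions)
    for _ in range(budget):
        best = min(remaining, key=lambda name: expected_posterior_entropy(prior, remaining[name]))
        chosen.append(best)
        del remaining[best]
    return chosen

prior = np.ones(n_categories) / n_categories  # uniform belief over categories
best_two = select_actions(prior)
```

In expectation, observing the outcome of any action can only reduce posterior entropy, which is why ranking actions by expected posterior entropy is a sensible greedy policy under a limited budget.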

  10. Auditory hallucinations in tinnitus patients: Emotional relationships and depression

    Directory of Open Access Journals (Sweden)

    Santos, Rosa Maria Rodrigues dos

    2012-01-01

    Introduction: Over the last few years, our Tinnitus Research Group has identified an increasing number of patients with tinnitus who also complained of repeated perception of complex sounds, such as music and voices. Such hallucinatory phenomena motivated us to study their possible relation to the patients' psyches. Aims: To assess whether hallucinatory phenomena were related to the patients' psychosis and/or depression, and to clarify their content and function in the patients' psyches. Method: Ten subjects (8 women; mean age = 65.7 years) were selected by otolaryngologists and evaluated by the same psychologists through semi-structured interviews, the Hamilton Depression Rating Scale, and psychoanalytic interviews. Results: We found no association between auditory hallucinations and psychosis; instead, this phenomenon was associated with depressive aspects. The patients' discourse revealed that hallucinatory phenomena played unconscious roles in their emotional life. In all cases, there was a remarkable and strong tendency to recall/repeat unpleasant facts/situations, which tended to exacerbate the distress caused by the tinnitus and hallucinatory phenomena and worsen depressive aspects. Conclusions: There is an important relationship between tinnitus, hallucinatory phenomena, and depression based on persistent recall of facts/situations leading to psychic distress. The knowledge of such findings represents a further step towards adapting the treatment of this particular subgroup of tinnitus patients through interdisciplinary teamwork.

  11. Surgical findings and auditory performance after cochlear implant revision surgery.

    Science.gov (United States)

    Manrique-Huarte, R; Huarte, A; Manrique, M J

    2016-03-01

    The objective of this study was to review cochlear reimplantation outcomes in a tertiary hospital and to analyze whether factors such as type of failure, surgical findings, or etiology of deafness have an influence. A retrospective study including 38 patients who underwent cochlear implant revision surgery in a tertiary center was performed. Auditory outcomes (pure tone audiometry, % of disyllabic words) along with etiology of deafness, type of complication, issues with insertion, and cochlear findings are included. The complication rate was 2.7%. The technical failure rate was 57.9% (50% hard failure and 50% soft failure), and medical failure (device infection or extrusion, migration, wound, or flap complication) was seen in 42.1% of the cases. Management of cochlear implant complications and revision surgery is increasing due to a growing number of implantees. Explantation and reimplantation of the cochlear implant are safe procedures, in which the depth of insertion and speech perception results are equal or better in most cases. Nevertheless, there must be an increasing effort to use minimally traumatic electrode arrays and surgical techniques to improve currently obtained results. PMID:25814389

  12. Musicians show general enhancement of complex sound encoding and better inhibition of irrelevant auditory change in music: an ERP study.

    Science.gov (United States)

    Kaganovich, Natalya; Kim, Jihyun; Herring, Caryn; Schumaker, Jennifer; Macpherson, Megan; Weber-Fox, Christine

    2013-04-01

    Using electrophysiology, we examined two questions in relation to musical training: namely, whether it enhances sensory encoding of the human voice and whether it improves the ability to ignore irrelevant auditory change. Participants performed an auditory distraction task, in which they identified each sound as either short (350 ms) or long (550 ms) and ignored a change in timbre of the sounds. Sounds consisted of a male and a female voice saying a neutral sound [a], and of a cello and a French horn playing an F3 note. In some blocks, musical sounds occurred on 80% of trials and voice sounds on 20% of trials; in other blocks, the reverse was true. Participants heard naturally recorded sounds in half of the experimental blocks and their spectrally rotated versions in the other half. Regarding voice perception, we found that musicians had a larger N1 event-related potential component not only to vocal sounds but also to their never-before-heard spectrally rotated versions. We therefore conclude that musical training is associated with a general improvement in the early neural encoding of complex sounds. Regarding the ability to ignore irrelevant auditory change, musicians' accuracy tended to suffer less from the change in timbre of the sounds, especially when deviants were musical notes. This behavioral finding was accompanied by a marginally larger re-orienting negativity in musicians, suggesting that their advantage may lie in a more efficient disengagement of attention from the distracting auditory dimension. PMID:23301775

  13. The music of your emotions: neural substrates involved in detection of emotional correspondence between auditory and visual music actions.

    Directory of Open Access Journals (Sweden)

    Karin Petrini

    In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action, and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently, a region-of-interest analysis was performed to examine whether any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound), as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory, and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.

  14. Individual Differences in Categorical Perception Are Related to Sublexical/Phonological Processing in Reading

    Science.gov (United States)

    Lopez-Zamora, Miguel; Luque, Juan L.; Alvarez, Carlos J.; Cobos, Pedro L.

    2012-01-01

    This article examines the relationship between individual differences in speech perception and sublexical/phonological processing in reading. We used an auditory phoneme identification task based on a /ba/-/pa/ syllable continuum to measure sensitivity and classify participants into three performance groups: poor, medium, and good categorizers. A…

  15. Vocal Learning in Grey Parrots: A Brief Review of Perception, Production, and Cross-Species Comparisons

    Science.gov (United States)

    Pepperberg, Irene M.

    2010-01-01

    This chapter briefly reviews what is known, and what remains to be understood, about Grey parrot vocal learning. I review Greys' physical capacities (issues of auditory perception and production), then discuss how these capacities are used in vocal learning and can be recruited for referential communication with humans. I discuss cross-species…

  16. Studies of infrasound production and perception by the Capercaillie (Tetrao urogallus): a reply to Freeman and Hare

    OpenAIRE

    Manley, Geoffrey A.; Lieser, Manfred; Berthold, Peter

    2011-01-01

    Studies of infrasound production and perception by the Capercaillie (Tetrao urogallus): a reply to Freeman and Hare. Manley, Geoffrey A.: Cochlear and Auditory Brainstem Physiology, IBU, Faculty V, Carl Von Ossietzky University, 26111 Oldenburg, Germany; Lieser, Manfred: Franz-Xaver-Oexle-Str. 30, 78256 Steisslingen, Germany; Berthold, Peter: Max-Planck-Institute for Ornithology, Schlossallee 2, 78315 Radolfzell, Germany.

  17. Risk perception

    International Nuclear Information System (INIS)

    The present paper outlines perceptions of radiation risks within a general framework of risk perception research. The authors comment on the importance and the implications related to the choice of terminology, including the multiple definitions of "risk", for risk perception and risk communication. They describe the main factors influencing subjective risk assessments found in the literature, and illustrate how these factors guide the different reactions to indoor radon and radioactive fallout due to nuclear accidents. They also exemplify the differences between risk assessments by experts and the public, present some successful models of perceived risk and risk acceptance, and draw some general conclusions from the research field. (author). 96 refs, 4 figs, 1 tab

  18. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization, and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals, and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin, as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  19. Hyperactive auditory processing in Williams syndrome: Evidence from auditory evoked potentials.

    Science.gov (United States)

    Zarchi, Omer; Avni, Chen; Attias, Josef; Frisch, Amos; Carmel, Miri; Michaelovsky, Elena; Green, Tamar; Weizman, Abraham; Gothelf, Doron

    2015-06-01

    The neurophysiologic aberrations underlying the auditory hypersensitivity in Williams syndrome (WS) are not well defined. The P1-N1-P2 obligatory complex and mismatch negativity (MMN) response were investigated in 18 participants with WS, and the results were compared with those of 18 age- and gender-matched typically developing (TD) controls. Results revealed significantly higher amplitudes of both the P1-N1-P2 obligatory complex and the MMN response in the WS participants than in the TD controls. The P1-N1-P2 complex showed an age-dependent reduction in the TD but not in the WS participants. Moreover, a higher P1-N1-P2 complex amplitude was associated with lower verbal comprehension scores in WS. This investigation demonstrates that central auditory processing is hyperactive in WS. The increase in auditory brain responses of both the obligatory complex and the MMN response suggests aberrant processes of auditory encoding and discrimination in WS. Results also imply that auditory processing may be subject to delayed or divergent maturation and may affect the development of higher cognitive functioning in WS. PMID:25603839

  20. Implicit temporal expectation attenuates auditory attentional blink.

    Directory of Open Access Journals (Sweden)

    Dawei Shen

    Attentional blink (AB) describes a phenomenon whereby correct identification of a first target impairs the processing of a second target (i.e., probe) nearby in time. Evidence suggests that explicit attention orienting in the time domain can attenuate the AB. Here, we used scalp-recorded, event-related potentials to examine whether auditory AB is also sensitive to implicit temporal attention orienting. Expectations were set up implicitly by varying the probability (i.e., 80% or 20%) that the probe would occur at the +2 or +8 position following target presentation. Participants showed a significant AB, which was reduced with the increased probe probability at the +2 position. The probe probability effect was paralleled by an increase in P3b amplitude elicited by the probe. The results suggest that implicit temporal attention orienting can facilitate short-term consolidation of the probe and attenuate auditory AB.
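The implicit probability manipulation described here (80% vs. 20% probe positions within a block) amounts to sampling each trial's probe position from a Bernoulli distribution. A minimal sketch, with illustrative parameters:

```python
import random

def make_trials(n_trials, p_pos2=0.8, seed=0):
    """Assign each probe to position +2 or +8 after the target,
    following the block's probability manipulation (e.g., 80% at +2)."""
    rng = random.Random(seed)
    return [2 if rng.random() < p_pos2 else 8 for _ in range(n_trials)]

trials = make_trials(500)
share_pos2 = trials.count(2) / len(trials)  # approaches 0.8 over many trials
```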

  1. Anatomy and Physiology of the Auditory Tracts

    Directory of Open Access Journals (Sweden)

    Mohammad Hosein Hekmat Ara

    1999-03-01

    Hearing is one of the most important senses of human beings. Sound waves travel through the medium of air, enter the ear canal, and strike the tympanic membrane. The middle ear transfers almost 60-80% of this mechanical energy to the inner ear by means of "impedance matching". The sound energy is then converted into a traveling wave that propagates according to its specific frequency and stimulates the organ of Corti. Receptors in this organ and their synapses transform the mechanical waves into neural signals and transfer them to the brain. The central nervous system pathway that conducts auditory signals to the auditory cortex is briefly explained here.

  2. Delayed auditory feedback in polyglot simultaneous interpreters.

    Science.gov (United States)

    Fabbro, F; Darò, V

    1995-03-01

    Twelve polyglot students of simultaneous interpretation and 12 controls (students of the faculty of Medicine) performed a verbal fluency task under amplified normal auditory feedback (NAF) and under three delayed auditory feedback (DAF) conditions with three different delay intervals (150, 200, and 250 msec). The control group showed a significant reduction in verbal fluency and a significant increase in the number of mistakes in all three DAF conditions. The interpreters' group, however, showed no significant speech disruption in either the subjects' mother tongue (L1) or their second language (L2) across all DAF conditions. Interpreters' generally high verbal fluency, along with their ability to pay less attention to their own verbal output, makes them more resistant to the interfering effects of DAF on speech. PMID:7757448

  3. Central auditory neurons have composite receptive fields.

    Science.gov (United States)

    Kozlov, Andrei S; Gentner, Timothy Q

    2016-02-01

    High-level neurons processing complex, behaviorally relevant signals are sensitive to conjunctions of features. Characterizing the receptive fields of such neurons is difficult with standard statistical tools, however, and the principles governing their organization remain poorly understood. Here, we demonstrate multiple distinct receptive-field features in individual high-level auditory neurons in a songbird, European starling, in response to natural vocal signals (songs). We then show that receptive fields with similar characteristics can be reproduced by an unsupervised neural network trained to represent starling songs with a single learning rule that enforces sparseness and divisive normalization. We conclude that central auditory neurons have composite receptive fields that can arise through a combination of sparseness and normalization in neural circuits. Our results, along with descriptions of random, discontinuous receptive fields in the central olfactory neurons in mammals and insects, suggest general principles of neural computation across sensory systems and animal classes. PMID:26787894
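The two learning constraints named in this record, sparseness and divisive normalization, are standard operations in models of sensory coding. The toy functions below illustrate them; the Treves-Rolls/Vinje-Gallant sparseness index used here is one common formalization, not necessarily the one used in the paper:

```python
import numpy as np

def divisive_normalization(responses, sigma=1.0):
    """Each unit's rectified response is divided by the pooled population activity."""
    r = np.maximum(responses, 0.0)   # rectification
    return r / (sigma + np.sum(r))   # sigma is a semi-saturation constant

def sparseness(r):
    """Treves-Rolls sparseness index, rescaled so that 1 means a single
    active unit and 0 means perfectly uniform activity."""
    r = np.asarray(r, dtype=float)
    n = r.size
    a = (np.sum(r) / n) ** 2 / (np.sum(r ** 2) / n)
    return (1 - a) / (1 - 1 / n)

# One strong unit among weak ones: high sparseness, bounded normalized output
resp = np.array([3.0, 1.0, 0.5, -2.0])
norm = divisive_normalization(resp)
```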

  4. Relationship between Speech Production and Perception in People Who Stutter

    Science.gov (United States)

    Lu, Chunming; Long, Yuhang; Zheng, Lifen; Shi, Guang; Liu, Li; Ding, Guosheng; Howell, Peter

    2016-01-01

    Speech production difficulties are apparent in people who stutter (PWS). PWS also have difficulties in speech perception compared to controls. It is unclear whether the speech perception difficulties in PWS are independent of, or related to, their speech production difficulties. To investigate this issue, functional MRI data were collected on 13 PWS and 13 controls whilst the participants performed a speech production task and a speech perception task. PWS performed more poorly than controls in the perception task, and the poorer performance was associated with a functional activity difference in the left anterior insula (part of the speech motor area) compared to controls. PWS also showed a functional activity difference in this and the surrounding area [left inferior frontal cortex (IFC)/anterior insula] in the production task compared to controls. Conjunction analysis showed that the functional activity differences between PWS and controls in the left IFC/anterior insula coincided across the perception and production tasks. Furthermore, Granger causality analysis on the resting-state fMRI data of the participants showed that the causal connection from the left IFC/anterior insula to an area in the left primary auditory cortex (Heschl's gyrus) differed significantly between PWS and controls. The strength of this connection correlated significantly with performance in the perception task. These results suggest that speech perception difficulties in PWS are associated with anomalous functional activity in the speech motor area, and that the altered functional connectivity from this area to the auditory area plays a role in the speech perception difficulties of PWS.

  5. Encoding frequency contrast in primate auditory cortex

    OpenAIRE

    Malone, Brian J.; Scott, Brian H.; Semple, Malcolm N.

    2014-01-01

    Changes in amplitude and frequency jointly determine much of the communicative significance of complex acoustic signals, including human speech. We have previously described responses of neurons in the core auditory cortex of awake rhesus macaques to sinusoidal amplitude modulation (SAM) signals. Here we report a complementary study of sinusoidal frequency modulation (SFM) in the same neurons. Responses to SFM were analogous to SAM responses in that changes in multiple parameters defining SFM...

  6. Inhibition in the Human Auditory Cortex

    OpenAIRE

    Koji Inui; Kei Nakagawa; Makoto Nishihara; Eishi Motomura; Ryusuke Kakigi

    2016-01-01

    Despite their indispensable roles in sensory processing, little is known about inhibitory interneurons in humans. Inhibitory postsynaptic potentials cannot be recorded non-invasively, at least in a pure form, in humans. We herein sought to clarify whether prepulse inhibition (PPI) in the auditory cortex reflected inhibition via interneurons using magnetoencephalography. An abrupt increase in sound pressure by 10 dB in a continuous sound was used to evoke the test response, and PPI was observe...

  7. Idealized computational models for auditory receptive fields.

    Directory of Open Access Journals (Sweden)

    Tony Lindeberg

    We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals.
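The Gammatone filters mentioned in this record have a standard closed-form impulse response, t^(n-1) e^(-2πbt) cos(2πf_c t). A minimal sketch follows, using the common 1.019 × ERB bandwidth convention (Glasberg-Moore); the sampling rate, duration, and order are illustrative choices, not values from the paper:

```python
import numpy as np

def gammatone_ir(f_c, fs, duration=0.05, order=4):
    """Impulse response of a gammatone filter centered at f_c Hz,
    with bandwidth 1.019 * ERB(f_c)."""
    t = np.arange(0.0, duration, 1.0 / fs)
    erb = 24.7 * (4.37 * f_c / 1000.0 + 1.0)   # equivalent rectangular bandwidth
    b = 1.019 * erb
    g = t ** (order - 1) * np.exp(-2.0 * np.pi * b * t) * np.cos(2.0 * np.pi * f_c * t)
    return g / np.max(np.abs(g))               # normalize peak amplitude

fs = 16000
ir = gammatone_ir(1000.0, fs)

# Filtering a sound is then a convolution with the impulse response:
signal = np.random.default_rng(0).standard_normal(fs // 10)
filtered = np.convolve(signal, ir, mode="same")
```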

  8. Implicit Temporal Expectation Attenuates Auditory Attentional Blink

    OpenAIRE

    Shen, Dawei; Alain, Claude

    2012-01-01

    Attentional blink (AB) describes a phenomenon whereby correct identification of a first target impairs the processing of a second target (i.e., probe) nearby in time. Evidence suggests that explicit attention orienting in the time domain can attenuate the AB. Here, we used scalp-recorded, event-related potentials to examine whether auditory AB is also sensitive to implicit temporal attention orienting. Expectations were set up implicitly by varying the probability (i.e., 80% or 20%) that the ...

  9. Low Power Adder Based Auditory Filter Architecture

    OpenAIRE

    2014-01-01

    Cochlear devices are powered by batteries and should have a long working life to avoid replacement of devices at regular intervals of years. Hence, devices with low power consumption are required. In cochlear devices there are numerous filters, each responsible for frequency-variant signals, which helps in identifying speech signals of different audible ranges. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder archite...
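A multiplierless filter replaces each coefficient multiplication with shifts and adds by expressing the coefficients as signed sums of powers of two, which is the kind of operation that low-power adder architectures target. The sketch below illustrates the general idea in floating point; it is not the paper's LUT architecture, and the bit widths are illustrative:

```python
def to_shift_add(coeff, bits=8):
    """Greedily decompose a fractional coefficient into signed
    power-of-two terms (a CSD-like representation)."""
    terms, residual = [], coeff
    for shift in range(bits):
        weight = 2.0 ** (-shift)
        for sign in (1, -1):
            if abs(residual - sign * weight) < abs(residual):
                terms.append((sign, shift))
                residual -= sign * weight
    return terms

def fir_multiplierless(x, coeffs, bits=10):
    """Causal FIR filtering where every tap is computed with
    scalings by 2**-k (shifts, in hardware) and additions only."""
    decomposed = [to_shift_add(c, bits) for c in coeffs]
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, terms in enumerate(decomposed):
            if n - k < 0:
                break
            for sign, shift in terms:
                acc += sign * (x[n - k] * 2.0 ** (-shift))  # one shift + one add per term
        y.append(acc)
    return y
```

Coefficients that are exact sums of a few powers of two (e.g., 0.625 = 1 - 0.5 + 0.125) decompose without error, which is why hardware designs quantize coefficients to such forms.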

  10. Spontaneous activity in the developing auditory system.

    Science.gov (United States)

    Wang, Han Chin; Bergles, Dwight E

    2015-07-01

    Spontaneous electrical activity is a common feature of sensory systems during early development. This sensory-independent neuronal activity has been implicated in promoting their survival and maturation, as well as growth and refinement of their projections to yield circuits that can rapidly extract information about the external world. Periodic bursts of action potentials occur in auditory neurons of mammals before hearing onset. This activity is induced by inner hair cells (IHCs) within the developing cochlea, which establish functional connections with spiral ganglion neurons (SGNs) several weeks before they are capable of detecting external sounds. During this pre-hearing period, IHCs fire periodic bursts of Ca(2+) action potentials that excite SGNs, triggering brief but intense periods of activity that pass through auditory centers of the brain. Although spontaneous activity requires input from IHCs, there is ongoing debate about whether IHCs are intrinsically active and their firing periodically interrupted by external inhibitory input (IHC-inhibition model), or are intrinsically silent and their firing periodically promoted by an external excitatory stimulus (IHC-excitation model). There is accumulating evidence that inner supporting cells in Kölliker's organ spontaneously release ATP during this time, which can induce bursts of Ca(2+) spikes in IHCs that recapitulate many features of auditory neuron activity observed in vivo. Nevertheless, the role of supporting cells in this process remains to be established in vivo. A greater understanding of the molecular mechanisms responsible for generating IHC activity in the developing cochlea will help reveal how these events contribute to the maturation of nascent auditory circuits. PMID:25296716

  11. Predictive uncertainty in auditory sequence processing

    OpenAIRE

    Hansen, Niels Chr.; Pearce, Marcus T.

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty - a property of listeners’ prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic e...
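Shannon entropy as a measure of predictive uncertainty can be computed directly from a model's distribution over possible next events: a flat distribution means high uncertainty, a peaked one means a strong expectation. A minimal sketch, with made-up distributions:

```python
import numpy as np

def shannon_entropy(p):
    """Entropy (in bits) of a predictive distribution over next events."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log(0) is treated as 0
    return float(-np.sum(p * np.log2(p)))

# High uncertainty: all four continuations equally likely
uniform = [0.25, 0.25, 0.25, 0.25]
# Low uncertainty: one continuation strongly expected
peaked = [0.85, 0.05, 0.05, 0.05]

assert shannon_entropy(uniform) == 2.0
assert shannon_entropy(peaked) < shannon_entropy(uniform)
```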

  12. Effect of signal to noise ratio on the speech perception ability of older adults

    Science.gov (United States)

    Shojaei, Elahe; Ashayeri, Hassan; Jafari, Zahra; Zarrin Dast, Mohammad Reza; Kamali, Koorosh

    2016-01-01

    Background: Speech perception ability depends on auditory and extra-auditory elements. The signal-to-noise ratio (SNR) is an extra-auditory element that affects the ability to follow speech normally and maintain a conversation. Difficulty perceiving speech in noise is a common complaint of the elderly. In this study, the importance of SNR magnitude as an extra-auditory effect on speech perception in noise was examined in the elderly. Methods: The speech perception in noise (SPIN) test was conducted on 25 elderly participants who had bilateral low–mid frequency normal hearing thresholds at three SNRs in the presence of ipsilateral white noise. Participants were selected by convenience sampling. Cognitive screening was done using the Persian Mini Mental State Examination (MMSE) test. Results: The independent t-test, ANOVA, and the Pearson correlation index were used for statistical analysis. There was a significant difference in word discrimination scores at silence and at the three SNRs in both ears (p≤0.047). Moreover, there was a significant difference in word discrimination scores for paired SNRs (0 and +5, 0 and +10, and +5 and +10; p≤0.04). No significant correlation was found between age and word recognition scores at silence and at the three SNRs in both ears (p≥0.386). Conclusion: Our results revealed that decreasing the signal level and increasing the competing noise considerably reduced speech perception ability in the elderly with normal hearing at low–mid thresholds. These results support the critical role of SNRs for speech perception ability in the elderly. Furthermore, our results revealed that normal-hearing elderly participants required compensatory strategies to maintain normal speech perception in challenging acoustic situations. PMID:27390712
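The SNR manipulation described in this record is conventionally implemented by scaling the noise track relative to the speech signal. A minimal sketch, with synthetic stand-ins for the speech and noise and the study's three SNR levels (0, +5, and +10 dB):

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB, from the mean power of signal and noise."""
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

def scale_noise_for_snr(signal, noise, target_db):
    """Scale the noise track so that the mixture has the requested SNR."""
    gain = 10.0 ** ((snr_db(signal, noise) - target_db) / 20.0)
    return noise * gain

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)       # stand-in for a speech recording
noise = rng.standard_normal(16000) * 0.1  # stand-in for ipsilateral white noise
mixtures = {t: speech + scale_noise_for_snr(speech, noise, t) for t in (0.0, 5.0, 10.0)}
```

The gain uses a factor of 20 (not 10) because scaling the noise amplitude by g changes its power by g², i.e., by 20·log10(g) dB.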

  13. Dissociation of Neural Networks for Predisposition and for Training-Related Plasticity in Auditory-Motor Learning.

    Science.gov (United States)

    Herholz, Sibylle C; Coffey, Emily B J; Pantev, Christo; Zatorre, Robert J

    2016-07-01

    Skill learning results in changes to brain function, but at the same time individuals strongly differ in their abilities to learn specific skills. Using a 6-week piano-training protocol and pre- and post-fMRI of melody perception and imagery in adults, we dissociate learning-related patterns of neural activity from pre-training activity that predicts learning rates. Fronto-parietal and cerebellar areas related to storage of newly learned auditory-motor associations increased their response following training; in contrast, pre-training activity in areas related to stimulus encoding and motor control, including right auditory cortex, hippocampus, and caudate nuclei, was predictive of subsequent learning rate. We discuss the implications of these results for models of perceptual and of motor learning. These findings highlight the importance of considering individual predisposition in plasticity research and applications. PMID:26139842

  14. Stroke caused auditory attention deficits in children

    Directory of Open Access Journals (Sweden)

    Karla Maria Ibraim da Freiria Elias

    2013-01-01

    OBJECTIVE: To verify auditory selective attention in children with stroke. METHODS: Dichotic tests of binaural separation (non-verbal and consonant-vowel) and binaural integration (digits and the Staggered Spondaic Words Test, SSW) were applied to 13 children (7 boys, aged 7 to 16 years) with unilateral stroke confirmed by neurological examination and neuroimaging. RESULTS: Attention performance showed significant differences in comparison to the control group in both kinds of tests. In the non-verbal test, identification in the ear opposite the lesion was diminished in the free-recall stage and, in the following stages, a difficulty in directing attention was detected. In the consonant-vowel test, a modification in perceptual asymmetry and difficulty in focusing in the attended stages was found. In the digits and SSW tests, ipsilateral, contralateral, and bilateral deficits were detected, depending on the characteristics of the lesions and the demand of the task. CONCLUSION: Stroke caused auditory attention deficits when dealing with simultaneous sources of auditory information.

  15. Concentric scheme of monkey auditory cortex

    Science.gov (United States)

    Kosaki, Hiroko; Saunders, Richard C.; Mishkin, Mortimer

    2003-04-01

    The cytoarchitecture of the rhesus monkey's auditory cortex was examined using immunocytochemical staining with parvalbumin, calbindin-D28K, and SMI32, as well as staining for cytochrome oxidase (CO). The results suggest that Kaas and Hackett's scheme of the auditory cortices can be extended to include five concentric rings surrounding an inner core. The inner core, containing areas A1 and R, is the most densely stained with parvalbumin and CO and can be separated on the basis of laminar patterns of SMI32 staining into lateral and medial subdivisions. From the inner core to the fifth (outermost) ring, parvalbumin staining gradually decreases and calbindin staining gradually increases. The first ring corresponds to Kaas and Hackett's auditory belt, and the second, to their parabelt. SMI32 staining revealed a clear border between these two. Rings 2 through 5 extend laterally into the dorsal bank of the superior temporal sulcus. The results also suggest that the rostral tip of the outermost ring adjoins the rostroventral part of the insula (area Pro) and the temporal pole, while the caudal tip adjoins the ventral part of area 7a.

  16. BALDEY: A database of auditory lexical decisions.

    Science.gov (United States)

    Ernestus, Mirjam; Cutler, Anne

    2015-01-01

    In an auditory lexical decision experiment, 5541 spoken content words and pseudowords were presented to 20 native speakers of Dutch. The words vary in phonological make-up and in number of syllables and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudowords were matched in these respects to the real words. The BALDEY ("biggest auditory lexical decision experiment yet") data file includes response times and accuracy rates, with, for each item, morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbours and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles, and frequency ratings by 75 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses. PMID:25397865
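
    The second analysis described above — comparing how well several frequency measures predict lexical decision outcomes — can be sketched as a simple regression comparison. The data below are synthetic and the measure names are placeholders; BALDEY's actual column names and file format are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Toy stand-in: a latent "true" log frequency, observed through four
# noisy measures (hypothetical analogues of written-corpus, spoken-corpus,
# subtitle, and rating-based frequency estimates).
true_freq = rng.normal(0.0, 1.0, n)
measures = {
    "written":   true_freq + rng.normal(0.0, 0.3, n),
    "spoken":    true_freq + rng.normal(0.0, 0.5, n),
    "subtitles": true_freq + rng.normal(0.0, 0.4, n),
    "ratings":   true_freq + rng.normal(0.0, 0.6, n),
}

# Simulated response times: higher-frequency words are recognized faster.
rt = 800.0 - 60.0 * true_freq + rng.normal(0.0, 50.0, n)

def r_squared(x, y):
    """Proportion of variance in y explained by a linear fit on x."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r

# Rank the measures by how much response-time variance each explains.
scores = {name: r_squared(x, rt) for name, x in measures.items()}
best = max(scores, key=scores.get)
```

    With real BALDEY data, `rt` would come from the response-time column and each measure from the corresponding frequency field; the comparison logic is unchanged.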

  17. The effects of auditory enrichment on gorillas.

    Science.gov (United States)

    Robbins, Lindsey; Margulis, Susan W

    2014-01-01

    Several studies have demonstrated that auditory enrichment can reduce stereotypic behaviors in captive animals. The purpose of this study was to determine the relative effectiveness of three different types of auditory enrichment-naturalistic sounds, classical music, and rock music-in reducing stereotypic behavior displayed by Western lowland gorillas (Gorilla gorilla gorilla). Three gorillas (one adult male, two adult females) were observed at the Buffalo Zoo for a total of 24 hr per music trial. A control observation period, during which no sounds were presented, was also included. Each music trial consisted of a total of three weeks with a 1-week control period in between each music type. The results reveal a decrease in stereotypic behaviors from the control period to naturalistic sounds. The naturalistic sounds also affected patterns of several other behaviors including locomotion. In contrast, stereotypy increased in the presence of classical and rock music. These results suggest that auditory enrichment, which is not commonly used in zoos in a systematic way, can be easily utilized by keepers to help decrease stereotypic behavior, but the nature of the stimulus, as well as the differential responses of individual animals, need to be considered. PMID:24715297

  18. Central auditory masking by an illusory tone.

    Directory of Open Access Journals (Sweden)

    Christopher J Plack

    Many natural sounds fluctuate over time. The detectability of sounds in a sequence can be reduced by prior stimulation in a process known as forward masking. Forward masking is thought to reflect neural adaptation or neural persistence in the auditory nervous system, but it has been unclear where in the auditory pathway this processing occurs. To address this issue, the present study used a "Huggins pitch" stimulus, the perceptual effects of which depend on central auditory processing. Huggins pitch is an illusory tonal sensation produced when the same noise is presented to the two ears except for a narrow frequency band that differs (is decorrelated) between the ears. The pitch sensation depends on the combination of the inputs to the two ears, a process that first occurs at the level of the superior olivary complex in the brainstem. Here it is shown that a Huggins pitch stimulus produces more forward masking in the frequency region of the decorrelation than a noise stimulus identical to the Huggins-pitch stimulus except with perfect correlation between the ears. This stimulus has a peripheral neural representation that is identical to that of the Huggins-pitch stimulus. The results show that processing in, or central to, the superior olivary complex can contribute to forward masking in human listeners.
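
    The stimulus construction described above — identical noise at the two ears except for a decorrelated narrow band — can be sketched by inverting the phase of a narrow spectral band in one ear's copy of the noise. This is an illustrative sketch: the sample rate, centre frequency, and bandwidth are arbitrary choices, not the values used in the study.

```python
import numpy as np

def huggins_pitch(fs=44100, dur=1.0, f0=600.0, bw_frac=0.16, seed=0):
    """Return a dichotic (left, right) noise pair that evokes Huggins pitch.

    The two channels are identical Gaussian noise except within a narrow
    band around f0, where the right channel's phase is inverted (a pi
    interaural phase shift). Monaurally the channels are indistinguishable
    (same magnitude spectrum); binaurally the decorrelated band is heard
    as a faint tone near f0.
    """
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    noise = rng.standard_normal(n)

    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)

    # Decorrelate only a narrow band (here +/- 8% around f0) in one ear;
    # everything outside the band stays bit-identical across ears.
    band = (freqs > f0 * (1 - bw_frac / 2)) & (freqs < f0 * (1 + bw_frac / 2))
    spec_shifted = spec.copy()
    spec_shifted[band] *= -1.0  # pi phase shift, magnitude unchanged

    left = noise
    right = np.fft.irfft(spec_shifted, n)
    return left, right

left, right = huggins_pitch()
```

    Because only phase is altered, each ear alone receives ordinary noise; this is what makes the stimulus useful for isolating central (binaural) processing from peripheral effects.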

  19. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's gyrus (HG, containing the primary auditory cortex), planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS, and an increase of the HG-to-PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  20. Consumer perceptions

    DEFF Research Database (Denmark)

    Ngapo, T. M.; Dransfield, E.; Martin, J. F.;

    2004-01-01

    Consumer focus groups in France, England, Sweden and Denmark were used to obtain insights into the decision-making involved in the choice of fresh pork and attitudes towards today's pig production systems. Many positive perceptions of pork meat were evoked. Negative images of the production systems...