WorldWideScience

Sample records for auditory perception

  1. Auditory adaptation improves tactile frequency perception.

    Science.gov (United States)

    Crommett, Lexi E; Pérez-Bellido, Alexis; Yau, Jeffrey M

    2017-01-11

    Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals shape tactile processing is unclear: perceptual interactions between contemporaneous sounds and vibrations are consistent with multiple neural mechanisms. Here we used a crossmodal adaptation paradigm, which separated auditory and tactile stimulation in time, to test the hypothesis that tactile frequency perception depends on neural circuits that also process auditory frequency. We reasoned that auditory adaptation effects would transfer to touch only if signals from both senses converge on common representations. We found that auditory adaptation can improve tactile frequency discrimination thresholds. This occurred only when adaptor and test frequencies overlapped. In contrast, auditory adaptation did not influence tactile intensity judgments. Thus, auditory adaptation enhances touch in a frequency- and feature-specific manner. A simple network model in which tactile frequency information is decoded from sensory neurons that are susceptible to auditory adaptation recapitulates these behavioral results. Our results imply that the neural circuits supporting tactile frequency perception also process auditory signals. This finding is consistent with the notion of supramodal operators performing canonical operations, like temporal frequency processing, regardless of input modality.
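
    The abstract describes its decoding model only qualitatively. Below is one minimal, hypothetical way to realize it: a population of frequency-tuned units whose tuning is sharpened near the adaptor frequency, with discriminability approximated by Fisher information. The tuning widths, gains, and the sharpening rule are illustrative assumptions, not the authors' published model.

    ```python
    import numpy as np

    freqs = np.geomspace(50, 800, 60)     # unit preferred frequencies (Hz)
    base_sigma = 0.40                     # tuning width in octaves (assumed)

    def rates(f, sigma, gain=20.0):
        """Mean response of each Gaussian-tuned unit to frequency f."""
        d = np.log2(f / freqs)            # distance in octaves
        return gain * np.exp(-0.5 * (d / sigma) ** 2)

    def fisher_info(f, sigma):
        """Fisher information for Poisson-like units: sum r'(f)^2 / r(f)."""
        r = rates(f, sigma) + 1e-9
        df = f * 1e-4
        dr = (rates(f + df, sigma) - r) / df
        return np.sum(dr ** 2 / r)

    adaptor = 200.0
    # model adaptation as a sharpening of tuning for units near the adaptor
    sharpen = 1.0 - 0.3 * np.exp(-0.5 * (np.log2(freqs / adaptor) / base_sigma) ** 2)
    sigma_adapted = base_sigma * sharpen

    for label, s in [("unadapted", base_sigma), ("adapted", sigma_adapted)]:
        thr = 1.0 / np.sqrt(fisher_info(adaptor, s))   # threshold ~ 1/sqrt(FI)
        print(f"{label:10s} discrimination threshold near adaptor ~ {thr:.3f} Hz")
    ```

    Under these assumptions the adapted population yields a lower threshold near the adaptor frequency, mirroring the frequency-specific improvement reported above.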

  2. Speech perception as complex auditory categorization

    Science.gov (United States)

    Holt, Lori L.

    2002-05-01

    Despite a long and rich history of categorization research in cognitive psychology, very little work has addressed the issue of complex auditory category formation. This is especially unfortunate because the general underlying cognitive and perceptual mechanisms that guide auditory category formation are of great importance to understanding speech perception. I will discuss a new methodological approach to examining complex auditory category formation that specifically addresses issues relevant to speech perception. This approach utilizes novel nonspeech sound stimuli to gain full experimental control over listeners' history of experience. As such, the course of learning is readily measurable. Results from this methodology indicate that the structure and formation of auditory categories are a function of the statistical input distributions of sound that listeners hear, aspects of the operating characteristics of the auditory system, and characteristics of the perceptual categorization system. These results have important implications for phonetic acquisition and speech perception.

  3. Are auditory percepts determined by experience?

    Science.gov (United States)

    Monson, Brian B; Han, Shui'Er; Purves, Dale

    2013-01-01

    Audition--what listeners hear--is generally studied in terms of the physical properties of sound stimuli and physiological properties of the auditory system. Based on recent work in vision, we here consider an alternative perspective that sensory percepts are based on past experience. In this framework, basic auditory qualities (e.g., loudness and pitch) are based on the frequency of occurrence of stimulus patterns in natural acoustic stimuli. To explore this concept of audition, we examined five well-documented psychophysical functions. The frequency of occurrence of acoustic patterns in a database of natural sound stimuli (speech) predicts some qualitative aspects of these functions, but with substantial quantitative discrepancies. This approach may offer a rationale for auditory phenomena that are difficult to explain in terms of the physical attributes of the stimuli as such.
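
    The "frequency of occurrence" idea lends itself to a simple worked example: predict the magnitude of a percept from the percentile rank of the corresponding stimulus attribute among natural acoustic input. The sketch below does this for loudness using short-window RMS amplitude; the random signal is a stand-in for a real speech corpus, and every detail here is an assumption for illustration, not the authors' analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    corpus = rng.laplace(scale=0.1, size=16000 * 60)   # placeholder "speech", 60 s at 16 kHz

    win = 160                                          # 10-ms analysis windows
    frames = corpus[: len(corpus) // win * win].reshape(-1, win)
    rms_sorted = np.sort(np.sqrt(np.mean(frames ** 2, axis=1)))

    def predicted_loudness(amplitude):
        """Percentile rank of an amplitude among natural-sound windows (0..1)."""
        return np.searchsorted(rms_sorted, amplitude) / len(rms_sorted)

    for a in [0.01, 0.05, 0.1, 0.2]:
        print(f"amplitude {a:.2f} -> predicted loudness {predicted_loudness(a):.2f}")
    ```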

  4. Are auditory percepts determined by experience?

    Directory of Open Access Journals (Sweden)

    Brian B Monson

    Full Text Available Audition--what listeners hear--is generally studied in terms of the physical properties of sound stimuli and physiological properties of the auditory system. Based on recent work in vision, we here consider an alternative perspective that sensory percepts are based on past experience. In this framework, basic auditory qualities (e.g., loudness and pitch) are based on the frequency of occurrence of stimulus patterns in natural acoustic stimuli. To explore this concept of audition, we examined five well-documented psychophysical functions. The frequency of occurrence of acoustic patterns in a database of natural sound stimuli (speech) predicts some qualitative aspects of these functions, but with substantial quantitative discrepancies. This approach may offer a rationale for auditory phenomena that are difficult to explain in terms of the physical attributes of the stimuli as such.

  5. Phonetic categorization in auditory word perception.

    Science.gov (United States)

    Ganong, W F

    1980-02-01

    To investigate the interaction in speech perception of auditory information and lexical knowledge (in particular, knowledge of which phonetic sequences are words), acoustic continua varying in voice onset time were constructed so that for each acoustic continuum, one of the two possible phonetic categorizations made a word and the other did not. For example, one continuum ranged between the word dash and the nonword tash; another used the nonword dask and the word task. In two experiments, subjects showed a significant lexical effect--that is, a tendency to make phonetic categorizations that make words. This lexical effect was greater at the phoneme boundary (where auditory information is ambiguous) than at the ends of the continua. Hence the lexical effect must arise at a stage of processing sensitive to both lexical knowledge and auditory information.
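
    The reported pattern, a lexical bias that is strongest where the acoustic evidence is ambiguous, falls out naturally from a signal-plus-bias decision model. The sketch below illustrates this with a logistic categorization function; the boundary, slope, and bias values are illustrative, not fitted to Ganong's data.

    ```python
    import numpy as np

    def p_voiced(vot_ms, lexical_bias=0.0, boundary=25.0, slope=0.4):
        """P(categorize as /d/) from voice onset time plus a lexical bias term.
        The bias shifts the decision variable, so its behavioral impact is
        automatically largest near the boundary, where the logistic is steep."""
        x = -(vot_ms - boundary) * slope + lexical_bias
        return 1.0 / (1.0 + np.exp(-x))

    vots = np.arange(5, 50, 5.0)                     # VOT continuum (ms)
    dash = p_voiced(vots, lexical_bias=+1.0)         # /d/ makes a word (dash-tash)
    task = p_voiced(vots, lexical_bias=-1.0)         # /t/ makes a word (dask-task)
    for v, a, b in zip(vots, dash, task):
        print(f"VOT {v:4.0f} ms  P(d|dash-tash)={a:.2f}  P(d|dask-task)={b:.2f}  lexical effect={a - b:+.2f}")
    ```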

  6. Auditory Perception of Complex Sounds.

    Science.gov (United States)

    1987-10-30

    (Fragmentary record; only scattered abstract and reference fragments survive.) The abstract fragment describes tone series in which "long" intervals are twice the length of "short" intervals (1), used to exemplify rhythms with both equally and unequally spaced accents; it breaks off mid-sentence ("Specifically, we were..."). The surviving reference fragments are: Monahan, C.B., Kendall, R.A., & Carterette, E.C. (1987). "The effect of melodic and rhythmic contour on recognition memory for pitch change," Perception...; and Monahan, C.B., "Parallels in rhythm and melody," in W.J. Dowling & T.J. Tighe (Eds.), The understanding of melody and rhythm. Potomac, MD: Erlbaum.

  7. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise them as made by a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  8. Auditory Spatial Perception without Vision

    Science.gov (United States)

    Voss, Patrice

    2016-01-01

    Valuable insights into the role played by visual experience in shaping spatial representations can be gained by studying the effects of visual deprivation on the remaining sensory modalities. For instance, it has long been debated how spatial hearing evolves in the absence of visual input. While several anecdotal accounts tend to associate complete blindness with exceptional hearing abilities, experimental evidence supporting such claims is, however, matched by nearly equal amounts of evidence documenting spatial hearing deficits. The purpose of this review is to summarize the key findings which support either enhancements or deficits in spatial hearing observed following visual loss and to provide a conceptual framework that isolates the specific conditions under which they occur. Available evidence will be examined in terms of spatial dimensions (horizontal, vertical, and depth perception) and in terms of frames of reference (egocentric and allocentric). Evidence suggests that while early blind individuals show superior spatial hearing in the horizontal plane, they also show significant deficits in the vertical plane. Potential explanations underlying these contrasting findings will be discussed. Early blind individuals also show spatial hearing impairments when performing tasks that require the use of an allocentric frame of reference. Results obtained with late-onset blind individuals suggest that early visual experience plays a key role in the development of both spatial hearing enhancements and deficits. PMID:28066286

  9. Behind the Scenes of Auditory Perception

    OpenAIRE

    Shamma, Shihab A.; Micheyl, Christophe

    2010-01-01

    “Auditory scenes” often contain contributions from multiple acoustic sources. These are usually heard as separate auditory “streams”, which can be selectively followed over time. How and where these auditory streams are formed in the auditory system is one of the most fascinating questions facing auditory scientists today. Findings published within the last two years indicate that both cortical and sub-cortical processes contribute to the formation of auditory streams, and they raise important...

  10. Integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2013-01-01

    Musicians often say that they not only hear but also "feel" music. To explore the contribution of tactile information to "feeling" music, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter-recognition task. Subjects discriminated between two types of sequences, "duple" (march-like rhythms) and "triple" (waltz-like rhythms), presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70-85 %) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70-90 %) when all of the metrically important notes are assigned to one channel and is reduced to 60 % when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90 %). Performance dropped dramatically when subjects were presented with incongruent auditory cues (10 %), as opposed to incongruent tactile cues (60 %), demonstrating that auditory input dominates meter perception. These observations support the notion that meter perception is a cross-modal percept with tactile inputs underlying the perception of "feeling" music.
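
    As a concrete illustration of the stimulus logic, the sketch below generates duple and triple accent sequences and assigns the metrically important (accented) notes to one channel; the note count and the assignment scheme are assumptions for illustration, not the study's exact stimuli.

    ```python
    import numpy as np

    def meter_sequence(kind, n_beats=12):
        """1 marks a metrically accented note, 0 an unaccented note."""
        period = 2 if kind == "duple" else 3      # march-like vs waltz-like
        return np.array([1 if i % period == 0 else 0 for i in range(n_beats)])

    def split_channels(seq):
        """Metrically important (accented) notes -> auditory; others -> tactile."""
        return seq == 1, seq == 0

    duple, triple = meter_sequence("duple"), meter_sequence("triple")
    print("duple accents :", duple)
    print("triple accents:", triple)
    auditory, tactile = split_channels(duple)
    print("auditory notes:", auditory.astype(int))
    print("tactile notes :", tactile.astype(int))
    ```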

  11. Perceptual Load Influences Auditory Space Perception in the Ventriloquist Aftereffect

    Science.gov (United States)

    Eramudugolla, Ranmalee; Kamke, Marc. R.; Soto-Faraco, Salvador; Mattingley, Jason B.

    2011-01-01

    A period of exposure to trains of simultaneous but spatially offset auditory and visual stimuli can induce a temporary shift in the perception of sound location. This phenomenon, known as the "ventriloquist aftereffect", reflects a realignment of auditory and visual spatial representations such that they approach perceptual alignment despite their…

  12. Context, Contrast, and Tone of Voice in Auditory Sarcasm Perception

    Science.gov (United States)

    Voyer, Daniel; Thibodeau, Sophie-Hélène; Delong, Breanna J.

    2016-01-01

    Four experiments were conducted to investigate the interplay between context and tone of voice in the perception of sarcasm. These experiments emphasized the role of contrast effects in sarcasm perception exclusively by means of auditory stimuli whereas most past research has relied on written material. In all experiments, a positive or negative…

  13. Influence of Auditory and Haptic Stimulation in Visual Perception

    Directory of Open Access Journals (Sweden)

    Shunichi Kawabata

    2011-10-01

    Full Text Available While many studies have shown that visual information affects perception in the other modalities, little is known about how auditory and haptic information affect visual perception. In this study, we investigated how auditory, haptic, or combined auditory and haptic stimulation affects visual perception. We used a behavioral task based on the subjects observing the phenomenon of two identical visual objects moving toward each other, overlapping, and then continuing their original motion. Subjects may perceive the objects as either streaming through each other or bouncing and reversing their direction of motion. With only the visual motion stimulus, subjects usually report the objects as streaming, whereas if a sound or flash is played when the objects touch each other, subjects report the objects as bouncing (the Bounce-Inducing Effect). In this study, auditory stimulation, haptic stimulation, or combined haptic and auditory stimulation was presented at various times relative to the visual overlap of the objects. Our results show that the bouncing rate was highest when haptic and auditory stimulation were presented together. This result suggests that the Bounce-Inducing Effect is enhanced by simultaneous multimodal presentation alongside visual motion. In the future, neuroscience approaches (e.g., TMS, fMRI) may be required to elucidate the brain mechanisms underlying these findings.

  14. Auditory perception of self-similarity in water sounds.

    Directory of Open Access Journals (Sweden)

    Maria Neimark Geffen

    2011-05-01

    Full Text Available Many natural signals, including environmental sounds, exhibit scale-invariant statistics: their structure is repeated at multiple scales. Such scale invariance has been identified separately across spectral and temporal correlations of natural sounds (Clarke and Voss, 1975; Attias and Schreiner, 1997; Escabi et al., 2003; Singh and Theunissen, 2003). Yet the role of scale invariance across the overall spectro-temporal structure of the sound has not been explored directly in auditory perception. Here, we identify that the sound wave of a recording of running water is a self-similar fractal, exhibiting scale invariance not only within spectral channels, but also across the full spectral bandwidth. The auditory perception of the water sound did not change with its scale. We tested the role of scale invariance in perception by using an artificial sound, which could be rendered scale-invariant. We generated a random chirp stimulus: an auditory signal controlled by two parameters, Q, controlling the relative, and r, controlling the absolute, temporal structure of the sound. Imposing scale-invariant statistics on the artificial sound was required for its perception as natural and water-like. Further, Q had to be restricted to a specific range for the sound to be perceived as natural. To detect self-similarity in the water sound, and identify Q, the auditory system needs to process the temporal dynamics of the waveform across spectral bands in terms of the number of cycles, rather than absolute timing. We propose a two-stage neural model implementing this computation. This computation may be carried out by circuits of neurons in the auditory cortex. The set of auditory stimuli developed in this study is particularly suitable for measurements of response properties of neurons in the auditory pathway, allowing for quantification of the effects of varying the spectro-temporal statistical structure of the stimulus.
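
    The exact published parameterization of the random chirp is not given in this record, but the core idea, events whose duration is a fixed number of cycles Q at their carrier frequency so that temporal extent scales as 1/f, can be sketched as follows. The envelope shape, frequency range, and the interpretation of the event rate r are assumptions.

    ```python
    import numpy as np

    fs = 44100  # sample rate (Hz)

    def random_chirp_train(Q=10, r=20.0, dur=2.0, fmin=200.0, fmax=8000.0, seed=0):
        """Superimpose brief tones, each lasting Q cycles at a random carrier,
        so event duration scales as 1/f (a scale-invariant design)."""
        rng = np.random.default_rng(seed)
        y = np.zeros(int(dur * fs))
        for _ in range(int(r * dur)):                              # r events per second
            f = np.exp(rng.uniform(np.log(fmin), np.log(fmax)))   # log-uniform carrier
            n = int(Q / f * fs)                                    # Q cycles at carrier f
            t = np.arange(n) / fs
            tone = np.sin(2 * np.pi * f * t) * np.hanning(n)       # smooth envelope
            start = rng.integers(0, len(y) - n)
            y[start:start + n] += tone
        return y / np.max(np.abs(y))

    sound = random_chirp_train()
    ```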

  15. Rodent Auditory Perception: Critical Band Limitations and Plasticity

    Science.gov (United States)

    King, Julia; Insanally, Michele; Jin, Menghan; Martins, Ana Raquel O.; D'amour, James A.; Froemke, Robert C.

    2015-01-01

    What do animals hear? While it remains challenging to adequately assess sensory perception in animal models, it is important to determine perceptual abilities in model systems to understand how physiological processes and plasticity relate to perception, learning, and cognition. Here we discuss hearing in rodents, reviewing previous and recent behavioral experiments querying acoustic perception in rats and mice, and examining the relation between behavioral data and electrophysiological recordings from the central auditory system. We focus on measurements of critical bands, which are psychoacoustic phenomena that seem to have a neural basis in the functional organization of the cochlea and the inferior colliculus. We then discuss how behavioral training, brain stimulation, and neuropathology impact auditory processing and perception. PMID:25827498

  16. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu eYue

    2016-04-01

    Full Text Available Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, constrained by sensory modality, the valence of the emotional events, and target duration. Biases in time perception could be accounted for by a...

  17. The plastic ear and perceptual relearning in auditory spatial perception.

    Science.gov (United States)

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.

  18. The plastic ear and perceptual relearning in auditory spatial perception.

    Directory of Open Access Journals (Sweden)

    Simon eCarlile

    2014-08-01

    Full Text Available The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear moulds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localisation, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear moulds or through virtual auditory space stimulation using non-individualised spectral cues. The work with ear moulds demonstrates that a relatively short period of training involving sensory-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide a spatial code but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.

  19. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Directory of Open Access Journals (Sweden)

    Yi-Huang Su

    2016-01-01

    Full Text Available Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  20. Classification of Underwater Target Echoes Based on Auditory Perception Characteristics

    Institute of Scientific and Technical Information of China (English)

    Xiukun Li; Xiangxia Meng; Hang Liu; Mingye Liu

    2014-01-01

    In underwater target detection, the bottom reverberation shares some properties with the target echo, which greatly degrades detection performance. It is therefore essential to study the differences between target echo and reverberation. In this paper, motivated by the human auditory system's ability to distinguish between sound sources, the Gammatone filter is taken as the auditory model. In addition, time-frequency perception features and auditory spectral features are extracted for separating active sonar target echoes from bottom reverberation. Features extracted from experimental data cluster tightly within each class and differ substantially between classes, which shows that this method can effectively distinguish between target echo and reverberation.
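
    The Gammatone auditory model referenced here is standard: a bank of bandpass filters with gamma-envelope impulse responses spaced on an ERB scale. A minimal sketch of such a filterbank and a band-energy feature vector follows; it is a generic implementation, not the authors' code, and the channel count and signal are placeholders.

    ```python
    import numpy as np

    fs = 48000  # sample rate (Hz)

    def erb(fc):
        """Equivalent rectangular bandwidth (Glasberg & Moore approximation)."""
        return 24.7 * (4.37 * fc / 1000 + 1)

    def gammatone_ir(fc, dur=0.025, order=4):
        """Standard 4th-order gammatone impulse response, unit energy."""
        t = np.arange(int(dur * fs)) / fs
        b = 1.019 * erb(fc)
        g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
        return g / np.sqrt(np.sum(g ** 2))

    def auditory_spectrum(x, centers):
        """Energy at the output of each gammatone channel."""
        return np.array([np.sum(np.convolve(x, gammatone_ir(fc), mode="same") ** 2)
                         for fc in centers])

    centers = np.geomspace(100, 10000, 16)                       # 16 log-spaced channels
    echo = np.random.default_rng(1).standard_normal(fs // 10)    # placeholder signal
    print(auditory_spectrum(echo, centers).round(2))
    ```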

  21. The relationship of phonological ability, speech perception and auditory perception in adults with dyslexia.

    Directory of Open Access Journals (Sweden)

    Jeremy eLaw

    2014-07-01

    Full Text Available This study investigated whether auditory, speech perception, and phonological skills are tightly interrelated or contribute independently to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal-reading adults. Phonological skills were tested by the typical threefold tasks, i.e. rapid automatic naming, verbal short-term memory, and phonological awareness. Dynamic auditory processing skills were assessed by means of a frequency modulation (FM) task and an amplitude rise time (RT) task; an intensity discrimination task (ID) was included as a non-dynamic control task. Speech perception was assessed by means of sentences-in-noise and words-in-noise tasks. Group analysis revealed significant group differences in auditory tasks (i.e. RT and ID) and in phonological processing measures, yet no differences were found for speech perception. In addition, performance on RT discrimination correlated with reading, but this relation was mediated by phonological processing and not by speech in noise. Finally, inspection of the individual scores revealed that the dyslexic readers showed an increased proportion of deviant subjects on the slow-dynamic auditory and phonological tasks, yet individual dyslexic readers do not display a clear pattern of deficiencies across the levels of processing skills. Although our results support phonological and slow-rate dynamic auditory deficits that relate to literacy, they suggest that at the individual level, problems in reading and writing cannot be explained by the cascading auditory theory. Instead, dyslexic adults seem to vary considerably in the extent to which each of the auditory and phonological factors is expressed and interacts with environmental and higher-order cognitive influences.
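
    For readers unfamiliar with these psychoacoustic tasks, the sketch below generates the three stimulus types named above (FM, rise time, intensity). Carrier frequency, modulation depth, and durations are illustrative choices, not the study's values.

    ```python
    import numpy as np

    fs = 44100
    t = np.arange(int(0.5 * fs)) / fs   # 500-ms stimuli

    def fm_tone(fc=1000.0, fm=2.0, depth=20.0):
        """Slow frequency modulation: instantaneous frequency fc + depth*cos(2*pi*fm*t)."""
        phase = 2 * np.pi * fc * t + (depth / fm) * np.sin(2 * np.pi * fm * t)
        return np.sin(phase)

    def rise_time_tone(rise_ms=15.0, fc=1000.0):
        """Tone with a linear amplitude rise of the given duration."""
        env = np.minimum(t / (rise_ms / 1000.0), 1.0)
        return env * np.sin(2 * np.pi * fc * t)

    def intensity_tone(level_db=-6.0, fc=1000.0):
        """Static-intensity control stimulus at a given level re full scale."""
        return 10 ** (level_db / 20) * np.sin(2 * np.pi * fc * t)
    ```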

  22. Toward a neurobiology of auditory object perception: What can we learn from the songbird forebrain?

    Institute of Scientific and Technical Information of China (English)

    Kai LU; David S. VICARIO

    2011-01-01

    In the acoustic world, no sounds occur entirely in isolation; they always reach the ears in combination with other sounds. How any given sound is discriminated and perceived as an independent auditory object is a challenging question in neuroscience. Although our knowledge of neural processing in the auditory pathway has expanded over the years, no good theory exists to explain how perception of auditory objects is achieved. A growing body of evidence suggests that the selectivity of neurons in the auditory forebrain is under dynamic modulation, and this plasticity may contribute to auditory object perception. We propose that stimulus-specific adaptation in the auditory forebrain of the songbird (and perhaps in other systems) may play an important role in modulating sensitivity in a way that aids discrimination, and thus can potentially contribute to auditory object perception [Current Zoology 57 (6): 671-683, 2011].

  23. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion.

    Science.gov (United States)

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information.

  24. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read, thus, these findings support the theory of temporal processing deficit.

  25. Context, Contrast, and Tone of Voice in Auditory Sarcasm Perception.

    Science.gov (United States)

    Voyer, Daniel; Thibodeau, Sophie-Hélène; Delong, Breanna J

    2016-02-01

    Four experiments were conducted to investigate the interplay between context and tone of voice in the perception of sarcasm. These experiments emphasized the role of contrast effects in sarcasm perception exclusively by means of auditory stimuli whereas most past research has relied on written material. In all experiments, a positive or negative computer-generated context spoken in a flat emotional tone was followed by a literally positive statement spoken in a sincere or sarcastic tone of voice. Participants indicated for each statement whether the intonation was sincere or sarcastic. In Experiment 1, a congruent context/tone of voice pairing (negative/sarcastic, positive/sincere) produced fast response times and proportions of sarcastic responses in the direction predicted by the tone of voice. Incongruent pairings produced mid-range proportions and slower response times. Experiment 2 introduced ambiguous contexts to determine whether a lower context/statement contrast would affect the proportion of sarcastic responses and response time. Results showed the expected findings for proportions (values between those obtained for congruent and incongruent pairings in the direction predicted by the tone of voice). However, response time failed to produce the predicted pattern, suggesting potential issues with the choice of stimuli. Experiments 3 and 4 extended the results of Experiments 1 and 2, respectively, to auditory stimuli based on written vignettes used in neuropsychological assessment. Results were exactly as predicted by contrast effects in both experiments. Taken together, the findings suggest that both context and tone influence how sarcasm is perceived while supporting the importance of contrast effects in sarcasm perception.

  26. Relating binaural pitch perception to the individual listener's auditory profile

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Dau, Torsten

    2012-01-01

    The ability of eight normal-hearing listeners and fourteen listeners with sensorineural hearing loss to detect and identify pitch contours was measured for binaural-pitch stimuli and salience-matched monaurally detectable pitches. In an effort to determine whether impaired binaural pitch perception ... were found not to perceive binaural pitch at all, despite a clear detection of monaural pitch. While both binaural and monaural pitches were detectable by all other listeners, identification scores were significantly lower for binaural than for monaural pitch. A total absence of binaural pitch sensation coexisted with a loss of a binaural signal-detection advantage in noise, without implying reduced cognitive function. Auditory filter bandwidths did not correlate with the difference in pitch identification scores between binaural and monaural pitches. However, subjects with impaired binaural ...
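
    This record does not name the binaural-pitch stimulus, but the classic example is Huggins pitch: identical noise at the two ears except for an interaural phase transition in a narrow band around f0, heard as a faint pitch only when listening with both ears. A minimal sketch under that assumption (band width and transition shape are conventional choices, not necessarily this study's):

    ```python
    import numpy as np

    fs = 44100

    def huggins_pitch(f0=600.0, dur=1.0, width=0.16, seed=0):
        """Diotic noise with a 0..2*pi interaural phase transition around f0."""
        n = int(dur * fs)
        spec = np.fft.rfft(np.random.default_rng(seed).standard_normal(n))
        freqs = np.fft.rfftfreq(n, 1 / fs)
        lo, hi = f0 * (1 - width / 2), f0 * (1 + width / 2)
        phase = np.zeros_like(freqs)
        band = (freqs >= lo) & (freqs <= hi)
        phase[band] = 2 * np.pi * (freqs[band] - lo) / (hi - lo)   # smooth transition
        left = np.fft.irfft(spec, n)
        right = np.fft.irfft(spec * np.exp(1j * phase), n)
        stereo = np.stack([left, right], axis=1)
        return stereo / np.max(np.abs(stereo))

    stimulus = huggins_pitch()   # monaurally, either channel sounds like plain noise
    ```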

  27. Direct Contribution of Auditory Motion Information to Sound-Induced Visual Motion Perception

    Directory of Open Access Journals (Sweden)

    Souta Hidaka

    2011-10-01

    Full Text Available We have recently demonstrated that alternating left-right sound sources induce motion perception in static visual stimuli along the horizontal plane (SIVM: sound-induced visual motion perception; Hidaka et al., 2009). The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in a situation where auditory positional information would have little influence on the perceived position of visual stimuli; the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.

  28. Auditory feedback affects perception of effort when exercising with a Pulley machine

    DEFF Research Database (Denmark)

    Bordegoni, Monica; Ferrise, Francesco; Grani, Francesco

    2013-01-01

    In this paper we describe an experiment that investigates the role of auditory feedback in affecting the perception of effort when using a physical pulley machine. Specifically, we investigated whether variations in the amplitude and frequency content of the pulley sound affect the perception of effort. ... Results show that variations in frequency content affect the perception of effort.

  29. Neural coding and perception of pitch in the normal and impaired human auditory system

    DEFF Research Database (Denmark)

    Santurette, Sébastien

    2011-01-01

    ... for a variety of basic auditory tasks, indicating that it may be a crucial measure to consider for hearing-loss characterization. In contrast to hearing-impaired listeners, adults with dyslexia showed no deficits in binaural pitch perception, suggesting intact low-level auditory mechanisms. The second part ...

  30. Comparison of Auditory Perception in Cochlear Implanted Children with and without Additional Disabilities

    Directory of Open Access Journals (Sweden)

    Seyed Basir Hashemi

    2016-05-01

    Full Text Available Background: The number of children with cochlear implants who have other difficulties, such as attention deficits and cerebral palsy, has increased dramatically. Despite the need for information on the results of cochlear implantation in this group, the available literature is extremely limited. We, therefore, sought to compare the levels of auditory perception in children with cochlear implants with and without additional disabilities. Methods: A spondee test comprising 20 two-syllable words was performed. The data analysis was done using SPSS, version 19. Results: Thirty-one children who had received cochlear implants 2 years previously and were at an average age of 7.5 years were compared via the spondee test. Of the 31 children, 15 had one or more additional disabilities. The data analysis indicated that the mean score of auditory perception in this group was approximately 30 points below that of the children with cochlear implants who had no additional disabilities. Conclusion: Although there was an improvement in the auditory perception of all the children with cochlear implants, there was a noticeable difference in the level of auditory perception between those with and without additional disabilities. Deafness combined with additional disabilities made these children dependent on lip reading alongside auditory communication. In addition, the level of auditory perception in the children with cochlear implants who had more than one additional disability was significantly lower than that of the other children with cochlear implants who had one additional disability.

  31. The Perception of Cooperativeness Without Any Visual or Auditory Communication.

    Science.gov (United States)

    Chang, Dong-Seon; Burger, Franziska; Bülthoff, Heinrich H; de la Rosa, Stephan

    2015-12-01

    Perceiving social information such as the cooperativeness of another person is an important part of human interaction. But can people perceive the cooperativeness of others even without any visual or auditory information? In a novel experimental setup, we connected two people with a rope and made them accomplish a point-collecting task together while they could not see or hear each other. We observed a consistently emerging turn-taking behavior in the interactions and installed a confederate in a subsequent experiment who either minimized or maximized this behavior. Participants experienced this only through the haptic force-feedback of the rope and made evaluations about the confederate after each interaction. We found that perception of cooperativeness was significantly affected only by the manipulation of this turn-taking behavior. Gender- and size-related judgments also significantly differed. Our results suggest that people can perceive social information such as the cooperativeness of other people even in situations where possibilities for communication are minimal.

  32. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2012-01-01

    Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information in "feeling" musical rhythm, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms) presented in three conditions: 1) Unimodal inputs (auditory or tactile alone), 2) Various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts, and 3) Simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects were presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.

  33. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Directory of Open Access Journals (Sweden)

    Juan Huang

    Full Text Available Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information in "feeling" musical rhythm, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms), presented in three conditions: 1) unimodal inputs (auditory or tactile alone), 2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts, and 3) simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects were presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.

  34. Auditory cortical activity during cochlear implant-mediated perception of spoken language, melody, and rhythm.

    Science.gov (United States)

    Limb, Charles J; Molloy, Anne T; Jiradejvong, Patpong; Braun, Allen R

    2010-03-01

    Despite the significant advances in language perception for cochlear implant (CI) recipients, music perception continues to be a major challenge for implant-mediated listening. Our understanding of the neural mechanisms that underlie successful implant listening remains limited. To our knowledge, this study represents the first neuroimaging investigation of music perception in CI users, with the hypothesis that CI subjects would demonstrate greater auditory cortical activation than normal hearing controls. H₂¹⁵O positron emission tomography (PET) was used here to assess auditory cortical activation patterns in ten postlingually deafened CI patients and ten normal hearing control subjects. Subjects were presented with language, melody, and rhythm tasks during scanning. Our results show significant auditory cortical activation in implant subjects in comparison to control subjects for language, melody, and rhythm. The greatest activity in CI users compared to controls was seen for language tasks, which is thought to reflect both implant and neural specializations for language processing. For musical stimuli, PET scanning revealed significantly greater activation during rhythm perception in CI subjects (compared to control subjects), and the least activation during melody perception, which was the most difficult task for CI users. These results may suggest a possible relationship between auditory performance and degree of auditory cortical activation in implant recipients that deserves further study.

  35. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    A better understanding of how the human auditory system represents and analyzes sounds and how hearing impairment affects such processing is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication as well as for applications in hearing-instrument and speech technology. In this thesis, the primary focus was on the development and evaluation of a computational model of human auditory signal-processing and perception. The model was initially designed to simulate the normal-hearing auditory system with particular focus on the nonlinear processing ... aimed at experimentally characterizing the effects of cochlear damage on listeners' auditory processing, in terms of sensitivity loss and reduced temporal and spectral resolution. The results showed that listeners with comparable audiograms can have very different estimated cochlear input ...

  36. Task-dependent calibration of auditory spatial perception through environmental visual observation.

    Science.gov (United States)

    Tonelli, Alessia; Brayda, Luca; Gori, Monica

    2015-01-01

    Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve precision of the final multisensory estimate. However, the amount or the temporal extent of visual information that is sufficient to influence auditory perception is still unknown. It is therefore interesting to know if vision can improve auditory precision through a short-term environmental observation preceding the audio task and whether this influence is task-specific or environment-specific or both. To test these issues, we investigated possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (normal room and anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task but not in the MAA after the observation of a normal room. No improvement was found when performing the same task in an anechoic chamber. In addition, no difference was found between a condition of short environment observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration. Echoes may mediate the transfer of information from the visual to the auditory system.

  37. Auditory Signal Processing in Communication: Perception and Performance of Vocal Sounds

    Science.gov (United States)

    Prather, Jonathan F.

    2013-01-01

    Learning and maintaining the sounds we use in vocal communication require accurate perception of the sounds we hear performed by others and feedback-dependent imitation of those sounds to produce our own vocalizations. Understanding how the central nervous system integrates auditory and vocal-motor information to enable communication is a fundamental goal of systems neuroscience, and insights into the mechanisms of those processes will profoundly enhance clinical therapies for communication disorders. Gaining the high-resolution insight necessary to define the circuits and cellular mechanisms underlying human vocal communication is presently impractical. Songbirds are the best animal model of human speech, and this review highlights recent insights into the neural basis of auditory perception and feedback-dependent imitation in those animals. Neural correlates of song perception are present in auditory areas, and those correlates are preserved in the auditory responses of downstream neurons that are also active when the bird sings. Initial tests indicate that singing-related activity in those downstream neurons is associated with vocal-motor performance as opposed to the bird simply hearing itself sing. Therefore, action potentials related to auditory perception and action potentials related to vocal performance are co-localized in individual neurons. Conceptual models of song learning involve comparison of vocal commands and the associated auditory feedback to compute an error signal that is used to guide refinement of subsequent song performances, yet the sites of that comparison remain unknown. Convergence of sensory and motor activity onto individual neurons points to a possible mechanism through which auditory and vocal-motor signals may be linked to enable learning and maintenance of the sounds used in vocal communication. PMID:23827717

  38. The simultaneous perception of auditory-tactile stimuli in voluntary movement.

    Science.gov (United States)

    Hao, Qiao; Ogata, Taiki; Ogawa, Ken-Ichiro; Kwon, Jinhwan; Miyake, Yoshihiro

    2015-01-01

    The simultaneous perception of multimodal information in the environment during voluntary movement is very important for effective reactions to the environment. Previous studies have found that voluntary movement affects the simultaneous perception of auditory and tactile stimuli. However, the results of these experiments are not completely consistent, and the differences may be attributable to methodological differences in the previous studies. In this study, we investigated the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli using a temporal order judgment task with voluntary movement, involuntary movement, and no movement. To eliminate the potential effect of stimulus predictability and the effect of spatial information associated with large-scale movement in the previous studies, we randomized the interval between the start of movement and the first stimulus, and used small-scale movement. As a result, the point of subjective simultaneity (PSS) during voluntary movement shifted from the tactile stimulus being first during involuntary movement or no movement to the auditory stimulus being first. The just noticeable difference (JND), an indicator of temporal resolution, did not differ across the three conditions. These results indicate that voluntary movement itself affects the PSS in auditory-tactile simultaneous perception, but it does not influence the JND. In the discussion of these results, we suggest that simultaneous perception may be affected by the efference copy.
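
    PSS and JND are conventionally estimated by fitting a psychometric function to temporal-order judgments. The sketch below shows the standard cumulative-Gaussian fit; it is a generic method sketch, not the study's analysis code, and the response proportions are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def psychometric(soa, pss, sigma):
        """P('tactile first') as a cumulative Gaussian of SOA (SOA < 0: auditory first)."""
        return norm.cdf(soa, loc=pss, scale=sigma)

    soas = np.array([-120, -80, -40, 0, 40, 80, 120], dtype=float)           # ms
    p_tactile_first = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.92, 0.97])   # illustrative data

    (pss, sigma), _ = curve_fit(psychometric, soas, p_tactile_first, p0=[0.0, 50.0])
    jnd = norm.ppf(0.75) * sigma    # half the 25%-75% interquartile range
    print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
    ```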

  39. The effect of music on auditory perception in cochlear-implant users and normal-hearing listeners

    NARCIS (Netherlands)

    Fuller, Christina Diechina

    2016-01-01

    Cochlear implants (CIs) are auditory prostheses for severely deaf people who do not benefit from conventional hearing aids. Speech perception is reasonably good with CIs; other signals, such as music, are challenging to perceive. First, the perception of music and music-related perception in CI users ...

  40. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    Science.gov (United States)

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise.
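
    The alpha-power measure at the center of this finding is conventionally computed as 8-12 Hz band power from a spectral estimate. A generic sketch on a toy signal follows; it is not the authors' MEG pipeline, and the signal and parameters are placeholders.

    ```python
    import numpy as np
    from scipy.signal import welch

    fs = 1000
    rng = np.random.default_rng(2)
    t = np.arange(10 * fs) / fs
    # toy sensor signal: a 10-Hz rhythm buried in noise
    meg = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(len(t))

    f, pxx = welch(meg, fs=fs, nperseg=2 * fs)     # Welch power spectral density
    alpha = (f >= 8) & (f <= 12)
    alpha_power = np.trapz(pxx[alpha], f[alpha])   # integrate over the alpha band
    print(f"alpha-band power: {alpha_power:.3f}")
    ```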

  1. The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis.

    Directory of Open Access Journals (Sweden)

    Aniruddh D. Patel

    2014-05-01

    Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This action simulation for auditory prediction (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in nonhuman primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.

  2. Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation

    Directory of Open Access Journals (Sweden)

    Enrique A Lopez-Poveda

    2013-07-01

    Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects.
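
    The core intuition, that aggregating more stochastic spike-like samplers yields a more faithful waveform representation, can be sketched in a few lines. The toy version below is not the authors' vocoder; the sampling rule and all parameters are illustrative assumptions. Each simulated afferent keeps a waveform sample with probability proportional to its instantaneous amplitude, and the correlation between the aggregate and the original signal rises with the number of afferents:

        # Toy illustration of stochastic undersampling: each "afferent" keeps
        # a waveform sample with probability proportional to its instantaneous
        # intensity; aggregating more afferents recovers the waveform better.
        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0, 0.05, 2000)            # 50 ms at 40 kHz
        x = np.sin(2 * np.pi * 440 * t)           # a 440 Hz tone

        def stochastic_copy(signal, gain=0.3):
            """One afferent: keep each sample with prob. ~ |amplitude|."""
            p = np.clip(gain * np.abs(signal), 0.0, 1.0)
            kept = rng.random(signal.size) < p
            return np.where(kept, signal, 0.0)

        for n_afferents in (1, 10, 100):
            aggregate = np.mean(
                [stochastic_copy(x) for _ in range(n_afferents)], axis=0)
            # Correlation with the original waveform rises with fiber count.
            r = np.corrcoef(x, aggregate)[0, 1]
            print(f"{n_afferents:4d} afferents: r = {r:.3f}")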

  3. Early Experience of Sex Hormones as a Predictor of Reading, Phonology, and Auditory Perception

    Science.gov (United States)

    Beech, John R.; Beauvois, Michael W.

    2006-01-01

    Previous research has indicated possible reciprocal connections between phonology and reading, and also connections between aspects of auditory perception and reading. The present study investigates these associations further by examining the potential influence of prenatal androgens using measures of digit ratio (the ratio of the lengths of the…

  4. Development of Auditory Perception of Musical Sounds by Children in the First Six Grades.

    Science.gov (United States)

    Petzold, Robert G.

    The auditory perception of musical sounds by a sample of 600 children in the first 6 grades was studied. Three tests were constructed for this study. Their content was based upon an extensive analysis of tonal and rhythmic configurations found in the songs children sing. The 45-item and one of the 20-item tests were designed to collect data…

  5. Auditory-Visual Perception of Changing Distance by Human Infants.

    Science.gov (United States)

    Walker-Andrews, Arlene S.; Lennon, Elizabeth M.

    1985-01-01

    Examines, in two experiments, 5-month-old infants' sensitivity to auditory-visual specification of distance and direction of movement. One experiment presented two films with soundtracks in either a match or mismatch condition; the second showed the two films side-by-side with a single soundtrack appropriate to one. Infants demonstrated visual…

  6. Tracing the emergence of categorical speech perception in the human auditory system.

    Science.gov (United States)

    Bidelman, Gavin M; Moreno, Sylvain; Alain, Claude

    2013-10-01

    Speech perception requires the effortless mapping from smooth, seemingly continuous changes in sound features into discrete perceptual units, a conversion exemplified in the phenomenon of categorical perception. Explaining how/when the human brain performs this acoustic-phonetic transformation remains an elusive problem in current models and theories of speech perception. In previous attempts to decipher the neural basis of speech perception, it is often unclear whether the alleged brain correlates reflect an underlying percept or merely changes in neural activity that covary with parameters of the stimulus. Here, we recorded neuroelectric activity generated at both cortical and subcortical levels of the auditory pathway elicited by a speech vowel continuum whose percept varied categorically from /u/ to /a/. This integrative approach allows us to characterize how various auditory structures code, transform, and ultimately render the perception of speech material as well as dissociate brain responses reflecting changes in stimulus acoustics from those that index true internalized percepts. We find that activity from the brainstem mirrors properties of the speech waveform with remarkable fidelity, reflecting progressive changes in speech acoustics but not the discrete phonetic classes reported behaviorally. In comparison, patterns of late cortical evoked activity contain information reflecting distinct perceptual categories and predict the abstract phonetic speech boundaries heard by listeners. Our findings demonstrate a critical transformation in neural speech representations between brainstem and early auditory cortex analogous to an acoustic-phonetic mapping necessary to generate categorical speech percepts. Analytic modeling demonstrates that a simple nonlinearity accounts for the transformation between early (subcortical) brain activity and subsequent cortical/behavioral responses to speech (>150-200 ms), thereby describing a plausible mechanism by which the…

  7. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    Science.gov (United States)

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese. It also contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody.

  8. (A)musicality in Williams syndrome: Examining relationships among auditory perception, musical skill, and emotional responsiveness to music

    Directory of Open Access Journals (Sweden)

    Miriam eLense

    2013-08-01

    Williams syndrome (WS), a genetic neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and typically developing individuals with and without amusia.

  9. The effect of psychological stress and expectation on auditory perception: A signal detection analysis.

    Science.gov (United States)

    Hoskin, Robert; Hunter, Mike D; Woodruff, Peter W R

    2014-11-01

    Both psychological stress and predictive signals relating to expected sensory input are believed to influence perception, an influence which, when disrupted, may contribute to the generation of auditory hallucinations. The effect of stress and semantic expectation on auditory perception was therefore examined in healthy participants using an auditory signal detection task requiring the detection of speech from within white noise. Trait anxiety was found to predict the extent to which stress influenced response bias, resulting in more anxious participants adopting a more liberal criterion, and therefore experiencing more false positives, when under stress. While semantic expectation was found to increase sensitivity, its presence also generated a shift in response bias towards reporting a signal, suggesting that the erroneous perception of speech became more likely. These findings provide a potential cognitive mechanism that may explain the impact of stress on hallucination-proneness, by suggesting that stress has the tendency to alter response bias in highly anxious individuals. These results also provide support for the idea that top-down processes such as those relating to semantic expectation may contribute to the generation of auditory hallucinations.
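
    For reference, the sensitivity and response bias indices in a yes/no signal detection analysis are computed from hit and false-alarm rates via z-transforms. A minimal sketch with made-up rates (not the study's data), showing how a more liberal criterion produces more false alarms at essentially unchanged sensitivity:

        # Minimal signal-detection computation: sensitivity (d') and response
        # bias (criterion c) from hit and false-alarm rates. The rates below
        # are made up for illustration.
        from scipy.stats import norm

        def sdt_indices(hit_rate, fa_rate):
            z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
            d_prime = z_h - z_f                 # sensitivity
            criterion = -0.5 * (z_h + z_f)      # bias; negative = liberal
            return d_prime, criterion

        # Same sensitivity, but a more liberal criterion under stress:
        print(sdt_indices(hit_rate=0.80, fa_rate=0.20))   # baseline
        print(sdt_indices(hit_rate=0.90, fa_rate=0.35))   # under stress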

  10. Tactile stimulation and hemispheric asymmetries modulate auditory perception and neural responses in primary auditory cortex.

    Science.gov (United States)

    Hoefer, M; Tyll, S; Kanowski, M; Brosch, M; Schoenfeld, M A; Heinze, H-J; Noesselt, T

    2013-10-01

    Although multisensory integration has been an important area of recent research, most studies have focused on audiovisual integration. Importantly, however, the combination of audition and touch can guide our behavior just as effectively, which we studied here using psychophysics and functional magnetic resonance imaging (fMRI). We tested whether task-irrelevant tactile stimuli would enhance auditory detection, and whether hemispheric asymmetries would modulate these audiotactile benefits using lateralized sounds. Spatially aligned task-irrelevant tactile stimuli could occur either synchronously or asynchronously with the sounds. Auditory detection was enhanced by non-informative synchronous and asynchronous tactile stimuli, if presented on the left side. Elevated fMRI-signals to left-sided synchronous bimodal stimulation were found in primary auditory cortex (A1). Adjacent regions (planum temporale, PT) expressed enhanced BOLD-responses for synchronous and asynchronous left-sided bimodal conditions. Additional connectivity analyses seeded in right-hemispheric A1 and PT for both bimodal conditions showed enhanced connectivity with right-hemispheric thalamic, somatosensory and multisensory areas that scaled with subjects' performance. Our results indicate that functional asymmetries interact with audiotactile interplay, which can be observed for left-lateralized stimulation in the right hemisphere. There, audiotactile interplay recruits a functional network of unisensory cortices, and the strength of these functional network connections is directly related to subjects' perceptual sensitivity.

  11. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan eLuo

    2012-05-01

    Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
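
    Phase tracking of this kind is commonly quantified as inter-trial phase coherence (ITC): the resultant length of single-trial phases at a given frequency, near 1 when the oscillation locks to the stimulus structure and near 0 when phases are random. A compact numpy sketch on simulated trials (the probe frequency and parameters are illustrative assumptions):

        # Compact inter-trial phase coherence (ITC) computation: the resultant
        # length of single-trial Fourier phases at one frequency. Simulated
        # trials; the probe frequency and parameters are illustrative.
        import numpy as np

        rng = np.random.default_rng(3)
        fs, n_trials, n_samples = 500, 60, 500        # 1 s trials at 500 Hz
        t = np.arange(n_samples) / fs
        freq = 5.0                                     # theta-band probe (Hz)

        def itc(trials, freq, fs):
            spectra = np.fft.rfft(trials, axis=1)
            k = int(round(freq * trials.shape[1] / fs))
            phases = np.angle(spectra[:, k])
            return np.abs(np.mean(np.exp(1j * phases)))

        signal = np.sin(2 * np.pi * freq * t)
        locked = signal + rng.normal(0, 1.0, (n_trials, n_samples))
        random_trials = rng.normal(0, 1.0, (n_trials, n_samples))
        print(f"phase-locked ITC: {itc(locked, freq, fs):.2f}")
        print(f"noise-only ITC:   {itc(random_trials, freq, fs):.2f}")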

  12. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

    We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.

  13. Modeling auditory perception of individual hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

    … selectivity. Three groups of listeners were considered: (a) normal hearing listeners; (b) listeners with a mild-to-moderate sensorineural hearing loss; and (c) listeners with a severe sensorineural hearing loss. A fixed set of model parameters was derived for each hearing-impaired listener. The simulations showed that, in most cases, the reduced or absent cochlear compression, associated with outer hair-cell loss, quantitatively accounts for broadened auditory filters, while a combination of reduced compression and reduced inner hair-cell function accounts for decreased sensitivity and slower recovery from…

  14. Maintaining realism in auditory length-perception experiments

    DEFF Research Database (Denmark)

    Kirkwood, Brent Christopher

    2005-01-01

    Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length-estimation experiment. A comparison of the length-estimation accuracy for the three presentation methods indicates that the choice of presentation method is important for maintaining realism and for maintaining the acoustic cues utilized by listeners in perceiving length.

  15. The influence of presentation method on auditory length perception

    DEFF Research Database (Denmark)

    Kirkwood, Brent Christopher

    2005-01-01

    Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length-estimation experiment. A comparison of the length-estimation accuracy for the three presentation methods indicates that the choice of presentation method is important for maintaining realism and for maintaining the acoustic cues utilized by listeners in perceiving length.

  17. Neuromodulatory Effects of Auditory Training and Hearing Aid Use on Audiovisual Speech Perception in Elderly Individuals

    Science.gov (United States)

    Yu, Luodi; Rao, Aparna; Zhang, Yang; Burton, Philip C.; Rishiq, Dania; Abrams, Harvey

    2017-01-01

    Although audiovisual (AV) training has been shown to improve overall speech perception in hearing-impaired listeners, there has been a lack of direct brain imaging data to help elucidate the neural networks and neural plasticity associated with hearing aid (HA) use and auditory training targeting speechreading. For this purpose, the current clinical case study reports functional magnetic resonance imaging (fMRI) data from two hearing-impaired patients who were first-time HA users. During the study period, both patients used HAs for 8 weeks; only one received a training program named ReadMyQuips™ (RMQ) targeting speechreading during the second half of the study period for 4 weeks. Identical fMRI tests were administered at pre-fitting and at the end of the 8 weeks. Regions of interest (ROI), including auditory cortex and visual cortex for uni-sensory processing and superior temporal sulcus (STS) for AV integration, were identified for each person through an independent functional localizer task. The results showed experience-dependent changes involving ROIs of auditory cortex, STS and functional connectivity between uni-sensory ROIs and STS from pretest to posttest in both cases. These data provide initial evidence for malleable experience-driven cortical functionality for AV speech perception in elderly hearing-impaired people and call for further studies with a much larger subject sample and systematic controls to fill in the knowledge gap and understand brain plasticity associated with auditory rehabilitation in the aging population. PMID:28270763

  18. Meaningful auditory information enhances perception of visual biological motion.

    Science.gov (United States)

    Arrighi, Roberto; Marini, Francesco; Burr, David

    2009-04-30

    Robust perception requires efficient integration of information from our various senses. Much recent electrophysiology points to neural areas responsive to multisensory stimulation, particularly audiovisual stimulation. However, psychophysical evidence for functional integration of audiovisual motion has been ambiguous. In this study we measure perception of an audiovisual form of biological motion, tap dancing. The results show that the audio tap information interacts with visual motion information, but only when in synchrony, demonstrating a functional combination of audiovisual information in a natural task. The advantage of multimodal combination was better than the optimal maximum likelihood prediction.
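
    The "optimal maximum likelihood prediction" referred to here is the standard cue-combination rule: given independent unimodal estimates with variances σ²_A and σ²_V, the predicted bimodal variance is σ²_AV = σ²_A σ²_V / (σ²_A + σ²_V), which is always at or below the better single cue. A short check with hypothetical thresholds:

        # Standard maximum-likelihood cue-combination prediction for the
        # bimodal (audiovisual) threshold from unimodal thresholds.
        # The unimodal values below are hypothetical, not the study's.
        sigma_a, sigma_v = 2.0, 3.0   # unimodal thresholds (arbitrary units)
        var_av = (sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2)
        print(f"predicted bimodal threshold: {var_av**0.5:.2f}")  # < min(2, 3)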

  19. Age differences in visual-auditory self-motion perception during a simulated driving task

    Directory of Open Access Journals (Sweden)

    Robert eRamkhalawansingh

    2016-04-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.

  20. Beat Gestures Modulate Auditory Integration in Speech Perception

    Science.gov (United States)

    Biau, Emmanuel; Soto-Faraco, Salvador

    2013-01-01

    Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words…

  1. Tactile enhancement of auditory and visual speech perception in untrained perceivers

    Science.gov (United States)

    Gick, Bryan; Jóhannsdóttir, Kristín M.; Gibraiel, Diana; Mühlbauer, Jeff

    2008-01-01

    A single pool of untrained subjects was tested for interactions across two bimodal perception conditions: audio-tactile, in which subjects heard and felt speech, and visual-tactile, in which subjects saw and felt speech. Identifications of English obstruent consonants were compared in bimodal and no-tactile baseline conditions. Results indicate that tactile information enhances speech perception by about 10 percent, regardless of which other mode (auditory or visual) is active. However, within-subject analysis indicates that individual subjects who benefit more from tactile information in one cross-modal condition tend to benefit less from tactile information in the other. PMID:18396924

  2. Effects of prior stimulus and prior perception on neural correlates of auditory stream segregation.

    Science.gov (United States)

    Snyder, Joel S; Holder, W Trent; Weintraub, David M; Carter, Olivia L; Alain, Claude

    2009-11-01

    We examined whether effects of prior experience are mediated by distinct brain processes from those processing current stimulus features. We recorded event-related potentials (ERPs) during an auditory stream segregation task that presented an adaptation sequence with a small, intermediate, or large frequency separation between low and high tones (Δf), followed by a test sequence with intermediate Δf. Perception of two streams during the test was facilitated by small prior Δf and by prior perception of two streams and was accompanied by more positive ERPs. The scalp topography of these perception-related changes in ERPs was different from that observed for ERP modulations due to increasing the current Δf. These results reveal complex interactions between stimulus-driven activity and temporal-context-based processes and suggest a complex set of brain areas involved in modulating perception based on current and previous experience.

  3. Mechanisms Underlying Auditory Hallucinations—Understanding Perception without Stimulus

    Directory of Open Access Journals (Sweden)

    Sukhwinder S. Shergill

    2013-04-01

    Auditory verbal hallucinations (AVH) are a common phenomenon, occurring in the “healthy” population as well as in several mental illnesses, most notably schizophrenia. Current thinking supports a spectrum conceptualisation of AVH: several neurocognitive hypotheses of AVH have been proposed, including the “feed-forward” model of failure to provide appropriate information to somatosensory cortices so that stimuli appear unbidden, and an “aberrant memory model” implicating deficient memory processes. Neuroimaging and connectivity studies are in broad agreement with these, showing a general dysconnectivity between frontotemporal regions involved in language, memory and salience properties. Disappointingly, many AVH remain resistant to standard treatments and persist for many years. There is a need to develop novel therapies to augment existing pharmacological and psychological therapies: transcranial magnetic stimulation has emerged as a potential treatment, though more recent clinical data has been less encouraging. Our understanding of AVH remains incomplete though much progress has been made in recent years. We herein provide a broad overview and review of this.

  4. Mechanisms Underlying Auditory Hallucinations—Understanding Perception without Stimulus

    Science.gov (United States)

    Tracy, Derek K.; Shergill, Sukhwinder S.

    2013-01-01

    Auditory verbal hallucinations (AVH) are a common phenomenon, occurring in the “healthy” population as well as in several mental illnesses, most notably schizophrenia. Current thinking supports a spectrum conceptualisation of AVH: several neurocognitive hypotheses of AVH have been proposed, including the “feed-forward” model of failure to provide appropriate information to somatosensory cortices so that stimuli appear unbidden, and an “aberrant memory model” implicating deficient memory processes. Neuroimaging and connectivity studies are in broad agreement with these, showing a general dysconnectivity between frontotemporal regions involved in language, memory and salience properties. Disappointingly, many AVH remain resistant to standard treatments and persist for many years. There is a need to develop novel therapies to augment existing pharmacological and psychological therapies: transcranial magnetic stimulation has emerged as a potential treatment, though more recent clinical data has been less encouraging. Our understanding of AVH remains incomplete though much progress has been made in recent years. We herein provide a broad overview and review of this. PMID:24961419

  5. Attentional modulation and domain-specificity underlying the neural organization of auditory categorical perception.

    Science.gov (United States)

    Bidelman, Gavin M; Walker, Breya S

    2017-03-01

    Categorical perception (CP) is highly evident in audition when listeners' perception of speech sounds abruptly shifts identity despite equidistant changes in stimulus acoustics. While CP is an inherent property of speech perception, how (if) it is expressed in other auditory modalities (e.g., music) is less clear. Moreover, prior neuroimaging studies have been equivocal on whether attentional engagement is necessary for the brain to categorically organize sound. To address these questions, we recorded neuroelectric brain responses [event-related potentials (ERPs)] from listeners as they rapidly categorized sounds along a speech and music continuum (active task) or during passive listening. Behaviorally, listeners achieved sharper psychometric functions and faster identification for speech than for musical stimuli, which were perceived in a continuous mode. Behavioral results coincided with stronger ERP differentiation between prototypical and ambiguous tokens (i.e., categorical processing) for speech but not for music. Neural correlates of CP were only observed when listeners actively attended to the auditory signal. These findings were corroborated by brain-behavior associations; changes in neural activity predicted more successful CP (psychometric slopes) for active but not passively evoked ERPs. Our results demonstrate that auditory categorization is influenced by attention (active > passive) and is stronger for more familiar/overlearned stimulus domains (speech > music). In contrast to previous studies examining highly trained listeners (i.e., musicians), we infer that (i) CP skills are largely domain-specific and do not generalize to stimuli for which a listener has no immediate experience and (ii) categorical neural processing requires active engagement with the auditory stimulus.

  6. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    Science.gov (United States)

    Burnham, Denis; Dodd, Barbara

    2004-12-01

    The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba], visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants, [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information.

  7. PERCEVAL: a Computer-Driven System for Experimentation on Auditory and Visual Perception

    CERN Document Server

    André, Carine; Cavé, Christian; Teston, Bernard

    2007-01-01

    Since perception tests are highly time-consuming, there is a need to automate as many operations as possible, such as stimulus generation, procedure control, perception testing, and data analysis. The computer-driven system we are presenting here meets these objectives. To achieve large flexibility, the tests are controlled by scripts. The system's core software resembles that of a lexical-syntactic analyzer, which reads and interprets script files sent to it. The execution sequence (trial) is modified in accordance with the commands and data received. This type of operation provides a great deal of flexibility and supports a wide variety of tests such as auditory-lexical decision making, phoneme monitoring, gating, phonetic categorization, word identification, voice quality, etc. To achieve good performance, we were careful about timing accuracy, which is the greatest problem in computerized perception tests.

  8. [Age differences of event-related potentials in the perception of successive and spatial components of auditory information].

    Science.gov (United States)

    Portnova, G V; Martynova, O V; Ivanitskiĭ, G A

    2014-01-01

    The perception of spatial and successive contexts of auditory information develops during human ontogeny. We compared event-related potentials (ERPs) recorded in 5- to 6-year-old children (N = 15) and adults (N = 15) in response to a digital series with omitted digits to explore age differences in the perception of successive auditory information. In addition, ERPs in response to the sound of falling drops delivered binaurally were obtained to examine the spatial context of auditory information. The ERPs obtained from the omitted digits significantly differed in the amplitude and latency of the N200 and P300 components between adults and children, which supports the hypothesis that the perception of a successive auditory structure is less automated in children compared with adults. Although no significant differences were found in adults, the sound of falling drops presented to the left ears of children elicited ERPs with earlier latencies and higher amplitudes of P300 and N400 components in the right temporal area. Stimulation of the right ear caused increasing amplitude of the N100 component in children. Thus, the observed differences in auditory ERPs of children and adults reflect developmental changes in the perception of spatial and successive auditory information.

  9. Harmony perception and regularity of spike trains in a simple auditory model

    Science.gov (United States)

    Spagnolo, B.; Ushakov, Y. V.; Dubkov, A. A.

    2013-01-01

    A probabilistic approach for investigating the phenomena of dissonance and consonance in a simple auditory sensory model, composed of two sensory neurons and one interneuron, is presented. We calculated the interneuron's firing statistics, that is, the interspike interval statistics of the spike train at the output of the interneuron, for consonant and dissonant inputs in the presence of additional "noise", representing random signals from other, nearby neurons and from the environment. We find that blurry interspike interval distributions (ISIDs) characterize dissonant chords, while quite regular ISIDs characterize consonant chords. The informational entropy of the non-Markov spike train at the output of the interneuron and its dependence on the frequency ratio of input sinusoidal signals is estimated. We introduce a measure of spike train regularity and suggest that the high or low regularity of the auditory system's spike trains indicates a feeling of harmony or disharmony, respectively, during sound perception.
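
    The regularity measure described here can be approximated by the Shannon entropy of the binned interspike-interval distribution: regular spike trains concentrate probability mass in few ISI bins and yield low entropy, while blurry ISIDs yield high entropy. A toy numpy version (the bin width and synthetic spike trains are illustrative assumptions, not the authors' model):

        # Toy version of the regularity idea: Shannon entropy of the binned
        # interspike-interval distribution (ISID). Regular trains -> low
        # entropy; blurry trains -> high entropy.
        import numpy as np

        def isi_entropy(spike_times, bin_ms=1.0, max_ms=50.0):
            isis = np.diff(np.sort(spike_times))
            bins = np.arange(0.0, max_ms + bin_ms, bin_ms)
            counts, _ = np.histogram(isis, bins=bins)
            p = counts / counts.sum()
            p = p[p > 0]
            return -np.sum(p * np.log2(p))      # bits

        rng = np.random.default_rng(1)
        regular = np.cumsum(np.full(200, 10.0))             # strict 10 ms ISIs
        jittered = np.cumsum(rng.uniform(5.0, 15.0, 200))   # blurry ISIs
        print(isi_entropy(regular), isi_entropy(jittered))  # low vs. high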

  10. Voluntary movement affects simultaneous perception of auditory and tactile stimuli presented to a non-moving body part

    Science.gov (United States)

    Hao, Qiao; Ora, Hiroki; Ogawa, Ken-ichiro; Ogata, Taiki; Miyake, Yoshihiro

    2016-01-01

    The simultaneous perception of multimodal sensory information plays a crucial role in effective reactions to the external environment. Voluntary movements are known to occasionally affect simultaneous perception of auditory and tactile stimuli presented to the moving body part. However, little is known about spatial limits on the effect of voluntary movements on simultaneous perception, especially when tactile stimuli are presented to a non-moving body part. We examined the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli presented to the non-moving body part. We considered the possible mechanism using a temporal order judgement task under three experimental conditions: voluntary movement, where participants voluntarily moved their right index finger and judged the temporal order of auditory and tactile stimuli presented to their non-moving left index finger; passive movement; and no movement. During voluntary movement, the auditory stimulus needed to be presented before the tactile stimulus so that they were perceived as occurring simultaneously. This subjective simultaneity differed significantly from the passive movement and no movement conditions. This finding indicates that the effect of voluntary movement on simultaneous perception of auditory and tactile stimuli extends to the non-moving body part. PMID:27622584

  11. Voluntary movement affects simultaneous perception of auditory and tactile stimuli presented to a non-moving body part.

    Science.gov (United States)

    Hao, Qiao; Ora, Hiroki; Ogawa, Ken-Ichiro; Ogata, Taiki; Miyake, Yoshihiro

    2016-09-13

    The simultaneous perception of multimodal sensory information plays a crucial role in effective reactions to the external environment. Voluntary movements are known to occasionally affect simultaneous perception of auditory and tactile stimuli presented to the moving body part. However, little is known about spatial limits on the effect of voluntary movements on simultaneous perception, especially when tactile stimuli are presented to a non-moving body part. We examined the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli presented to the non-moving body part. We considered the possible mechanism using a temporal order judgement task under three experimental conditions: voluntary movement, where participants voluntarily moved their right index finger and judged the temporal order of auditory and tactile stimuli presented to their non-moving left index finger; passive movement; and no movement. During voluntary movement, the auditory stimulus needed to be presented before the tactile stimulus so that they were perceived as occurring simultaneously. This subjective simultaneity differed significantly from the passive movement and no movement conditions. This finding indicates that the effect of voluntary movement on simultaneous perception of auditory and tactile stimuli extends to the non-moving body part.

  12. Auditory perception and syntactic cognition: brain activity-based decoding within and across subjects.

    Science.gov (United States)

    Herrmann, Björn; Maess, Burkhard; Kalberlah, Christian; Haynes, John-Dylan; Friederici, Angela D

    2012-05-01

    The present magnetoencephalography study investigated whether the brain states of early syntactic and auditory-perceptual processes can be decoded from single-trial recordings with a multivariate pattern classification approach. In particular, it was investigated whether the early neural activation patterns in response to rule violations in basic auditory perception and in high cognitive processes (syntax) reflect a functional organization that largely generalizes across individuals or is subject-specific. On this account, subjects were auditorily presented with correct sentences, syntactically incorrect sentences, correct sentences including an interaural time difference change, and sentences containing both violations. For the analysis, brain state decoding was carried out within and across subjects with three pairwise classifications. Neural patterns elicited by each of the violation sentences were separately classified with the patterns elicited by the correct sentences. The results revealed the highest decoding accuracies over temporal cortex areas for all three classification types. Importantly, both the magnitude and the spatial distribution of decoding accuracies for the early neural patterns were very similar for within-subject and across-subject decoding. At the same time, across-subject decoding suggested a hemispheric bias, with the most consistent patterns in the left hemisphere. Thus, the present data show that not only auditory-perceptual processing brain states but also cognitive brain states of syntactic rule processing can be decoded from single-trial brain activations. Moreover, the findings indicate that the neural patterns in response to syntactic cognition and auditory perception reflect a functional organization that is highly consistent across individuals.
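
    The pairwise single-trial classification described here corresponds to a standard multivariate pattern analysis pipeline: train a linear classifier on sensor patterns from some trials and test it on held-out trials (within-subject) or on another subject's trials (across-subject). A schematic scikit-learn version on simulated data; the data shapes and the choice of classifier are illustrative assumptions:

        # Schematic within- vs. across-subject decoding of two trial types
        # from single-trial sensor patterns, in the spirit of the pairwise
        # classification described above. Data are simulated.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        n_trials, n_sensors = 200, 64
        pattern = rng.normal(size=n_sensors)      # class-specific pattern

        def simulate_subject():
            y = rng.integers(0, 2, n_trials)      # correct vs. violation
            X = rng.normal(size=(n_trials, n_sensors)) + np.outer(y, pattern)
            return X, y

        # Within-subject: 5-fold cross-validation on one subject's trials.
        X, y = simulate_subject()
        clf = LogisticRegression(max_iter=1000)
        print("within :", cross_val_score(clf, X, y, cv=5).mean())

        # Across-subject: train on one subject, test on another.
        X2, y2 = simulate_subject()
        print("across :", clf.fit(X, y).score(X2, y2))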

  13. Auditory Cortical Deactivation during Speech Production and following Speech Perception: An EEG investigation of the temporal dynamics of the auditory alpha rhythm

    Directory of Open Access Journals (Sweden)

    David E Jenson

    2015-10-01

    Sensorimotor integration within the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of EEG data to describe anterior sensorimotor (e.g., premotor cortex; PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required 'active' discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral 'auditory' alpha (α) components in 15 of 29 participants, localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < .05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions also temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.

  14. Sensory entrainment mechanisms in auditory perception: neural synchronization and cortico-striatal activation

    Directory of Open Access Journals (Sweden)

    Catia M Sameiro-Barbosa

    2016-08-01

    The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations and, potentially, neural activation in the cortico-striatal system.

  15. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    Science.gov (United States)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  16. Physiological activation of the human cerebral cortex during auditory perception and speech revealed by regional increases in cerebral blood flow

    DEFF Research Database (Denmark)

    Lassen, N A; Friberg, L

    1988-01-01

    Specific types of brain activity, such as sensory perception (auditory, somatosensory, or visual) or the performance of movements, are accompanied by increases of blood flow and oxygen consumption in the cortical areas involved in performing the respective tasks. The activation patterns observed by measuring regional cerebral blood flow (CBF) after intracarotid Xenon-133 injection are reviewed, with emphasis on tests involving auditory perception and speech, an approach allowing one to visualize Wernicke's and Broca's areas and their contralateral homologues in vivo. The completely atraumatic tomographic CBF…

  17. A bio-inspired auditory perception model for amplitude-frequency clustering (Keynote Paper)

    Science.gov (United States)

    Arena, Paolo; Fortuna, Luigi; Frasca, Mattia; Ganci, Gaetana; Patane, Luca

    2005-06-01

    In this paper, a model for auditory perception is introduced. This model is based on a network of integrate-and-fire and resonate-and-fire neurons and is aimed at controlling the phonotaxis behavior of a roving robot. The starting point is the model of phonotaxis in Gryllus bimaculatus: the model consists of four integrate-and-fire neurons and is able to discriminate the calling song of the male cricket and orient the robot towards the sound source. This paper aims to extend the model to include amplitude-frequency clustering. The proposed spiking network shows different behaviors associated with different characteristics of the input signals (amplitude and frequency). The behavior implemented on the robot is similar to the cricket behavior, where some frequencies are associated with the calling song of male crickets, while others indicate the presence of predators. Therefore, the whole model for auditory perception is devoted to controlling different responses (attractive or repulsive) depending on the input characteristics. The performance of the control system has been evaluated with several experiments carried out on a roving robot.
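
    A leaky integrate-and-fire unit, the building block of the network described above, can be written in a few lines. In the sketch below, the time constant, threshold, and input drive are illustrative assumptions rather than the paper's parameters:

        # Minimal leaky integrate-and-fire neuron: integrate the input
        # current, emit a spike and reset at threshold. Parameters are
        # illustrative, not the paper's.
        import numpy as np

        def lif_spikes(i_in, dt=1e-4, tau=0.02, v_thresh=1.0, v_reset=0.0):
            v, spikes = 0.0, []
            for step, i_t in enumerate(i_in):
                v += dt / tau * (-v + i_t)      # leaky integration
                if v >= v_thresh:
                    spikes.append(step * dt)    # spike time (s)
                    v = v_reset
            return spikes

        t = np.arange(0, 0.5, 1e-4)
        drive = 1.5 + 0.5 * np.sin(2 * np.pi * 20 * t)   # 20 Hz modulation
        print(f"{len(lif_spikes(drive))} spikes in 0.5 s")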

  18. The Influence of Tactile Cognitive Maps on Auditory Space Perception in Sighted Persons.

    Science.gov (United States)

    Tonelli, Alessia; Gori, Monica; Brayda, Luca

    2016-01-01

    We have recently shown that vision is important to improve spatial auditory cognition. In this study, we investigate whether touch is as effective as vision to create a cognitive map of a soundscape. In particular, we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people - one experimental and one control group - in an auditory space bisection task. In the first group, the bisection task was performed three times: specifically, the participants explored with their hands the 3D tactile model of the room and were led along the perimeter of the room between the first and the second execution of the space bisection. Then, they were allowed to remove the blindfold for a few minutes and look at the room between the second and third execution of the space bisection. Instead, the control group repeated for two consecutive times the space bisection task without performing any environmental exploration in between. Considering the first execution as a baseline, we found an improvement in the precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that no additional gain was obtained by vision cues after spatial tactile cues were internalized. No improvement was found between the first and the second execution of the space bisection without environmental exploration in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing auditory space task, much as visual information does. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound propagation.

  19. The influence of tactile cognitive maps on auditory space perception in sighted persons.

    Directory of Open Access Journals (Sweden)

    Alessia Tonelli

    2016-11-01

    We have recently shown that vision is important to improve spatial auditory cognition. In this study we investigate whether touch is as effective as vision to create a cognitive map of a soundscape. In particular we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people – one experimental and one control group – in an auditory space bisection task. In the first group the bisection task was performed three times: specifically, the participants explored with their hands the 3D tactile model of the room and were led along the perimeter of the room between the first and the second execution of the space bisection. Then, they were allowed to remove the blindfold for a few minutes and look at the room between the second and third execution of the space bisection. Instead, the control group repeated for two consecutive times the space bisection task without performing any environmental exploration in between. Considering the first execution as a baseline, we found an improvement in the precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that no additional gain was obtained by vision cues after spatial tactile cues were internalized. No improvement was found between the first and the second execution of the space bisection without environmental exploration in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing auditory space task, much as visual information does. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound…

  20. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm.

    Science.gov (United States)

    Jenson, David; Harkrider, Ashley W; Thornton, David; Bowers, Andrew L; Saltuklaroglu, Tim

    2015-01-01

    Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants, localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < .05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions also temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.

  1. Vocal development and auditory perception in CBA/CaJ mice

    Science.gov (United States)

    Radziwon, Kelly E.

    Mice are useful laboratory subjects because of their small size, their modest cost, and the fact that researchers have created many different strains to study a variety of disorders. In particular, researchers have found nearly 100 naturally occurring mouse mutations with hearing impairments. For these reasons, mice have become an important model for studies of human deafness. Although much is known about the genetic makeup and physiology of the laboratory mouse, far less is known about mouse auditory behavior. To fully understand the effects of genetic mutations on hearing, it is necessary to determine the hearing abilities of these mice. Two experiments here examined various aspects of mouse auditory perception using CBA/CaJ mice, a commonly used mouse strain. The frequency difference limens experiment tested the mouse's ability to discriminate one tone from another based solely on the frequency of the tone. The mice had thresholds similar to those of wild mice and gerbils but needed a larger change in frequency than humans and cats. The second psychoacoustic experiment sought to determine which cue, frequency or duration, was more salient when the mice had to identify various tones. In this identification task, the mice overwhelmingly classified the tones based on frequency instead of duration, suggesting that mice use frequency when differentiating one mouse vocalization from another. The other two experiments were more naturalistic and involved both auditory perception and mouse vocal production. Interest in mouse vocalizations is growing because of the potential for mice to become a model of human speech disorders. These experiments traced mouse vocal development from infant to adult, and they tested the mouse's preference for various vocalizations. This was the first known study to analyze the vocalizations of individual mice across development. Results showed large variation in calling rates among the three cages of adult mice but results were highly

  2. Open your eyes and listen carefully. Auditory and audiovisual speech perception and the McGurk effect in aphasia

    NARCIS (Netherlands)

    Klitsch, Julia Ulrike

    2008-01-01

    This dissertation investigates speech perception in three different groups of native adult speakers of Dutch: an aphasic group and two age-varying control groups. Two different experiments examine whether the availability of visual articulatory information is beneficial to auditory speech perception.

  3. Auditory Sensitivity, Speech Perception, L1 Chinese, and L2 English Reading Abilities in Hong Kong Chinese Children

    Science.gov (United States)

    Zhang, Juan; McBride-Chang, Catherine

    2014-01-01

    A 4-stage developmental model, in which auditory sensitivity is fully mediated by speech perception at both the segmental and suprasegmental levels, which are further related to word reading through their associations with phonological awareness, rapid automatized naming, verbal short-term memory and morphological awareness, was tested with…

  4. The Development and Validation of an Auditory Perception Test in Spanish for Hispanic Children Receiving Reading Instruction in Spanish.

    Science.gov (United States)

    Morrison, James A.; Michael, William B.

    1982-01-01

    A Spanish auditory perception test, La Prueba de Analisis Auditivo, was developed and administered to 158 Spanish-speaking Latino children, kindergarten through grade 3. Psychometric data for the test are presented, including its relationship to SOBER, a criterion-referenced Spanish reading measure. (Author/BW)

  5. Auditory enhancement of visual perception at threshold depends on visual abilities.

    Science.gov (United States)

    Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène

    2011-06-17

    Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events, is a long-standing debate. Here we revisit this question by testing the influence of auditory stimuli on visual detection thresholds in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient at improving perceptual performance when an isolated modality is deficient.
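
    The abstract does not spell out the criterion-free procedure, but a standard criterion-free choice in detection studies is a two-interval forced-choice task driven by a transformed up-down staircase. A toy sketch under that assumption (the observer is simulated and every parameter is illustrative):

        import math
        import random

        def two_down_one_up(start_level=1.0, step=0.05, n_reversals=8):
            # 2-down/1-up staircase for a 2AFC detection task; converges
            # near the 70.7%-correct point of the psychometric function.
            def simulated_observer(level, threshold=0.5, slope=8.0):
                p = 0.5 + 0.5 / (1.0 + math.exp(-slope * (level - threshold)))
                return random.random() < p            # True = correct response
            level, n_correct, last_dir, reversals = start_level, 0, 0, []
            while len(reversals) < n_reversals:
                if simulated_observer(level):
                    n_correct += 1
                    if n_correct < 2:
                        continue                      # need two correct in a row
                    n_correct, direction = 0, -1      # harder (lower level)
                else:
                    n_correct, direction = 0, +1      # easier (higher level)
                if last_dir and direction != last_dir:
                    reversals.append(level)           # track direction flips
                last_dir = direction
                level = max(0.0, level + direction * step)
            return sum(reversals[2:]) / len(reversals[2:])  # drop early flips

        print(two_down_one_up())   # threshold estimate in stimulus units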

  6. Brain dynamics that correlate with effects of learning on auditory distance perception.

    Science.gov (United States)

    Wisniewski, Matthew G; Mercado, Eduardo; Church, Barbara A; Gramann, Klaus; Makeig, Scott

    2014-01-01

    Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4-8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8-12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10-16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance.

  7. Brain dynamics that correlate with effects of learning on auditory distance perception

    Directory of Open Access Journals (Sweden)

    Matthew G. Wisniewski

    2014-12-01

    Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners’ accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4-8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8-12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10-16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance.

  8. The effect of auditory perception training on reading performance of the 8-9-year old female students with dyslexia: A preliminary study

    Directory of Open Access Journals (Sweden)

    Nafiseh Vatandoost

    2014-01-01

    Background and Aim: Dyslexia is the most common learning disability. One of the main factors playing a role in this disability is impaired auditory perception, which causes many problems in education. We aimed to study the effect of auditory perception training on the reading performance of female students with dyslexia in the third grade of elementary school. Methods: Thirty-eight female students in the third grade of elementary schools in Khomeinishahr City, Iran, were selected by multistage cluster random sampling; of them, 20 students diagnosed as dyslexic by a reading test and the Wechsler test were divided randomly into two equal groups, experimental and control. The experimental group received auditory perception training over ten 45-minute sessions, while no intervention was given to the control group. All participants were re-assessed with the reading test after the intervention (pre- and post-test method). Data were analyzed by analysis of covariance. Results: The effect of auditory perception training on reading performance (81%) was significant (p<0.0001) for all subtests except the separate compound-word test. Conclusion: Our findings confirm the hypothesis that auditory perception training affects students' functional reading. Auditory perception training therefore seems necessary for students with dyslexia.

  9. Auditory, Visual, and Auditory-Visual Perception of Emotions by Individuals with Cochlear Implants, Hearing Aids, and Normal Hearing

    Science.gov (United States)

    Most, Tova; Aviner, Chen

    2009-01-01

    This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust.…

  10. The importance of laughing in your face: influences of visual laughter on auditory laughter perception.

    Science.gov (United States)

    Jordan, Timothy R; Abedipour, Lily

    2010-01-01

    Hearing the sound of laughter is important for social communication, but processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, similar visual enhancement of the audibility of laughter remains unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.

  11. GRM7 variants associated with age-related hearing loss based on auditory perception.

    Science.gov (United States)

    Newman, Dina L; Fisher, Laurel M; Ohmen, Jeffrey; Parody, Robert; Fong, Chin-To; Frisina, Susan T; Mapes, Frances; Eddins, David A; Robert Frisina, D; Frisina, Robert D; Friedman, Rick A

    2012-12-01

    Age-related hearing impairment (ARHI), or presbycusis, is a common condition of the elderly that results in significant communication difficulties in daily life. Clinically, it has been defined as a progressive loss of sensitivity to sound, starting at the high frequencies, inability to understand speech, lengthening of the minimum discernable temporal gap in sounds, and a decrease in the ability to filter out background noise. The causes of presbycusis are likely a combination of environmental and genetic factors. Previous research into the genetics of presbycusis has focused solely on hearing as measured by pure-tone thresholds. A few loci have been identified, based on a best-ear pure-tone average phenotype, as having a likely role in susceptibility to this type of hearing loss; and GRM7 is the only gene that has achieved genome-wide significance. We examined the association of GRM7 variants identified from the previous study, which used a European cohort with Z-scores based on pure-tone thresholds, in a European-American population from Rochester, NY (N = 687), and used novel phenotypes of presbycusis. In the present study, mixed modeling analyses were used to explore the relationship of GRM7 haplotype and SNP genotypes with various measures of auditory perception. Here we show that GRM7 alleles are associated primarily with peripheral measures of hearing loss, and particularly with speech detection in older adults.

  12. Effect of 24 Hours of Sleep Deprivation on Auditory and Linguistic Perception: A Comparison among Young Controls, Sleep-Deprived Participants, Dyslexic Readers, and Aging Adults

    Science.gov (United States)

    Fostick, Leah; Babkoff, Harvey; Zukerman, Gil

    2014-01-01

    Purpose: To test the effects of 24 hr of sleep deprivation on auditory and linguistic perception and to assess the magnitude of this effect by comparing such performance with that of aging adults on speech perception and with that of dyslexic readers on phonological awareness. Method: Fifty-five sleep-deprived young adults were compared with 29…

  13. Musical experience, auditory perception and reading-related skills in children.

    Directory of Open Access Journals (Sweden)

    Karen Banai

    BACKGROUND: The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically, we ask whether the pattern of correlations between auditory and reading-related skills differs between children with different amounts of musical experience. METHODOLOGY/PRINCIPAL FINDINGS: Third-grade children with various degrees of musical experience were tested on a battery of auditory processing and reading-related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading-related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading-related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. CONCLUSIONS/SIGNIFICANCE: Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading-related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading-related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and

  14. Auditory, Visual, and Auditory-Visual Speech Perception by Individuals with Cochlear Implants versus Individuals with Hearing Aids

    Science.gov (United States)

    Most, Tova; Rothem, Hilla; Luntz, Michal

    2009-01-01

    The researchers evaluated the contribution of cochlear implants (CIs) to speech perception by a sample of prelingually deaf individuals implanted after age 8 years. This group was compared with a group with profound hearing impairment (HA-P), and with a group with severe hearing impairment (HA-S), both of which used hearing aids. Words and…

  15. Auditory and Visual Differences in Time Perception? An Investigation from a Developmental Perspective with Neuropsychological Tests

    Science.gov (United States)

    Zelanti, Pierre S.; Droit-Volet, Sylvie

    2012-01-01

    Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…

  16. From Perception to Metacognition: Auditory and Olfactory Functions in Early Blind, Late Blind, and Sighted Individuals

    Science.gov (United States)

    Cornell Kärnekull, Stina; Arshamian, Artin; Nilsson, Mats E.; Larsson, Maria

    2016-01-01

    Although evidence is mixed, studies have shown that blind individuals perform better than sighted individuals at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests in absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition that was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of an overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity. PMID:27729884

  17. From perception to metacognition: Auditory and olfactory functions in early blind, late blind, and sighted individuals

    Directory of Open Access Journals (Sweden)

    Stina Cornell Kärnekull

    2016-09-01

    Although evidence is mixed, studies have shown that blind individuals perform better than sighted individuals at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests in absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition that was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of an overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity.

  18. Changes in auditory perceptions and cortex resulting from hearing recovery after extended congenital unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Jill B Firszt

    2013-12-01

    Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g., cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after, surgery. Measurements were of sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominant monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. Results support future research of unilateral auditory deprivation effects and plasticity, with consideration for length of deprivation, age at hearing correction, degree and type

  19. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual part of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
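
    As a worked example of the subadditivity criterion used above, with hypothetical N1 amplitudes (the N1 is a negativity, so a "reduced" N1 is closer to zero):

        # Hypothetical mean N1 amplitudes in microvolts (more negative = larger N1).
        A, V, AV = -5.2, -1.1, -5.4
        residual_auditory = AV - V                 # remove the visual part: -4.3
        print(abs(residual_auditory) < abs(A))     # True: AV - V < A, subadditive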

  20. Auditory-Acoustic Basis of Consonant Perception. Attachments A thru I

    Science.gov (United States)

    1991-01-22

    Pisoni, D. B. (1973). "Auditory and phonetic memory codes in the discrimination of consonants and vowels," Perception & Psychophysics. University of Illinois, Urbana, IL. [...] The Greek and German items were embedded in the following carrier sentences, respectively: Greek: [θa po - ksana], "I will say - again."; German: Ich sage - noch einmal, "I say - one more time."

  1. Bird brains and songs : Neural mechanisms of auditory memory and perception in zebra finches

    NARCIS (Netherlands)

    Gobes, S.M.H.

    2009-01-01

    Songbirds, such as zebra finches, learn their songs from a ‘tutor’ (usually the father), early in life. There are strong parallels between the behavioural, cognitive and neural processes that underlie vocal learning in humans and songbirds. In both cases there is a sensitive period for auditory learning.

  2. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments.

  3. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

    Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is the perception of a wrong stimulus or, better put, the perception of an absent stimulus. Here we will discuss four definitions of hallucination: (1) perceiving a stimulus without the presence of any subject; (2) hallucination proper, i.e., wrong perceptions that are not falsifications of real perception, although they manifest as a new subject and occur along with, and synchronously with, a real perception; (3) hallucination as an out-of-body perception which has no accordance with a real subject. In a stricter sense, hallucinations are defined as perceptions in a conscious and awake state, in the absence of external stimuli, which have qualities of real perception, in that they are vivid, substantial, and located in external objective space. We are going to discuss this in detail here.

  4. Valid cues for auditory or somatosensory targets affect their perception: a signal detection approach.

    Science.gov (United States)

    Van Hulle, Lore; Van Damme, Stefaan; Crombez, Geert

    2013-01-01

    We investigated the effects of focusing attention towards auditory or somatosensory stimuli on perceptual sensitivity and response bias using a signal detection task. Participants (N = 44) performed an unspeeded detection task in which weak (individually calibrated) somatosensory or auditory stimuli were delivered. The focus of attention was manipulated by the presentation of a visual cue at the start of each trial. The visual cue consisted of the word "warmth" or the word "tone". This word cue was predictive of the corresponding target on two-thirds of the trials. As hypothesised, the results showed that cueing attention to a specific sensory modality resulted in a higher perceptual sensitivity for validly cued targets than for invalidly cued targets, as well as in a more liberal response criterion for reporting stimuli in the valid modality than in the invalid modality. The value of this experimental paradigm for investigating excessive attentional focus or hypervigilance in various non-clinical and clinical populations is discussed.
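
    A minimal sketch of the two signal-detection quantities at issue, perceptual sensitivity (d') and response criterion (c), computed from hypothetical yes/no counts for one cue condition; the log-linear correction is a common guard against hit or false-alarm rates of exactly 0 or 1, not necessarily the authors' choice:

        from statistics import NormalDist

        def dprime_and_criterion(hits, misses, fas, crs):
            z = NormalDist().inv_cdf
            h = (hits + 0.5) / (hits + misses + 1.0)   # corrected hit rate
            f = (fas + 0.5) / (fas + crs + 1.0)        # corrected FA rate
            d = z(h) - z(f)                            # sensitivity
            c = -0.5 * (z(h) + z(f))                   # c < 0 = liberal criterion
            return d, c

        # Hypothetical counts: validly vs. invalidly cued targets.
        print(dprime_and_criterion(42, 18, 12, 48))    # valid: higher d', lower c
        print(dprime_and_criterion(30, 30, 10, 50))    # invalid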

  5. Impaired Pitch Perception and Memory in Congenital Amusia: The Deficit Starts in the Auditory Cortex

    Science.gov (United States)

    Albouy, Philippe; Mattout, Jeremie; Bouet, Romain; Maby, Emmanuel; Sanchez, Gaetan; Aguera, Pierre-Emmanuel; Daligault, Sebastien; Delpuech, Claude; Bertrand, Olivier; Caclin, Anne; Tillmann, Barbara

    2013-01-01

    Congenital amusia is a lifelong disorder of music perception and production. The present study investigated the cerebral bases of impaired pitch perception and memory in congenital amusia using behavioural measures, magnetoencephalography and voxel-based morphometry. Congenital amusics and matched control subjects performed two melodic tasks (a…

  6. An investigation of the auditory perception of western lowland gorillas in an enrichment study.

    Science.gov (United States)

    Brooker, Jake S

    2016-09-01

    Previous research has highlighted the varied effects of auditory enrichment on different captive animals. This study investigated how manipulating musical components can influence the behavior of a group of captive western lowland gorillas (Gorilla gorilla gorilla) at Bristol Zoo. The gorillas were observed during exposure to classical music, rock-and-roll music, and rainforest sounds. The two music conditions were modified to create five further conditions: unmanipulated, decreased pitch, increased pitch, decreased tempo, and increased tempo. We compared the prevalence of activity, anxiety, and social behaviors between the standard conditions. We also compared the prevalence of each of these behaviors across the manipulated conditions of each type of music independently and collectively. Control observations with no sound exposure were regularly scheduled between the observations of the 12 auditory conditions. The results suggest that naturalistic rainforest sounds had no influence on the anxiety of captive gorillas, contrary to past research. The tempo of music appears to be significantly associated with activity levels among this group, and social behavior may be affected by pitch. Low-tempo music also may be effective at reducing anxiety behavior in captive gorillas. Regulated auditory enrichment may provide an effective means of calming gorillas or of facilitating active behavior. Zoo Biol. 35:398-408, 2016. © 2016 Wiley Periodicals, Inc.
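
    The pitch and tempo manipulations described here can be sketched with librosa; the file name, the ±4-semitone shifts, and the 0.8x/1.25x rates below are illustrative stand-ins, not the study's actual parameters:

        import librosa

        # Hypothetical excerpt; sr=None keeps the file's native sampling rate.
        y, sr = librosa.load("classical_excerpt.wav", sr=None)

        conditions = {
            "unmanipulated": y,
            "pitch_down": librosa.effects.pitch_shift(y, sr=sr, n_steps=-4),
            "pitch_up": librosa.effects.pitch_shift(y, sr=sr, n_steps=4),
            "tempo_down": librosa.effects.time_stretch(y, rate=0.8),
            "tempo_up": librosa.effects.time_stretch(y, rate=1.25),
        }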

  7. Auditory Spatial Perception: Auditory Localization

    Science.gov (United States)

    2012-05-01

    [Report fragments; recoverable content:] ... for normal directional hearing. A number of animal studies have demonstrated that the interruption of the neural pathways passing through the TB [trapezoid body] ... [this] view is supported by results from animal studies indicating that some types of lesions in the brain affect the precision of absolute localization. [Reference fragments: ... psichici. Archives Fisiologia 1911, 9, 523–574; cited by Gulick et al. (1989). Aharonson, V.; Furst, M.; Levine, R. A.; Chaigrecht, M.; Korczyn ...]

  8. The use of auditory and visual context in speech perception by listeners with normal hearing and listeners with cochlear implants

    Directory of Open Access Journals (Sweden)

    Matthew eWinn

    2013-11-01

    There is a wide range of acoustic and visual variability across different talkers and different speaking contexts. Listeners with normal hearing accommodate that variability in ways that facilitate efficient perception, but it is not known whether listeners with cochlear implants can do the same. In this study, listeners with normal hearing (NH) and listeners with cochlear implants (CIs) were tested for accommodation to auditory and visual phonetic contexts created by gender-driven speech differences as well as vowel coarticulation and lip rounding in both consonants and vowels. Accommodation was measured as the shifting of perceptual boundaries between /s/ and /ʃ/ sounds in various contexts, as modeled by mixed-effects logistic regression. Owing to the spectral contrasts thought to underlie these context effects, CI listeners were predicted to perform poorly, but showed considerable success. Listeners with cochlear implants not only showed sensitivity to auditory cues to gender, they were also able to use visual cues to gender (i.e., faces) as a supplement or proxy for information in the acoustic domain, in a pattern that was not observed for listeners with normal hearing. Spectrally-degraded stimuli heard by listeners with normal hearing generally did not elicit strong context effects, underscoring the limitations of noise vocoders and/or the importance of experience with electric hearing. Visual cues for consonant lip rounding and vowel lip rounding were perceived in a manner consistent with coarticulation and were generally used more heavily by listeners with CIs. Results suggest that listeners with cochlear implants are able to accommodate various sources of acoustic variability either by attending to appropriate acoustic cues or by inferring them via the visual signal.
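
    The study models boundary shifts with mixed-effects logistic regression across listeners; a simplified single-listener sketch with ordinary logistic regression shows how a perceptual boundary and its context-driven shift are read off the fitted coefficients (all data are simulated, and the variable names are invented for illustration):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Toy data: p("s" response) as a function of a 10-step fricative
        # continuum, in two contexts (e.g., unrounded vs. rounded lips).
        rng = np.random.default_rng(1)
        step = np.tile(np.arange(1, 11), 40)
        rounded = np.repeat([0, 1], 200)
        true_boundary = np.where(rounded, 6.0, 5.0)   # context shifts boundary
        p = 1 / (1 + np.exp(-1.5 * (step - true_boundary)))
        df = pd.DataFrame({"step": step, "rounded": rounded,
                           "resp_s": rng.binomial(1, p)})

        m = smf.logit("resp_s ~ step * rounded", data=df).fit(disp=0)
        b = m.params
        # The boundary is the 50% crossover, where the fitted logit equals 0.
        b_unrounded = -b["Intercept"] / b["step"]
        b_rounded = -(b["Intercept"] + b["rounded"]) / (b["step"] + b["step:rounded"])
        print(b_unrounded, b_rounded)   # the difference is the context effect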

  9. Comparative study of three techniques of palatoplasty in patients with cleft of lip and palate via instrumental and auditory-perceptive evaluations

    OpenAIRE

    Paniagua, Lauren Medeiros; Collares, Marcus Vinícius Martins; Costa, Sady Selaimen da

    2010-01-01

    Introduction: Palatoplasty is a surgical procedure that aims at the reconstruction of the soft and/or hard palate. Currently, different techniques are available that seek greater lengthening of the soft palate toward the nasopharyngeal wall, to contribute to the appropriate operation of the velopharyngeal sphincter; failure in its closing brings on speech dysfunctions. Objective: To compare the auditory-perceptive evaluations and instrumental findings in patients with cleft lip and pala...

  10. Auditory Perception of Musical Sounds by Children in the First Six Grades.

    Science.gov (United States)

    Petzold, Robert G.

    The nature and development of certain fundamental musical skills were studied. This study focused on aural perception as an integral factor in the child's musical development. Two major aspects of this 5-year study included (1) longitudinal study of three groups of children and (2) a series of 1-year pilot studies dealing with rhythm, timbre, and…

  11. Compensation for Coarticulation: Disentangling Auditory and Gestural Theories of Perception of Coarticulatory Effects in Speech

    Science.gov (United States)

    Viswanathan, Navin; Magnuson, James S.; Fowler, Carol A.

    2010-01-01

    According to one approach to speech perception, listeners perceive speech by applying general pattern matching mechanisms to the acoustic signal (e.g., Diehl, Lotto, & Holt, 2004). An alternative is that listeners perceive the phonetic gestures that structured the acoustic signal (e.g., Fowler, 1986). The two accounts have offered different…

  12. Context-dependent changes in functional connectivity of auditory cortices during the perception of object words

    NARCIS (Netherlands)

    Dam, W.O. van; Dongen, E.V. van; Bekkering, H.; Rüschemeyer, S.A.

    2012-01-01

    Embodied theories hold that cognitive concepts are grounded in our sensorimotor systems. Specifically, a number of behavioral and neuroimaging studies have buttressed the idea that language concepts are represented in areas involved in perception and action [Pulvermueller, F., Brain mechanisms linking language and action].

  13. Perception of parents about the auditory attention skills of his kid with cleft lip and palate: retrospective study

    Directory of Open Access Journals (Sweden)

    Mondelli, Maria Fernanda Capoani Garcia

    2012-01-01

    Introduction: Cognitive and neurophysiological mechanisms are necessary to process and decode acoustic stimulation. Auditory processing is influenced by higher-level cognitive factors such as memory, attention, and learning. The sensory deprivation caused by conductive hearing loss, frequent in the population with cleft lip and palate, can affect many cognitive functions, among them attention, besides harming school, linguistic, and interpersonal performance. Objective: To verify the perception of parents of children with cleft lip and palate regarding the auditory attention of their kids. Method: Retrospective study of infants with any type of cleft lip and palate, without any associated genetic syndrome, whose parents answered a questionnaire about auditory attention skills. Results: 44 were male and 26 female; 35.71% of the answers were affirmative for hearing loss and 71.43% for otologic infections. Conclusion: Most of the interviewed parents pointed to at least one of the attention-related behaviors contained in the questionnaire, indicating that the presence of cleft lip and palate can be related to difficulties in auditory attention.

  14. The consistency of crossmodal synchrony perception across the visual, auditory, and tactile senses.

    Science.gov (United States)

    Machulla, Tonja-Katrin; Di Luca, Massimiliano; Ernst, Marc O

    2016-07-01

    Crossmodal judgments of relative timing commonly yield a nonzero point of subjective simultaneity (PSS). Here, we test whether subjective simultaneity is coherent across all pairwise combinations of the visual, auditory, and tactile modalities. To this end, we examine PSS estimates for transitivity: If Stimulus A has to be presented x ms before Stimulus B to result in subjective simultaneity, and B y ms before C, then A and C should appear simultaneous when A precedes C by z ms, where z = x + y. We obtained PSS estimates via 2 different timing judgment tasks - temporal order judgments (TOJs) and synchrony judgments (SJs) - thus allowing us to examine the relationship between TOJ and SJ. We find that (a) SJ estimates do not violate transitivity, and that (b) TOJ and SJ data are linearly related. Together, these findings suggest that both TOJ and SJ access the same perceptual representation of simultaneity and that this representation is globally coherent across the tested modalities. Furthermore, we find that (c) TOJ estimates are intransitive. This is consistent with the proposal that while the perceptual representation of simultaneity is coherent, relative timing judgments that access this representation can at times be incoherent with each other because of postperceptual response biases.
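
    A worked example of the transitivity test on hypothetical PSS values (a positive pss[(X, Y)] means X must lead Y by that amount to appear simultaneous):

        # Hypothetical PSS estimates in milliseconds.
        pss = {("V", "A"): 20.0, ("A", "T"): 15.0, ("V", "T"): 36.0}
        predicted_vt = pss[("V", "A")] + pss[("A", "T")]   # z = x + y = 35 ms
        violation = pss[("V", "T")] - predicted_vt         # 1 ms: ~transitive
        print(predicted_vt, violation)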

  15. Visual and Auditory Components in the Perception of Asynchronous Audiovisual Speech.

    Science.gov (United States)

    García-Pérez, Miguel A; Alcalá-Quintana, Rocío

    2015-12-01

    Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8/16 observers, and was found to be nonidentifiable, which renders its parameter estimates uninterpretable. The independent-channels model captured asymmetric data, was rejected for only 1/16 observers, and identified how sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal.

  16. Development of Attentional Control of Verbal Auditory Perception from Middle to Late Childhood: Comparisons to Healthy Aging

    Science.gov (United States)

    Passow, Susanne; Müller, Maike; Westerhausen, René; Hugdahl, Kenneth; Wartenburger, Isabell; Heekeren, Hauke R.; Lindenberger, Ulman; Li, Shu-Chen

    2013-01-01

    Multitalker situations confront listeners with a plethora of competing auditory inputs, and hence require selective attention to relevant information, especially when the perceptual saliency of distracting inputs is high. This study augmented the classical forced-attention dichotic listening paradigm by adding an interaural intensity manipulation…

  17. [Anesthesia with flunitrazepam/fentanyl and isoflurane/fentanyl. Unconscious perception and mid-latency auditory evoked potentials].

    Science.gov (United States)

    Schwender, D; Kaiser, A; Klasing, S; Faber-Züllig, E; Golling, W; Pöppel, E; Peter, K

    1994-05-01

    There is a high incidence of intraoperative awareness during cardiac surgery. Mid-latency auditory evoked potentials (MLAEP) reflect the primary cortical processing of auditory stimuli. In the present study, we investigated MLAEP and explicit and implicit memory for information presented during cardiac anaesthesia. PATIENTS AND METHODS. Institutional approval and informed consent were obtained for 30 patients scheduled for elective cardiac surgery. Anaesthesia was induced in group I (n = 10) with flunitrazepam/fentanyl (0.01 mg/kg) and maintained with flunitrazepam/fentanyl (1.2 mg/h). The patients in group II (n = 10) received etomidate (0.25 mg/kg) and fentanyl (0.005 mg/kg) for induction and isoflurane (0.6-1.2 vol%)/fentanyl (1.2 mg/h) for maintenance of general anaesthesia. Group III (n = 10) served as a control and patients were anaesthetized as in I or II. After sternotomy, an audiotape that included an implicit memory task was presented to the patients in groups I and II. The story of Robinson Crusoe was told, and it was suggested to the patients that they remember Robinson Crusoe when asked what they associated with the word Friday 3-5 days postoperatively. Auditory evoked potentials were recorded while awake and during general anaesthesia, before and after the audiotape presentation, at the vertex (positive) and both mastoids (negative). Auditory clicks were presented binaurally at 70 dBnHL at a rate of 9.3 Hz. Using the electrodiagnostic system Pathfinder I (Nicolet), 1000 successive stimulus responses were averaged over a 100-ms poststimulus interval and analyzed off-line. Latencies of the peaks V, Na, and Pa were measured. V belongs to the brainstem-generated potentials and demonstrates that auditory stimuli were correctly transduced. Na and Pa are generated in the primary auditory cortex of the temporal lobe and are the electrophysiological correlates of the primary cortical processing of auditory stimuli. RESULTS. None of the patients had an explicit memory

  18. Perceiving temporal regularity in music: the role of auditory event-related potentials (ERPs) in probing beat perception.

    Science.gov (United States)

    Honing, Henkjan; Bouwer, Fleur L; Háden, Gábor P

    2014-01-01

    The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). Next to a review of the recent literature on the perception of temporal regularity in music, we will discuss to what extent ERPs, and especially the component called mismatch negativity (MMN), can be instrumental in probing beat perception. We conclude with a discussion on the pitfalls and prospects of using ERPs to probe the perception of a regular beat, in which we present possible constraints on stimulus design and discuss future perspectives.

  19. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  20. Comparative study of three techniques of palatoplasty in patients with cleft of lip and palate via instrumental and auditory-perceptive evaluations

    Directory of Open Access Journals (Sweden)

    Paniagua, Lauren Medeiros

    2010-03-01

    Introduction: Palatoplasty is a surgical procedure that aims at the reconstruction of the soft and/or hard palate. Currently, different techniques are available that seek greater lengthening of the soft palate toward the nasopharyngeal wall, to contribute to the appropriate operation of the velopharyngeal sphincter; failure in its closing brings on speech dysfunctions. Objective: To compare the auditory-perceptive evaluations and instrumental findings in patients with cleft lip and palate operated on using three distinct techniques of palatoplasty. Method: A prospective transversal study of a group of patients with complete unilateral cleft lip and palate. All took part in a randomized clinical trial of distinct palatoplasty techniques performed by a single surgeon over approximately 8 years. At the time of surgery, the patients were divided into three distinct groups of 10 participants each. The present study evaluated 10 patients operated with the Furlow technique, 7 with the Veau-Wardill-Kilner+Braithwaite technique, and 9 with the Veau-Wardill-Kilner+Braithwaite+Zetaplasty technique, for a total sample of 26 individuals. All patients underwent auditory-perceptive evaluation through speech recording; an instrumental evaluation was also performed through a video endoscopy exam. Results: The findings were satisfactory for all three techniques; that is, the majority of individuals did not present hypernasality, compensatory articulatory disturbance, or audible nasal air emission. In addition, in the instrumental evaluation, the majority of individuals in all three palatoplasty groups presented appropriate velopharyngeal function. Conclusion: No statistically significant difference was found between the palatoplasty techniques in either evaluation.

  1. Animal models for auditory streaming.

    Science.gov (United States)

    Itatani, Naoya; Klump, Georg M

    2017-02-19

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons' response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.
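
    Streaming studies of the kind reviewed here typically probe perception with van Noorden style ABA- triplet sequences, where larger A-B frequency separations and faster rates favor hearing two segregated streams. A sketch of such a stimulus generator (all parameter values are illustrative):

        import numpy as np

        def aba_sequence(f_a=500.0, df_semitones=6.0, tone_ms=100,
                         gap_ms=20, n_triplets=10, fs=44100):
            # Build one ABA- triplet and tile it; "-" is a silent slot.
            f_b = f_a * 2 ** (df_semitones / 12.0)
            def tone(f):
                t = np.arange(int(fs * tone_ms / 1000)) / fs
                ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.01)  # 10-ms ramps
                return np.sin(2 * np.pi * f * t) * ramp
            gap = np.zeros(int(fs * gap_ms / 1000))
            silent_slot = np.concatenate([gap, np.zeros_like(tone(f_a))])
            triplet = np.concatenate([tone(f_a), gap, tone(f_b), gap,
                                      tone(f_a), silent_slot])
            return np.tile(triplet, n_triplets)

        seq = aba_sequence()   # write to a sound device or file to present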

  2. Functional asymmetry and effective connectivity of the auditory system during speech perception is modulated by the place of articulation of the consonant: a 7T fMRI study

    Directory of Open Access Journals (Sweden)

    Karsten eSpecht

    2014-06-01

    To differentiate between stop-consonants, the auditory system has to detect subtle place of articulation (PoA) and voice onset time (VOT) differences between stop-consonants. How this differential processing is represented on the cortical level remains unclear. The present functional magnetic resonance imaging (fMRI) study takes advantage of the superior spatial resolution and high sensitivity of ultra-high-field 7T MRI. Subjects were attentively listening to consonant-vowel syllables with an alveolar or bilabial stop-consonant and either a short or long voice-onset time. The results showed an overall bilateral activation pattern in the posterior temporal lobe during the processing of the consonant-vowel syllables. This was, however, modulated strongest by place of articulation, such that syllables with an alveolar stop-consonant showed stronger left-lateralized activation. In addition, analysis of underlying functional and effective connectivity revealed an inhibitory effect of the left planum temporale onto the right auditory cortex during the processing of alveolar consonant-vowel syllables. Further, the connectivity results also indicated a directed information flow from the right to the left auditory cortex, and further to the left planum temporale, for all syllables. These results indicate that auditory speech perception relies on an interplay between the left and right auditory cortices, with the left planum temporale as modulator. Furthermore, the degree of functional asymmetry is determined by the acoustic properties of the consonant-vowel syllables.

  3. New Phenomenon of Abnormal Auditory Perception Associated with Emotional and Head Trauma: Pathological Confirmation by SPECT Scan

    Science.gov (United States)

    Stephane, Massoud; Hill, Thomas; Matthew, Elizabeth; Folstein, Marshal

    2004-01-01

    We report the case of an immigrant who suffered from death threats and head trauma while a prisoner of war in Kuwait. Two months later, he began to hear conversations that had taken place previously. These perceptions occurred spontaneously or were induced by the patient's effortful concentration. The single photon emission computerized tomography…

  4. Study of Auditory Perception Tasks Based on EEG Driving

    Institute of Scientific and Technical Information of China (English)

    梁静坤; 徐桂芝; 李明钊; 于洪利

    2012-01-01

    The aim of this work is to explore EEG features of motor imagery used for auxiliary vehicle control under auditory traffic cues. In a simulated horn-and-brake environment, subjects controlled the forward motion and stopping of a vehicle through left- and right-hand motor imagery. Features of the recorded EEG were extracted and classified with common spatial patterns (CSP) and a linear regression algorithm, reaching a recognition rate of 96.67%. Results: on electrodes C3 and C4, which are associated with left- and right-hand motor imagery, right-hand imagery elicited higher EEG amplitudes than left-hand imagery, with a maximum amplitude difference of 120 μV. Conclusion: compared with traditional visually cued imagery tasks, the auditory task has clear advantages: (1) the larger amplitude difference between the two imagery tasks facilitates signal extraction, and (2) EEG acquisition can replace the traditional two-electrode montage with a single electrode (C3 or C4).
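
    A sketch of the CSP feature-extraction stage named in this record, for two-class motor-imagery EEG; the linear classification step the authors pair with it would be trained on the resulting log-variance features (array shapes and parameters are illustrative):

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_pairs=2):
            # trials_*: (n_trials, n_channels, n_samples) for each class.
            def mean_cov(trials):
                return np.mean([np.cov(tr) for tr in trials], axis=0)
            C1, C2 = mean_cov(trials_a), mean_cov(trials_b)
            vals, vecs = eigh(C1, C1 + C2)       # generalized eigenproblem
            order = np.argsort(vals)
            picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
            return vecs[:, picks].T              # (2*n_pairs, n_channels)

        def log_var_features(trials, W):
            z = np.einsum("fc,ncs->nfs", W, trials)        # spatially filter
            var = z.var(axis=2)
            return np.log(var / var.sum(axis=1, keepdims=True))

        # Fake left/right-hand imagery epochs: 20 trials, 16 channels, 2 s.
        rng = np.random.default_rng(0)
        left = rng.standard_normal((20, 16, 500))
        right = rng.standard_normal((20, 16, 500))
        W = csp_filters(left, right)
        X = log_var_features(np.concatenate([left, right]), W)  # -> classifier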

  5. Visually guided auditory attention in a dynamic "cocktail-party" speech perception task: ERP evidence for age-related differences.

    Science.gov (United States)

    Getzmann, Stephan; Wascher, Edmund

    2017-02-01

    Speech understanding in the presence of concurrent sound is a major challenge, especially for older persons. In particular, conversational turn-takings usually result in switch costs, as indicated by a decline in speech perception after changes in the relevant target talker. Here, we investigated whether visual cues indicating the future position of a target talker may reduce the costs of switching in younger and older adults. We employed a speech perception task, in which sequences of short words were simultaneously presented by three talkers, and analysed behavioural measures and event-related potentials (ERPs). Informative cues resulted in increased performance after a spatial change in target talker compared to uninformative cues that did not indicate the future target position. Especially the older participants benefited from knowing the future target position in advance, as indicated by reduced response times after informative cues. The ERP analysis revealed an overall reduced N2, and a reduced P3b to changes in the target talker location in older participants, suggesting reduced inhibitory control and context updating. On the other hand, a pronounced frontal late positive complex (f-LPC) to the informative cues indicated increased allocation of attentional resources to changes in target talker in the older group, in line with the decline-compensation hypothesis. Thus, knowing where to listen has the potential to compensate for age-related decline in attentional switching in a highly variable cocktail-party environment.

  6. Environment for Auditory Research Facility (EAR)

    Data.gov (United States)

    Federal Laboratory Consortium — EAR is an auditory perception and communication research center enabling state-of-the-art simulation of various indoor and outdoor acoustic environments. The heart...

  7. Fingers Phrase Music Differently: Trial-to-Trial Variability in Piano Scale Playing and Auditory Perception Reveal Motor Chunking.

    Science.gov (United States)

    van Vugt, Floris Tijmen; Jabusch, Hans-Christian; Altenmüller, Eckart

    2012-01-01

    We investigated how musical phrasing and motor sequencing interact to yield timing patterns in conservatory students' playing of piano scales. We propose a novel analysis method that compares the measured note onsets to an objectively regular scale fitted to the data. Subsequently, we segment the timing variability into (i) systematic deviations from objective evenness, which are perhaps residuals of expressive timing or of perceptual biases, and (ii) non-systematic deviations that can be interpreted as motor execution errors, perhaps due to noise in the nervous system. The former, systematic deviations reveal that the two-octave scales are played as a single musical phrase. The latter, trial-to-trial variabilities reveal that pianists' timing was less consistent at the boundaries between the octaves, providing evidence that the octave is represented as a single motor sequence. These effects cannot be explained by low-level properties of the motor task, such as the thumb passage, and also did not show up in simulated scales with temporal jitter. Intriguingly, this instability in motor production around the octave boundary is mirrored by an impairment in the detection of timing deviations at those positions, suggesting that chunks overlap between perception and action. We conclude that the octave boundary instability in the scale-playing motor program provides behavioral evidence that our brain chunks musical sequences into octave units that do not coincide with musical phrases. Our results indicate that trial-to-trial variability is a novel and meaningful indicator of this chunking. The procedure can readily be extended to a variety of tasks to help understand how movements are divided into units and what processing occurs at their boundaries.
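
    The decomposition described in this record can be sketched directly: fit an objectively even scale to each trial's note onsets by regressing onset time on note index, average the residuals across trials to get the systematic deviation, and take their across-trial spread as the execution variability. A minimal version, assuming evenly spaced target notes and onsets in seconds:

        import numpy as np

        def timing_decomposition(onsets):
            # onsets: (n_trials, n_notes) onset times for repeated scales.
            n_trials, n_notes = onsets.shape
            idx = np.arange(n_notes)
            residuals = np.empty_like(onsets, dtype=float)
            for i, trial in enumerate(onsets):
                slope, intercept = np.polyfit(idx, trial, 1)  # fitted even scale
                residuals[i] = trial - (slope * idx + intercept)
            systematic = residuals.mean(axis=0)      # phrasing/perceptual bias
            trial_to_trial = residuals.std(axis=0)   # motor execution noise
            return systematic, trial_to_trial

    On this analysis, peaks in the trial-to-trial component at particular note positions are what mark chunk boundaries, such as the octave crossing.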

  8. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David Pérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already show adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of stimuli. However, it is at higher levels in the auditory hierarchy that more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene, and which are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  9. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  10. Perception of Complex Auditory Scenes

    Science.gov (United States)

    2014-07-02

    … display on a computer monitor. The questions were highly structured and based on a typical radio-telephone procedure (e.g., "Hello Darkstar. This is Tango. … Is there a cross on position five? Over."). Subjects were asked to respond using one of three possible response formats: long ("Hello Tango. This is Darkstar. Yes. Over."), medium ("Tango from Darkstar. Yes. Over."), or short ("Tango. Yes."). Listener-talker gender configurations and the rate …

  11. Attention within Auditory Word Perception.

    Science.gov (United States)

    1985-11-01

  12. Auditory imagery and the poor-pitch singer.

    Science.gov (United States)

    Pfordresher, Peter Q; Halpern, Andrea R

    2013-08-01

    The vocal imitation of pitch by singing requires one to plan laryngeal movements on the basis of anticipated target pitch events. This process may rely on auditory imagery, which has been shown to activate motor planning areas. As such, we hypothesized that poor-pitch singing, although not typically associated with deficient pitch perception, may be associated with deficient auditory imagery. Participants vocally imitated simple pitch sequences by singing, discriminated pitch pairs on the basis of pitch height, and completed an auditory imagery self-report questionnaire (the Bucknell Auditory Imagery Scale). The percentage of trials participants sang in tune correlated significantly with self-reports of vividness of auditory imagery, although not with the ability to control auditory imagery. Pitch discrimination was not predicted by auditory imagery scores. The results thus support a link between auditory imagery and vocal imitation.

  13. Auditory excitation patterns : the significance of the pulsation threshold method for the measurement of auditory nonlinearity

    NARCIS (Netherlands)

    H. Verschuure (Hans)

    1978-01-01

    The auditory system is the total of organs that translates an acoustical signal into the perception of a sound. An acoustic signal is a vibration; it is described by physical parameters. The perception of sound is the awareness of a signal being present and the attribution of certain qualities …

  14. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia.

  15. Audiovisual Speech Perception in Infancy: The Influence of Vowel Identity and Infants' Productive Abilities on Sensitivity to (Mis)Matches between Auditory and Visual Speech Cues

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-01-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…

  16. Research on EEG for virtual driving between visual and auditory perception

    Institute of Scientific and Technical Information of China (English)

    梁静坤; 徐桂芝; 孙辉

    2011-01-01

    Objective To evaluate the effect of visual, auditory and combined audio-visual stimuli on electroencephalography (EEG) recognition rates. Methods In a virtual traffic environment, stimuli based on traffic information were designed in visual, auditory and audio-visual fusion conditions. When one of the three stimuli appeared, the subjects started and braked the vehicle through motor imagery of the right and left hands. The EEG signals were recorded, and imagined-movement-related features from channels C3 and C4 were extracted and classified. Results The best recognition rates in the visual, auditory and fusion conditions were 100%, 100% and 83.3%, respectively, while the average rates were 68.8%, 82.2% and 76.9%, respectively. Conclusion Recognition rates are affected by visual, auditory and fusion stimuli, and marked individual differences exist.
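
    As a rough illustration of this kind of motor-imagery pipeline, the sketch below extracts log band power from channels C3 and C4 and cross-validates a linear discriminant classifier. The channel layout, the 8-30 Hz band, the sampling rate and the classifier choice are assumptions for illustration, not details taken from the record:

        # A hedged sketch of a band-power + LDA motor-imagery classifier.
        import numpy as np
        from scipy.signal import welch
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        def band_power(epochs, fs, lo=8.0, hi=30.0):
            """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels)."""
            freqs, psd = welch(epochs, fs=fs, nperseg=fs)      # PSD per trial/channel
            band = (freqs >= lo) & (freqs <= hi)
            return np.log(psd[..., band].mean(axis=-1))        # log band power

        def imagery_recognition_rate(epochs_c3c4, labels, fs=250):
            X = band_power(epochs_c3c4, fs)                    # features from C3, C4
            clf = LinearDiscriminantAnalysis()
            return cross_val_score(clf, X, labels, cv=5).mean()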

  17. Binaural technology for e.g. rendering auditory virtual environments

    DEFF Research Database (Denmark)

    Hammershøi, Dorte

    2008-01-01

    …helped mediate the understanding that if the transfer functions could be mastered, then important dimensions of the auditory percept could also be controlled. He understood early the potential of using HRTFs and numerical sound transmission analysis programs for rendering auditory virtual environments. Jens Blauert participated in many European cooperation projects exploring this field (and others), among them the SCATIS project addressing the auditory-tactile dimensions in the absence of visual information.

  18. Auditory Dysfunction and Its Communicative Impact in the Classroom.

    Science.gov (United States)

    Friedrich, Brad W.

    1982-01-01

    The origins and nature of auditory dysfunction in school age children and the role of the audiologist in the evaluation of the learning disabled child are reviewed. Specific structures and mechanisms responsible for the reception and perception of auditory signals are specified. (Author/SEW)

  19. Mode-locking neurodynamics predict human auditory brainstem responses to musical intervals.

    Science.gov (United States)

    Lerud, Karl D; Almonte, Felix V; Kim, Ji Chul; Large, Edward W

    2014-02-01

    The auditory nervous system is highly nonlinear. Some nonlinear responses arise through active processes in the cochlea, while others may arise in neural populations of the cochlear nucleus, inferior colliculus and higher auditory areas. In humans, auditory brainstem recordings reveal nonlinear population responses to combinations of pure tones, and to musical intervals composed of complex tones. Yet the biophysical origin of central auditory nonlinearities, their signal processing properties, and their relationship to auditory perception remain largely unknown. Both stimulus components and nonlinear resonances are well represented in auditory brainstem nuclei due to neural phase-locking. Recently mode-locking, a generalization of phase-locking that implies an intrinsically nonlinear processing of sound, has been observed in mammalian auditory brainstem nuclei. Here we show that a canonical model of mode-locked neural oscillation predicts the complex nonlinear population responses to musical intervals that have been observed in the human brainstem. The model makes predictions about auditory signal processing and perception that are different from traditional delay-based models, and may provide insight into the nature of auditory population responses. We anticipate that the application of dynamical systems analysis will provide the starting point for generic models of auditory population dynamics, and lead to a deeper understanding of nonlinear auditory signal processing possibly arising in excitatory-inhibitory networks of the central auditory nervous system. This approach has the potential to link neural dynamics with the perception of pitch, music, and speech, and lead to dynamical models of auditory system development.
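
    A minimal numerical sketch of a canonical, Hopf-type oscillator of the kind the abstract invokes, driven by a two-tone musical interval; all parameter values are illustrative assumptions rather than the authors' model:

        # A single canonical oscillator driven by a musical fifth (220 + 330 Hz).
        import numpy as np

        fs = 10000.0                      # sample rate (Hz)
        t = np.arange(0, 1.0, 1 / fs)
        stimulus = np.exp(2j * np.pi * 220 * t) + np.exp(2j * np.pi * 330 * t)

        alpha, beta = -0.1, -1.0          # damping and nonlinear saturation (assumed)
        omega = 2 * np.pi * 110.0         # natural frequency near the implied F0
        z = np.zeros_like(t, dtype=complex)
        for n in range(len(t) - 1):
            dz = z[n] * (alpha + 1j * omega + beta * abs(z[n]) ** 2) + stimulus[n]
            z[n + 1] = z[n] + dz / fs     # forward-Euler integration
        # Mode-locked responses appear as spectral peaks at nonlinear combination
        # frequencies (e.g., f2 - f1 = 110 Hz) in np.fft.fft(z).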

  20. Effective Connectivity Hierarchically Links Temporoparietal and Frontal Areas of the Auditory Dorsal Stream with the Motor Cortex Lip Area during Speech Perception

    Science.gov (United States)

    Murakami, Takenobu; Restle, Julia; Ziemann, Ulf

    2012-01-01

    A left-hemispheric cortico-cortical network involving areas of the temporoparietal junction (Tpj) and the posterior inferior frontal gyrus (pIFG) is thought to support sensorimotor integration of speech perception into articulatory motor activation, but how this network links with the lip area of the primary motor cortex (M1) during speech…

  1. Analysis of Auditory Perception Skills in Pre-lingual Deaf Children with Inner Ear Malformation after Cochlear Implantation

    Institute of Scientific and Technical Information of China (English)

    陈秀兰; 秦兆冰; 张玉玲; 付凯辉

    2015-01-01

    Objective The aim of this study was to evaluate auditory perception skills after cochlear implantation in children with malformed inner ears and to compare them with a group of congenitally deaf implantees with normal inner ears. Methods 21 children with inner ear malformation were retrospectively studied: 9 cases with large vestibular aqueduct syndrome (LVAS), 7 with Mondini abnormality, and 5 with common cavity. Their postoperative outcomes were compared with those of 21 cases with normal inner ear structure. All children were assessed one year after surgery using the soundfield test at frequencies from 0.5 to 4 kHz and a battery of auditory perception skills: natural environmental sound recognition, consonant recognition, vowel recognition, numeral recognition, tone recognition, monosyllable recognition, disyllable recognition, trisyllable recognition, short-sentence recognition, and selective hearing recognition. Results The soundfield and auditory perception results of the 9 children with LVAS and 6 with Mondini abnormality did not differ significantly from the control cases (P > 0.05). For 1 case with severe Mondini abnormality, postoperative soundfield thresholds were 50~75 dB HL and the mean hearing ability score was 70.5%, below the results of the control cases. For the 5 cases with common cavity, postoperative mean soundfield thresholds were 65.26 ± 5.13 dB HL and the hearing ability score was below that of the control cases (P < 0.05). Conclusion The effect of rehabilitation did not differ between children with LVAS and cases with normal inner structure after cochlear implantation, but was poorer in children with severe Mondini abnormality and common cavity. It is necessary to evaluate the degree of malformation of the inner ear structure before …

  2. Auditory Imagery: Empirical Findings

    Science.gov (United States)

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  3. Tinnitus intensity dependent gamma oscillations of the contralateral auditory cortex.

    Directory of Open Access Journals (Sweden)

    Elsa van der Loo

    Full Text Available BACKGROUND: Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores. METHODS AND FINDINGS: In unilateral tinnitus patients (N = 15; 10 right, 5 left), source analysis of resting state electroencephalographic gamma band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p < 0.05). CONCLUSION: Auditory phantom percepts thus show similar sound level dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models, these results suggest that tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but might not, by itself, be responsible for tinnitus perception.
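
    The reported analysis pattern, band-limited power per patient correlated with loudness ratings, can be sketched as follows; the band edges, sampling rate and variable names are assumptions for illustration, not the study's settings:

        # Gamma-band power per patient, correlated with VAS loudness scores.
        import numpy as np
        from scipy.signal import welch
        from scipy.stats import pearsonr

        def gamma_power(source_ts, fs, lo=30.0, hi=45.0):
            """source_ts: 1-D source-localized auditory-cortex time series."""
            freqs, psd = welch(source_ts, fs=fs, nperseg=2 * int(fs))
            band = (freqs >= lo) & (freqs <= hi)
            return psd[band].mean()

        def loudness_correlation(source_ts_per_patient, vas_scores, fs=500):
            gamma = [gamma_power(ts, fs) for ts in source_ts_per_patient]
            return pearsonr(gamma, vas_scores)   # (r, p); the study reports max r = 0.73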

  4. Voice disorders in teachers: self-report, auditory-perceptive assessment of voice and vocal fold assessment

    Directory of Open Access Journals (Sweden)

    Maria Fabiana Bonfim de Lima-Silva

    2012-12-01

    Full Text Available PURPOSE: To analyze voice disorders in teachers and the agreement between self-report, auditory-perceptive assessment of voice quality, and vocal fold assessment. METHODS: The subjects of this cross-sectional study were 60 teachers from two public elementary and middle/high schools. After answering a self-perception questionnaire (Voice Production Conditions of the Teacher, CPV-P) used to characterize the sample and to collect data on self-reported voice disorders, participants provided a speech sample and underwent nasofibrolaryngoscopic examination. Three speech-language pathologist judges classified the voices using the GRBASI scale, and an otorhinolaryngologist described the vocal fold alterations found. Data were analyzed descriptively and then submitted to association tests. RESULTS: In the questionnaire, 63.3% of the participants reported having, or having had, a voice disorder. Overall, 43.3% were diagnosed with a voice alteration and 46.7% with a vocal fold alteration. There was no association between self-report and voice assessment, nor between self-report and vocal fold assessment, with low agreement among the three evaluations. However, there was an association between voice assessment and vocal fold assessment, with intermediate agreement between them. CONCLUSION: Voice disorders are self-reported more often than they are confirmed by perceptive assessment of voice and vocal folds. The intermediate agreement between the two assessments indicates that at least one of them should be performed when screening teachers.

  5. Statistical representation of sound textures in the impaired auditory system

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    2015-01-01

    Many challenges exist when it comes to understanding and compensating for hearing impairment. Traditional methods, such as pure tone audiometry and speech intelligibility tests, offer insight into the deficiencies of a hearing-impaired listener, but can only partially reveal the mechanisms that underlie the hearing loss. An alternative approach is to investigate the statistical representation of sounds for hearing-impaired listeners along the auditory pathway. Using models of the auditory periphery and sound synthesis, we aimed to probe hearing-impaired perception of sound textures – temporally homogeneous sounds such as rain, birds, or fire. It has been suggested that sound texture perception is mediated by time-averaged statistics measured from early auditory representations (McDermott et al., 2013). Changes to early auditory processing, such as broader "peripheral" filters or reduced compression …
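
    In the spirit of the time-averaged statistics mentioned above, the following sketch computes simple subband-envelope moments; Butterworth bands stand in here for a cochlear filterbank, and the band edges are illustrative assumptions:

        # Time-averaged texture statistics from subband envelopes.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert
        from scipy.stats import skew

        def texture_statistics(x, fs, edges=(80, 200, 500, 1250, 3150, 8000)):
            stats = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
                env = np.abs(hilbert(sosfiltfilt(sos, x)))        # subband envelope
                stats.append((env.mean(), env.var(), skew(env)))  # time-averaged moments
            # Broader filters (wider edges) would crudely mimic impaired tuning.
            return np.array(stats)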

  6. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    Science.gov (United States)

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".

  7. Neurodynamics, tonality, and the auditory brainstem response.

    Science.gov (United States)

    Large, Edward W; Almonte, Felix V

    2012-04-01

    Tonal relationships are foundational in music, providing the basis upon which musical structures, such as melodies, are constructed and perceived. A recent dynamic theory of musical tonality predicts that networks of auditory neurons resonate nonlinearly to musical stimuli. Nonlinear resonance leads to stability and attraction relationships among neural frequencies, and these neural dynamics give rise to the perception of relationships among tones that we collectively refer to as tonal cognition. Because this model describes the dynamics of neural populations, it makes specific predictions about human auditory neurophysiology. Here, we show how predictions about the auditory brainstem response (ABR) are derived from the model. To illustrate, we derive a prediction about population responses to musical intervals that has been observed in the human brainstem. Our modeled ABR shows qualitative agreement with important features of the human ABR. This provides a source of evidence that fundamental principles of auditory neurodynamics might underlie the perception of tonal relationships, and forces reevaluation of the role of learning and enculturation in tonal cognition.

  8. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Full Text Available Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p < 0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
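
    One plausible way to quantify such a compensatory response, sketched under the assumption that a produced-F1 track and the perturbation onset time are available (the response window is an illustrative choice, not the study's):

        # Compensation magnitude: post-perturbation F1 deviation from baseline.
        import numpy as np

        def compensation_magnitude(f1_track, t, pert_onset, win=(0.15, 0.35)):
            """f1_track: produced F1 (Hz) over time t (s); perturbation at pert_onset (s)."""
            baseline = f1_track[t < pert_onset].mean()
            window = (t >= pert_onset + win[0]) & (t <= pert_onset + win[1])
            # Negative of the applied shift direction if the response is compensatory.
            return f1_track[window].mean() - baseline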

  9. Music lessons improve auditory perceptual and cognitive performance in deaf children

    Directory of Open Access Journals (Sweden)

    Françoise ROCHETTE

    2014-07-01

    Full Text Available Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development, and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music is a relevant tool for deaf children's rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and language-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5 to 4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, and auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically trained children showed better performance in the auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to the development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  10. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Full Text Available Abstract Background: Auditory sustained responses have recently been suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity, but the relationship between the amplitudes of auditory evoked sustained responses and the repetition rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results: Sustained fields were elicited by white noise and by repeating frozen noise stimuli with repetition rates of 5, 10, 50, 200 and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to the 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions: The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained over a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that, in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low frequency temporal modulations in the speech envelope.
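
    The stimulus construction described, repeating "frozen" noise, can be sketched directly: one noise snippet of length 1/rate seconds is tiled to the full duration, making the waveform periodic at the repetition rate (snippet length is rounded to whole samples; duration and sample rate below are illustrative assumptions):

        # Repeating frozen noise: one fixed noise period tiled to full duration.
        import numpy as np

        def repeating_frozen_noise(rate_hz, duration=2.0, fs=44100, seed=0):
            rng = np.random.default_rng(seed)
            snippet = rng.standard_normal(int(fs / rate_hz))   # one "frozen" period
            reps = int(np.ceil(duration * fs / len(snippet)))
            return np.tile(snippet, reps)[: int(duration * fs)]

        # The five periodic conditions used in the study:
        stimuli = {r: repeating_frozen_noise(r) for r in (5, 10, 50, 200, 500)}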

  11. Missing a trick: Auditory load modulates conscious awareness in audition.

    Science.gov (United States)

    Fairnie, Jake; Moore, Brian C J; Remington, Anna

    2016-07-01

    In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of two possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing.

  12. Absence of auditory 'global interference' in autism.

    Science.gov (United States)

    Foxton, Jessica M; Stewart, Mary E; Barnard, Louise; Rodgers, Jacqui; Young, Allan H; O'Brien, Gregory; Griffiths, Timothy D

    2003-12-01

    There has been considerable recent interest in the cognitive style of individuals with Autism Spectrum Disorder (ASD). One theory, that of weak central coherence, concerns an inability to combine stimulus details into a coherent whole. Here we test this theory in the case of sound patterns, using a new definition of the details (local structure) and the coherent whole (global structure). Thirteen individuals with a diagnosis of autism or Asperger's syndrome and 15 control participants were administered auditory tests, where they were required to match local pitch direction changes between two auditory sequences. When the other local features of the sequence pairs were altered (the actual pitches and relative time points of pitch direction change), the control participants obtained lower scores compared with when these details were left unchanged. This can be attributed to interference from the global structure, defined as the combination of the local auditory details. In contrast, the participants with ASD did not obtain lower scores in the presence of such mismatches. This was attributed to the absence of interference from an auditory coherent whole. The results are consistent with the presence of abnormal interactions between local and global auditory perception in ASD.

  13. Interhemispheric auditory connectivity: structure and function related to auditory verbal hallucinations.

    Science.gov (United States)

    Steinmann, Saskia; Leicht, Gregor; Mulert, Christoph

    2014-01-01

    Auditory verbal hallucinations (AVH) are one of the most common and most distressing symptoms of schizophrenia. Despite fundamental research, the underlying neurocognitive and neurobiological mechanisms are still a matter of debate. Previous studies suggested that "hearing voices" is associated with a number of factors including local deficits in the left auditory cortex and a disturbed connectivity of frontal and temporoparietal language-related areas. In addition, it is hypothesized that the interhemispheric pathways connecting the right and left auditory cortices might be involved in the pathogenesis of AVH. Findings based on Diffusion-Tensor-Imaging (DTI) measurements revealed a remarkable interindividual variability in the size and shape of the interhemispheric auditory pathways. Interestingly, schizophrenia patients suffering from AVH exhibited higher fractional anisotropy (FA) in the interhemispheric fibers than non-hallucinating patients; higher FA-values thus indicate an increased severity of AVH. Moreover, a dichotic listening (DL) task showed that the interindividual variability in the interhemispheric auditory pathways was reflected in the behavioral outcome: stronger pathways supported better information transfer and consequently improved speech perception. This finding indicates a specific structure-function relationship, which seems to be interindividually variable. This review focuses on recent findings concerning the structure-function relationship of the interhemispheric pathways in controls, hallucinating and non-hallucinating schizophrenia patients, and concludes that changes in the structural and functional connectivity of auditory areas are involved in the pathophysiology of AVH.

  14. Contextual modulation of primary visual cortex by auditory signals

    Science.gov (United States)

    Paton, A. T.

    2017-01-01

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044015

  15. Contextual modulation of primary visual cortex by auditory signals.

    Science.gov (United States)

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'.

  16. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki Ito

    2014-11-01

    Full Text Available Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we further addressed the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech perceptual processing depends on the specific temporal order of sensory inputs in speech production.

  17. Auditory function in individuals within Leber's hereditary optic neuropathy pedigrees.

    Science.gov (United States)

    Rance, Gary; Kearns, Lisa S; Tan, Johanna; Gravina, Anthony; Rosenfeld, Lisa; Henley, Lauren; Carew, Peter; Graydon, Kelley; O'Hare, Fleur; Mackey, David A

    2012-03-01

    The aims of this study are to investigate whether auditory dysfunction is part of the spectrum of neurological abnormalities associated with Leber's hereditary optic neuropathy (LHON) and to determine the perceptual consequences of auditory neuropathy (AN) in affected listeners. Forty-eight subjects confirmed by genetic testing as having one of four mitochondrial mutations associated with LHON (mt11778, mtDNA14484, mtDNA14482 and mtDNA3460) participated. Thirty-two of these had lost vision, and 16 were asymptomatic at the point of data collection. While the majority of individuals showed normal sound detection, >25% (of both symptomatic and asymptomatic participants) showed electrophysiological evidence of AN with either absent or severely delayed auditory brainstem potentials. Abnormalities were observed for each of the mutations, but subjects with the mtDNA11778 type were the most affected. Auditory perception was also abnormal in both symptomatic and asymptomatic subjects, with >20% of cases showing impaired detection of auditory temporal (timing) cues and >30% showing abnormal speech perception both in quiet and in the presence of background noise. The findings of this study indicate that a relatively high proportion of individuals with the LHON genetic profile may suffer functional hearing difficulties due to neural abnormality in the central auditory pathways.

  18. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  19. Tuning up the developing auditory CNS.

    Science.gov (United States)

    Sanes, Dan H; Bao, Shaowen

    2009-04-01

    Although the auditory system has limited information processing resources, the acoustic environment is infinitely variable. To properly encode the natural environment, the developing central auditory system becomes somewhat specialized through experience-dependent adaptive mechanisms that operate during a sensitive time window. Recent studies have demonstrated that cellular and synaptic plasticity occurs throughout the central auditory pathway. Acoustic-rearing experiments can lead to an over-representation of the exposed sound frequency, and this is associated with specific changes in frequency discrimination. These forms of cellular plasticity are manifest in brain regions, such as midbrain and cortex, which interact through feed-forward and feedback pathways. Hearing loss leads to a profound re-weighting of excitatory and inhibitory synaptic gain throughout the auditory CNS, and this is associated with an over-excitability that is observed in vivo. Further behavioral and computational analyses may provide insights into how these cellular and systems plasticity effects underlie the development of cognitive functions such as speech perception.

  20. Auditory and visual scene analysis: an overview

    Science.gov (United States)

    2017-01-01

    We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044011

  1. Modality specific neural correlates of auditory and somatic hallucinations

    Science.gov (United States)

    Shergill, S; Cameron, L; Brammer, M; Williams, S; Murray, R; McGuire, P

    2001-01-01

    Somatic hallucinations occur in schizophrenia and other psychotic disorders, although auditory hallucinations are more common. Although the neural correlates of auditory hallucinations have been described in several neuroimaging studies, little is known of the pathophysiology of somatic hallucinations. Functional magnetic resonance imaging (fMRI) was used to compare the distribution of brain activity during somatic and auditory verbal hallucinations, occurring at different times in a 36 year old man with schizophrenia. Somatic hallucinations were associated with activation in the primary somatosensory and posterior parietal cortex, areas that normally mediate tactile perception. Auditory hallucinations were associated with activation in the middle and superior temporal cortex, areas involved in processing external speech. Hallucinations in a given modality seem to involve areas that normally process sensory information in that modality.

 PMID:11606687

  2. Auditory Perception in Open Field: Distance Estimation

    Science.gov (United States)

    2013-07-01

    Proprietary software was used to control the experiments, present sounds, and collect listener responses. [Figure 4: block diagram of the …] … were crickets, katydids, cicadas, bees, beetles, and grasshoppers. The average weather and noise conditions observed during the study are listed in a table.

  3. Auditory object formation affects modulation perception

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2005-01-01

    Most sounds in our environment, including speech, music, animal vocalizations and environmental noise, have fluctuations in intensity that are often highly correlated across different frequency regions. Because across-frequency modulation is so common, the ability to process such information …

  4. Subliminal Speech Perception and Auditory Streaming

    Science.gov (United States)

    Dupoux, Emmanuel; de Gardelle, Vincent; Kouider, Sid

    2008-01-01

    Current theories of consciousness assume a qualitative dissociation between conscious and unconscious processing: while subliminal stimuli only elicit a transient activity, supraliminal stimuli have long-lasting influences. Nevertheless, the existence of this qualitative distinction remains controversial, as past studies confounded awareness and…

  5. Translation and adaptation of functional auditory performance indicators (FAPI)

    Directory of Open Access Journals (Sweden)

    Karina Ferreira

    2011-12-01

    Full Text Available Work with deaf children has gained new attention since the expectation and goal of therapy have expanded to language development and subsequent language learning. Many clinical tests were developed for the evaluation of speech sound perception in young children, in response to the need for accurate assessment of the hearing skills that develop with the use of individual hearing aids or cochlear implants. These tests also allow the evaluation of the rehabilitation program. However, few of these tests are available in Portuguese. Evaluation with the Functional Auditory Performance Indicators (FAPI) generates a child's functional auditory skills profile, which lists auditory skills in an integrated and hierarchical order. It has seven hierarchical categories: sound awareness, meaningful sound, auditory feedback, sound source localization, auditory discrimination, short-term auditory memory, and linguistic auditory processing. FAPI evaluation allows the therapist to map the child's hearing performance profile, determine targets for increasing hearing abilities, and develop an effective therapeutic plan. Objective: Since the FAPI is an American test, the inventory was adapted for application in the Brazilian population. Material and Methods: The translation was done following the steps of translation and back-translation, and reproducibility was evaluated. Four translated versions (two originals and two back-translated) were compared, and revisions were made to ensure language adaptation and grammatical and idiomatic equivalence. Results: The inventory was duly translated and adapted. Conclusion: Further studies on the application of the translated FAPI are necessary to make the test practicable for Brazilian clinical use.

  6. Auditory Responses of Infants

    Science.gov (United States)

    Watrous, Betty Springer; And Others

    1975-01-01

    Forty infants, 3- to 12-months-old, participated in a study designed to differentiate the auditory response characteristics of normally developing infants in the age ranges 3 - 5 months, 6 - 8 months, and 9 - 12 months. (Author)

  7. Tuned with a tune: Talker normalization via general auditory processes

    Directory of Open Access Journals (Sweden)

    Erika J C Laing

    2012-06-01

    Full Text Available Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker's speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents, such as the long-term average spectrum (LTAS) of a talker's speech, similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences' LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by nonspeech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results, suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.
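
    A hedged sketch of one way to compute an LTAS, via Welch averaging of the power spectral density; the parameters are illustrative assumptions, not the study's settings:

        # Long-term average spectrum (LTAS) in dB via Welch averaging.
        import numpy as np
        from scipy.signal import welch

        def ltas_db(x, fs, nperseg=1024):
            """x: speech waveform; returns (freqs, LTAS in dB)."""
            freqs, psd = welch(x, fs=fs, nperseg=nperseg)   # averaged over the signal
            return freqs, 10 * np.log10(psd + 1e-12)

        # A tone sequence is "LTAS-matched" to a sentence if its ltas_db approximates
        # the sentence's ltas_db in the frequency region of the target phonetic cue.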

  8. Influence of Syllable Structure on L2 Auditory Word Learning

    Science.gov (United States)

    Hamada, Megumi; Goya, Hideki

    2015-01-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a…

  9. Discovering Structure in Auditory Input: Evidence from Williams Syndrome

    Science.gov (United States)

    Elsabbagh, Mayada; Cohen, Henri; Karmiloff-Smith, Annette

    2010-01-01

    We examined auditory perception in Williams syndrome by investigating strategies used in organizing sound patterns into coherent units. In Experiment 1, we investigated the streaming of sound sequences into perceptual units, on the basis of pitch cues, in a group of children and adults with Williams syndrome compared to typical controls. We showed…

  10. Abnormal connectivity between attentional, language and auditory networks in schizophrenia

    NARCIS (Netherlands)

    Liemburg, Edith J.; Vercammen, Ans; Ter Horst, Gert J.; Curcic-Blake, Branislava; Knegtering, Henderikus; Aleman, Andre

    2012-01-01

    Brain circuits involved in language processing have been suggested to be compromised in patients with schizophrenia. This does not only include regions subserving language production and perception, but also auditory processing and attention. We investigated resting state network connectivity of auditory …

  11. Deafness in cochlear and auditory nerve disorders.

    Science.gov (United States)

    Hopkins, Kathryn

    2015-01-01

    Sensorineural hearing loss is the most common type of hearing impairment worldwide. It arises as a consequence of damage to the cochlea or auditory nerve, and several structures are often affected simultaneously. There are many causes, including genetic mutations affecting the structures of the inner ear, and environmental insults such as noise, ototoxic substances, and hypoxia. The prevalence increases dramatically with age. Clinical diagnosis is most commonly accomplished by measuring detection thresholds and comparing these to normative values to determine the degree of hearing loss. In addition to causing insensitivity to weak sounds, sensorineural hearing loss has a number of adverse perceptual consequences, including loudness recruitment, poor perception of pitch and auditory space, and difficulty understanding speech, particularly in the presence of background noise. The condition is usually incurable; treatment focuses on restoring the audibility of sounds made inaudible by hearing loss using either hearing aids or cochlear implants.

  12. Neurophysiological mechanisms involved in auditory perceptual organization

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-09-01

    Full Text Available In our complex acoustic environment, we are confronted with a mixture of sounds produced by several simultaneous sources. However, we rarely perceive these sounds as incomprehensible noise. Our brain uses perceptual organization processes to follow the emission of each sound source independently over time. While the acoustic properties exploited in these processes are well established, the neurophysiological mechanisms involved in auditory scene analysis have raised interest only recently. Here, we review the studies investigating these mechanisms using electrophysiological recordings from the cochlear nucleus to the auditory cortex, in animals and humans. Their findings reveal that basic mechanisms such as frequency selectivity, forward suppression and multi-second habituation shape the automatic brain responses to sounds in a way that can account for several important characteristics of the perceptual organization of both simultaneous and successive sounds. One challenging question remains unresolved: how are the resulting activity patterns integrated to yield the corresponding conscious percepts?

  13. Vocal behavior during menstrual cycle: perceptual-auditory, acoustic and self-perception analysis

    Directory of Open Access Journals (Sweden)

    Luciane C. de Figueiredo

    2004-06-01

    Full Text Available Dysphonia is common during the premenstrual period, yet few women are aware of this voice variation within the menstrual cycle (Quinteiro, 1989). AIM: To verify whether women's vocal patterns differ between the ovulation period and the first day of the menstrual cycle, using perceptual-auditory analysis, spectrography and acoustic parameters, and, when such a difference is present, whether the women perceive it. STUDY DESIGN: Case-control. MATERIAL AND METHOD: The sample comprised 30 speech-language pathology students aged 18 to 25 years, non-smokers, with regular menstrual cycles and not using oral contraceptives. Their voices were recorded on the first day of menstruation and on the thirteenth day after menstruation (ovulation) for later comparison. RESULTS: During the menstrual period the voices were mildly to moderately hoarse-breathy and unstable, without voice breaks, with adequate pitch and loudness and balanced resonance. Harmonic definition was poorer, with more noise between the harmonics and reduced extension of the upper harmonics. We found a higher f0, increased jitter and shimmer, and a decreased harmonics-to-noise ratio (PHR). CONCLUSION: During the menstrual period there are changes in vocal quality, in the behavior of the harmonics and in the vocal parameters (f0, jitter, shimmer and PHR). Moreover, most of the speech-language pathology students did not perceive the voice variation during the menstrual cycle.

  14. Perception of Audio-Visual Speech Synchrony in Spanish-Speaking Children with and without Specific Language Impairment

    Science.gov (United States)

    Pons, Ferran; Andreu, Llorenc; Sanz-Torrent, Monica; Buil-Legaz, Lucia; Lewkowicz, David J.

    2013-01-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the…

  15. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 different stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds, compared to one neutral baseline control, elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and in the left dorsomedial prefrontal cortex; the latter is consistent with a frontal decision-making process common in identification tasks. The negatively valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support the conclusion that consonance is an important dimension of sound that is processed in a manner that aids

  16. Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation

    Directory of Open Access Journals (Sweden)

    Baumann Simon

    2007-02-01

    Full Text Available Abstract Background Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue, fifteen healthy participants took part in a "silent" sparse temporal event-related fMRI study. In the first phase (visual control habituation) they were presented with briefly flashing red visual stimuli. In the second phase (auditory control habituation) they heard brief telephone ringing. In the third phase (conditioning) we coincidently presented the visual stimulus (CS) paired with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no other task than attending to the stimuli and indicating the end of each trial by pressing a button. Results During unpaired visual presentations (preceding and following the paired presentation) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus, whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli. Conclusion These results demonstrate involvement of multisensory and auditory association areas in perception of unimodal visual stimulation, which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event. More importantly, we are able

  17. Development of early auditory and speech perception skills within one year after cochlear implantation in prelingually deaf children

    Institute of Scientific and Technical Information of China (English)

    傅莹; 陈源; 郗昕; 洪梦迪; 陈艾婷; 王倩; 黄丽娜

    2015-01-01

    Objective: To explore early auditory development and speech perception in prelingually deaf children after cochlear implantation, and to examine the feasibility of current Mandarin auditory and speech perception assessment tools for hearing-impaired children. Methods: Eighty-three children with severe-to-profound prelingual deafness participated in this study and were divided into four groups by age at implantation: group A (1-2 years, n=18), group B (2-3 years, n=30), group C (3-4 years, n=24) and group D (4-5 years, n=11). Trained audiologists assessed postoperative auditory development and speech perception using the Infant-Toddler/Meaningful Auditory Integration Scale (IT-MAIS/MAIS) questionnaires and closed-set speech perception tests, including the Mandarin Early Speech Perception test (MESP) and the Mandarin Pediatric Speech Intelligibility test (MPSI). Each group was assessed preoperatively and at 3, 6 and 12 months after switch-on; results were recorded and analyzed statistically with SPSS 19.0. Results: Early postoperative auditory development and speech perception improved steadily with rehabilitation time. IT-MAIS/MAIS scores of all groups showed the same increasing trend over time, with statistically significant differences between groups (F=5.743, P=0.007). Group C (implanted at 3-4 years) already scored higher than the other groups preoperatively and continued to score higher at each postoperative follow-up. Children with more preoperative hearing-aid experience scored higher on the IT-MAIS/MAIS than children with little or no hearing-aid use (F=4.947, P=0.000). MESP testing showed that within the first year after implantation speech detection was consistently ahead of speech recognition, but speech recognition improved markedly with cochlear implant use, and the subtests formed a clear hierarchy of increasing difficulty. For the MPSI, only about 40% of the children could complete the test in quiet even at 12 months after switch-on, and the proportion able to complete it dropped significantly as the signal-to-competition ratio decreased. Conclusion: Within one year after cochlear implantation, children develop measurable hearing and speech recognition abilities. The Mandarin versions of the IT-MAIS/MAIS, MESP and MPSI are feasible tools for assessing

  18. Role of the auditory system in speech production.

    Science.gov (United States)

    Guenther, Frank H; Hickok, Gregory

    2015-01-01

    This chapter reviews evidence regarding the role of auditory perception in shaping speech output. Evidence indicates that speech movements are planned to follow auditory trajectories. This in turn is followed by a description of the Directions Into Velocities of Articulators (DIVA) model, which provides a detailed account of the role of auditory feedback in speech motor development and control. A brief description of the higher-order brain areas involved in speech sequencing (including the pre-supplementary motor area and inferior frontal sulcus) is then provided, followed by a description of the Hierarchical State Feedback Control (HSFC) model, which posits internal error detection and correction processes that can detect and correct speech production errors prior to articulation. The chapter closes with a treatment of promising future directions of research into auditory-motor interactions in speech, including the use of intracranial recording techniques such as electrocorticography in humans, the investigation of the potential roles of various large-scale brain rhythms in speech perception and production, and the development of brain-computer interfaces that use auditory feedback to allow profoundly paralyzed users to learn to produce speech using a speech synthesizer.

  19. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
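
    The two internal representations in Ando's model lend themselves to a direct computational illustration. Below is a minimal sketch (not Ando's published implementation): a normalized autocorrelation of one channel, whose first major peak relates to temporal sensations such as pitch, and the maximum of the interaural cross-correlation (IACC) within ±1 ms, which relates to spatial sensations. The pure-tone demo, sample rate and lag ranges are illustrative assumptions.

```python
# Sketch of the two correlation analyses in Ando's model: temporal
# sensations from the autocorrelation of a channel, spatial sensations
# from the interaural cross-correlation (IACC).
import numpy as np

def normalized_autocorrelation(x, max_lag):
    """ACF whose first major peak relates to pitch/timbre sensations."""
    x = x - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1: len(x) + max_lag]
    return acf / acf[0]

def iacc(left, right, fs, max_delay_ms=1.0):
    """Maximum interaural cross-correlation within +/-1 ms (spatial cue)."""
    n = min(len(left), len(right))
    l = left[:n] - np.mean(left[:n])
    r = right[:n] - np.mean(right[:n])
    full = np.correlate(l, r, mode="full")
    lags = np.arange(-n + 1, n)
    sel = np.abs(lags) <= int(fs * max_delay_ms / 1000.0)
    return np.max(full[sel]) / np.sqrt(np.sum(l ** 2) * np.sum(r ** 2))

fs = 44100
t = np.arange(int(0.05 * fs)) / fs
left = np.sin(2 * np.pi * 440 * t)
right = np.sin(2 * np.pi * 440 * (t - 0.0003))   # 0.3 ms interaural delay

acf = normalized_autocorrelation(left, max_lag=fs // 50)
tau = np.argmax(acf[20:]) + 20                   # first major peak
print("pitch estimate:", fs / tau, "Hz")         # ~440 Hz
print("IACC:", iacc(left, right, fs))            # near 1: compact sound image
```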

  20. A unique cellular scaling rule in the avian auditory system.

    Science.gov (United States)

    Corfield, Jeremy R; Long, Brendan; Krilow, Justin M; Wylie, Douglas R; Iwaniuk, Andrew N

    2016-06-01

    Although it is clear that neural structures scale with body size, the mechanisms of this relationship are not well understood. Several recent studies have shown that the relationships between neuron numbers and brain (or brain region) size are not only different across mammalian orders, but also across auditory and visual regions within the same brains. Among birds, similar cellular scaling rules have not been examined in any detail. Here, we examine the scaling of auditory structures in birds and show that the scaling rules that have been established in the mammalian auditory pathway do not necessarily apply to birds. In galliforms, neuronal densities decrease with increasing brain size, suggesting that auditory brainstem structures increase in size faster than neurons are added; smaller brains have relatively more neurons than larger brains. The cellular scaling rules that apply to auditory brainstem structures in galliforms are, therefore, different from those found in the primate auditory pathway. It is likely that the factors driving this difference are associated with the anatomical specializations required for sound perception in birds, although there is a decoupling of neuron numbers in brain structures and hair cell numbers in the basilar papilla. This study provides significant insight into the allometric scaling of neural structures in birds and improves our understanding of the rules that govern neural scaling across vertebrates.

  1. Multidimensional perception of concert hall acoustics - Studies with the loudspeaker orchestra

    OpenAIRE

    Kuusinen, Antti

    2016-01-01

    Concert halls and the perception of acoustics have fascinated scientists for over a hundred years. The overall perception is known to consist of such factors as strength, reverberance, definition and envelopment, but the underlying structure of the perceptual space is not well understood. Moreover, an essential aspect of hearing, auditory distance perception, has not been studied closely in concert halls. This dissertation studies the multidimensional perception, preferences and auditory dist...

  2. Development of auditory localization accuracy and auditory spatial discrimination in children and adolescents.

    Science.gov (United States)

    Kühnle, S; Ludwig, A A; Meuret, S; Küttner, C; Witte, C; Scholbach, J; Fuchs, M; Rübsamen, R

    2013-01-01

    The present study investigated the development of two parameters of spatial acoustic perception in children and adolescents with normal hearing, aged 6-18 years. Auditory localization accuracy was quantified by means of a sound source identification task and auditory spatial discrimination acuity by measuring minimum audible angles (MAA). Both low- and high-frequency noise bursts were employed in the tests, thereby separately addressing auditory processing based on interaural time and intensity differences. Setup consisted of 47 loudspeakers mounted in the frontal azimuthal hemifield, ranging from 90° left to 90° right (-90°, +90°). Target signals were presented from 8 loudspeaker positions in the left and right hemifields (±4°, ±30°, ±60° and ±90°). Localization accuracy and spatial discrimination acuity showed different developmental courses. Localization accuracy remained stable from the age of 6 onwards. In contrast, MAA thresholds and interindividual variability of spatial discrimination decreased significantly with increasing age. Across all age groups, localization was most accurate and MAA thresholds were lower for frontal than for lateral sound sources, and for low-frequency compared to high-frequency noise bursts. The study also shows better performance in spatial hearing based on interaural time differences rather than on intensity differences throughout development. These findings confirm that specific aspects of central auditory processing show continuous development during childhood up to adolescence.
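
    The low- and high-frequency noise bursts separately target processing based on interaural time and intensity differences; the interaural time differences (ITDs) available at the tested azimuths can be approximated with the classic Woodworth spherical-head formula. A minimal sketch, assuming a head radius of 8.75 cm and a speed of sound of 343 m/s (both assumptions):

```python
# Woodworth spherical-head approximation of the interaural time
# difference (ITD) at the loudspeaker azimuths used in the study.
import numpy as np

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    theta = np.radians(azimuth_deg)
    return (head_radius / c) * (theta + np.sin(theta))

for az in (4, 30, 60, 90):
    print(f"{az:3d} deg -> ITD ~ {itd_woodworth(az) * 1e6:6.1f} us")
# 90 deg yields roughly 660 us, the classic maximum ITD for a human head.
```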

  3. Correlates of perceptual awareness in human primary auditory cortex revealed by an informational masking experiment.

    Science.gov (United States)

    Wiegand, Katrin; Gutschalk, Alexander

    2012-05-15

    The presence of an auditory event may remain undetected in crowded environments, even when it is well above the sensory threshold. This effect, commonly known as informational masking, allows for isolating neural activity related to perceptual awareness, by comparing repetitions of the same physical stimulus where the target is either detected or not. Evidence from magnetoencephalography (MEG) suggests that auditory-cortex activity in the latency range 50-250 ms is closely coupled with perceptual awareness. Here, BOLD fMRI and MEG were combined to investigate at which stage in the auditory cortex neural correlates of conscious auditory perception can be observed. Participants were asked to indicate the perception of a regularly repeating target tone, embedded within a random multi-tone masking background. Results revealed widespread activation within the auditory cortex for detected target tones, which was delayed but otherwise similar to the activation of an unmasked control stimulus. The contrast of detected versus undetected targets revealed activity confined to medial Heschl's gyrus, where the primary auditory cortex is located. These results suggest that activity related to conscious perception involves the primary auditory cortex and is not restricted to activity in secondary areas.

  4. Audiovisual integration in speech perception: a multi-stage process

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    Integration of speech signals from ear and eye is a well-known feature of speech perception. This is evidenced by the McGurk illusion in which visual speech alters auditory speech perception and by the advantage observed in auditory speech detection when a visual signal is present. Here we investigate whether the integration of auditory and visual speech observed in these two audiovisual integration effects are specific traits of speech perception. We further ask whether audiovisual integration is undertaken in a single processing stage or multiple processing stages.

  5. Inversion of Auditory Spectrograms, Traditional Spectrograms, and Other Envelope Representations

    DEFF Research Database (Denmark)

    Decorsière, Remi Julien Blaise; Søndergaard, Peter Lempel; MacDonald, Ewen

    2015-01-01

    Implementations of this framework are presented for auditory spectrograms, where the filterbank is based on the behavior of the basilar membrane and envelope extraction is modeled on the response of inner hair cells. One implementation is direct while the other is a two-stage approach that is computationally simpler. While both can accurately invert an auditory spectrogram, the two-stage approach performs better on time-domain metrics. The same framework is applied to traditional spectrograms based on the magnitude of the short-time Fourier transform. Inspired by human perception of loudness, a modification
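
    For the traditional magnitude-STFT case mentioned above, a standard iterative inversion baseline is the Griffin-Lim algorithm; the sketch below uses it purely as an illustration and is not the paper's own optimization framework. Window length, iteration count and the test signal are assumptions.

```python
# Griffin-Lim-style iterative phase recovery, a common baseline for
# inverting a magnitude-only STFT spectrogram back to a waveform.
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, fs=16000, nperseg=512, n_iter=100):
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))   # random start
    for _ in range(n_iter):
        _, x = istft(mag * phase, fs=fs, nperseg=nperseg)
        _, _, Z = stft(x, fs=fs, nperseg=nperseg)
        if Z.shape[1] < mag.shape[1]:                    # keep shapes aligned
            Z = np.pad(Z, ((0, 0), (0, mag.shape[1] - Z.shape[1])))
        phase = np.exp(1j * np.angle(Z[:, :mag.shape[1]]))
    _, x = istft(mag * phase, fs=fs, nperseg=nperseg)
    return x

fs = 16000
t = np.arange(fs) / fs
_, _, Z = stft(np.sin(2 * np.pi * 440 * t), fs=fs, nperseg=512)
reconstruction = griffin_lim(np.abs(Z), fs=fs)           # time-domain signal
```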

  6. Quadri-stability of a spatially ambiguous auditory illusion

    Directory of Open Access Journals (Sweden)

    Constance May Bainbridge

    2015-01-01

    Full Text Available In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound’s motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual

  7. Effects of aging on peripheral and central auditory processing in rats.

    Science.gov (United States)

    Costa, Margarida; Lepore, Franco; Prévost, François; Guillemot, Jean-Paul

    2016-08-01

    Hearing loss is a hallmark sign in the elderly population. Decline in auditory perception provokes deficits in the ability to localize sound sources and reduces speech perception, particularly in noise. In addition to a loss of peripheral hearing sensitivity, changes in more complex central structures have also been demonstrated. In this context, the present study examines the auditory directional maps in the deep layers of the superior colliculus of the rat. Anesthetized Sprague-Dawley adult (10 months) and aged (22 months) rats underwent distortion product otoacoustic emission (DPOAE) testing to assess cochlear function. Auditory brainstem responses (ABRs) were then assessed, followed by extracellular single-unit recordings to determine age-related effects on central auditory functions. DPOAE amplitude levels were decreased in aged rats, although emissions were still present between 3.0 and 24.0 kHz. ABR thresholds in aged rats were significantly elevated at an early (cochlear nucleus - wave II) stage in the auditory brainstem. In the superior colliculus, thresholds were increased and the tuning widths of the directional receptive fields were significantly wider. Moreover, no systematic directional spatial arrangement was present among the neurons of the aged rats, implying that the topographical organization of the auditory directional map was abolished. These results suggest that the deterioration of the auditory directional spatial map can, to some extent, be attributable to age-related dysfunction at more central, perceptual stages of auditory processing.

  8. Visual speech gestures modulate efferent auditory system.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Wong, Wing Yiu Stephanie; Sharma, Dinaay; van Lieshout, Pascal

    2015-03-01

    Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of vowels /a/ and /u/ and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results are based on 7 individuals whose data met strict recording criteria and indicated a significant difference in TEOAE suppression between observing speech gestures relative to the non-speech gestures, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.

  9. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 ‘training’ steps (baseline ±25%) using either an auditory target (a 950 Hz tone with a bell-shaped gain envelope) or a visual target (a spot marked on the floor) and were then asked to replicate the target step (length or duration, corresponding to training) with all feedback removed. Mean percentage error was 11.5% (SD ±7.0%) for visual cues and 12.9% (SD ±11.8%) for auditory cues. Visual cues elicited a high degree of accuracy both in training and in the follow-up un-cued task; despite the novelty of the auditory cues, subjects' mean accuracy approached that for visual cues, and these initial results suggest that a limited amount of practice using auditory cues can improve performance.
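
    The auditory temporal cue described above (a 950 Hz tone with a bell-shaped gain envelope spanning the target step duration) is straightforward to synthesize. A minimal sketch, with the Gaussian envelope width (duration/6) as an assumption:

```python
# Sketch of the auditory temporal cue: a 950 Hz tone whose bell-shaped
# gain envelope lasts the target step duration.
import numpy as np

def step_duration_cue(duration_s, fs=44100, f=950.0):
    t = np.arange(int(duration_s * fs)) / fs
    carrier = np.sin(2 * np.pi * f * t)
    # Gaussian ("bell-shaped") gain envelope centered on the cue;
    # sigma = duration/6 is an illustrative assumption.
    envelope = np.exp(-0.5 * ((t - duration_s / 2) / (duration_s / 6)) ** 2)
    return carrier * envelope

cue = step_duration_cue(0.6)   # e.g., a 600 ms target step duration
```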

  10. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new si

  11. Virtual Auditory Displays

    Science.gov (United States)

    2000-01-01

    Keywords: timbre, intensity, distance, room modeling, radio communication. Virtual Environments Handbook, Chapter 4: Virtual Auditory Displays, Russell D... ...musical note “A” as a pure sinusoid, there will be 440 condensations and rarefactions per second. The distance between two adjacent condensations or... ...and complexity are pitch, loudness, and timbre respectively. This distinction between physical and perceptual measures of sound properties is an

  12. The neglected neglect: auditory neglect.

    Science.gov (United States)

    Gokhale, Sankalp; Lahoti, Sourabh; Caplan, Louis R

    2013-08-01

    Whereas visual and somatosensory forms of neglect are commonly recognized by clinicians, auditory neglect is often not assessed and therefore neglected. The auditory cortical processing system can be functionally classified into 2 distinct pathways. These 2 distinct functional pathways deal with recognition of sound ("what" pathway) and the directional attributes of the sound ("where" pathway). Lesions of higher auditory pathways produce distinct clinical features. Clinical bedside evaluation of auditory neglect is often difficult because of coexisting neurological deficits and the binaural nature of auditory inputs. In addition, auditory neglect and auditory extinction may show varying degrees of overlap, which makes the assessment even harder. Shielding one ear from the other as well as separating the ear from space is therefore critical for accurate assessment of auditory neglect. This can be achieved by use of specialized auditory tests (dichotic tasks and sound localization tests) for accurate interpretation of deficits. Herein, we have reviewed auditory neglect with an emphasis on the functional anatomy, clinical evaluation, and basic principles of specialized auditory tests.

  13. Impairments in musical abilities reflected in the auditory brainstem: evidence from congenital amusia.

    Science.gov (United States)

    Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina

    2015-07-01

    Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general.
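
    The spectral-amplitude measure described above can be illustrated with a short sketch: take the FFT of an averaged brainstem response and read off the amplitude at the fundamental and its harmonics, the components reported as reduced in amusics. The 100 Hz fundamental, window length and synthetic response below are assumptions.

```python
# Spectral amplitudes at the fundamental and harmonics of an averaged
# auditory brainstem response (ABR).
import numpy as np

def harmonic_amplitudes(abr, fs, f0=100.0, n_harmonics=10):
    spectrum = np.abs(np.fft.rfft(abr)) / len(abr)
    freqs = np.fft.rfftfreq(len(abr), d=1.0 / fs)
    amps = []
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * f0))   # nearest FFT bin
        amps.append(spectrum[idx])
    return np.array(amps)

fs = 20000
t = np.arange(int(0.05 * fs)) / fs                # 50 ms response window
# Synthetic "response" with a 100 Hz fundamental and decaying harmonics:
abr = sum(np.sin(2 * np.pi * 100 * k * t) / k for k in range(1, 6))
print(harmonic_amplitudes(abr, fs, f0=100.0, n_harmonics=5))
```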

  14. Robust speech features representation based on computational auditory model

    Institute of Scientific and Technical Information of China (English)

    LU Xugang; JIA Chuan; DANG Jianwu

    2004-01-01

    A speech signal processing and feature extraction method based on a computational auditory model is proposed. The computational model is based on psychological and physiological knowledge and digital signal processing methods. For each stage of the hearing perception system there is a corresponding computational model to simulate its function, and features are extracted at each stage at different levels of representation. Further processing of the primary auditory spectrum, based on lateral inhibition, is proposed to extract still more robust speech features. All these features can be regarded as internal representations of the speech stimulus in the hearing system. Robust speech recognition experiments were conducted to test the robustness of the features. Results show that the representations based on the proposed computational auditory model are robust representations for speech signals.
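
    The lateral-inhibition step can be illustrated in a few lines: each channel of the primary auditory spectrum is enhanced relative to its neighbors by convolution with a center-surround kernel, sharpening spectral contrast. The kernel weights below are illustrative assumptions, not the paper's parameters.

```python
# Minimal lateral-inhibition sharpening of a "primary auditory spectrum":
# each channel is excited by itself and suppressed by its neighbors.
import numpy as np

def lateral_inhibition(spectrum, inhibition=0.5):
    # Center-surround kernel; weights sum to 1 so flat inputs pass through.
    kernel = np.array([-inhibition, 1.0 + 2 * inhibition, -inhibition])
    out = np.convolve(spectrum, kernel, mode="same")
    return np.maximum(out, 0.0)               # half-wave rectification

channels = np.exp(-0.5 * ((np.arange(64) - 32) / 6.0) ** 2)  # broad peak
sharpened = lateral_inhibition(channels)      # narrower, higher-contrast peak
```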

  15. An auditory feature detection circuit for sound pattern recognition.

    Science.gov (United States)

    Schöneich, Stefan; Kostarakos, Konstantinos; Hedwig, Berthold

    2015-09-01

    From human language to birdsong and the chirps of insects, acoustic communication is based on amplitude and frequency modulation of sound signals. Whereas frequency processing starts at the level of the hearing organs, temporal features of the sound amplitude such as rhythms or pulse rates require processing by central auditory neurons. Besides several theoretical concepts, brain circuits that detect temporal features of a sound signal are poorly understood. We focused on acoustically communicating field crickets and show how five neurons in the brain of females form an auditory feature detector circuit for the pulse pattern of the male calling song. The processing is based on a coincidence detector mechanism that selectively responds when a direct neural response and an intrinsically delayed response to the sound pulses coincide. This circuit provides the basis for auditory mate recognition in field crickets and reveals a principal mechanism of sensory processing underlying the perception of temporal patterns.
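
    The coincidence-detector mechanism described above can be caricatured in a few lines: a direct response is multiplied with an internally delayed copy, so the output is large only when the inter-pulse interval of the song matches the delay. The 30 ms delay and pulse parameters are illustrative assumptions, not measurements from the cricket circuit.

```python
# Delay-and-coincidence sketch of pulse-pattern selectivity: the detector
# responds strongly only when pulse period matches the internal delay.
import numpy as np

def pulse_train(period_ms, pulse_ms=10, total_ms=500, dt=1.0):
    t = np.arange(0, total_ms, dt)
    return ((t % period_ms) < pulse_ms).astype(float)

def coincidence_output(stimulus, delay_ms=30, dt=1.0):
    shift = int(delay_ms / dt)
    delayed = np.roll(stimulus, shift)        # internally delayed response
    delayed[:shift] = 0.0                     # no wrap-around
    return np.sum(stimulus * delayed)         # coincidence count

for period in (20, 30, 40, 60):
    resp = coincidence_output(pulse_train(period))
    print(f"pulse period {period} ms -> response {resp:.0f}")
# Only the 30 ms period (matching the delay) yields a large response.
```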

  16. Auditory pathways: anatomy and physiology.

    Science.gov (United States)

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described.

  17. Development of visuo-auditory integration in space and time

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2012-09-01

    Full Text Available Adults integrate multisensory information optimally (e.g. Ernst & Banks, 2002), while children are unable to integrate multisensory visual-haptic cues until 8-10 years of age (e.g. Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for visual-haptic size and orientation judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflicting and non-conflicting) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults audition dominates the bimodal visuo-auditory task, in both perceived time and precision thresholds. By contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (in PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory comparison of vision in a spatial visuo-audio task and a cross-sensory comparison of audition in a temporal visuo-audio task.
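
    The Bayesian predictions against which the bimodal thresholds were compared follow the standard maximum-likelihood cue-combination model (Ernst & Banks, 2002): each modality is weighted by its reliability (inverse variance). A minimal sketch, with invented example numbers:

```python
# Maximum-likelihood (Bayesian) cue-combination predictions: the bimodal
# estimate weights each modality by its reliability, and the predicted
# bimodal threshold is never worse than the best unimodal one.
import numpy as np

def mle_prediction(sigma_a, sigma_v, pse_a, pse_v):
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)   # auditory weight
    pse_av = w_a * pse_a + (1 - w_a) * pse_v       # predicted bimodal PSE
    sigma_av = np.sqrt((sigma_a**2 * sigma_v**2) /
                       (sigma_a**2 + sigma_v**2))  # predicted threshold
    return pse_av, sigma_av

# Temporal task example: audition more precise, so it should dominate.
print(mle_prediction(sigma_a=20.0, sigma_v=60.0, pse_a=0.0, pse_v=30.0))
```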

  18. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer’s work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory... parameters highlighting harmonious and balanced qualities while criticizing the noisy and cacophonous qualities of modern urban settings. This paper presents a reaffirmation of Schafer’s central methodological claim: that environments can be analyzed through their sound, but offers considerations on the role... musicalized through electro-acoustic equipment installed in shops, shopping streets, transit areas etc. Urban noise no longer acts only as disturbance, but also structures and shapes the places and spaces in which urban life unfolds. Based on research done in Japanese shopping streets and in Copenhagen the paper

  19. Neuronal representations of distance in human auditory cortex.

    Science.gov (United States)

    Kopčo, Norbert; Huang, Samantha; Belliveau, John W; Raij, Tommi; Tengshe, Chinmayi; Ahveninen, Jyrki

    2012-07-03

    Neuronal mechanisms of auditory distance perception are poorly understood, largely because contributions of intensity and distance processing are difficult to differentiate. Typically, the received intensity increases when sound sources approach us. However, we can also distinguish between soft-but-nearby and loud-but-distant sounds, indicating that distance processing can also be based on intensity-independent cues. Here, we combined behavioral experiments, fMRI measurements, and computational analyses to identify the neural representation of distance independent of intensity. In a virtual reverberant environment, we simulated sound sources at varying distances (15-100 cm) along the right-side interaural axis. Our acoustic analysis suggested that, of the individual intensity-independent depth cues available for these stimuli, direct-to-reverberant ratio (D/R) is more reliable and robust than interaural level difference (ILD). However, on the basis of our behavioral results, subjects' discrimination performance was more consistent with complex intensity-independent distance representations, combining both available cues, than with representations on the basis of either D/R or ILD individually. fMRI activations to sounds varying in distance (containing all cues, including intensity), compared with activations to sounds varying in intensity only, were significantly increased in the planum temporale and posterior superior temporal gyrus contralateral to the direction of stimulation. This fMRI result suggests that neurons in posterior nonprimary auditory cortices, in or near the areas processing other auditory spatial features, are sensitive to intensity-independent sound properties relevant for auditory distance perception.
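
    The direct-to-reverberant ratio (D/R) cue discussed above can be computed from a room impulse response by splitting energy at the direct-path arrival. A minimal sketch; the 2.5 ms direct window is a common convention and, like the synthetic impulse response, an assumption here.

```python
# Direct-to-reverberant ratio (D/R): energy in a short window around the
# direct sound versus energy in the remaining reverberant tail.
import numpy as np

def direct_to_reverberant_db(ir, fs, direct_window_ms=2.5):
    onset = int(np.argmax(np.abs(ir)))              # direct-path arrival
    split = onset + int(fs * direct_window_ms / 1000.0)
    direct = np.sum(ir[:split] ** 2)
    reverb = np.sum(ir[split:] ** 2)
    return 10.0 * np.log10(direct / reverb)

# Synthetic impulse response: a direct spike plus an exponentially
# decaying noise tail (illustrative only).
fs = 48000
ir = np.zeros(fs // 2)
ir[100] = 1.0                                       # direct sound
tail = np.random.default_rng(0).standard_normal(fs // 2 - 200)
ir[200:] += 0.05 * tail * np.exp(-np.arange(len(tail)) / (0.3 * fs))
print(direct_to_reverberant_db(ir, fs), "dB")       # larger = nearer source
```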

  20. The use of visual stimuli during auditory assessment.

    Science.gov (United States)

    Pearlman, R C; Cunningham, D R; Williamson, D G; Amerman, J D

    1975-01-01

    Two groups of male subjects beyond 50 years of age were given audiometric tasks with and without visual stimulation to determine if visual stimuli changed auditory perception. The first group consisted of 10 subjects with normal auditory acuity; the second, 10 with sensorineural hearing losses greater than 30 decibels. The rate of presentation of the visual stimuli, consisting of photographic slides of various subjects, was determined in experiment I of the study. The subjects, while viewing the slides at their own rate, took an auditory speech discrimination test. Advisedly, they changed the slides at a speed which they felt facilitated attention while performing the auditory task. The mean rate of slide-changing behavior was used as the "optimum" visual stimulation rate in experiment II, which was designed to explore the interaction of the bisensory presentation of stimuli. Bekesy tracings and Rush Hughes recordings were administered without and with visual stimuli, the latter presented at the mean rate of slide changes found in experiment I. Analysis of data indicated that (1) no statistically significant difference exists between visual and nonvisual conditions during speech discrimination and Bekesy testing; and (2) subjects did not believe that visual stimuli as presented in this study helped them to listen more effectively. The experimenter concluded that the various auditory stimuli encountered in the auditory test situation may actually be a deterrent to boredom because of the variety of tasks required in a testing situation.

  1. Coding of melodic gestalt in human auditory cortex.

    Science.gov (United States)

    Schindler, Andreas; Herdener, Marcus; Bartels, Andreas

    2013-12-01

    The perception of a melody is invariant to the absolute properties of its constituting notes, but depends on the relation between them: the melody's relative pitch profile. In fact, a melody's "Gestalt" is recognized regardless of the instrument or key used to play it. Pitch processing in general is assumed to occur at the level of the auditory cortex. However, it is unknown whether early auditory regions are able to encode pitch sequences integrated over time (i.e., melodies) and whether the resulting representations are invariant to specific keys. Here, we presented participants with different melodies composed of the same 4 harmonic pitches during functional magnetic resonance imaging recordings. Additionally, we played the same melodies transposed in different keys and on different instruments. We found that melodies were invariantly represented by their blood oxygen level-dependent activation patterns in primary and secondary auditory cortices across instruments, and also across keys. Our findings extend common hierarchical models of auditory processing by showing that melodies are encoded independent of absolute pitch and based on their relative pitch profile as early as the primary auditory cortex.
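
    A key-invariant "relative pitch profile" of the kind the study points to can be illustrated very simply: the sequence of successive intervals is unchanged under transposition. A minimal sketch (the note values are an invented example):

```python
# A transposition-invariant "relative pitch profile": the sequence of
# intervals (in semitones) between successive notes is identical for a
# melody and any of its transpositions.
import numpy as np

def relative_profile(midi_notes):
    return np.diff(midi_notes)

melody_c = np.array([60, 64, 67, 64])        # a short figure in C
melody_e = melody_c + 4                      # same melody, transposed up
assert np.array_equal(relative_profile(melody_c), relative_profile(melody_e))
print(relative_profile(melody_c))            # [ 4  3 -3 ]
```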

  2. Head Tracking of Auditory, Visual, and Audio-Visual Targets.

    Science.gov (United States)

    Leung, Johahn; Wei, Vincent; Burgess, Martin; Carlile, Simon

    2015-01-01

    The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses of auditory targets against visual-only and audio-visual "bisensory" stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS error were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets.
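
    The three tracking metrics can be computed from sampled target and head azimuth trajectories as sketched below; the velocity-threshold onset criterion and the synthetic trajectories are assumptions, not the paper's exact definitions.

```python
# Onset latency, RMS error and movement gain for a head-tracking trial,
# given target and head azimuths on a common time base.
import numpy as np

def tracking_metrics(t, target, head, onset_vel=10.0):
    vel = np.gradient(head, t)                            # deg/s
    moving = np.nonzero(np.abs(vel) > onset_vel)[0]
    onset = t[moving[0]] if moving.size else np.nan       # latency (s)
    rms_error = np.sqrt(np.mean((head - target) ** 2))    # accuracy (deg)
    gain = np.ptp(head) / np.ptp(target)                  # movement gain
    return onset, rms_error, gain

t = np.linspace(0, 2.0, 500)
target = -50 + 50 * t                                     # 50 deg/s sweep
head = np.clip(-50 + 50 * (t - 0.15), -50, 50)            # lagged pursuit
print(tracking_metrics(t, target, head))
```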

  3. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    Science.gov (United States)

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
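
    As an illustration of the GAM analysis described above, the sketch below fits smooth main effects of two stimulus dimensions plus a tensor-product interaction to synthetic firing rates. The choice of library (pygam), the smoothers and the data are assumptions, not the paper's specification.

```python
# Fitting a generalized additive model (GAM) with smooth main effects
# and a tensor-product interaction across two stimulus dimensions.
import numpy as np
from pygam import LinearGAM, s, te

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))          # two stimulus dimensions
y = (np.sin(3 * X[:, 0]) + X[:, 1] ** 2       # main effects...
     + 2 * X[:, 0] * X[:, 1]                  # ...plus an interaction
     + 0.1 * rng.standard_normal(500))        # response noise

# s(i): smooth main effect of dimension i; te(0, 1): interaction term.
gam = LinearGAM(s(0) + s(1) + te(0, 1)).fit(X, y)
gam.summary()                                 # term-wise statistics
```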

  4. Modulating human auditory processing by transcranial electrical stimulation

    Directory of Open Access Journals (Sweden)

    Kai eHeimrath

    2016-03-01

    Full Text Available Transcranial electrical stimulation (tES) has become a valuable research tool for the investigation of neurophysiological processes underlying human action and cognition. In recent years, striking evidence for the neuromodulatory effects of transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), and transcranial random noise stimulation (tRNS) has emerged. However, while a wealth of knowledge has been gained about tES in the motor domain and, to a lesser extent, about its ability to modulate human cognition, surprisingly little is known about its impact on perceptual processing, particularly in the auditory domain. Moreover, while only a few studies have systematically investigated the impact of auditory tES, it has already been applied in a large number of clinical trials, leading to a remarkable imbalance between basic and clinical research on auditory tES. Here, we review the state of the art of tES application in the auditory domain, focussing on the impact of neuromodulation on acoustic perception and its potential for clinical application in the treatment of auditory-related disorders.

  5. Head Tracking of Auditory, Visual and Audio-Visual Targets

    Directory of Open Access Journals (Sweden)

    Johahn eLeung

    2016-01-01

    Full Text Available The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20°/s to 110°/s. By integrating high fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses of auditory targets against visual-only and audio-visual bisensory stimuli. Three metrics were measured – onset, RMS and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS error were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets.

  6. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex

    Science.gov (United States)

    Zhuo, Ran; Xue, Hongbo; Chambers, Anna R.; Kolaczyk, Eric; Polley, Daniel B.

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices. PMID:27622211

  7. Measuring the dynamics of neural responses in primary auditory cortex

    CERN Document Server

    Depireux, D A; Shamma, S A; Depireux, Didier A.; Simon, Jonathan Z.; Shamma, Shihab A.

    1998-01-01

    We review recent developments in the measurement of the dynamics of the response properties of auditory cortical neurons to broadband sounds, which is closely related to the perception of timbre. The emphasis is on a method that characterizes the spectro-temporal properties of single neurons to dynamic, broadband sounds, akin to the drifting gratings used in vision. The method treats the spectral and temporal aspects of the response on an equal footing.
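
    The broadband analogue of drifting gratings referred to above is the moving "ripple": a spectral envelope that is sinusoidal in log-frequency and drifts in time. A minimal generator sketch with illustrative parameter values (channel count, density and velocity are assumptions):

```python
# A drifting "ripple" envelope: sinusoidal in log-frequency (cycles per
# octave) and drifting in time (Hz), the auditory analogue of a drifting
# visual grating.
import numpy as np

def ripple_envelope(n_chan=64, dur_s=1.0, fs_env=200,
                    ripple_density=1.0,    # cycles per octave
                    ripple_velocity=4.0,   # Hz (drift rate)
                    depth=0.9):
    x = np.linspace(0, 5, n_chan)          # channel position in octaves
    t = np.arange(int(dur_s * fs_env)) / fs_env
    T, Xg = np.meshgrid(t, x)              # (channels x time frames)
    return 1.0 + depth * np.sin(
        2 * np.pi * (ripple_density * Xg - ripple_velocity * T))

env = ripple_envelope()                    # 64 channels x 200 frames
```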

  8. Aero-tactile integration in speech perception

    OpenAIRE

    Gick, Bryan; Derrick, Donald

    2009-01-01

    Visual information from a speaker’s face can enhance [1] or interfere with [2] accurate auditory perception. This integration of information across auditory and visual streams has been observed in functional imaging studies [3,4], and has typically been attributed to the frequency and robustness with which perceivers jointly encounter event-specific information from these two modalities [5]. Adding the tactile modality has long been considered a crucial next step in understanding multisensory integration...

  9. Development of auditory-vocal perceptual skills in songbirds.

    Directory of Open Access Journals (Sweden)

    Vanessa C Miller-Sims

    Full Text Available Songbirds are one of the few groups of animals that learn the sounds used for vocal communication during development. Like humans, songbirds memorize vocal sounds based on auditory experience with vocalizations of adult "tutors", and then use auditory feedback of self-produced vocalizations to gradually match their motor output to the memory of tutor sounds. In humans, investigations of early vocal learning have focused mainly on perceptual skills of infants, whereas studies of songbirds have focused on measures of vocal production. In order to fully exploit songbirds as a model for human speech, understand the neural basis of learned vocal behavior, and investigate links between vocal perception and production, studies of songbirds must examine both behavioral measures of perception and neural measures of discrimination during development. Here we used behavioral and electrophysiological assays of the ability of songbirds to distinguish vocal calls of varying frequencies at different stages of vocal learning. The results show that neural tuning in auditory cortex mirrors behavioral improvements in the ability to make perceptual distinctions of vocal calls as birds are engaged in vocal learning. Thus, separate measures of neural discrimination and behavioral perception yielded highly similar trends during the course of vocal development. The timing of this improvement in the ability to distinguish vocal sounds correlates with our previous work showing substantial refinement of axonal connectivity in cortico-basal ganglia pathways necessary for vocal learning.

  10. Development of auditory-vocal perceptual skills in songbirds.

    Science.gov (United States)

    Miller-Sims, Vanessa C; Bottjer, Sarah W

    2012-01-01

    Songbirds are one of the few groups of animals that learn the sounds used for vocal communication during development. Like humans, songbirds memorize vocal sounds based on auditory experience with vocalizations of adult "tutors", and then use auditory feedback of self-produced vocalizations to gradually match their motor output to the memory of tutor sounds. In humans, investigations of early vocal learning have focused mainly on perceptual skills of infants, whereas studies of songbirds have focused on measures of vocal production. In order to fully exploit songbirds as a model for human speech, understand the neural basis of learned vocal behavior, and investigate links between vocal perception and production, studies of songbirds must examine both behavioral measures of perception and neural measures of discrimination during development. Here we used behavioral and electrophysiological assays of the ability of songbirds to distinguish vocal calls of varying frequencies at different stages of vocal learning. The results show that neural tuning in auditory cortex mirrors behavioral improvements in the ability to make perceptual distinctions of vocal calls as birds are engaged in vocal learning. Thus, separate measures of neural discrimination and behavioral perception yielded highly similar trends during the course of vocal development. The timing of this improvement in the ability to distinguish vocal sounds correlates with our previous work showing substantial refinement of axonal connectivity in cortico-basal ganglia pathways necessary for vocal learning.

  11. Acoustic Shadows: An Auditory Exploration of the Sense of Space

    Directory of Open Access Journals (Sweden)

    Frank Dufour

    2011-12-01

    Full Text Available This paper examines the question of auditory detection of the movements of silent objects in noisy environments. The approach to studying and exploring this phenomenon is primarily based on the framework of the ecology of perception defined by James Gibson (Gibson, 1979), in the sense that it focuses on the direct auditory perception of events, or "structured energy that specifies properties of the environment" (Michaels & Carello, 1981, p. 157). The goal of this study is threefold. Theoretical: for various reasons, this kind of acoustic situation has not been extensively studied by traditional acoustics and psychoacoustics, so this project demonstrates and supports the pertinence of the ecology of perception for the description and explanation of such complex phenomena. Practical: like echolocation, perception of acoustic shadows can be improved by practice, and this project intends to contribute to the acknowledgment of this way of listening and to help individuals placed in noisy environments without the support of vision to acquire a detailed detection of the movements occurring in those environments. Artistic: this project explores a new artistic expression based on the creation and exploration of complex multisensory environments. Acoustic Shadows, a multimedia interactive composition, is being developed on the premises of the ecological approach to perception. The last dimension of this project is meant to be a contribution to the sonic representation of space in films and in computer-generated virtual environments by producing simulations of acoustic shadows.

  12. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mec

  13. Auditory Temporal Structure Processing in Dyslexia: Processing of Prosodic Phrase Boundaries Is Not Impaired in Children with Dyslexia

    Science.gov (United States)

    Geiser, Eveline; Kjelgaard, Margaret; Christodoulou, Joanna A.; Cyr, Abigail; Gabrieli, John D. E.

    2014-01-01

    Reading disability in children with dyslexia has been proposed to reflect impairment in auditory timing perception. We investigated one aspect of timing perception--"temporal grouping"--as present in prosodic phrase boundaries of natural speech, in age-matched groups of children, ages 6-8 years, with and without dyslexia. Prosodic phrase…

  14. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  15. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  16. Auditory Streaming as an Online Classification Process with Evidence Accumulation.

    Science.gov (United States)

    Barniv, Dana; Nelken, Israel

    2015-01-01

    When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report on-going switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases. The process of stream formation is loosely named "auditory streaming". Auditory streaming is believed to be a manifestation of human ability to analyze an auditory scene, i.e. to attribute portions of the incoming sound sequence to distinct sound generating entities. Previous studies suggested that the durations of the successive integration and segregation phases are statistically independent. This independence plays an important role in current models of bistability. Contrary to this, we show here, by analyzing a large set of data, that subsequent phase durations are positively correlated. To account together for bistability and positive correlation between subsequent durations, we suggest that streaming is a consequence of an evidence accumulation process. Evidence for segregation is accumulated during the integration phase and vice versa; a switch to the opposite percept occurs stochastically based on this evidence. During a long phase, a large amount of evidence for the opposite percept is accumulated, resulting in a long subsequent phase. In contrast, a short phase is followed by another short phase. We implement these concepts using a probabilistic model that shows both bistability and correlations similar to those observed experimentally.
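
    A minimal simulation of the proposed account is sketched below: evidence for the opposite percept accumulates noisily during each phase, a switch occurs stochastically based on that evidence, and the evidence gathered becomes the new percept's support, so long phases beget long phases. All rates and constants are invented for illustration; this is not the authors' fitted model.

```python
# Toy simulation of the evidence-accumulation account of streaming
# bistability, reproducing positively correlated successive durations.
import numpy as np

rng = np.random.default_rng(1)

def simulate_phases(n_phases=1500, dt=0.05, noise=1.0, carry=0.7):
    durations, e_cur = [], 2.0          # support for the current percept
    for _ in range(n_phases):
        e_opp, t = 0.0, 0.0             # evidence for the opposite percept
        while True:
            t += dt
            e_opp = max(0.0, e_opp + dt
                        + noise * np.sqrt(dt) * rng.standard_normal())
            # Stochastic switch, increasingly likely once the opposite
            # percept's evidence exceeds the current percept's support:
            if rng.random() < 0.05 * dt * np.exp(e_opp - e_cur):
                break
        durations.append(t)
        e_cur = carry * e_opp           # evidence gathered during a long
                                        # phase supports the next percept,
                                        # so long phases beget long phases
    return np.array(durations)

d = simulate_phases()
print(np.corrcoef(d[:-1], d[1:])[0, 1])  # positive lag-1 correlation
```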

  18. Subcortical neural coding mechanisms for auditory temporal processing.

    Science.gov (United States)

    Frisina, R D

    2001-08-01

    Biologically relevant sounds such as speech, animal vocalizations and music have distinguishing temporal features that are utilized for effective auditory perception. Common temporal features include sound envelope fluctuations, often modeled in the laboratory by amplitude modulation (AM), and starts and stops in ongoing sounds, which are frequently approximated by hearing researchers as gaps between two sounds or are investigated in forward masking experiments. The auditory system has evolved many neural processing mechanisms for encoding important temporal features of sound. Due to rapid progress made in the field of auditory neuroscience in the past three decades, it is not possible to review all progress in this field in a single article. The goal of the present report is to focus on single-unit mechanisms in the mammalian brainstem auditory system for encoding AM and gaps as illustrative examples of how the system encodes key temporal features of sound. This report, following a systems analysis approach, starts with findings in the auditory nerve and proceeds centrally through the cochlear nucleus, superior olivary complex and inferior colliculus. Some general principles can be seen when reviewing this entire field. For example, as one ascends the central auditory system, a neural encoding shift occurs. An emphasis on synchronous responses for temporal coding exists in the auditory periphery, and more reliance on rate coding occurs as one moves centrally. In addition, for AM, modulation transfer functions become more bandpass as the sound level of the signal is raised, but become more lowpass in shape as background noise is added. In many cases, AM coding can actually increase in the presence of background noise. For gap processing or forward masking, coding for gaps changes from a decrease in spike firing rate for neurons of the peripheral auditory system that have sustained response patterns, to an increase in firing rate for more central neurons with…
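
    For concreteness, the two laboratory stimuli referred to above, sinusoidal amplitude modulation (AM) and silent gaps between sounds, can be generated in a few lines; the parameter values below are illustrative defaults, not those of any particular study.

        import numpy as np

        fs = 44100  # sample rate (Hz)

        def am_tone(fc=1000.0, fm=8.0, depth=0.8, dur=1.0):
            # Sinusoidally amplitude-modulated tone:
            # s(t) = [1 + m sin(2 pi fm t)] sin(2 pi fc t)
            t = np.arange(int(dur * fs)) / fs
            return (1.0 + depth * np.sin(2 * np.pi * fm * t)) \
                   * np.sin(2 * np.pi * fc * t)

        def gap_stimulus(gap_ms=5.0, marker_ms=300.0, seed=0):
            # Two broadband noise markers separated by a silent gap,
            # the standard gap-detection stimulus.
            rng = np.random.default_rng(seed)
            marker = rng.standard_normal(int(marker_ms / 1000 * fs))
            return np.concatenate([marker,
                                   np.zeros(int(gap_ms / 1000 * fs)),
                                   marker])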

  19. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim…

  20. Neural dynamics of phonological processing in the dorsal auditory stream.

    Science.gov (United States)

    Liebenthal, Einat; Sabri, Merav; Beardsley, Scott A; Mangalathu-Arumana, Jain; Desai, Anjali

    2013-09-25

    Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in posterior superior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80-100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors.

  1. Auditory Neuropathy in Two Patients with Generalized Neuropathic Disorder: A Case Report

    Directory of Open Access Journals (Sweden)

    P Ahmadi

    2008-06-01

    Background: Although it is not a new disorder, in recent times we have attained a greater understanding of auditory neuropathy (AN). In this type of hearing impairment, cochlear hair cells function but AN victims suffer from disordered neural transmission in the auditory pathway. Auditory neuropathy often occurs as part of a generalized neuropathic disorder, indicated in approximately 30-40% of all reported auditory neuropathy/auditory dyssynchrony (AN/AD) cases, with approximately 80% of patients reporting symptom onset over the age of 15 years. In the present report, the results of audiologic tests (behavioral, physiologic and evoked potentials) on two young patients with generalized neuropathy are discussed. Case report: Two brothers, 26 and 17 years old, presented with speech perception weakness and movement difficulties that started at 12 years of age and progressed over time. At their last examination, the older patient showed a moderate to severe flat audiogram and the younger one a mild low-tone loss. The major difficulty of the patients was severe speech perception impairment that was not compatible with their hearing thresholds. Paresthesia, sural muscle contraction and pain, and balance disorder were the first symptoms of the older brother. He can now move only with crutches, his finger muscle tonicity has decreased remarkably, and he shows marked fatigue after a short period of walking; increasing movement difficulties were noted at his last visit. Visual neuropathy had been reported in repeated visual system examinations for the older brother, with similar, albeit less severe, symptoms in the younger brother. In the present study of these patients, behavioral investigations included pure-tone audiometry and speech discrimination scoring; physiologic studies consisted of transient evoked otoacoustic emissions (TEOAE) and acoustic reflexes. Electrophysiologic auditory tests were also performed to determine…

  2. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex.

    Science.gov (United States)

    Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate "auditory objects" with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas.
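
    A rough illustration of the double-vowel stimulus described above, assuming textbook formant values for /a/ (F1 about 730 Hz, F2 about 1090 Hz) and /i/ (F1 about 270 Hz, F2 about 2290 Hz) and a Gaussian spectral envelope; this is a toy synthesis, not the stimulus generation used in the study.

        import numpy as np

        def harmonic_vowel(f0, formants, dur=0.4, fs=16000, bw=100.0):
            # Harmonics of f0, weighted by a sum-of-Gaussians spectral
            # envelope peaked at the formant frequencies.
            t = np.arange(int(dur * fs)) / fs
            x = np.zeros_like(t)
            for k in range(1, int(fs / 2 / f0)):
                f = k * f0
                amp = sum(np.exp(-0.5 * ((f - fm) / bw) ** 2)
                          for fm in formants)
                x += amp * np.sin(2 * np.pi * f * t)
            return x / np.max(np.abs(x))

        # Two concurrent "vowels" whose F0s differ by four semitones
        mix = harmonic_vowel(100.0, [730.0, 1090.0]) \
            + harmonic_vowel(126.0, [270.0, 2290.0])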

  3. Synchronization and phonological skills: precise auditory timing hypothesis (PATH)

    Directory of Open Access Journals (Sweden)

    Adam Tierney

    2014-11-01

    Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The precise auditory timing hypothesis predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.

  4. Active stream segregation specifically involves the left human auditory cortex.

    Science.gov (United States)

    Deike, Susann; Scheich, Henning; Brechmann, André

    2010-06-14

    An important aspect of auditory scene analysis is the sequential grouping of similar sounds into one "auditory stream" while keeping competing streams separate. In the present low-noise fMRI study we presented sequences of alternating high-pitch (A) and low-pitch (B) complex harmonic tones using acoustic parameters that allow the perception of either two separate streams or one alternating stream. However, the subjects were instructed to actively and continuously segregate the A from the B stream. This was controlled by the additional instruction to listen for rare level deviants only in the low-pitch stream. Compared to the control condition in which only one non-separable stream was presented the active segregation of the A from the B stream led to a selective increase of activation in the left auditory cortex (AC). Together with a similar finding from a previous study using a different acoustic cue for streaming, namely timbre, this suggests that the left auditory cortex plays a dominant role in active sequential stream segregation. However, we found cue differences within the left AC: Whereas in the posterior areas, including the planum temporale, activation increased for both acoustic cues, the anterior areas, including Heschl's gyrus, are only involved in stream segregation based on pitch.

  5. Segmental processing in the human auditory dorsal stream.

    Science.gov (United States)

    Zaehle, Tino; Geiser, Eveline; Alter, Kai; Jancke, Lutz; Meyer, Martin

    2008-07-18

    In the present study we investigated the functional organization of sublexical auditory perception with specific respect to auditory spectro-temporal processing in speech and non-speech sounds. Participants discriminated verbal and nonverbal auditory stimuli according to either spectral or temporal acoustic features in the context of a sparse event-related functional magnetic resonance imaging (fMRI) study. Based on recent models of speech processing, we hypothesized that auditory segmental processing, as is required in the discrimination of speech and non-speech sound according to its temporal features, will lead to a specific involvement of a left-hemispheric dorsal processing network comprising the posterior portion of the inferior frontal cortex and the inferior parietal lobe. In agreement with our hypothesis results revealed significant responses in the posterior part of the inferior frontal gyrus and the parietal operculum of the left hemisphere when participants had to discriminate speech and non-speech stimuli based on subtle temporal acoustic features. In contrast, when participants had to discriminate speech and non-speech stimuli on the basis of changes in the frequency content, we observed bilateral activations along the middle temporal gyrus and superior temporal sulcus. The results of the present study demonstrate an involvement of the dorsal pathway in the segmental sublexical analysis of speech sounds as well as in the segmental acoustic analysis of non-speech sounds with analogous spectro-temporal characteristics.

  6. Auditory Neuropathy - A Case of Auditory Neuropathy after Hyperbilirubinemia

    Directory of Open Access Journals (Sweden)

    Maliheh Mazaher Yazdi

    2007-12-01

    Background and Aim: Auditory neuropathy is a hearing disorder in which peripheral hearing is normal, but the eighth nerve and brainstem are abnormal. By clinical definition, patients with this disorder have normal OAEs, but exhibit an absent or severely abnormal ABR. Auditory neuropathy was first reported in the late 1970s, when different methods could identify a discrepancy between an absent ABR and measurable hearing thresholds. Speech understanding difficulties are worse than can be predicted from other tests of hearing function. Auditory neuropathy may also affect vestibular function. Case Report: This article presents electrophysiological and behavioral data from a case of auditory neuropathy in a child with normal hearing after hyperbilirubinemia, over a 5-year follow-up. Audiological findings demonstrate remarkable changes after multidisciplinary rehabilitation. Conclusion: Auditory neuropathy may involve damage to the inner hair cells, the specialized sensory cells in the inner ear that transmit information about sound through the nervous system to the brain. Other causes may include faulty connections between the inner hair cells and the nerve leading from the inner ear to the brain, or damage to the nerve itself. People with auditory neuropathy have present OAE responses but an absent ABR, and a hearing loss that can be permanent, worsen, or improve.

  7. Audiovisual Perception of Congruent and Incongruent Dutch Front Vowels

    Science.gov (United States)

    Valkenier, Bea; Duyne, Jurriaan Y.; Andringa, Tjeerd C.; Baskent, Deniz

    2012-01-01

    Purpose: Auditory perception of vowels in background noise is enhanced when combined with visually perceived speech features. The objective of this study was to investigate whether the influence of visual cues on vowel perception extends to incongruent vowels, in a manner similar to the McGurk effect observed with consonants. Method:…

  8. Slow modulation of ongoing activity in the auditory cortex during an interval-discrimination task

    Directory of Open Access Journals (Sweden)

    Juan M. Abolafia

    2011-10-01

    In this study, we recorded single-unit activity from rat auditory cortex while the animals performed an interval-discrimination task. The animals had to decide whether two auditory stimuli were separated by either 150 or 300 ms, and go to the left or right nose-poke accordingly. Spontaneous firing in between auditory responses was compared in the attentive versus non-attentive brain states. We describe the firing-rate modulation detected during intervals in which there was no auditory stimulation. Nearly 18% of neurons (n=14) showed a prominent neuronal discharge during the interstimulus interval, in the form of an upward or downward ramp towards the second auditory stimulus. These patterns of spontaneous activity were often modulated in attentive versus passive trials. Modulation of the spontaneous firing rate during the task was observed not only between auditory stimuli, but also in the interval preceding the stimulus. These slow modulatory components could be locally generated or could result from a top-down influence originating in higher associative areas. Such a neuronal discharge may be related to the computation of the interval time and contribute to the perception of the auditory stimulus.

  9. Compensating Level-Dependent Frequency Representation in Auditory Cortex by Synaptic Integration of Corticocortical Input

    Science.gov (United States)

    Happel, Max F. K.; Ohl, Frank W.

    2017-01-01

    Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062
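
    The laminar current source density analysis mentioned above conventionally estimates sources and sinks as the sign-inverted, conductivity-scaled second spatial derivative of the field potential across equally spaced contacts; a minimal version follows, with an assumed contact spacing and a nominal gray-matter conductivity.

        import numpy as np

        def csd_estimate(lfp, spacing_m=1e-4, sigma=0.3):
            # lfp: array (channels x samples), contacts ordered by depth.
            # CSD(z) ~ -sigma * [phi(z+h) - 2 phi(z) + phi(z-h)] / h^2,
            # sigma = tissue conductivity (S/m), h = contact spacing (m).
            return -sigma * (lfp[2:] - 2.0 * lfp[1:-1] + lfp[:-2]) \
                   / spacing_m ** 2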

  10. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... CAPD often have trouble maintaining attention, although health, motivation, and attitude also can play a role. Auditory ... programs. Several computer-assisted programs are geared toward children with APD. They mainly help the brain do ...

  11. An EMG Study of the Lip Muscles during Covert Auditory Verbal Hallucinations in Schizophrenia

    Science.gov (United States)

    Rapin, Lucile; Dohen, Marion; Polosan, Mircea; Perrier, Pascal; Loevenbruck, Hélène

    2013-01-01

    Purpose: "Auditory verbal hallucinations" (AVHs) are speech perceptions in the absence of external stimulation. According to an influential theoretical account of AVHs in schizophrenia, a deficit in inner-speech monitoring may cause the patients' verbal thoughts to be perceived as external voices. The account is based on a…

  12. Asymmetry in primary auditory cortex activity in tinnitus patients and controls

    NARCIS (Netherlands)

    Geven, L. I.; de Kleine, E.; Willemsen, A. T. M.; van Dijk, P.

    2014-01-01

    Tinnitus is a bothersome phantom sound percept and its neural correlates are not yet disentangled. Previously published papers, using [(18)F]-fluoro-deoxyglucose positron emission tomography (FDG-PET), have suggested an increased metabolism in the left primary auditory cortex in tinnitus patients. T…

  13. Auditory Processing Disorder in Children with Reading Disabilities: Effect of Audiovisual Training

    Science.gov (United States)

    Veuillet, Evelyne; Magnan, Annie; Ecalle, Jean; Thai-Van, Hung; Collet, Lionel

    2007-01-01

    Reading disability is associated with phonological problems which might originate in auditory processing disorders. The aim of the present study was 2-fold: first, the perceptual skills of average-reading children and children with dyslexia were compared in a categorical perception task assessing the processing of a phonemic contrast based on…

  14. Temporal Information Processing as a Basis for Auditory Comprehension: Clinical Evidence from Aphasic Patients

    Science.gov (United States)

    Oron, Anna; Szymaszek, Aneta; Szelag, Elzbieta

    2015-01-01

    Background: Temporal information processing (TIP) underlies many aspects of cognitive functions like language, motor control, learning, memory, attention, etc. Millisecond timing may be assessed by sequencing abilities, e.g. the perception of event order. It may be measured with auditory temporal-order-threshold (TOT), i.e. a minimum time gap…

  15. Auditory Pattern Memory: Mechanisms of Tonal Sequence Discrimination by Human Observers

    Science.gov (United States)

    1988-10-30

    …and Creelman (1977) in a study of categorical perception. Tanner’s model included a short-term decaying memory for the acoustic input to the system plus… auditory pattern components, J. Acoust. Soc. Am., 76, 1037-1044. Macmillan, N. A., Kaplan, H. L., & Creelman, C. D. (1977). The psychophysics of…

  16. Contribution of auditory working memory to speech understanding in mandarin-speaking cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Duoduo Tao

    To investigate how auditory working memory relates to speech perception performance in Mandarin-speaking cochlear implant (CI) users. Auditory working memory and speech perception were measured in Mandarin-speaking CI and normal-hearing (NH) participants. Working memory capacity was measured using forward digit span and backward digit span; working memory efficiency was measured using articulation rate. Speech perception was assessed with: (a) word-in-sentence recognition in quiet, (b) word-in-sentence recognition in speech-shaped steady noise at +5 dB signal-to-noise ratio, (c) Chinese disyllable recognition in quiet, and (d) Chinese lexical tone recognition in quiet. Self-reported school rank was also collected regarding performance in schoolwork. There was large inter-subject variability in auditory working memory and speech performance for CI participants. Working memory and speech performance were significantly poorer for CI than for NH participants. All three working memory measures were strongly correlated with each other for both CI and NH participants. Partial correlation analyses were performed on the CI data while controlling for demographic variables. Working memory efficiency was significantly correlated only with sentence recognition in quiet when working memory capacity was partialled out. Working memory capacity was correlated with disyllable recognition and school rank when efficiency was partialled out. There was no correlation between working memory and lexical tone recognition in the present CI participants. Mandarin-speaking CI users experience significant deficits in auditory working memory and speech performance compared with NH listeners. The present data suggest that auditory working memory may contribute to CI users' difficulties in speech understanding. The present pattern of results with Mandarin-speaking CI users is consistent with previous auditory working memory studies with English-speaking CI users, suggesting that the lexical…
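
    The partial-correlation logic of the analyses above can be sketched generically, regressing the covariates out of both variables by least squares and correlating the residuals (the standard construction; the study's exact covariates and software are not specified here).

        import numpy as np

        def partial_corr(x, y, covariates):
            # Correlation of x and y after removing, from each, the part
            # explained by the covariates (plus an intercept).
            Z = np.column_stack([np.ones(len(x))] + list(covariates))
            rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
            ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
            return float(np.corrcoef(rx, ry)[0, 1])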

  17. Expectation and attention in hierarchical auditory prediction.

    Science.gov (United States)

    Chennu, Srivas; Noreika, Valdas; Gueorguiev, David; Blenkmann, Alejandro; Kochen, Silvia; Ibáñez, Agustín; Owen, Adrian M; Bekinschtein, Tristan A

    2013-07-03

    Hierarchical predictive coding suggests that attention in humans emerges from increased precision in probabilistic inference, whereas expectation biases attention in favor of contextually anticipated stimuli. We test these notions within auditory perception by independently manipulating top-down expectation and attentional precision alongside bottom-up stimulus predictability. Our findings support an integrative interpretation of commonly observed electrophysiological signatures of neurodynamics, namely mismatch negativity (MMN), P300, and contingent negative variation (CNV), as manifestations along successive levels of predictive complexity. Early first-level processing indexed by the MMN was sensitive to stimulus predictability: here, attentional precision enhanced early responses, but explicit top-down expectation diminished them. This pattern was in contrast to later, second-level processing indexed by the P300: although sensitive to the degree of predictability, responses at this level were contingent on attentional engagement and in fact sharpened by top-down expectation. At the highest level, the drift of the CNV was a fine-grained marker of top-down expectation itself. Source reconstruction of high-density EEG, supported by intracranial recordings, implicated temporal and frontal regions differentially active at early and late levels. The cortical generators of the CNV suggested that it might be involved in facilitating the consolidation of context-salient stimuli into conscious perception. These results provide convergent empirical support to promising recent accounts of attention and expectation in predictive coding.

  18. Theta oscillations accompanying concurrent auditory stream segregation.

    Science.gov (United States)

    Tóth, Brigitta; Kocsis, Zsuzsanna; Urbán, Gábor; Winkler, István

    2016-08-01

    The ability to isolate a single sound source among concurrent sources is crucial for veridical auditory perception. The present study investigated the event-related oscillations evoked by complex tones that could be perceived as a single sound, and by tonal complexes with cues promoting the perception of two concurrent sounds through inharmonicity, onset asynchrony, and/or perceived source location differences of the component tones. In separate task conditions, participants performed a visual change detection task (visual control), watched a silent movie (passive listening), or reported for each tone whether they perceived one or two concurrent sounds (active listening). In two time windows, the amplitude of theta oscillation was modulated by the presence vs. absence of the cues: 60-350 ms/6-8 Hz (early) and 350-450 ms/4-8 Hz (late). The early response appeared both in the passive and the active listening conditions; it did not closely match the task performance; and it had a fronto-central scalp distribution. The late response was only elicited in the active listening condition; it closely matched the task performance; and it had a centro-parietal scalp distribution. The neural processes reflected by these responses are probably involved in the processing of concurrent sound segregation cues, in sound categorization, and in response preparation and monitoring. The current results are compatible with the notion that theta oscillations mediate some of the processes involved in concurrent sound segregation.

  19. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues that are necessary for the auditory system to retrieve the pitch of complex sounds. The existence of different pitch-coding mechanisms for low-numbered (spectrally resolved) and high-numbered (unresolved) harmonics was investigated by comparing pitch-discrimination performance across different cohorts of listeners… and the effect of musical training for pitch discrimination of complex tones with resolved and unresolved harmonics. Concerning the first topic, behavioral and modeling results in listeners with sensorineural hearing loss (SNHL) indicated that temporal envelope cues of complex tones…

  20. Peripheral auditory processing and speech reception in impaired hearing

    DEFF Research Database (Denmark)

    Strelcyk, Olaf

    One of the most common complaints of people with impaired hearing concerns their difficulty with understanding speech. Particularly in the presence of background noise, hearing-impaired people often encounter great difficulties with speech communication. In most cases, the problem persists even if reduced audibility has been compensated for by hearing aids. It has been hypothesized that part of the difficulty arises from changes in the perception of sounds that are well above hearing threshold, such as reduced frequency selectivity and deficits in the processing of temporal fine structure (TFS)… Overall, this work provides insights into factors affecting auditory processing in listeners with impaired hearing and may have implications for future models of impaired auditory signal processing as well as advanced compensation strategies.

  1. Auditory stream segregation in children with Asperger syndrome.

    Science.gov (United States)

    Lepistö, T; Kuitunen, A; Sussman, E; Saalasti, S; Jansson-Verkasalo, E; Nieminen-von Wendt, T; Kujala, T

    2009-12-01

    Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically developed for studying stream segregation. Differences in the amplitudes of ERP components were found between groups only in the stream segregation conditions and not for simple feature discrimination. The results indicated that children with AS have difficulties in segregating concurrent sound streams, which ultimately may contribute to the difficulties in speech-in-noise perception.

  2. Aero-tactile integration in speech perception

    Science.gov (United States)

    Gick, Bryan; Derrick, Donald

    2013-01-01

    Visual information from a speaker’s face can enhance [1] or interfere with [2] accurate auditory perception. This integration of information across auditory and visual streams has been observed in functional imaging studies [3,4], and has typically been attributed to the frequency and robustness with which perceivers jointly encounter event-specific information from these two modalities [5]. Adding the tactile modality has long been considered a crucial next step in understanding multisensory integration. However, previous studies have found an influence of tactile input on speech perception only under limited circumstances, either where perceivers were aware of the task [6,7] or where they had received training to establish a cross-modal mapping [8-10]. Here we show that perceivers integrate naturalistic tactile information during auditory speech perception without previous training. Drawing on the observation that some speech sounds produce tiny bursts of aspiration (such as English ‘p’) [11], we applied slight, inaudible air puffs on participants’ skin at one of two locations: the right hand or the neck. Syllables heard simultaneously with cutaneous air puffs were more likely to be heard as aspirated (for example, causing participants to mishear ‘b’ as ‘p’). These results demonstrate that perceivers integrate event-relevant tactile information in auditory perception in much the same way as they do visual information. PMID:19940925

  4. Neural and behavioral investigations into timbre perception.

    Science.gov (United States)

    Town, Stephen M; Bizley, Jennifer K

    2013-11-13

    Timbre is the attribute that distinguishes sounds of equal pitch, loudness and duration. It contributes to our perception and discrimination of different vowels and consonants in speech, instruments in music and environmental sounds. Here we begin by reviewing human timbre perception and the spectral and temporal acoustic features that give rise to timbre in speech, musical and environmental sounds. We also consider the perception of timbre by animals, both in the case of human vowels and non-human vocalizations. We then explore the neural representation of timbre, first within the peripheral auditory system and later at the level of the auditory cortex. We examine the neural networks that are implicated in timbre perception and the computations that may be performed in auditory cortex to enable listeners to extract information about timbre. We consider whether single neurons in auditory cortex are capable of representing spectral timbre independently of changes in other perceptual attributes and the mechanisms that may shape neural sensitivity to timbre. Finally, we conclude by outlining some of the questions that remain about the role of neural mechanisms in behavior and consider some potentially fruitful avenues for future research.

  5. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    Science.gov (United States)

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    …with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models.

  6. Neural basis of the time window for subjective motor-auditory integration

    Directory of Open Access Journals (Sweden)

    Koichi Toida

    2016-01-01

    Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and an N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence with the reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally-deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components.

  7. Relationship between Auditory and Cognitive Abilities in Older Adults.

    Directory of Open Access Journals (Sweden)

    Stanley Sheft

    The objective was to evaluate the association of peripheral and central hearing abilities with cognitive function in older adults. Recruited from epidemiological studies of aging and cognition at the Rush Alzheimer's Disease Center, participants were a community-dwelling cohort of older adults (range 63-98 years) without diagnosis of dementia. The cohort contained roughly equal numbers of Black (n=61) and White (n=63) subjects, with groups similar in terms of age, gender, and years of education. Auditory abilities were measured with pure-tone audiometry, speech-in-noise perception, and discrimination thresholds for both static and dynamic spectral patterns. Cognitive performance was evaluated with a 12-test battery assessing episodic, semantic, and working memory, perceptual speed, and visuospatial abilities. Among the auditory measures, only the static and dynamic spectral-pattern discrimination thresholds were associated with cognitive performance in a regression model that included the demographic covariates race, age, gender, and years of education. Subsequent analysis indicated substantial shared variance among the covariate race and both measures of spectral-pattern discrimination in accounting for cognitive performance. Among cognitive measures, working memory and visuospatial abilities showed the strongest interrelationship with spectral-pattern discrimination performance. For a cohort of older adults without diagnosis of dementia, neither hearing thresholds nor speech-in-noise ability showed significant association with a summary measure of global cognition. In contrast, the two auditory metrics of spectral-pattern discrimination ability significantly contributed to a regression-model prediction of cognitive performance, demonstrating association of central auditory ability with cognitive status using auditory metrics that avoided the confounding effect of speech materials.

  8. (Central) Auditory Processing: the impact of otitis media

    Directory of Open Access Journals (Sweden)

    Leticia Reis Borges

    2013-07-01

    OBJECTIVE: To analyze auditory processing test results in children suffering from otitis media in their first five years of age, considering their age. Furthermore, to classify central auditory processing test findings regarding the hearing skills evaluated. METHODS: A total of 109 students between 8 and 12 years old were divided into three groups. The control group consisted of 40 students from public school without a history of otitis media. Experimental group I consisted of 39 students from public schools and experimental group II consisted of 30 students from private schools; students in both groups suffered from secretory otitis media in their first five years of age and underwent surgery for placement of bilateral ventilation tubes. The individuals underwent complete audiological evaluation and assessment by Auditory Processing tests. RESULTS: The left ear showed significantly worse performance when compared to the right ear in the dichotic digits test and pitch pattern sequence test. The students from the experimental groups showed worse performance when compared to the control group in the dichotic digits test and gaps-in-noise. Children from experimental group I had significantly lower results on the dichotic digits and gaps-in-noise tests compared with experimental group II. The hearing skills that were altered were temporal resolution and figure-ground perception. CONCLUSION: Children who suffered from secretory otitis media in their first five years and who underwent surgery for placement of bilateral ventilation tubes showed worse performance in auditory abilities, and children from public schools had worse results on auditory processing tests compared with students from private schools.

  9. Computational spectrotemporal auditory model with applications to acoustical information processing

    Science.gov (United States)

    Chi, Tai-Shih

    A computational spectrotemporal auditory model based on neurophysiological findings in early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. As shown by these examples, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway, from rapid phase-locking (several kHz in the auditory nerve), to moderate rates of synchrony (several hundred Hz in the midbrain), to much lower rates of modulation in the cortex (around 30 Hz). To complete this model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectro-temporal modulation transfer functions (MTFs) of this model are investigated and shown to be consistent with the salient trends in the human MTFs (derived from human detection thresholds), which exhibit a lowpass function with respect to both spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. The model is therefore used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noise and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and to carry crucial information about the spectral profile. This claim is justified by perceptually tolerable resynthesized sounds from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional…
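
    In practice, the joint spectro-temporal modulation content behind such MTFs is often summarized by the 2-D Fourier magnitude of a log-frequency spectrogram, with axes interpretable as temporal rate (Hz) and spectral scale (cycles/octave); below is a minimal sketch under that common simplification, not the full two-stage cortical model.

        import numpy as np

        def modulation_spectrum(log_spec, frame_rate_hz, chans_per_octave):
            # log_spec: array (frequency channels x time frames).
            m = np.abs(np.fft.fftshift(np.fft.fft2(log_spec - log_spec.mean())))
            rates = np.fft.fftshift(
                np.fft.fftfreq(log_spec.shape[1], d=1.0 / frame_rate_hz))
            scales = np.fft.fftshift(
                np.fft.fftfreq(log_spec.shape[0], d=1.0 / chans_per_octave))
            return m, scales, rates  # magnitude, cycles/octave, Hz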

  10. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters the spike timing and rates of pyramidal cell responses to sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single- and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), with greater suppression at 20 ms pairing intervals for single-unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience.

  11. The effect of sound on visual fidelity perception in stereoscopic 3-D.

    Science.gov (United States)

    Rojas, David; Kapralos, Bill; Hogue, Andrew; Collins, Karen; Nacke, Lennart; Cristancho, Sayra; Conati, Cristina; Dubrowski, Adam

    2013-12-01

    Visual and auditory cues are important facilitators of user engagement in virtual environments and video games. Prior research supports the notion that our perception of visual fidelity (quality) is influenced by auditory stimuli. Understanding exactly how our perception of visual fidelity changes in the presence of multimodal stimuli can potentially impact the design of virtual environments, thus creating more engaging virtual worlds and scenarios. Stereoscopic 3-D display technology provides the users with additional visual information (depth into and out of the screen plane). There have been relatively few studies that have investigated the impact that auditory stimuli have on our perception of visual fidelity in the presence of stereoscopic 3-D. Building on previous work, we examine the effect of auditory stimuli on our perception of visual fidelity within a stereoscopic 3-D environment.

  12. Predictive uncertainty in auditory sequence processing.

    Science.gov (United States)

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
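
    As a concrete illustration of entropy as predictive uncertainty, a first-order Markov model can stand in for the variable-order model used in the study (a deliberate simplification); the Shannon entropy of the model's next-note distribution quantifies the listener's prospective uncertainty before the note sounds.

        import numpy as np
        from collections import Counter, defaultdict

        def train_bigram(melodies):
            # Count note-to-note transitions over a corpus of pitch sequences.
            counts = defaultdict(Counter)
            for mel in melodies:
                for a, b in zip(mel, mel[1:]):
                    counts[a][b] += 1
            return counts

        def predictive_entropy(counts, context):
            # Shannon entropy (bits) of the next-note distribution.
            p = np.array(list(counts[context].values()), dtype=float)
            p /= p.sum()
            return float(-(p * np.log2(p)).sum())

        corpus = [[60, 62, 64, 62, 60, 64, 65, 64],
                  [60, 64, 62, 60, 62, 64, 60, 65]]
        model = train_bigram(corpus)
        print(predictive_entropy(model, 62))  # high value = uncertain context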

  13. Using auditory steady state responses to outline the functional connectivity in the tinnitus brain.

    Directory of Open Access Journals (Sweden)

    Winfried Schlee

    BACKGROUND: Tinnitus is an auditory phantom perception that is most likely generated in the central nervous system. Most tinnitus research has concentrated on the auditory system. However, it was recently suggested that non-auditory structures are also involved in a global network that encodes subjective tinnitus. We tested this assumption using auditory steady state responses to entrain the tinnitus network and investigated long-range functional connectivity across various non-auditory brain regions. METHODS AND FINDINGS: Using whole-head magnetoencephalography we investigated cortical connectivity by means of phase synchronization in tinnitus subjects and healthy controls. We found evidence for a deviating pattern of long-range functional connectivity in tinnitus that was strongly correlated with individual ratings of the tinnitus percept. Phase couplings between the anterior cingulum and the right frontal lobe, and between the anterior cingulum and the right parietal lobe, showed significant condition x group interactions and were correlated with the individual tinnitus distress ratings only in the tinnitus condition and not in the control conditions. CONCLUSIONS: To the best of our knowledge this is the first study that demonstrates the existence of a global tinnitus network of long-range cortical connections outside the central auditory system. This result extends the current knowledge of how tinnitus is generated in the brain. We propose that this global extent of the tinnitus network is crucial for the continuous perception of the tinnitus tone, and that a therapeutic intervention able to change this network should result in relief of tinnitus.
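
    Phase synchronization between two narrow-band signals, as used above, is commonly quantified by the phase-locking value: the length of the mean resultant vector of the instantaneous phase difference. A minimal estimator follows (a generic measure, not necessarily the exact connectivity metric computed in the study).

        import numpy as np
        from scipy.signal import hilbert

        def plv(x, y):
            # Instantaneous phases via the analytic signal; PLV lies in [0, 1],
            # with 1 indicating perfectly locked phase differences.
            dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
            return float(np.abs(np.mean(np.exp(1j * dphi))))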

  14. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study was to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after cortical stroke. All of the hallucinations occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  15. Modulation of auditory brainstem responses by serotonin and specific serotonin receptors.

    Science.gov (United States)

    Papesh, Melissa A; Hurley, Laura M

    2016-02-01

    The neuromodulator serotonin is found throughout the auditory system from the cochlea to the cortex. Although effects of serotonin have been reported at the level of single neurons in many brainstem nuclei, how these effects correspond to more integrated measures of auditory processing has not been well-explored. In the present study, we aimed to characterize the effects of serotonin on far-field auditory brainstem responses (ABR) across a wide range of stimulus frequencies and intensities. Using a mouse model, we investigated the consequences of systemic serotonin depletion, as well as the selective stimulation and suppression of the 5-HT1 and 5-HT2 receptors, on ABR latency and amplitude. Stimuli included tone pips spanning four octaves presented over a forty dB range. Depletion of serotonin reduced the ABR latencies in Wave II and later waves, suggesting that serotonergic effects occur as early as the cochlear nucleus. Further, agonists and antagonists of specific serotonergic receptors had different profiles of effects on ABR latencies and amplitudes across waves and frequencies, suggestive of distinct effects of these agents on auditory processing. Finally, most serotonergic effects were more pronounced at lower ABR frequencies, suggesting larger or more directional modulation of low-frequency processing. This is the first study to describe the effects of serotonin on ABR responses across a wide range of stimulus frequencies and amplitudes, and it presents an important step in understanding how serotonergic modulation of auditory brainstem processing may contribute to modulation of auditory perception.

  16. Enhanced auditory neuron survival following cell-based BDNF treatment in the deaf guinea pig.

    Directory of Open Access Journals (Sweden)

    Lisa N Pettingill

    Exogenous neurotrophin delivery to the deaf cochlea can prevent deafness-induced auditory neuron degeneration; however, we have previously reported that these survival effects are rapidly lost if the treatment stops. In addition, there are concerns that current experimental techniques are not safe enough to be used clinically. Therefore, for such treatments to be clinically transferable, methods of neurotrophin treatment that are safe, biocompatible and can support long-term auditory neuron survival are necessary. Cell transplantation and gene transfer, combined with encapsulation technologies, have the potential to address these issues. This study investigated the survival-promoting effects of encapsulated BDNF over-expressing Schwann cells on auditory neurons in the deaf guinea pig. In comparison to control (empty) capsules, there was significantly greater auditory neuron survival following the cell-based BDNF treatment. Concurrent use of a cochlear implant is expected to result in even greater auditory neuron survival, and provide a clinically relevant method to support auditory neuron survival that may lead to improved speech perception and language outcomes for cochlear implant patients.

  17. Towards a neural basis of music perception.

    Science.gov (United States)

    Koelsch, Stefan; Siebel, Walter A

    2005-12-01

    Music perception involves complex brain functions underlying acoustic analysis, auditory memory, auditory scene analysis, and processing of musical syntax and semantics. Moreover, music perception potentially affects emotion, influences the autonomic nervous system, the hormonal and immune systems, and activates (pre)motor representations. During the past few years, research activities on different aspects of music processing and their neural correlates have rapidly progressed. This article provides an overview of recent developments and a framework for the perceptual side of music processing. This framework lays out a model of the cognitive modules involved in music perception, and incorporates information about the time course of activity of some of these modules, as well as research findings about where in the brain these modules might be located.

  18. Temporal Integration of Auditory Stimulation and Binocular Disparity Signals

    Directory of Open Access Journals (Sweden)

    Marina Zannoli

    2011-10-01

    Full Text Available Several studies using visual objects defined by luminance have reported that the auditory event must be presented 30 to 40 ms after the visual stimulus to perceive audiovisual synchrony. In the present study, we used visual objects defined only by their binocular disparity. We measured the optimal latency between visual and auditory stimuli for the perception of synchrony using a method introduced by Moutoussis & Zeki (1997). Visual stimuli were defined either by luminance and disparity or by disparity only. They moved either back and forth between 6 and 12 arcmin or from left to right at a constant disparity of 9 arcmin. This visual modulation was presented together with an amplitude-modulated 500 Hz tone. Both modulations were sinusoidal (frequency: 0.7 Hz). We found no difference between 2D and 3D motion for luminance stimuli: a 40 ms auditory lag was necessary for perceived synchrony. Surprisingly, even though stereopsis is often thought to be slow, we found a similar optimal latency in the disparity 3D motion condition (55 ms). However, when participants had to judge simultaneity for disparity 2D motion stimuli, it led to larger latencies (170 ms), suggesting that stereo motion detectors are poorly suited to track 2D motion.
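
    The optimal-lag estimate in studies of this kind comes from fitting a curve to synchrony judgments collected at many audio-visual lags and taking its peak. The sketch below illustrates the idea with a simple quadratic fit around the peak; the function name, lag values, and response proportions are all hypothetical stand-ins, not data or code from the study.

```python
import numpy as np

def estimate_optimal_lag(lags_ms, p_synchronous):
    """Estimate the auditory lag maximizing perceived synchrony by
    fitting a parabola to the proportion of 'synchronous' judgments
    and returning its vertex."""
    a, b, _ = np.polyfit(lags_ms, p_synchronous, 2)  # quadratic fit
    return -b / (2 * a)                              # vertex location

# Hypothetical judgments collected at a range of audio lags (ms)
lags = np.array([-80.0, -40.0, 0.0, 40.0, 80.0, 120.0, 160.0])
p_sync = np.array([0.20, 0.45, 0.70, 0.90, 0.75, 0.50, 0.25])
print(estimate_optimal_lag(lags, p_sync))  # peaks near +40 ms
```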

  19. The effects of auditory contrast tuning upon speech intelligibility

    Directory of Open Access Journals (Sweden)

    Nathaniel J Killian

    2016-08-01

    Full Text Available We have previously identified neurons tuned to spectral contrast of wideband sounds in auditory cortex of awake marmoset monkeys. Because additive noise alters the spectral contrast of speech, contrast-tuned neurons, if present in human auditory cortex, may aid in extracting speech from noise. Given that this cortical function may be underdeveloped in individuals with sensorineural hearing loss, incorporating biologically-inspired algorithms into external signal processing devices could provide speech enhancement benefits to cochlear implantees. In this study we first constructed a computational signal processing algorithm to mimic auditory cortex contrast tuning. We then manipulated the shape of contrast channels and evaluated the intelligibility of reconstructed noisy speech using a metric to predict cochlear implant user perception. Candidate speech enhancement strategies were then tested in cochlear implantees with a hearing-in-noise test. Accentuation of intermediate contrast values or all contrast values improved computed intelligibility. Cochlear implant subjects showed significant improvement in noisy speech intelligibility with a contrast shaping procedure.
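
    As a rough illustration of what a contrast-shaping front end can look like, the sketch below expands each spectral frame's deviation from its mean log-energy, boosting peaks relative to valleys. It is a minimal stand-in under assumed conventions (magnitude spectrogram input, a single `gain` parameter), not the authors' algorithm.

```python
import numpy as np

def enhance_spectral_contrast(spectrogram, gain=1.5):
    """Expand each frame's deviation from its mean log-energy across
    frequency, boosting spectral peaks relative to valleys.
    gain > 1 accentuates contrast; gain = 1 leaves the input unchanged."""
    log_spec = np.log(spectrogram + 1e-10)             # avoid log(0)
    frame_mean = log_spec.mean(axis=0, keepdims=True)  # per-frame mean
    return np.exp(frame_mean + gain * (log_spec - frame_mean))

# Toy magnitude spectrogram: 128 frequency bins x 50 time frames
spec = np.abs(np.random.randn(128, 50))
enhanced = enhance_spectral_contrast(spec, gain=2.0)
```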

  20. Auditory Neural Prostheses – A Window to the Future

    Directory of Open Access Journals (Sweden)

    Mohan Kameshwaran

    2015-06-01

    Full Text Available Hearing loss is one of the commonest congenital anomalies to affect children the world over. The incidence of congenital hearing loss is more pronounced in developing countries like the Indian sub-continent, especially with the problems of consanguinity. Hearing loss is a double tragedy, as it leads not only to deafness but also to language deprivation. However, hearing loss is the only truly remediable handicap, due to remarkable advances in biomedical engineering and surgical techniques. Auditory neural prostheses help to augment or restore hearing by integration of an external circuitry with the peripheral hearing apparatus and the central circuitry of the brain. A cochlear implant (CI) is a surgically implantable device that helps restore hearing in patients with severe-profound hearing loss, unresponsive to amplification by conventional hearing aids. CIs are electronic devices designed to detect mechanical sound energy and convert it into electrical signals that can be delivered to the cochlear nerve, bypassing the damaged hair cells of the cochlea. The only true prerequisite is an intact auditory nerve. The emphasis is on implantation as early as possible to maximize speech understanding and perception. Bilateral CI has significant benefits, which include improved speech perception in noisy environments and improved sound localization. Presently, the indications for CI have widened, and these expanded indications for implantation are related to age, additional handicaps, residual hearing, and special etiologies of deafness. The combined electric and acoustic stimulation (EAS)/hybrid device is designed for individuals with binaural low-frequency hearing and severe-to-profound high-frequency hearing loss. Auditory brainstem implantation (ABI) is a safe and effective means of hearing rehabilitation in patients with retrocochlear disorders, such as neurofibromatosis type 2 (NF2) or congenital cochlear nerve aplasia, wherein the cochlear nerve is damaged.

  1. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive, but recent reports have challenged this view. Here we study the effect of visual spatial attention on the McGurk effect. By presenting a movie of two faces symmetrically displaced to each side of a central fixation point and dubbed with a single auditory speech track, we were able to discern the influences... ...integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception prior to audiovisual integration and that the effect propagates through audiovisual integration...

  2. Auditory N1 reveals planning and monitoring processes during music performance.

    Science.gov (United States)

    Mathias, Brian; Gehring, William J; Palmer, Caroline

    2017-02-01

    The current study investigated the relationship between planning processes and feedback monitoring during music performance, a complex task in which performers prepare upcoming events while monitoring their sensory outcomes. Theories of action planning in auditory-motor production tasks propose that the planning of future events co-occurs with the perception of auditory feedback. This study investigated the neural correlates of planning and feedback monitoring by manipulating the contents of auditory feedback during music performance. Pianists memorized and performed melodies at a cued tempo in a synchronization-continuation task while the EEG was recorded. During performance, auditory feedback associated with single melody tones was occasionally substituted with tones corresponding to future (next), present (current), or past (previous) melody tones. Only future-oriented altered feedback disrupted behavior: Future-oriented feedback caused pianists to slow down on the subsequent tone more than past-oriented feedback, and amplitudes of the auditory N1 potential elicited by the tone immediately following the altered feedback were larger for future-oriented than for past-oriented or noncontextual (unrelated) altered feedback; larger N1 amplitudes were associated with greater slowing following altered feedback in the future condition only. Feedback-related negativities were elicited in all altered feedback conditions. In sum, behavioral and neural evidence suggests that future-oriented feedback disrupts performance more than past-oriented feedback, consistent with planning theories that posit similarity-based interference between feedback and planning contents. Neural sensory processing of auditory feedback, reflected in the N1 ERP, may serve as a marker for temporal disruption caused by altered auditory feedback in auditory-motor production tasks.

  3. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with NH listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may have important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  4. Auditory Hallucinations Nomenclature and Classification

    NARCIS (Netherlands)

    Blom, Jan Dirk; Sommer, Iris E. C.

    2010-01-01

    Introduction: The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an

  5. Nigel: A Severe Auditory Dyslexic

    Science.gov (United States)

    Cotterell, Gill

    1976-01-01

    Reported is the case study of a boy with severe auditory dyslexia who received remedial treatment from the age of four and progressed through courses at a technical college and a 3-year apprenticeship course in mechanics by the age of eighteen. (IM)

  6. What you see isn't always what you get: Auditory word signals trump consciously perceived words in lexical access.

    Science.gov (United States)

    Ostrand, Rachel; Blumstein, Sheila E; Ferreira, Victor S; Morgan, James L

    2016-06-01

    Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all. Subjects were presented with McGurk or Congruent primes and performed a lexical decision task on related or unrelated targets. Although subjects perceived the McGurk illusion, McGurk and Congruent primes with matching real-word auditory signals equivalently primed targets that were semantically related to the auditory signal, but not targets related to the McGurk percept. We conclude that the time course of auditory-visual integration is dependent on the lexicality of the auditory and visual input signals, and that listeners can lexically access one word and yet consciously perceive another.

  7. Music perception in dementia

    Science.gov (United States)

    Nicholas, Jennifer M; Cohen, Miriam H; Slattery, Catherine F; Paterson, Ross W; Foulkes, Alexander J M; Schott, Jonathan M; Mummery, Catherine J; Crutch, Sebastian J; Warren, Jason D

    2017-01-01

    Despite much recent interest in music and dementia, music perception has not been widely studied across dementia syndromes using an information processing approach. Here we addressed this issue in a cohort of 30 patients representing major dementia syndromes of typical Alzheimer’s disease (AD, n=16), logopenic aphasia (LPA, an Alzheimer variant syndrome; n=5) and progressive nonfluent aphasia (PNFA; n=9) in relation to 19 healthy age-matched individuals. We designed a novel neuropsychological battery to assess perception of musical patterns in the dimensions of pitch and temporal information (requiring detection of notes that deviated from the established pattern based on local or global sequence features) and musical scene analysis (requiring detection of a familiar tune within polyphonic harmony). Performance on these tests was referenced to generic auditory (timbral) deviance detection and recognition of familiar tunes and adjusted for general auditory working memory performance. Relative to healthy controls, patients with AD and LPA had group-level deficits of global pitch (melody contour) processing while patients with PNFA as a group had deficits of local (interval) as well as global pitch processing. There was substantial individual variation within syndromic groups. No specific deficits of musical temporal processing, timbre processing, musical scene analysis or tune recognition were identified. The findings suggest that particular aspects of music perception such as pitch pattern analysis may open a window on the processing of information streams in major dementia syndromes. The potential selectivity of musical deficits for particular dementia syndromes and particular dimensions of processing warrants further systematic investigation. PMID:27802226

  8. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    Science.gov (United States)

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  9. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Agnès eAlsius

    2014-07-01

    Full Text Available Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e. a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  10. Use of transcranial direct current stimulation for the treatment of auditory hallucinations of schizophrenia – a systematic review

    Science.gov (United States)

    Pondé, Pedro H; de Sena, Eduardo P; Camprodon, Joan A; de Araújo, Arão Nogueira; Neto, Mário F; DiBiasi, Melany; Baptista, Abrahão Fontes; Moura, Lidia MVR; Cosmo, Camila

    2017-01-01

    Introduction: Auditory hallucinations are defined as auditory perceptions experienced in the absence of a provoking external stimulus. They are among the most prevalent symptoms of schizophrenia, with a high capacity for chronicity and refractoriness during the course of the disease. Transcranial direct current stimulation (tDCS) – a safe, portable, and inexpensive neuromodulation technique – has emerged as a promising treatment for the management of auditory hallucinations. Objective: The aim of this study was to analyze the level of evidence available in the literature for the use of tDCS as a treatment for auditory hallucinations in schizophrenia. Methods: A systematic review was performed by searching the main electronic databases, including the Cochrane Library and MEDLINE/PubMed. The searches combined descriptors drawn from the Medical Subject Headings (MeSH) and the Health Sciences Descriptors vocabularies. The PRISMA protocol was used as a guide, and the terms used were the clinical outcomes (“Schizophrenia” OR “Auditory Hallucinations” OR “Auditory Verbal Hallucinations” OR “Psychosis”) searched together (“AND”) with the interventions (“transcranial Direct Current Stimulation” OR “tDCS” OR “Brain Polarization”). Results: Six randomized controlled trials that evaluated the effects of tDCS on the severity of auditory hallucinations in schizophrenic patients were selected. Analysis of the clinical results of these studies revealed inconsistent findings regarding the therapeutic use of tDCS for reducing the severity of auditory hallucinations in schizophrenia. Only three studies revealed a therapeutic benefit, manifested by reductions in severity and frequency of auditory verbal hallucinations in schizophrenic patients. Conclusion: Although tDCS has shown promising results in reducing the severity of auditory hallucinations in schizophrenic patients, this technique cannot

  11. Perceptual hysteresis in the judgment of auditory pitch shift.

    Science.gov (United States)

    Chambers, Claire; Pressnitzer, Daniel

    2014-07-01

    Perceptual hysteresis can be defined as the enduring influence of the recent past on current perception. Here, hysteresis was investigated in a basic auditory task: pitch comparisons between successive tones. On each trial, listeners were presented with pairs of tones and asked to report the direction of subjective pitch shift, as either "up" or "down." All tones were complexes known as Shepard tones (Shepard, 1964), which comprise several frequency components at octave multiples of a base frequency. The results showed that perceptual judgments were determined both by stimulus-related factors (the interval ratio between the base frequencies within a pair) and by recent context (the intervals in the two previous trials). When tones were presented in ordered sequences, for which the frequency interval between tones was varied in a progressive manner, strong hysteresis was found. In particular, ambiguous stimuli that led to equal probabilities of "up" and "down" responses within a randomized context were almost fully determined within an ordered context. Moreover, hysteresis did not act on the direction of the reported pitch shift, but rather on the perceptual representation of each tone. Thus, hysteresis could be observed within sequences in which listeners varied between "up" and "down" responses, enabling us to largely rule out confounds related to response bias. The strength of the perceptual hysteresis observed suggests that the ongoing context may have a substantial influence on fundamental aspects of auditory perception, such as how we perceive the changes in pitch between successive sounds.
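
    Shepard tones of the kind used here can be synthesized by summing octave-spaced partials under a fixed spectral envelope, which is what makes the direction of a pitch shift ambiguous at the half-octave interval. The following is a minimal sketch; the envelope center, bandwidth, and other parameters are illustrative assumptions, not the stimulus specification from the study.

```python
import numpy as np

def shepard_tone(base_freq, dur=0.5, sr=44100, center=960.0, sigma_oct=1.5):
    """Sum sinusoids at octave multiples of base_freq, weighted by a
    Gaussian envelope over log-frequency so that no single octave
    dominates and pitch height stays ambiguous."""
    t = np.arange(int(dur * sr)) / sr
    tone = np.zeros_like(t)
    f = base_freq
    while f < sr / 2:
        amp = np.exp(-0.5 * (np.log2(f / center) / sigma_oct) ** 2)
        tone += amp * np.sin(2 * np.pi * f * t)
        f *= 2
    return tone / np.max(np.abs(tone))

# A maximally ambiguous pair: base frequencies half an octave apart,
# equally consistent with an 'up' or a 'down' pitch shift
pair = [shepard_tone(110.0), shepard_tone(110.0 * 2 ** 0.5)]
```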

  12. Functional changes in the human auditory cortex in ageing.

    Directory of Open Access Journals (Sweden)

    Oliver Profant

    Full Text Available Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim to explore the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects (… auditory cortex. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more expressed than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing.

  13. Auditory scene analysis: The sweet music of ambiguity

    Directory of Open Access Journals (Sweden)

    Daniel ePressnitzer

    2011-12-01

    Full Text Available In this review paper aimed at the non-specialist, we explore the use that neuroscientists and musicians have made of perceptual illusions based on ambiguity. The pivotal issue is auditory scene analysis, or what enables us to make sense of complex acoustic mixtures in order to follow, for instance, a single melody in the midst of an orchestra. In general, auditory scene analysis uncovers the most likely physical causes that account for the waveform collected at the ears. However, the acoustical problem is ill-posed and it must be solved from noisy sensory input. Recently, the neural mechanisms implicated in the transformation of ambiguous sensory information into coherent auditory scenes have been investigated using so-called bistability illusions (where an unchanging ambiguous stimulus evokes a succession of distinct percepts in the mind of the listener). After reviewing some of those studies, we turn to music, which arguably provides some of the most complex acoustic scenes that a human listener will ever encounter. Interestingly, musicians will not always aim at making each physical source intelligible, but rather to express one or more melodic lines with a small or large number of instruments. By means of a few musical illustrations and by using a computational model inspired by neurophysiological principles, we suggest that this relies on a detailed (if perhaps implicit) knowledge of the rules of auditory scene analysis and of its inherent ambiguity. We then put forward the opinion that some degree of perceptual ambiguity may participate in our appreciation of music.

  14. Processing expectancy violations during music performance and perception: an ERP study.

    Science.gov (United States)

    Maidhof, Clemens; Vavatzanidis, Niki; Prinz, Wolfgang; Rieger, Martina; Koelsch, Stefan

    2010-10-01

    Musicians are highly trained motor experts with pronounced associations between musical actions and the corresponding auditory effects. However, the importance of auditory feedback for music performance is controversial, and it is unknown how feedback during music performance is processed. The present study investigated the neural mechanisms underlying the processing of auditory feedback manipulations in pianists. To disentangle effects of action-based and perception-based expectations, we compared feedback manipulations during performance to the mere perception of the same stimulus material. In two experiments, pianists performed sequences bimanually on a piano, while at random positions, the auditory feedback of single notes was manipulated, thereby creating a mismatch between an expected and actually perceived action effect (action condition). In addition, pianists listened to tone sequences containing the same manipulations (perception condition). The manipulations in the perception condition were either task-relevant (Experiment 1) or task-irrelevant (Experiment 2). In action and perception conditions, event-related potentials elicited by manipulated tones showed an early fronto-central negativity around 200 msec, presumably reflecting a feedback ERN/N200, followed by a positive deflection (P3a). The early negativity was more pronounced during the action compared to the perception condition. This shows that during performance, the intention to produce specific auditory effects leads to stronger expectancies than the expectancies built up during music perception.

  15. Approaches to the cortical analysis of auditory objects.

    Science.gov (United States)

    Griffiths, Timothy D; Kumar, Sukhbinder; Warren, Jason D; Stewart, Lauren; Stephan, Klaas Enno; Friston, Karl J

    2007-07-01

    We describe work that addresses the cortical basis for the analysis of auditory objects using 'generic' sounds that do not correspond to any particular events or sources (like vowels or voices) that have semantic associations. The experiments involve the manipulation of synthetic sounds to produce systematic changes of stimulus features, such as spectral envelope. Conventional analyses of normal functional imaging data demonstrate that the analysis of spectral envelope and perceived timbral change involves a network consisting of the planum temporale (PT) bilaterally and the right superior temporal sulcus (STS). Further analysis of the imaging data using dynamic causal modelling (DCM) and Bayesian model selection was carried out in the right-hemisphere areas to determine the effective connectivity between these auditory areas. Specifically, the objective was to determine whether the analysis of spectral envelope in the network proceeds in a serial fashion (that is, from Heschl's gyrus (HG) to PT to STS) or in a parallel fashion (that is, PT and STS receive input from HG simultaneously). Two families of models, serial and parallel (16 models in total), representing different hypotheses about the connectivity between HG, PT and STS, were specified. The models within a family differ with respect to the pathway that is modulated by the analysis of spectral envelope. After the models were specified, the Bayesian model selection procedure was used to select the 'optimal' model from among them. The data strongly support a particular serial model containing modulation of the HG-to-PT effective connectivity during spectral envelope variation. Parallel work in neurological subjects addresses the effect of lesions to different parts of this network. We have recently studied in detail subjects with 'dystimbria': an alteration in the perceived quality of auditory objects distinct from pitch or loudness change. The subjects have lesions of the normal network described above with normal perception of pitch strength
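
    Family-level Bayesian model selection of the kind described reduces, under a flat prior over models, to converting each model's log-evidence into a posterior probability and summing within families. A minimal sketch, with made-up log-evidences standing in for the study's DCM results:

```python
import numpy as np

def family_posteriors(log_evidences, family_labels):
    """Turn per-model log-evidences into posterior model probabilities
    (flat prior over models) and sum them within each family."""
    le = np.asarray(log_evidences, dtype=float)
    p = np.exp(le - le.max())   # subtract max for numerical stability
    p /= p.sum()
    families = {}
    for prob, fam in zip(p, family_labels):
        families[fam] = families.get(fam, 0.0) + prob
    return families

# 16 hypothetical models: 8 serial, 8 parallel
log_ev = np.random.randn(16) * 3.0
labels = ['serial'] * 8 + ['parallel'] * 8
print(family_posteriors(log_ev, labels))
```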

  16. Membrane potential dynamics of populations of cortical neurons during auditory streaming.

    Science.gov (United States)

    Farley, Brandon J; Noreña, Arnaud J

    2015-10-01

    How a mixture of acoustic sources is perceptually organized into discrete auditory objects remains unclear. One current hypothesis postulates that perceptual segregation of different sources is related to the spatiotemporal separation of cortical responses induced by each acoustic source or stream. In the present study, the dynamics of subthreshold membrane potential activity were measured across the entire tonotopic axis of the rodent primary auditory cortex during the auditory streaming paradigm using voltage-sensitive dye imaging. Consistent with the proposed hypothesis, we observed enhanced spatiotemporal segregation of cortical responses to alternating tone sequences as their frequency separation or presentation rate was increased, both manipulations known to promote stream segregation. However, across most streaming paradigm conditions tested, a substantial cortical region maintaining a response to both tones coexisted with more peripheral cortical regions responding more selectively to one of them. We propose that these coexisting subthreshold representation types could provide neural substrates to support the flexible switching between the integrated and segregated streaming percepts.

  17. Deficient auditory processing in children with Asperger Syndrome, as indexed by event-related potentials.

    Science.gov (United States)

    Jansson-Verkasalo, Eira; Ceponiene, Rita; Kielinen, Marko; Suominen, Kalervo; Jäntti, Ville; Linna, Sirkka Liisa; Moilanen, Irma; Näätänen, Risto

    2003-03-06

    Asperger Syndrome (AS) is characterized by normal language development but deficient understanding and use of the intonation and prosody of speech. While individuals with AS report difficulties in auditory perception, there are no studies addressing auditory processing at the sensory level. In this study, event-related potentials (ERP) were recorded for syllables and tones in children with AS and in their control counterparts. Children with AS displayed abnormalities in transient sound-feature encoding, as indexed by the obligatory ERPs, and in sound discrimination, as indexed by the mismatch negativity. These deficits were more severe for the tone stimuli than for the syllables. These results indicate that auditory sensory processing is deficient in children with AS, and that these deficits might be implicated in the perceptual problems encountered by children with AS.

  18. Differential contribution of visual and auditory information to accurately predict the direction and rotational motion of a visual stimulus.

    Science.gov (United States)

    Park, Seoung Hoon; Kim, Seonjin; Kwon, MinHyuk; Christou, Evangelos A

    2016-03-01

    Visual and auditory information are critical for perception and enhance the ability of an individual to respond accurately to a stimulus. However, it is unknown whether visual and auditory information contribute differentially to identifying the direction and rotational motion of a stimulus. The purpose of this study was to determine the ability of an individual to accurately predict the direction and rotational motion of a stimulus based on visual and auditory information. In this study, we recruited 9 expert table-tennis players and used the table-tennis service as our experimental model. Participants watched recorded services with different levels of visual and auditory information. The goal was to anticipate the direction of the service (left or right) and the rotational motion of the service (topspin, sidespin, or cut). We recorded their responses and quantified the following outcomes: (i) directional accuracy and (ii) rotational motion accuracy. Response accuracy was computed as the number of accurate predictions relative to the total number of trials. The ability of the participants to predict the direction of the service accurately increased with additional visual information but not with auditory information. In contrast, the ability of the participants to predict the rotational motion of the service accurately increased with the addition of auditory information to visual information but not with additional visual information alone. In conclusion, these findings demonstrate that visual information enhances the ability of an individual to accurately predict the direction of a stimulus, whereas additional auditory information enhances the ability of an individual to accurately predict its rotational motion.

  19. Basic to Applied Research: The Benefits of Audio-Visual Speech Perception Research in Teaching Foreign Languages

    Science.gov (United States)

    Erdener, Dogu

    2016-01-01

    Traditionally, second language (L2) instruction has emphasised auditory-based instruction methods. However, this approach is restrictive in the sense that speech perception by humans is not just an auditory phenomenon but a multimodal one, and specifically, a visual one as well. In the past decade, experimental studies have shown that the…

  20. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Full Text Available Auditory dysfunction is a common clinical symptom that can induce profound effects on the quality of life of those affected. Cerebrovascular disease (CVD is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The numbers of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often include the auditory system, resulting in various types of auditory dysfunctions, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  1. The effect of auditory memory load on intensity resolution in individuals with Parkinson's disease

    Science.gov (United States)

    Richardson, Kelly C.

    Purpose: The purpose of the current study was to investigate the effect of auditory memory load on intensity resolution in individuals with Parkinson's disease (PD) as compared to two groups of listeners without PD. Methods: Nineteen individuals with Parkinson's disease, ten healthy age- and hearing-matched adults, and ten healthy young adults were studied. All listeners participated in two intensity discrimination tasks differing in auditory memory load: a lower memory load (4IAX) task and a higher memory load (ABX) task. Intensity discrimination performance was assessed using a bias-free measure of signal detectability known as d' (d-prime). Listeners further participated in a continuous loudness scaling task in which they were instructed to rate the loudness level of each signal intensity using a computerized 150 mm visual analogue scale. Results: Group discrimination functions indicated significantly lower intensity discrimination sensitivity (d') across tasks for the individuals with PD as compared to the older and younger controls. No significant effect of aging on intensity discrimination was observed for either task. All three listener groups demonstrated significantly lower intensity discrimination sensitivity for the higher auditory memory load (ABX) task compared to the lower auditory memory load (4IAX) task. Furthermore, a significant effect of aging was identified for the loudness scaling condition: the younger controls rated most stimuli along the continuum as significantly louder than did the older controls and the individuals with PD. Conclusions: The persons with PD showed evidence of impaired auditory perception of intensity information as compared to the older and younger controls. The significant effect of aging on loudness perception may indicate peripheral and/or central auditory involvement.
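
    For a yes/no-style discrimination task, d' is the difference between the z-transformed hit and false-alarm rates (the exact formulas for 4IAX and ABX designs differ, with their own corrections). A minimal sketch, assuming a simple loglinear correction to keep the rates strictly between 0 and 1; the counts are hypothetical:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a loglinear
    correction keeping both rates strictly between 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38))
```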

  2. Neural entrainment to rhythmically-presented auditory, visual and audio-visual speech in children

    Directory of Open Access Journals (Sweden)

    Alan James Power

    2012-07-01

    Full Text Available Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal ‘samples’ of information from the speech stream at different rates, phase-resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (‘phase locking’). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase-locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically-developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable ba, presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a talking head). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the ba stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a ba in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling.
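
    Entrainment of this kind is commonly quantified as the phase-locking between a band-passed neural signal and the stimulus rhythm. The sketch below computes such an index from synthetic signals via the Hilbert transform; the filter design, band edges, and signals are illustrative assumptions, not the analysis pipeline of the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking(eeg, stim, sr, band=(1.0, 3.0)):
    """Band-pass both signals, extract instantaneous phase with the
    Hilbert transform, and return the length of the mean
    phase-difference vector (values near 1 = consistent entrainment)."""
    b, a = butter(2, [band[0] / (sr / 2), band[1] / (sr / 2)], btype='band')
    phase_eeg = np.angle(hilbert(filtfilt(b, a, eeg)))
    phase_stim = np.angle(hilbert(filtfilt(b, a, stim)))
    return np.abs(np.mean(np.exp(1j * (phase_eeg - phase_stim))))

sr = 250
t = np.arange(10 * sr) / sr
stim = np.sin(2 * np.pi * 2.0 * t)                                 # 2 Hz syllable rate
eeg = np.sin(2 * np.pi * 2.0 * t + 0.6) + np.random.randn(t.size)  # noisy, lagged
print(phase_locking(eeg, stim, sr))
```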

  3. The auditory brainstem is a barometer of rapid auditory learning.

    Science.gov (United States)

    Skoe, E; Krizman, J; Spitzer, E; Kraus, N

    2013-07-23

    To capture patterns in the environment, neurons in the auditory brainstem rapidly alter their firing based on the statistical properties of the soundscape. How this neural sensitivity relates to behavior is unclear. We tackled this question by combining neural and behavioral measures of statistical learning, a general-purpose learning mechanism governing many complex behaviors including language acquisition. We recorded complex auditory brainstem responses (cABRs) while human adults implicitly learned to segment patterns embedded in an uninterrupted sound sequence based on their statistical characteristics. The brainstem's sensitivity to statistical structure was measured as the change in the cABR between a patterned and a pseudo-randomized sequence composed from the same set of sounds but differing in their sound-to-sound probabilities. Using this methodology, we provide the first demonstration that behavioral-indices of rapid learning relate to individual differences in brainstem physiology. We found that neural sensitivity to statistical structure manifested along a continuum, from adaptation to enhancement, where cABR enhancement (patterned>pseudo-random) tracked with greater rapid statistical learning than adaptation. Short- and long-term auditory experiences (days to years) are known to promote brainstem plasticity and here we provide a conceptual advance by showing that the brainstem is also integral to rapid learning occurring over minutes.
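
    The statistical structure exploited in such paradigms can be made concrete by contrasting a 'patterned' stream, in which fixed multi-sound words yield high within-word transition probabilities, with a pseudo-random control. A minimal sketch with hypothetical sound labels (the study used its own sound set and probabilities):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical triplet 'words': within a word, the sound-to-sound
# transition probability is 1.0; across word boundaries it drops to ~0.33.
# That difference is the only cue available for segmenting the stream.
words = [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'H', 'I')]
patterned = [s for _ in range(100) for s in words[rng.integers(3)]]

# Pseudo-random control: the same nine sounds with flat transition
# probabilities, so no statistical word boundaries exist.
pseudo_random = list(rng.choice([s for w in words for s in w], size=300))
```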

  4. Functional connectivity of the temporo-parietal region in schizophrenia : Effects of rTMS treatment of auditory hallucinations

    NARCIS (Netherlands)

    Vercammen, Ans; Knegtering, Henderikus; Liemburg, Edith J.; den Boer, Johannes A.; Aleman, Andre

    2010-01-01

    Auditory-verbal hallucinations are a hallmark symptom of schizophrenia. In recent years, repetitive transcranial magnetic stimulation (rTMS) targeting speech perception areas has been advanced as a potential treatment of medication-resistant hallucinations. However, the underlying neural processes r

  5. Sound and Music in Narrative Multimedia : A macroscopic discussion of audiovisual relations and auditory narrative functions in film, television and video games

    OpenAIRE

    Lund, Are Valen

    2012-01-01

    This thesis examines how we perceive an audiovisual narrative - here defined as film, television and video games - and seeks to establish a descriptive framework for auditory stimuli and their narrative functions in this regard. I initially adopt the viewpoint of cognitive psychology and account for basic information processing operations. I then discuss audiovisual perception in terms of the effects of sensory integration between the visual and auditory modalities on the construction of meani...

  6. Broadened population-level frequency tuning in the auditory cortex of tinnitus patients

    Science.gov (United States)

    Sekiya, Kenichi; Takahashi, Mariko; Murakami, Shingo; Kakigi, Ryusuke

    2017-01-01

    Tinnitus is a phantom auditory perception without an external sound source and is one of the most common public health concerns that impair the quality of life of many individuals. However, its neural mechanisms remain unclear. We herein examined population-level frequency tuning in the auditory cortex of unilateral tinnitus patients with similar hearing levels in both ears using magnetoencephalography. We compared auditory-evoked neural activities elicited by a stimulation to the tinnitus and nontinnitus ears. Objective magnetoencephalographic data suggested that population-level frequency tuning corresponding to the tinnitus ear was significantly broader than that corresponding to the nontinnitus ear in the human auditory cortex. The results obtained support the hypothesis that pathological alterations in inhibitory neural networks play an important role in the perception of subjective tinnitus. NEW & NOTEWORTHY Although subjective tinnitus is one of the most common public health concerns that impair the quality of life of many individuals, no standard treatment or objective diagnostic method currently exists. We herein revealed that population-level frequency tuning was significantly broader in the tinnitus ear than in the nontinnitus ear. The results of the present study provide an insight into the development of an objective diagnostic method for subjective tinnitus. PMID:28053240

  7. Auditory-motor learning during speech production in 9-11-year-old children.

    Directory of Open Access Journals (Sweden)

    Douglas M Shiller

    Full Text Available BACKGROUND: Hearing ability is essential for normal speech development; however, the precise mechanisms linking auditory input and the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children. METHODOLOGY/PRINCIPAL FINDINGS: In the present study, we manipulated auditory feedback during speech production in a group of 9-11-year-old children, as well as in adults. Following a period of speech practice under conditions of altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output that was similar in magnitude to that of the adults; however, the children showed no reliable compensatory effect on their perceptual representations. CONCLUSIONS: The results indicate that 9-11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.

  8. Auditory Cortical Maturation in a Child with Cochlear Implant: Analysis of Electrophysiological and Behavioral Measures

    Science.gov (United States)

    Silva, Liliane Aparecida Fagundes; Couto, Maria Inês Vieira; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; de Carvalho, Ana Claudia Martinho; Matas, Carla Gentile

    2015-01-01

    The purpose of this study was to longitudinally assess the behavioral and electrophysiological hearing changes of a girl enrolled in a cochlear implant (CI) program, who had bilateral profound sensorineural hearing loss and underwent cochlear implantation surgery with electrode activation at 21 months of age. She was evaluated using the P1 component of the Long Latency Auditory Evoked Potential (LLAEP); speech perception tests of the Glendonald Auditory Screening Procedure (GASP); the Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS); and the Meaningful Use of Speech Scales (MUSS). The study was conducted prior to activation and after three, nine, and 18 months of cochlear implant activation. The results of the LLAEP were compared with data from a hearing child matched by gender and chronological age. The results of the LLAEP of the child with the cochlear implant showed a gradual decrease in the latency of the P1 component after auditory stimulation (172 ms–134 ms). In the GASP, IT-MAIS, and MUSS, gradual development of listening skills and oral language was observed. The values of the LLAEP of the hearing child were as expected for chronological age (132 ms–128 ms). The use of different clinical instruments allows a better understanding of the auditory habilitation and rehabilitation process via CI. PMID:26881163

  9. Evaluation of embryonic alcoholism from auditory event-related potential in fetal rats

    Institute of Scientific and Technical Information of China (English)

    梁勇; 王正敏; 屈卫东

    2004-01-01

    Auditory event-related potential (AERP) is a kind of electroencephalographic measure that reflects the responses of perception, memory and judgement to specific acoustic stimulation in the auditory cortex. AERP can be recorded in not only an active but also a passive mode, and both recording modes have been shown to be applicable in animals.1,2 Alcohol is a substance that can markedly affect conscious reactions in humans. Recently, AERP has been applied to study the effects of alcohol on the auditory centers of the brain. Some reports have shown dose-dependent differences in the latency, amplitude, responsiveness and waveform of the AERP between persons who have and have not taken in alcohol.3,4 Epidemiological investigations show that central nervous function in the offspring of alcohol users might also be affected.5,6 Because clinical research is limited by certain factors, several animal models have been applied to examine the influences of alcohol on consciousness with AERP. In the present study, young rats were exposed to alcohol during fetal development and the AERP was recorded as an indicator to monitor central auditory function; the mechanisms and characteristics of the effects of fetal alcoholism on auditory center function in rats were analyzed and discussed.

  10. Assembly of the auditory circuitry by a Hox genetic network in the mouse brainstem.

    Directory of Open Access Journals (Sweden)

    Maria Di Bonito

    Full Text Available Rhombomeres (r) contribute to brainstem auditory nuclei during development. Hox genes are determinants of rhombomere-derived fate and neuronal connectivity. Little is known about the contribution of individual rhombomeres and their associated Hox codes to auditory sensorimotor circuitry. Here, we show that r4 contributes to functionally linked sensory and motor components, including the ventral nucleus of lateral lemniscus, posterior ventral cochlear nuclei (VCN), and motor olivocochlear neurons. Assembly of the r4-derived auditory components is involved in sound perception and depends on regulatory interactions between Hoxb1 and Hoxb2. Indeed, in Hoxb1 and Hoxb2 mutant mice the transmission of low-level auditory stimuli is lost, resulting in hearing impairments. On the other hand, Hoxa2 regulates the Rig1 axon guidance receptor and controls contralateral projections from the anterior VCN to the medial nucleus of the trapezoid body, a circuit involved in sound localization. Thus, individual rhombomeres and their associated Hox codes control the assembly of distinct functionally segregated sub-circuits in the developing auditory brainstem.

  11. Assembly of the auditory circuitry by a Hox genetic network in the mouse brainstem.

    Science.gov (United States)

    Di Bonito, Maria; Narita, Yuichi; Avallone, Bice; Sequino, Luigi; Mancuso, Marta; Andolfi, Gennaro; Franzè, Anna Maria; Puelles, Luis; Rijli, Filippo M; Studer, Michèle

    2013-01-01

    Rhombomeres (r) contribute to brainstem auditory nuclei during development. Hox genes are determinants of rhombomere-derived fate and neuronal connectivity. Little is known about the contribution of individual rhombomeres and their associated Hox codes to auditory sensorimotor circuitry. Here, we show that r4 contributes to functionally linked sensory and motor components, including the ventral nucleus of lateral lemniscus, posterior ventral cochlear nuclei (VCN), and motor olivocochlear neurons. Assembly of the r4-derived auditory components is involved in sound perception and depends on regulatory interactions between Hoxb1 and Hoxb2. Indeed, in Hoxb1 and Hoxb2 mutant mice the transmission of low-level auditory stimuli is lost, resulting in hearing impairments. On the other hand, Hoxa2 regulates the Rig1 axon guidance receptor and controls contralateral projections from the anterior VCN to the medial nucleus of the trapezoid body, a circuit involved in sound localization. Thus, individual rhombomeres and their associated Hox codes control the assembly of distinct functionally segregated sub-circuits in the developing auditory brainstem.

  12. Auditory cortical delta-entrainment interacts with oscillatory power in multiple fronto-parietal networks.

    Science.gov (United States)

    Keitel, Anne; Ince, Robin A A; Gross, Joachim; Kayser, Christoph

    2017-02-15

    The timing of slow auditory cortical activity aligns to the rhythmic fluctuations in speech. This entrainment is considered to be a marker of the prosodic and syllabic encoding of speech, and has been shown to correlate with intelligibility. Yet, whether and how auditory cortical entrainment is influenced by the activity in other speech-relevant areas remains unknown. Using source-localized MEG data, we quantified the dependency of auditory entrainment on the state of oscillatory activity in fronto-parietal regions. We found that delta band entrainment interacted with the oscillatory activity in three distinct networks. First, entrainment in the left anterior superior temporal gyrus (STG) was modulated by beta power in orbitofrontal areas, possibly reflecting predictive top-down modulations of auditory encoding. Second, entrainment in the left Heschl's Gyrus and anterior STG was dependent on alpha power in central areas, in line with the importance of motor structures for phonological analysis. And third, entrainment in the right posterior STG modulated theta power in parietal areas, consistent with the engagement of semantic memory. These results illustrate the topographical network interactions of auditory delta entrainment and reveal distinct cross-frequency mechanisms by which entrainment can interact with different cognitive processes underlying speech perception.

  13. Auditory Cortical Maturation in a Child with Cochlear Implant: Analysis of Electrophysiological and Behavioral Measures

    Directory of Open Access Journals (Sweden)

    Liliane Aparecida Fagundes Silva

    2015-01-01

    Full Text Available The purpose of this study was to longitudinally assess the behavioral and electrophysiological hearing changes of a girl enrolled in a cochlear implant (CI) program, who had bilateral profound sensorineural hearing loss and underwent cochlear implantation surgery with electrode activation at 21 months of age. She was evaluated using the P1 component of the Long Latency Auditory Evoked Potential (LLAEP); speech perception tests of the Glendonald Auditory Screening Procedure (GASP); the Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS); and the Meaningful Use of Speech Scales (MUSS). The study was conducted prior to activation and after three, nine, and 18 months of cochlear implant activation. The results of the LLAEP were compared with data from a hearing child matched by gender and chronological age. The results of the LLAEP of the child with the cochlear implant showed a gradual decrease in the latency of the P1 component after auditory stimulation (172 ms–134 ms). In the GASP, IT-MAIS, and MUSS, gradual development of listening skills and oral language was observed. The values of the LLAEP of the hearing child were as expected for chronological age (132 ms–128 ms). The use of different clinical instruments allows a better understanding of the auditory habilitation and rehabilitation process via CI.

  14. Auditory training during development mitigates a hearing loss-induced perceptual deficit.

    Science.gov (United States)

    Kang, Ramanjot; Sarro, Emma C; Sanes, Dan H

    2014-01-01

    Sensory experience during early development can shape the central nervous system, and this is thought to influence adult perceptual skills. In the auditory system, early induction of conductive hearing loss (CHL) leads to deficits in central auditory coding properties in adult animals, accompanied by impaired perceptual thresholds. In contrast, a brief regimen of auditory training during development can enhance the perceptual skills of animals when tested in adulthood. Here, we asked whether a brief period of training during development could compensate for the perceptual deficits displayed by adult animals reared with CHL. Juvenile gerbils with CHL, and age-matched controls, were trained on a frequency modulation (FM) detection task for 4 or 10 days. The performance of each group was subsequently assessed in adulthood and compared to adults with normal hearing (NH) or adults raised with CHL that did not receive juvenile training. We show that, as juveniles, both CHL and NH animals display similar FM detection thresholds that are not immediately impacted by the perceptual training. However, as adults, the detection thresholds and psychometric function slopes of these animals were significantly improved. Importantly, CHL adults with juvenile training displayed thresholds that approached those of NH adults. Additionally, we found that hearing-impaired animals trained for 10 days displayed adult thresholds closer to those of untrained adults than did those trained for 4 days. Thus, a relatively brief period of auditory training may compensate for the deleterious impact of hearing deprivation on auditory perception for the trained task.
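
    The detection thresholds and psychometric-function slopes reported above are conventionally estimated by fitting a sigmoid to proportion-correct data. Below is a minimal sketch of such a fit; the FM depths, performance values, and logistic parametrization are hypothetical illustrations, not the study's actual procedure or data.

```python
# Minimal sketch: estimate a detection threshold and slope by fitting a
# logistic psychometric function to proportion-correct data (toy values).
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, slope):
    """Logistic function rising from 0.5 (chance in 2AFC) toward 1.0."""
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (x - threshold)))

depths = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])          # FM depth (Hz)
p_correct = np.array([0.52, 0.55, 0.68, 0.85, 0.96, 0.99])   # hypothetical data

(threshold, slope), _ = curve_fit(psychometric, depths, p_correct, p0=[6.0, 0.5])
print(f"threshold ~ {threshold:.1f} Hz, slope ~ {slope:.2f}")
```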

  15. Speech distortion measure based on auditory properties

    Institute of Scientific and Technical Information of China (English)

    CHEN Guo; HU Xiulin; ZHANG Yunyu; ZHU Yaoting

    2000-01-01

    The Perceptual Spectrum Distortion (PSD) measure, based on the auditory properties of human hearing, is presented for measuring speech distortion. The PSD measure calculates the speech distortion distance by simulating the auditory properties of the human ear and converting the short-time speech power spectrum to an auditory perceptual spectrum. Preliminary simulation experiments comparing it with the Itakura measure have been carried out. The results show that the PSD measure is a preferable speech distortion measure and is more consistent with subjective assessments of speech quality.
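
    The record does not spell out the PSD computation, so the following is only a hedged approximation of the general idea it describes: warp a short-time power spectrum onto an auditory (Bark-like) frequency scale, apply log compression, and take a distance between clean and degraded frames. Every function and constant here is an illustrative assumption, not the authors' formula.

```python
# Hedged sketch of a perceptual spectrum distortion: warp short-time power
# spectra onto a Bark-like scale, log-compress, and measure the distance
# between a clean and a degraded frame. Not the paper's exact formulation.
import numpy as np

def bark(f_hz):
    """Traunmueller's approximation of the Bark scale."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def perceptual_spectrum(frame, fs, n_bands=18):
    spec = np.abs(np.fft.rfft(frame * np.hanning(frame.size))) ** 2
    freqs = np.fft.rfftfreq(frame.size, 1.0 / fs)
    edges = np.linspace(bark(50.0), bark(fs / 2), n_bands + 1)
    bands = [spec[(bark(freqs) >= lo) & (bark(freqs) < hi)].sum() + 1e-12
             for lo, hi in zip(edges[:-1], edges[1:])]
    return np.log(np.asarray(bands))     # log compression, crudely loudness-like

def psd_distance(clean, degraded, fs):
    return np.linalg.norm(perceptual_spectrum(clean, fs)
                          - perceptual_spectrum(degraded, fs))

fs = 8000
t = np.arange(512) / fs
clean = np.sin(2 * np.pi * 440 * t)
degraded = clean + 0.1 * np.random.default_rng(1).standard_normal(t.size)
print(f"perceptual distortion: {psd_distance(clean, degraded, fs):.3f}")
```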

  16. Auditory evoked potentials and multiple sclerosis

    OpenAIRE

    Carla Gentile Matas; Sandro Luiz de Andrade Matas; Caroline Rondina Salzano de Oliveira; Isabela Crivellaro Gonçalves

    2010-01-01

    Multiple sclerosis (MS) is an inflammatory, demyelinating disease that can affect several areas of the central nervous system. Damage along the auditory pathway can alter its integrity significantly. Therefore, it is important to investigate the auditory pathway, from the brainstem to the cortex, in individuals with MS. OBJECTIVE: The aim of this study was to characterize auditory evoked potentials in adults with MS of the remittent-recurrent type. METHOD: The study comprised 25 individuals w...

  17. Auditory Perception in an Open Space: Detection and Recognition

    Science.gov (United States)

    2015-06-01

    conducted in a laboratory setting using loudspeaker or earphone sound reproduction. No study has been found to report similar data collected in an... Report 5197. Saberi K, Dostal L, Sadraladabai T, Bull V. Free-field release from masking. Journal of the Acoustical Society of America. 1991;90(3

  18. Effects of Amplitude Compression on Relative Auditory Distance Perception

    Science.gov (United States)

    2013-10-01

    amplitude compression in a hearing aid. In this case, the relationship between the input levels and the output levels was evaluated to determine the... A reprint from Towson University’s Office of Graduate Studies, Department of Audiology, Speech-Language Pathology, and Deaf Studies, Towson, MD, May...

  19. From sound to shape: auditory perception of drawing movements.

    Science.gov (United States)

    Thoret, Etienne; Aramaki, Mitsuko; Kronland-Martinet, Richard; Velay, Jean-Luc; Ystad, Sølvi

    2014-06-01

    This study investigates the human ability to perceive biological movements through friction sounds produced by drawings and, furthermore, the ability to recover drawn shapes from the friction sounds generated. In a first experiment, friction sounds, real-time synthesized and modulated by the velocity profile of the drawing gesture, revealed that subjects associated a biological movement to those sounds whose timbre variations were generated by velocity profiles following the 1/3 power law. This finding demonstrates that sounds can adequately inform about human movements if their acoustic characteristics are in accordance with the kinematic rule governing actual movements. Further investigations of our ability to recognize drawn shapes were carried out in 2 association tasks in which both recorded and synthesized sounds had to be associated to both distinct and similar visual shapes. Results revealed that, for both synthesized and recorded sounds, subjects made correct associations for distinct shapes, although some confusion was observed for similar shapes. The comparisons made between recorded and synthesized sounds lead to conclude that the timbre variations induced by the velocity profile enabled the shape recognition. The results are discussed in the context of the ecological and ideomotor frameworks.
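
    The kinematic rule invoked above, the 1/3 power law, ties the pen's tangential velocity to the curvature C of the drawn shape, v = K * C^(-1/3). As a crude stand-in for the study's real-time synthesizer, the sketch below derives such a velocity profile for an ellipse and uses it to modulate noise; the shape, gain, and synthesis method are arbitrary illustrations.

```python
# Crude illustration of the kinematic rule behind the friction-sound
# synthesis: tangential velocity follows the one-third power law,
# v = K * C**(-1/3), where C is the curvature of the drawn shape.
import numpy as np

fs = 8000
theta = np.linspace(0, 2 * np.pi, fs, endpoint=False)   # one 1-s traversal
a, b, K = 2.0, 1.0, 1.0                                  # ellipse axes and gain

# Curvature of an ellipse parametrized as (a*cos(theta), b*sin(theta))
curvature = (a * b) / (a**2 * np.sin(theta)**2 + b**2 * np.cos(theta)**2) ** 1.5

velocity = K * curvature ** (-1.0 / 3.0)     # one-third power law
velocity /= velocity.max()                   # normalize for use as a gain

rng = np.random.default_rng(2)
friction_sound = velocity * rng.standard_normal(fs)   # velocity-modulated noise
print(f"relative velocity range: {velocity.min():.2f}-{velocity.max():.2f}")
```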

  20. Sources of variability in consonant perception and their auditory correlates

    DEFF Research Database (Denmark)

    Zaar, Johannes; Dau, Torsten

    be demonstrated that any physical change in the stimuli had a measurable effect. This holds even for slight time-shifts in the steady-state masking-noise waveform. Furthermore, responses obtained with identical stimuli differed substantially across different normal-hearing listeners, while individual listeners were...

  1. Lateralization of Music Processing with Noises in the Auditory Cortex: An fNIRS Study

    OpenAIRE

    Hendrik eSantosa; Melissa Jiyoun Hong; Keum-Shik eHong

    2014-01-01

    The present study aims to determine the effects of background noise on the hemispheric lateralization in music processing by exposing fourteen subjects to four different auditory environments: music segments only, noise segments only, music+noise segments, and the entire music piece interfered with by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distingui...

  2. Lateralization of music processing with noises in the auditory cortex: an fNIRS study

    OpenAIRE

    Santosa, Hendrik; Hong, Melissa Jiyoun; Hong, Keum-Shik

    2014-01-01

    The present study aims to determine the effects of background noise on the hemispheric lateralization in music processing by exposing 14 subjects to four different auditory environments: music segments only, noise segments only, music + noise segments, and the entire music piece interfered with by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish s...

  3. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Full Text Available Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

  4. Auditory Training and Its Effects upon the Auditory Discrimination and Reading Readiness of Kindergarten Children.

    Science.gov (United States)

    Cullen, Minga Mustard

    The purpose of this investigation was to evaluate the effects of a systematic auditory training program on the auditory discrimination ability and reading readiness of 55 white, middle/upper middle class kindergarten students. Following pretesting with the "Wepman Auditory Discrimination Test," "The Clymer-Barrett Prereading Battery," and the…

  5. Effects of Methylphenidate (Ritalin) on Auditory Performance in Children with Attention and Auditory Processing Disorders.

    Science.gov (United States)

    Tillery, Kim L.; Katz, Jack; Keller, Warren D.

    2000-01-01

    A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…

  6. A Role for the Right Superior Temporal Sulcus in Categorical Perception of Musical Chords

    Science.gov (United States)

    Klein, Mike E.; Zatorre, Robert J.

    2011-01-01

    Categorical perception (CP) is a mechanism whereby non-identical stimuli that have the same underlying meaning become invariantly represented in the brain. Through behavioral identification and discrimination tasks, CP has been demonstrated to occur broadly across the auditory modality, including in perception of speech (e.g. phonemes) and music…

  7. Central auditory function of deafness genes.

    Science.gov (United States)

    Willaredt, Marc A; Ebbers, Lena; Nothwang, Hans Gerd

    2014-06-01

    The highly variable benefit of hearing devices is a serious challenge in auditory rehabilitation. Various factors contribute to this phenomenon such as the diversity in ear defects, the different extent of auditory nerve hypoplasia, the age of intervention, and cognitive abilities. Recent analyses indicate that, in addition, central auditory functions of deafness genes have to be considered in this context. Since reduced neuronal activity acts as the common denominator in deafness, it is widely assumed that peripheral deafness influences development and function of the central auditory system in a stereotypical manner. However, functional characterization of transgenic mice with mutated deafness genes demonstrated gene-specific abnormalities in the central auditory system as well. A frequent function of deafness genes in the central auditory system is supported by a genome-wide expression study that revealed significant enrichment of these genes in the transcriptome of the auditory brainstem compared to the entire brain. Here, we will summarize current knowledge of the diverse central auditory functions of deafness genes. We furthermore propose the intimately interwoven gene regulatory networks governing development of the otic placode and the hindbrain as a mechanistic explanation for the widespread expression of these genes beyond the cochlea. We conclude that better knowledge of central auditory dysfunction caused by genetic alterations in deafness genes is required. In combination with improved genetic diagnostics becoming currently available through novel sequencing technologies, this information will likely contribute to better outcome prediction of hearing devices.

  8. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    Science.gov (United States)

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
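
    To convey the flavor of this modeling approach, the sketch below fits a two-component Gaussian mixture to synthetic two-dimensional audiovisual cue data and queries the category posterior for a mismatched token. The cue names, distributions, and values are invented for illustration, and the paper's models additionally learn cue weights, which this sketch omits.

```python
# Toy sketch: learn two phonological categories from the joint distribution
# of an auditory cue (e.g., voice onset time) and a visual cue (e.g., lip
# aperture) with a Gaussian mixture model. Data are synthetic placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
cat_a = rng.normal(loc=[10.0, 0.3], scale=[5.0, 0.05], size=(500, 2))
cat_b = rng.normal(loc=[50.0, 0.7], scale=[8.0, 0.05], size=(500, 2))
data = np.vstack([cat_a, cat_b])

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)

# Posterior over categories for a mismatched token: the auditory cue is
# ambiguous while the visual cue favors category B
token = np.array([[30.0, 0.7]])
print(gmm.predict_proba(token).round(2))
```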

  9. STUDY ON PHASE PERCEPTION IN SPEECH

    Institute of Scientific and Technical Information of China (English)

    Tong Ming; Bian Zhengzhong; Li Xiaohui; Dai Qijun; Chen Yanpu

    2003-01-01

    The perceptual effect of the phase information in speech has been studied by auditory subjective tests. On the condition that the phase spectrum of speech is changed while the amplitude spectrum is unchanged, the tests show that: (1) if the envelope of the reconstructed speech signal is unchanged, there is no perceptible difference between the original speech and the reconstructed speech; (2) the auditory perception of the reconstructed speech mainly depends on the amplitude of the derivative of the additive phase; (3) td is the maximum relative time shift between different frequency components of the reconstructed speech signal. The speech quality is excellent when td < 10 ms; good when 10 ms < td < 20 ms; fair when 20 ms < td < 35 ms; and poor when td > 35 ms.
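
    The quantity td can be made concrete: an additive phase phi shifts the component at angular frequency omega by the group delay tau(omega) = -d(phi)/d(omega), so td is the spread of these delays across components. A small numerical sketch with a hypothetical additive phase function:

```python
# Sketch of td from the abstract: the component at angular frequency omega
# is delayed by the group delay tau = -dphi/domega, and td is the maximum
# relative time shift, i.e., the spread of tau across components.
import numpy as np

f = np.linspace(100, 4000, 1000)        # component frequencies (Hz)
omega = 2 * np.pi * f

phi = 60.0 * np.sin(omega / 3000.0)     # hypothetical additive phase (rad)

tau = -np.gradient(phi, omega)          # group delay per component (s)
td = tau.max() - tau.min()              # maximum relative time shift
print(f"td = {td * 1000:.1f} ms")       # compare with the 10/20/35 ms bands above
```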

  10. Timbre perception and object separation with normal and impaired hearing

    OpenAIRE

    Emiroglu, Suzan Selma

    2007-01-01

    Timbre is a combination of all auditory object attributes other than pitch, loudness and duration. A timbre distortion caused by a sensorineural hearing loss not only affects music perception, but may also influence object recognition in general. In order to quantify differences in object segregation and timbre discrimination between normal-hearing and hearing-impaired listeners with a sensorineural hearing loss, a new method for studying timbre perception was developed, which uses cross-fade...

  11. The perception of microsound and its musical implications.

    Science.gov (United States)

    Roads, Curtis

    2003-11-01

    Sound particles or microsounds last only a few milliseconds, near the threshold of auditory perception. We can easily analyze the physical properties of sound particles either individually or in masses. However, correlating these properties with human perception remains complicated. One cannot speak of a single time frame, or a "time constant" for the auditory system. The hearing mechanism involves many different agents, each of which operates on its own timescale. The signals being sent by diverse hearing agents are integrated by the brain into a coherent auditory picture. The pioneer of "sound quanta," Dennis Gabor (1900-1979), suggested that at least two mechanisms are at work in microevent detection: one that isolates events, and another that ascertains their pitch. Human hearing imposes a certain minimum duration in order to establish a firm sense of pitch, amplitude, and timbre. This paper traces disparate strands of literature on the topic and summarizes their meaning. Specifically, we examine the perception of intensity and pitch of microsounds, the phenomena of tone fusion and fission, temporal auditory acuity, and preattentive perception. The final section examines the musical implications of microsonic analysis, synthesis, and transformation.
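
    Gabor's sound quantum is, concretely, a sinusoid under a Gaussian window lasting only a few milliseconds, near the duration threshold for establishing pitch discussed above. A minimal sketch generating one such grain (all parameter values illustrative):

```python
# A Gabor grain, the elementary "sound quantum" discussed above: a sinusoid
# under a Gaussian envelope lasting a few milliseconds.
import numpy as np

fs = 44100
dur = 0.010                                  # 10 ms grain
t = np.arange(int(fs * dur)) / fs

freq = 1000.0                                # carrier frequency (Hz)
center, sigma = dur / 2, dur / 6             # Gaussian envelope parameters

envelope = np.exp(-0.5 * ((t - center) / sigma) ** 2)
grain = envelope * np.sin(2 * np.pi * freq * t)

# A granular texture is then a stream of many such grains scattered in time
print(f"grain length: {grain.size} samples ({dur * 1000:.0f} ms)")
```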

  12. Speech Perception Ability in Individuals with Friedreich Ataxia

    Science.gov (United States)

    Rance, Gary; Fava, Rosanne; Baldock, Heath; Chong, April; Barker, Elizabeth; Corben, Louise; Delatycki

    2008-01-01

    The aim of this study was to investigate auditory pathway function and speech perception ability in individuals with Friedreich ataxia (FRDA). Ten subjects confirmed by genetic testing as being homozygous for a GAA expansion in intron 1 of the FXN gene were included. While each of the subjects demonstrated normal, or near normal sound detection, 3…

  13. To What Extent Do Gestalt Grouping Principles Influence Tactile Perception?

    Science.gov (United States)

    Gallace, Alberto; Spence, Charles

    2011-01-01

    Since their formulation by the Gestalt movement more than a century ago, the principles of perceptual grouping have primarily been investigated in the visual modality and, to a lesser extent, in the auditory modality. The present review addresses the question of whether the same grouping principles also affect the perception of tactile stimuli.…

  14. Multisensory Speech Perception in Children with Autism Spectrum Disorders

    Science.gov (United States)

    Woynaroski, Tiffany G.; Kwakye, Leslie D.; Foss-Feig, Jennifer H.; Stevenson, Ryan A.; Stone, Wendy L.; Wallace, Mark T.

    2013-01-01

    This study examined unisensory and multisensory speech perception in 8-17 year old children with autism spectrum disorders (ASD) and typically developing controls matched on chronological age, sex, and IQ. Consonant-vowel syllables were presented in visual only, auditory only, matched audiovisual, and mismatched audiovisual ("McGurk")…

  15. Ordinal models of audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2011-01-01

    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe...... ordinal models that can account for the McGurk illusion. We compare this type of models to the Fuzzy Logical Model of Perception (FLMP) in which the response categories are not ordered. While the FLMP generally fit the data better than the ordinal model it also employs more free parameters in complex...... experiments when the number of response categories are high as it is for speech perception in general. Testing the predictive power of the models using a form of cross-validation we found that ordinal models perform better than the FLMP. Based on these findings we suggest that ordinal models generally have...

  16. Autosomal recessive hereditary auditory neuropathy

    Institute of Scientific and Technical Information of China (English)

    王秋菊; 顾瑞; 曹菊阳

    2003-01-01

    Objectives: Auditory neuropathy (AN) is a sensorineural hearing disorder characterized by absent or abnormal auditory brainstem responses (ABRs) and normal cochlear outer hair cell function as measured by otoacoustic emissions (OAEs). Many risk factors are thought to be involved in its etiology and pathophysiology. Three Chinese pedigrees with familial AN are presented herein to demonstrate the involvement of genetic factors in AN etiology. Methods: Probands of the above-mentioned pedigrees, who had been diagnosed with AN, were evaluated and followed up in the Department of Otolaryngology Head and Neck Surgery, China PLA General Hospital. Their family members were studied and the pedigree diagrams were established. History of illness, physical examination, pure-tone audiometry, acoustic reflex, ABRs, and transient evoked and distortion-product otoacoustic emissions (TEOAEs and DPOAEs) were obtained from members of these families. DPOAE changes under the influence of contralateral sound stimuli were observed by presenting continuous white noise to the non-recording ear, to examine the function of the auditory efferent system. Some subjects received a vestibular caloric test, computed tomography (CT) scan of the temporal bone, and electrocardiography (ECG) to exclude other possible neuropathy disorders. Results: In most affected subjects, hearing loss of various degrees and speech discrimination difficulties started at 10 to 16 years of age. Their audiological evaluation showed absence of acoustic reflexes and ABRs. As expected in AN, these subjects exhibited near-normal cochlear outer hair cell function as shown in TEOAE and DPOAE recordings. Pure-tone audiometry revealed hearing loss ranging from mild to severe in these patients. Autosomal recessive inheritance patterns were observed in the three families. In Pedigrees Ⅰ and Ⅱ, two affected brothers were found respectively, while in Pedigree Ⅲ, two sisters were affected. All the patients were otherwise normal without

  17. Auditory hallucinations in nonverbal quadriplegics.

    Science.gov (United States)

    Hamilton, J

    1985-11-01

    When a system for communicating with nonverbal, quadriplegic, institutionalized residents was developed, it was discovered that many were experiencing auditory hallucinations. Nine cases are presented in this study. The "voices" described have many similar characteristics, the primary one being that they give authoritarian commands that tell the residents how to behave and to which the residents feel compelled to respond. Both the relationship of this phenomenon to the theoretical work of Julian Jaynes and its effect on the lives of the residents are discussed.

  18. Narrow, duplicated internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, T. [Servico de Neurorradiologia, Hospital Garcia de Orta, Avenida Torrado da Silva, 2801-951, Almada (Portugal); Shayestehfar, B. [Department of Radiology, UCLA Oliveview School of Medicine, Los Angeles, California (United States); Lufkin, R. [Department of Radiology, UCLA School of Medicine, Los Angeles, California (United States)

    2003-05-01

    A narrow internal auditory canal (IAC) constitutes a relative contraindication to cochlear implantation because it is associated with aplasia or hypoplasia of the vestibulocochlear nerve or its cochlear branch. We report an unusual case of a narrow, duplicated IAC, divided by a bony septum into a superior relatively large portion and an inferior stenotic portion, in which we could identify only the facial nerve. This case adds support to the association between a narrow IAC and aplasia or hypoplasia of the vestibulocochlear nerve. The normal facial nerve argues against the hypothesis that the narrow IAC is the result of a primary bony defect which inhibits the growth of the vestibulocochlear nerve. (orig.)

  19. Mapping tonotopy in human auditory cortex

    NARCIS (Netherlands)

    van Dijk, Pim; Langers, Dave R M; Moore, BCJ; Patterson, RD; Winter, IM; Carlyon, RP; Gockel, HE

    2013-01-01

    Tonotopy is arguably the most prominent organizational principle in the auditory pathway. Nevertheless, the layout of tonotopic maps in humans is still debated. We present neuroimaging data that robustly identify multiple tonotopic maps in the bilateral auditory cortex. In contrast with some earlier

  20. Bilateral duplication of the internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Weon, Young Cheol; Kim, Jae Hyoung; Choi, Sung Kyu [Seoul National University College of Medicine, Department of Radiology, Seoul National University Bundang Hospital, Seongnam-si (Korea); Koo, Ja-Won [Seoul National University College of Medicine, Department of Otolaryngology, Seoul National University Bundang Hospital, Seongnam-si (Korea)

    2007-10-15

    Duplication of the internal auditory canal is an extremely rare temporal bone anomaly that is believed to result from aplasia or hypoplasia of the vestibulocochlear nerve. We report bilateral duplication of the internal auditory canal in a 28-month-old boy with developmental delay and sensorineural hearing loss. (orig.)

  1. Primary Auditory Cortex Regulates Threat Memory Specificity

    Science.gov (United States)

    Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…

  2. Further Evidence of Auditory Extinction in Aphasia

    Science.gov (United States)

    Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim

    2013-01-01

    Purpose: Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (M_age = 53.19 years)…

  3. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  4. Comparing perceived auditory width to the visual image of a performing ensemble in contrasting bi-modal environments.

    Science.gov (United States)

    Valente, Daniel L; Braasch, Jonas; Myrbeck, Shane A

    2012-01-01

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audio visual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volumes and varying reverberation times. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to a varying set of audio and visual cues in rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities coincide in distinct interactions. The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene.

  5. [Physiognomy-accompanying auditory hallucinations in schizophrenia: psychopathological investigation of 10 patients].

    Science.gov (United States)

    Nagashima, Hideaki; Kobayashi, Toshiyuki

    2010-01-01

    We previously reported two schizophrenic patients with characteristic hallucinations consisting of auditory hallucinations accompanied by visual hallucinations of the speaker's face. The patient sees the face of the hallucinatory speaker in his or her mind and hears the voice talking inwardly. We termed these experiences physiognomy-accompanying auditory hallucinations. In this report, we present 10 patients with schizophrenia showing physiognomy-accompanying auditory hallucinations and evaluate the characteristics of these clinical symptoms. Moreover, we consider what the symptoms mean for patients, and their metabasis from structural aspects. Lastly, we consider how we can treat these patients, who live autistic lives with the symptoms. During physiognomy-accompanying auditory hallucinations, the realistic face moves its mouth and talks to the patient expressively. In early-onset cases, the faces of various real people appear, talking about ordinary things, while in late-onset cases the faces can be imaginary but are mainly those of real people, talking about ordinary or delusional things. We suppose that these characteristics of the symptoms unify the schizophrenic world, overwhelmed by "a force of non-sense", to "the sense field". "The force of non-sense" is a substantial power but cannot be reduced to real meaning. And we suppose that it is not visual reality but the intensity of the auditory hallucinations of the face that brings about the overwhelming intensity of the symptoms, and that the substantiality of this intensity depends on states of excessive fullness of "the force of non-sense". With these symptoms, patients see the narration of the auditory hallucinations through the facial image, and the content of the auditory hallucinations is compressed into the movement of the visual hallucinations of the speaker's face. The form of the symptoms is realistic, but the speaker's face and voice are beyond ordinary time and space. The symptoms are essentially different from ordinary perception. The visual

  6. THE EFFECTS OF SALICYLATE ON AUDITORY EVOKED POTENTIAL AMPLITUDE FROM THE AUDITORY CORTEX AND AUDITORY BRAINSTEM

    Institute of Scientific and Technical Information of China (English)

    Brian Sawka; SUN Wei

    2014-01-01

    Tinnitus has often been studied using salicylate in animal models, as salicylate can induce temporary hearing loss and tinnitus. Studies have recently observed enhancement of auditory evoked responses of the auditory cortex (AC) post salicylate treatment, which has also been shown to be related to tinnitus-like behavior in rats. The aim of this study was to observe whether the post-salicylate enhancements of the AC are also present at structures in the brainstem. Four male Sprague Dawley rats with AC-implanted electrodes were tested with both AC and auditory brainstem response (ABR) recordings pre and post 250 mg/kg intraperitoneal injections of salicylate. The responses were recorded as the peak-to-trough amplitudes of P1-N1 (AC), ABR wave V, and ABR wave Ⅱ. AC responses showed statistically significant enhancement of amplitude at 2 hours post salicylate with 90 dB tone-burst stimuli of 4, 8, 12, and 20 kHz. Wave V of ABR responses at 90 dB showed a statistically significant reduction of amplitude 2 hours post salicylate, with a mean amplitude decrease of 31% for 16 kHz. Wave Ⅱ amplitudes at 2 hours post treatment were significantly reduced for 4, 12, and 20 kHz stimuli at 90 dB SPL. Our results suggest that the enhancement changes of the AC related to salicylate-induced tinnitus are generated superior to the level of the inferior colliculus and may originate in the AC.

  7. How and when auditory action effects impair motor performance.

    Science.gov (United States)

    D'Ausilio, Alessandro; Brunetti, Riccardo; Delogu, Franco; Santonico, Cristina; Belardinelli, Marta Olivetti

    2010-03-01

    Music performance is characterized by complex cross-modal interactions, offering a remarkable window into training-induced long-term plasticity and multimodal integration processes. Previous research with pianists has shown that playing a musical score is affected by the concurrent presentation of musical tones. We investigated the nature of this audio-motor coupling by evaluating how congruent and incongruent cross-modal auditory cues affect motor performance at different time intervals. We found facilitation if a congruent sound preceded motor planning with a large Stimulus Onset Asynchrony (SOA -300 and -200 ms), whereas we observed interference when an incongruent sound was presented with shorter SOAs (-200, -100 and 0 ms). Interference and facilitation, instead of developing through time as opposite effects of the same mechanism, showed dissociable time-courses suggesting their derivation from distinct processes. It seems that the motor preparation induced by the auditory cue has different consequences on motor performance according to the congruency with the future motor state the system is planning and the degree of asynchrony between the motor act and the sound presentation. The temporal dissociation we found contributes to the understanding of how perception meets action in the context of audio-motor integration.

  8. Auditory and language skills of children using hearing aids

    Directory of Open Access Journals (Sweden)

    Leticia Macedo Penna

    2015-04-01

    Full Text Available INTRODUCTION: Hearing loss may impair the development of a child. The rehabilitation process for individuals with hearing loss depends on effective interventions. OBJECTIVE: To describe the linguistic profile and the hearing skills of children using hearing aids, to characterize the rehabilitation process and to analyze its association with the children's degree of hearing loss. METHODS: Cross-sectional study with a non-probabilistic sample of 110 children using hearing aids (6-10 years of age) for mild to profound hearing loss. Tests of language, speech perception, phonemic discrimination, and school performance were performed. The associations were verified by the following tests: chi-squared for linear trend and Kruskal-Wallis. RESULTS: About 65% of the children had altered vocabulary, whereas 89% and 94% had altered phonology and inferior school performance, respectively. The degree of hearing loss was associated with differences in the median age of diagnosis; the age at which the hearing aids were adapted and at which speech therapy was started; and the performance on auditory tests and the type of communication used. CONCLUSION: The diagnosis of hearing loss and the clinical interventions occurred late, contributing to impairments in auditory and language development.

  9. Cochlear Damage Affects Neurotransmitter Chemistry in the Central Auditory System

    Directory of Open Access Journals (Sweden)

    Donald Albert Godfrey

    2014-11-01

    Full Text Available Tinnitus, the perception of a monotonous sound not actually present in the environment, affects nearly 20% of the population of the United States. Although there has been great progress in tinnitus research over the past 25 years, the neurochemical basis of tinnitus is still poorly understood. We review current research about the effects of various types of cochlear damage on the neurotransmitter chemistry in the central auditory system and document evidence that different changes in this chemistry can underlie similar behaviorally measured tinnitus symptoms. Most available data have been obtained from rodents following cochlear damage produced by cochlear ablation, loud sound, or ototoxic drugs. Effects on neurotransmitter systems have been measured as changes in neurotransmitter level, synthesis, release, uptake, and receptors. In this review, magnitudes of changes are presented for neurotransmitter-related amino acids, acetylcholine, and serotonin. A variety of effects have been found in these studies that may be related to animal model, survival time, type of cochlear damage, or methodology. The overall impression from the evidence presented is that any imbalance of neurotransmitter-related chemistry could disrupt auditory processing in such a way as to produce tinnitus.

  10. A model of auditory nerve responses to electrical stimulation

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Dau, Torsten; Epp, Bastian

    to neutralize the charge induced during the cathodic phase. Single-neuron recordings in cat auditory nerve using monophasic electrical stimulation show, however, that both phases in isolation can generate an AP. The site of AP generation differs for both phases, being more central for the anodic phase and more...... perception of CI listeners, a model needs to incorporate the correct responsiveness of the AN to anodic and cathodic polarity. Previous models of electrical stimulation have been developed based on AN responses to symmetric biphasic stimulation or to monophasic cathodic stimulation. These models, however......, fail to correctly predict responses to anodic stimulation. This study presents a model that simulates AN responses to anodic and cathodic stimulation. The main goal was to account for the data obtained with monophasic electrical stimulation in cat AN. The model is based on an exponential integrate...
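
    The record is truncated at "exponential integrate...", which appears to name the exponential integrate-and-fire family of neuron models. As a hedged illustration of that family only (not the authors' fitted auditory-nerve model), the sketch below drives a generic exponential integrate-and-fire neuron with a cathodic-then-anodic biphasic current pulse; all parameters are hypothetical.

```python
# Generic exponential integrate-and-fire (EIF) neuron driven by a biphasic
# current pulse (cathodic phase, then anodic phase). Parameters are
# hypothetical illustrations, not a fitted auditory-nerve model.
import numpy as np

dt, T = 1e-5, 0.003                   # 10 us step, 3 ms of simulation
steps = int(T / dt)

tau_m, v_rest = 0.5e-3, -65e-3        # membrane time constant (s), rest (V)
delta_T, v_thresh = 2e-3, -55e-3      # slope factor and soft threshold (V)
v_spike, R = -20e-3, 50e6             # spike cutoff (V), input resistance (ohm)

# Biphasic pulse: 100 us cathodic then 100 us anodic, 10 nA amplitude
i_stim = np.zeros(steps)
i_stim[10:20] = -10e-9
i_stim[20:30] = +10e-9

v = v_rest
for k in range(steps):
    dv = (-(v - v_rest) + delta_T * np.exp((v - v_thresh) / delta_T)
          + R * i_stim[k]) / tau_m
    v += dv * dt
    if v >= v_spike:                  # exponential runaway reached: a spike
        print(f"spike at {k * dt * 1e6:.0f} us")
        v = v_rest
        break
```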

  11. Is the auditory evoked P2 response a biomarker of learning?

    Directory of Open Access Journals (Sweden)

    Kelly eTremblay

    2014-02-01

    Full Text Available Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography and magnetoencephalography have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEPs), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences, including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared it to participants, matched in time and stimulus exposure, who did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What's more, these effects were retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN) wave 600-900 ms post-stimulus onset, post-training, exclusively for the group that learned to identify the pre

  12. The role of the auditory brainstem in processing musically relevant pitch.

    Science.gov (United States)

    Bidelman, Gavin M

    2013-01-01

    Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners' perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.

  13. Coding of communication calls in the subcortical and cortical structures of the auditory system.

    Science.gov (United States)

    Suta, D; Popelár, J; Syka, J

    2008-01-01

    The processing of species-specific communication signals in the auditory system represents an important aspect of animal behavior and is crucial for its social interactions, reproduction, and survival. In this article the neuronal mechanisms underlying the processing of communication signals in the higher centers of the auditory system--inferior colliculus (IC), medial geniculate body (MGB) and auditory cortex (AC)--are reviewed, with particular attention to the guinea pig. The selectivity of neuronal responses for individual calls in these auditory centers in the guinea pig is usually low--most neurons respond to calls as well as to artificial sounds; the coding of complex sounds in the central auditory nuclei is apparently based on the representation of temporal and spectral features of acoustical stimuli in neural networks. Neuronal response patterns in the IC reliably match the sound envelope for calls characterized by one or more short impulses, but do not exactly fit the envelope for long calls. Also, the main spectral peaks are represented by neuronal firing rates in the IC. In comparison to the IC, response patterns in the MGB and AC demonstrate a less precise representation of the sound envelope, especially in the case of longer calls. The spectral representation is worse in the case of low-frequency calls, but not in the case of broad-band calls. The emotional content of the call may influence neuronal responses in the auditory pathway, which can be demonstrated by stimulation with time-reversed calls or by measurements performed under different levels of anesthesia. The investigation of the principles of the neural coding of species-specific vocalizations offers some keys for understanding the neural mechanisms underlying human speech perception.

  14. Brain state-dependent abnormal LFP activity in the auditory cortex of a schizophrenia mouse model.

    Science.gov (United States)

    Nakao, Kazuhito; Nakazawa, Kazu

    2014-01-01

    In schizophrenia, evoked 40-Hz auditory steady-state responses (ASSRs) are impaired, which reflects the sensory deficits in this disorder, and baseline spontaneous oscillatory activity also appears to be abnormal. It has been debated whether the evoked ASSR impairments are due to the possible increase in baseline power. GABAergic interneuron-specific NMDA receptor (NMDAR) hypofunction mutant mice mimic some behavioral and pathophysiological aspects of schizophrenia. To determine the presence and extent of sensory deficits in these mutant mice, we recorded spontaneous local field potential (LFP) activity and its click-train evoked ASSRs from primary auditory cortex of awake, head-restrained mice. Baseline spontaneous LFP power in the pre-stimulus period before application of the first click trains was augmented at a wide range of frequencies. However, when repetitive ASSR stimuli were presented every 20 s, averaged spontaneous LFP power amplitudes during the inter-ASSR stimulus intervals in the mutant mice became indistinguishable from the levels of control mice. Nonetheless, the evoked 40-Hz ASSR power and their phase locking to click trains were robustly impaired in the mutants, although the evoked 20-Hz ASSRs were also somewhat diminished. These results suggested that NMDAR hypofunction in cortical GABAergic neurons confers two brain state-dependent LFP abnormalities in the auditory cortex: (1) a broadband increase in spontaneous LFP power in the absence of external inputs, and (2) a robust deficit in the evoked ASSR power and its phase-locking despite normal baseline LFP power magnitude during the repetitive auditory stimuli. The "paradoxically" high spontaneous LFP activity of the primary auditory cortex in the absence of external stimuli may possibly contribute to the emergence of schizophrenia-related aberrant auditory perception.
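
    The two measures reported above can be computed from the 40-Hz FFT bin of each trial epoch: evoked power from the trial-averaged spectrum, and phase locking as the inter-trial phase-locking value (PLV). The sketch below does this on synthetic epochs; the study's actual recording and analysis pipeline is not reproduced here.

```python
# Sketch: evoked 40-Hz ASSR power and inter-trial phase locking (PLV) from
# the 40-Hz FFT bin of each trial epoch. Epochs are synthetic stand-ins.
import numpy as np

fs, dur, n_trials = 1000, 1.0, 100
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(4)

# Each trial: a 40-Hz response with a fixed phase, buried in noise
epochs = (0.5 * np.sin(2 * np.pi * 40 * t)
          + rng.standard_normal((n_trials, t.size)))

spectra = np.fft.rfft(epochs, axis=1)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
bin40 = np.argmin(np.abs(freqs - 40.0))

# Evoked power: power of the trial-averaged response at 40 Hz
evoked_power = np.abs(spectra[:, bin40].mean()) ** 2

# PLV: length of the mean unit phase vector across trials (0 = random phase,
# 1 = perfect phase locking to the stimulus train)
phases = spectra[:, bin40] / np.abs(spectra[:, bin40])
plv = np.abs(phases.mean())

print(f"evoked 40-Hz power: {evoked_power:.1f}, PLV: {plv:.2f}")
```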

  15. Is the auditory evoked P2 response a biomarker of learning?

    Science.gov (United States)

    Tremblay, Kelly L; Ross, Bernhard; Inoue, Kayo; McClannahan, Katrina; Collet, Gregory

    2014-01-01

    Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography (EEG) and magnetoencephalography (MEG) have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEP), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences; including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared it to participants, matched in time, and stimulus exposure, that did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What's more, these effects are retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN) wave 600-900 ms post-stimulus onset, post-training exclusively for the group that learned to identify the pre-voiced contrast.

  16. Brain state-dependent abnormal LFP activity in the auditory cortex of a schizophrenia mouse model

    Directory of Open Access Journals (Sweden)

    Kazuhito eNakao

    2014-07-01

    Full Text Available In schizophrenia, evoked 40-Hz auditory steady-state responses (ASSRs) are impaired, which reflects the sensory deficits in this disorder, and baseline spontaneous oscillatory activity also appears to be abnormal. It has been debated whether the evoked ASSR impairments are due to the possible increase in baseline power. GABAergic interneuron-specific NMDA receptor (NMDAR) hypofunction mutant mice mimic some behavioral and pathophysiological aspects of schizophrenia. To determine the presence and extent of sensory deficits in these mutant mice, we recorded spontaneous local field potential (LFP) activity and its click-train evoked ASSRs from primary auditory cortex of awake, head-restrained mice. Baseline spontaneous LFP power in the pre-stimulus period before application of the first click trains was augmented at a wide range of frequencies. However, when repetitive ASSR stimuli were presented every 20 s, averaged spontaneous LFP power amplitudes during the inter-ASSR stimulus intervals in the mutant mice became indistinguishable from the levels of control mice. Nonetheless, the evoked 40-Hz ASSR power and their phase locking to click trains were robustly impaired in the mutants, although the evoked 20-Hz ASSRs were also somewhat diminished. These results suggested that NMDAR hypofunction in cortical GABAergic neurons confers two brain state-dependent LFP abnormalities in the auditory cortex: (1) a broadband increase in spontaneous LFP power in the absence of external inputs, and (2) a robust deficit in the evoked ASSR power and its phase-locking despite normal baseline LFP power magnitude during the repetitive auditory stimuli. The "paradoxically" high spontaneous LFP activity of the primary auditory cortex in the absence of external stimuli may possibly contribute to the emergence of schizophrenia-related aberrant auditory perception.

  17. The role of the auditory brainstem in processing musically-relevant pitch

    Directory of Open Access Journals (Sweden)

    Gavin M. Bidelman

    2013-05-01

    Full Text Available Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically-relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.

  18. Relationship between Sympathetic Skin Responses and Auditory Hypersensitivity to Different Auditory Stimuli.

    Science.gov (United States)

    Kato, Fumi; Iwanaga, Ryoichiro; Chono, Mami; Fujihara, Saori; Tokunaga, Akiko; Murata, Jun; Tanaka, Koji; Nakane, Hideyuki; Tanaka, Goro

    2014-07-01

    [Purpose] Auditory hypersensitivity has been widely reported in patients with autism spectrum disorders. However, the neurological background of auditory hypersensitivity is currently not clear. The present study examined the relationship between sympathetic nervous system responses and auditory hypersensitivity induced by different types of auditory stimuli. [Methods] We exposed 20 healthy young adults to six different types of auditory stimuli. The amounts of palmar sweating resulting from the auditory stimuli were compared between groups with (hypersensitive) and without (non-hypersensitive) auditory hypersensitivity. [Results] Although no group × type of stimulus × first stimulus interaction was observed for the extent of reaction, a significant type of stimulus × first stimulus interaction was noted. For an 80 dB-6,000 Hz stimulus, the trends for palmar sweating differed between the groups. For the first stimulus, the variance was larger in the hypersensitive group than in the non-hypersensitive group. [Conclusion] Subjects who regularly experienced excessive reactions to auditory stimuli tended to have excessive sympathetic responses to repeated loud noises compared with subjects who did not. People with auditory hypersensitivity may be classified into several subtypes depending on their reaction patterns to auditory stimuli.

  19. Speech perception as an active cognitive process

    Directory of Open Access Journals (Sweden)

    Shannon eHeald

    2014-03-01

    Full Text Available One view of speech perception is that acoustic signals are transformed into representations for pattern matching to determine linguistic structure. This process can be taken as a statistical pattern-matching problem, assuming relatively stable linguistic categories are characterized by neural representations related to auditory properties of speech that can be compared to speech input. This kind of pattern matching can be termed a passive process, which implies rigidity of processing with few demands on cognitive resources. An alternative view is that speech recognition, even in early stages, is an active process in which speech analysis is attentionally guided. Note that this does not mean consciously guided, but that information-contingent changes in early auditory encoding can occur as a function of context and experience. Active processing assumes that attention, plasticity, and listening goals are important in considering how listeners cope with adverse circumstances that impair hearing, whether through masking noise in the environment or hearing loss. Although theories of speech perception have begun to incorporate some active processing, they seldom treat early speech encoding as plastic and attentionally guided. Recent research has suggested that speech perception is the product of both feedforward and feedback interactions between a number of brain regions that include descending projections perhaps as far downstream as the cochlea. It is important to understand how the ambiguity of the speech signal and constraints of context dynamically determine the cognitive resources recruited during perception, including focused attention, learning, and working memory. Theories of speech perception need to go beyond the current corticocentric approach in order to account for the intrinsic dynamics of the auditory encoding of speech. In doing so, this may provide new insights into ways in which hearing disorders and loss may be treated either through augmentation or

  20. Musical competence and phoneme perception in a foreign language.

    Science.gov (United States)

    Swaminathan, Swathi; Schellenberg, E Glenn

    2017-02-15

    We investigated whether musical competence was associated with the perception of foreign-language phonemes. The sample comprised adult native-speakers of English who varied in music training. The measures included tests of general cognitive abilities, melody and rhythm perception, and the perception of consonantal contrasts that were phonemic in Zulu but not in English. Music training was associated positively with performance on the tests of melody and rhythm perception, but not with performance on the phoneme-perception task. In other words, we found no evidence for transfer of music training to foreign-language speech perception. Rhythm perception was not associated with the perception of Zulu clicks, but such an association was evident when the phonemes sounded more similar to English consonants. Moreover, it persisted after controlling for general cognitive abilities and music training. By contrast, there was no association between melody perception and phoneme perception. The findings are consistent with proposals that music- and speech-perception rely on similar mechanisms of auditory temporal processing, and that this overlap is independent of general cognitive functioning. They provide no support, however, for the idea that music training improves speech perception.

  1. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS).

    Science.gov (United States)

    Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory J

    2016-01-01

    Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  2. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS)

    Directory of Open Access Journals (Sweden)

    Mohamad Issa

    2016-01-01

    Full Text Available Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  3. Electrical brain imaging evidences left auditory cortex involvement in speech and non-speech discrimination based on temporal features

    Directory of Open Access Journals (Sweden)

    Jancke Lutz

    2007-12-01

    Full Text Available Abstract Background Speech perception is based on a variety of spectral and temporal acoustic features available in the acoustic signal. Voice-onset time (VOT) is considered an important cue that is cardinal for phonetic perception. Methods In the present study, we recorded and compared scalp auditory evoked potentials (AEP) in response to consonant-vowel syllables (CV) with varying voice-onset times (VOT) and non-speech analogues with varying noise-onset times (NOT). In particular, we aimed to investigate the spatio-temporal pattern of acoustic feature processing underlying elemental speech perception and relate this temporal processing mechanism to specific activations of the auditory cortex. Results Results show that the characteristic AEP waveform in response to consonant-vowel syllables is on a par with that of non-speech sounds with analogous temporal characteristics. The amplitudes of the N1a and N1b components of the auditory evoked potentials correlated significantly with the duration of the VOT in CV and, likewise, with the duration of the NOT in non-speech sounds. Furthermore, current density maps indicate overlapping supratemporal networks involved in the perception of both speech and non-speech sounds, with a bilateral activation pattern during the N1a time window and leftward asymmetry during the N1b time window. Elaborate regional statistical analysis of the activation over the middle and posterior portions of the supratemporal plane (STP) revealed strongly left-lateralized responses over the middle STP for both the N1a and N1b components, and a functional leftward asymmetry over the posterior STP for the N1b component. Conclusion The present data demonstrate overlapping spatio-temporal brain responses during the perception of temporal acoustic cues in both speech and non-speech sounds. Source estimation evidences a preponderant role of the left middle and posterior auditory cortex in speech and non-speech discrimination based on temporal

  4. Predicting Future Reading Problems Based on Pre-reading Auditory Measures: A Longitudinal Study of Children with a Familial Risk of Dyslexia

    Science.gov (United States)

    Law, Jeremy M.; Vandermosten, Maaike; Ghesquière, Pol; Wouters, Jan

    2017-01-01

    Purpose: This longitudinal study examines measures of temporal auditory processing in pre-reading children with a family risk of dyslexia. Specifically, it attempts to ascertain whether pre-reading auditory processing, speech perception, and phonological awareness (PA) reliably predict later literacy achievement. Additionally, this study retrospectively examines the presence of pre-reading auditory processing, speech perception, and PA impairments in children later found to be literacy impaired. Method: Forty-four pre-reading children with and without a family risk of dyslexia were assessed at three time points (kindergarten, first, and second grade). Auditory processing measures of rise time (RT) discrimination and frequency modulation (FM) along with speech perception, PA, and various literacy tasks were assessed. Results: Kindergarten RT uniquely contributed to growth in literacy in grades one and two, even after controlling for letter knowledge and PA. Highly significant concurrent and predictive correlations were observed with kindergarten RT significantly predicting first grade PA. Retrospective analysis demonstrated atypical performance in RT and PA at all three time points in children who later developed literacy impairments. Conclusions: Although significant, kindergarten auditory processing contributions to later literacy growth lack the power to be considered as a single-cause predictor; thus results support temporal processing deficits' contribution within a multiple deficit model of dyslexia. PMID:28223953

  5. Consonance and dissonance of musical chords: neural correlates in auditory cortex of monkeys and humans.

    Science.gov (United States)

    Fishman, Y I; Volkov, I O; Noh, M D; Garell, P C; Bakken, H; Arezzo, J C; Howard, M A; Steinschneider, M

    2001-12-01

    Some musical chords sound pleasant, or consonant, while others sound unpleasant, or dissonant. Helmholtz's psychoacoustic theory of consonance and dissonance attributes the perception of dissonance to the sensation of "beats" and "roughness" caused by interactions in the auditory periphery between adjacent partials of complex tones comprising a musical chord. Conversely, consonance is characterized by the relative absence of beats and roughness. Physiological studies in monkeys suggest that roughness may be represented in primary auditory cortex (A1) by oscillatory neuronal ensemble responses phase-locked to the amplitude-modulated temporal envelope of complex sounds. However, it remains unknown whether phase-locked responses also underlie the representation of dissonance in auditory cortex. In the present study, responses evoked by musical chords with varying degrees of consonance and dissonance were recorded in A1 of awake macaques and evaluated using auditory-evoked potential (AEP), multiunit activity (MUA), and current-source density (CSD) techniques. In parallel studies, intracranial AEPs evoked by the same musical chords were recorded directly from the auditory cortex of two human subjects undergoing surgical evaluation for medically intractable epilepsy. Chords were composed of two simultaneous harmonic complex tones. The magnitude of oscillatory phase-locked activity in A1 of the monkey correlates with the perceived dissonance of the musical chords. Responses evoked by dissonant chords, such as minor and major seconds, display oscillations phase-locked to the predicted difference frequencies, whereas responses evoked by consonant chords, such as octaves and perfect fifths, display little or no phase-locked activity. AEPs recorded in Heschl's gyrus display strikingly similar oscillatory patterns to those observed in monkey A1, with dissonant chords eliciting greater phase-locked activity than consonant chords. In contrast to recordings in Heschl's gyrus

  6. Auditory filters at low-frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Møller, Henrik

    2009-01-01

    Prediction and assessment of low-frequency noise problems requires information about the auditory filter characteristics at low-frequencies. Unfortunately, data at low-frequencies is scarce and practically no results have been published for frequencies below 100 Hz. Extrapolation of ERB results......-ear transfer function), the asymmetry of the auditory filter changed from steeper high-frequency slopes at 1000 Hz to steeper low-frequency slopes below 100 Hz. Increasing steepness at low-frequencies of the middle-ear high-pass filter is thought to cause this effect. The dynamic range of the auditory filter...
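    For orientation, the equivalent rectangular bandwidth (ERB) scale referred to above is commonly computed with the Glasberg and Moore (1990) formula, ERB(f) = 24.7(4.37f/1000 + 1) Hz. The sketch below evaluates it at a few centre frequencies; note that applying it below about 100 Hz is precisely the kind of extrapolation this record cautions against.

        # Minimal sketch: ERB of the auditory filter vs. centre frequency,
        # after Glasberg & Moore (1990). Values below ~100 Hz are an
        # extrapolation outside the range the formula was fitted to.
        def erb_hz(f_hz):
            return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

        for f in (50, 100, 250, 1000):
            print(f"{f:>5} Hz -> ERB ~ {erb_hz(f):6.1f} Hz")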

  7. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2014-01-01

    Conclusion: Based on the obtained results, a significant reduction in auditory memory was seen in the aged group, and the Persian version of the dichotic auditory-verbal memory test, like many other auditory-verbal memory tests, demonstrated the effect of aging on auditory-verbal memory performance.

  8. Action-based effects on music perception

    Directory of Open Access Journals (Sweden)

    Pieter-Jan Maes

    2014-01-01

    Full Text Available The classical, disembodied approach to music cognition conceptualizes action and perception as separate, peripheral phenomena. In contrast, embodied accounts of music cognition emphasize the central role of the close coupling of action and perception. It is a commonly established fact that perception spurs action tendencies. We present a theoretical framework capturing the ways that the human motor system, and the actions it produces, can reciprocally influence the perception of music. The cornerstone of this framework is the common coding theory, postulating a representational overlap in the brain between the planning, the execution, and the perception of movement. The integration of action and perception in so-called internal models is explained as a result of associative learning processes. Characteristic of internal models is that they allow intended or perceived sensory states to be transferred into corresponding motor commands (inverse modelling) and, vice versa, allow prediction of the sensory outcomes of planned actions (forward modelling). Embodied accounts typically adhere to inverse modelling to explain action effects on music perception (Leman, 2007). We extend this account by pinpointing forward modelling as an alternative mechanism by which action can modulate perception. We provide an extensive overview of recent empirical evidence in support of this idea. Additionally, we demonstrate that motor dysfunctions can cause perceptual disabilities, supporting the main idea of the paper that the human motor system plays a functional role in auditory perception. The finding that music perception is shaped by the human motor system, and the actions it produces, suggests that the musical mind is highly embodied. However, we advocate for a more radical approach to embodied (music) cognition in the sense that it needs to be considered as a dynamic process, in which aspects of action, perception, introspection, and social interaction are of crucial

  9. The influence of visuospatial attention on unattended auditory 40 Hz responses.

    Science.gov (United States)

    Roth, Cullen; Gupta, Cota Navin; Plis, Sergey M; Damaraju, Eswar; Khullar, Siddharth; Calhoun, Vince D; Bridwell, David A

    2013-01-01

    Healthy cognition and perception require the integration of information from multiple brain areas. The present study examined the extent to which cortical responses within one sensory modality are modulated by a complex task conducted within another sensory modality. Electroencephalographic (EEG) responses were measured to a 40 Hz auditory stimulus while individuals attended to modulations in the amplitude of the 40 Hz stimulus, and as a function of the difficulty of the popular computer game Tetris. The steady-state response to the 40 Hz stimulus was isolated by Fourier analysis of the EEG. The response at the stimulus frequency was normalized by the response within the surrounding frequencies, generating the signal-to-noise ratio (SNR). Seven out of eight individuals demonstrated a monotonic increase in the log SNR of the 40 Hz response going from the difficult visuospatial task to the easy visuospatial task to attending to the auditory stimuli, and a one-way ANOVA confirmed significant differences in log SNR across the three tasks. The sensitivity of 40 Hz auditory responses to visuospatial load was further demonstrated by a significant correlation between log SNR and the difficulty (i.e., speed) of the Tetris task. Thus, the results demonstrate that 40 Hz auditory cortical responses are influenced by an individual's goal-directed attention to the stimulus and by the degree of difficulty of a complex visuospatial task.
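    As a concrete illustration of the normalization described above, the sketch below estimates the log SNR of a 40 Hz steady-state response from an EEG epoch: spectral power at the stimulus frequency divided by the mean power of the surrounding bins. The window choice and the number of neighbouring bins are assumptions, not taken from the study.

        import numpy as np

        def log_snr(eeg, fs, f0=40.0, n_neighbors=10):
            # Power spectrum of the (Hann-windowed) epoch.
            spec = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
            freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
            k0 = int(np.argmin(np.abs(freqs - f0)))   # bin nearest 40 Hz
            # Surrounding bins, excluding the stimulus bin itself.
            nb = np.r_[k0 - n_neighbors:k0, k0 + 1:k0 + n_neighbors + 1]
            return np.log(spec[k0] / spec[nb].mean())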

  10. A vision-free brain-computer interface (BCI) paradigm based on auditory selective attention.

    Science.gov (United States)

    Kim, Do-Won; Cho, Jae-Hyun; Hwang, Han-Jeong; Lim, Jeong-Hwan; Im, Chang-Hwan

    2011-01-01

    The majority of recently developed brain-computer interface (BCI) systems have used visual stimuli or visual feedback. However, BCI paradigms based on visual perception might not be applicable to severely locked-in patients who have lost the ability to control their eye movements or even their vision. In the present study, we investigated the feasibility of a vision-free BCI paradigm based on auditory selective attention, using the power difference of auditory steady-state responses (ASSRs) when the participant modulates his/her attention to the target auditory stimulus. The auditory stimuli were constructed as two pure-tone burst trains with different beat frequencies (37 and 43 Hz) generated simultaneously from two speakers located at different positions (left and right). Our experimental results showed classification accuracies high enough for a binary decision (64.67%, 30 commands/min, information transfer rate (ITR) = 1.89 bits/min; 74.00%, 12 commands/min, ITR = 2.08 bits/min; 82.00%, 6 commands/min, ITR = 1.92 bits/min; 84.33%, 3 commands/min, ITR = 1.12 bits/min; without any artifact rejection, inter-trial interval = 6 s). Based on the suggested paradigm, we implemented a first online ASSR-based BCI system, demonstrating the possibility of a totally vision-free BCI.
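    The record does not name the ITR formula, but the reported values are consistent with the standard Wolpaw definition, bits/selection = log2 N + P log2 P + (1-P) log2((1-P)/(N-1)), scaled by the selection rate. A sketch that reproduces the reported figures under that assumption:

        import math

        def itr_bits_per_min(p, commands_per_min, n=2):
            # Wolpaw information transfer rate for an n-class BCI
            # with accuracy p.
            bits = math.log2(n)
            if 0.0 < p < 1.0:
                bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
            return bits * commands_per_min

        print(itr_bits_per_min(0.6467, 30))  # ~1.89 bits/min
        print(itr_bits_per_min(0.7400, 12))  # ~2.08 bits/min
        print(itr_bits_per_min(0.8200, 6))   # ~1.92 bits/min
        print(itr_bits_per_min(0.8433, 3))   # ~1.12 bits/min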

  11. Auditory stimulus timing influences perceived duration of co-occurring visual stimuli

    Directory of Open Access Journals (Sweden)

    Vincenzo Romei

    2011-09-01

    Full Text Available There is increasing interest in multisensory influences upon sensory-specific judgements, such as when auditory stimuli affect visual perception. Here we studied whether the duration of an auditory event can objectively affect the perceived duration of a co-occurring visual event. On each trial, participants were presented with a pair of successive flashes and had to judge whether the first or second was longer. Two beeps were presented with the flashes. The order of short and long stimuli could be the same across audition and vision (audiovisual congruent) or reversed, so that the longer flash was accompanied by the shorter beep and vice versa (audiovisual incongruent); or the two beeps could have the same duration as each other. Beeps and flashes could onset synchronously or asynchronously. In a further control experiment, the beep durations were much longer (tripled) than the flashes. Results showed that visual duration-discrimination sensitivity (d') was significantly higher for congruent (and significantly lower for incongruent) audiovisual synchronous combinations, relative to the visual-only presentation. This effect was abolished when auditory and visual stimuli were presented asynchronously, or when sound durations tripled those of flashes. We conclude that the temporal properties of co-occurring auditory stimuli influence the perceived duration of visual stimuli and that this can reflect genuine changes in visual sensitivity rather than mere response bias.
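    The sensitivity index d' reported above is, in the usual signal-detection convention, the difference between the z-transformed hit and false-alarm rates. A minimal sketch with made-up rates (not the study's data):

        from statistics import NormalDist

        def d_prime(hit_rate, fa_rate):
            # d' = z(hit rate) - z(false-alarm rate)
            z = NormalDist().inv_cdf
            return z(hit_rate) - z(fa_rate)

        # e.g. 85% hits against 30% false alarms:
        print(round(d_prime(0.85, 0.30), 2))  # ~1.56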

  12. Auditory midbrain implant: research and development towards a second clinical trial.

    Science.gov (United States)

    Lim, Hubert H; Lenarz, Thomas

    2015-04-01

    The cochlear implant is considered one of the most successful neural prostheses to date, made possible by visionaries who continued to develop it through multiple technological and clinical challenges. However, patients without a functional auditory nerve or an implantable cochlea cannot benefit from a cochlear implant. The focus of this paper is to review the development and translation of a new type of central auditory prosthesis for this group of patients, known as the auditory midbrain implant (AMI), which is designed for electrical stimulation within the inferior colliculus. The rationale and results for the first AMI clinical study using a multi-site single-shank array are presented initially. Although the AMI has achieved encouraging results in terms of safety and improvements in lip-reading capabilities and environmental awareness, it has not yet provided sufficient speech perception. Animal and human data are then presented to show that a two-shank AMI array can potentially improve hearing performance by targeting specific neurons of the inferior colliculus. A new two-shank array, stimulation strategy, and surgical approach are planned for the AMI that are expected to improve hearing performance in the patients who will be implanted in an upcoming clinical trial funded by the National Institutes of Health. Positive outcomes from this clinical trial will motivate new efforts and developments toward improving central auditory prostheses for those who cannot sufficiently benefit from cochlear implants.

  13. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    Science.gov (United States)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort has been the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. Sound has great potential both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience's judgement of depth in cinematic S3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
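    As a sketch of how a sound designer might render the two cues studied here, the following applies an inverse-distance gain and a one-pole low-pass whose cutoff falls with distance. The 1/r law and the distance-to-cutoff mapping are illustrative assumptions, not the article's procedure (real air absorption is frequency-dependent, e.g. ISO 9613-1).

        import numpy as np

        def apply_depth_cues(signal, fs, distance_m, ref_m=1.0):
            gain = ref_m / max(distance_m, ref_m)       # volume attenuation (1/r)
            cutoff = 16000.0 / max(distance_m, ref_m)   # farther -> duller (assumed mapping)
            a = np.exp(-2.0 * np.pi * cutoff / fs)      # one-pole filter coefficient
            out = np.empty_like(signal, dtype=float)
            y = 0.0
            for i, x in enumerate(signal):
                y = (1.0 - a) * x + a * y               # low-pass: high-end frequency loss
                out[i] = gain * y
            return out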

  14. Comparison of Auditory Event-Related Potential P300 in Sighted and Early Blind Individuals

    Directory of Open Access Journals (Sweden)

    Fatemeh Heidari

    2010-06-01

    Full Text Available Background and Aim: Following early visual deprivation, the neural network involved in processing auditory spatial information undergoes a profound reorganization. Event-related potentials provide accurate information about the time course of neural activation as well as perceptual and cognitive processes for investigating this reorganization. In this study, the latency and amplitude of the auditory P300 were compared between sighted and early blind individuals aged 18-25 years. Methods: In this cross-sectional study, the auditory P300 potential was measured in a conventional oddball paradigm using two tone-burst stimuli (1000 and 2000 Hz) in 40 sighted subjects and 19 early blind subjects with a mean age of 20.94 years. Results: The mean latency of the P300 in early blind subjects was significantly shorter than in sighted subjects (p=0.00). There was no significant difference in amplitude between the two groups (p>0.05). Conclusion: The reduced latency of the P300 in early blind subjects compared with sighted subjects probably indicates that automatic processing and information categorization are faster in early blind subjects because of sensory compensation. It seems that neural plasticity increases the rate of auditory processing and attention in early blind subjects.

  15. Use of auditory learning to manage listening problems in children

    OpenAIRE

    Moore, David R.; Halliday, Lorna F.; Amitay, Sygal

    2008-01-01

    This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers hav...

  16. How prior expectations shape multisensory perception.

    Science.gov (United States)

    Gau, Remi; Noppeney, Uta

    2016-01-01

    The brain generates a representation of our environment by integrating signals from a common source, but segregating signals from different sources. This fMRI study investigated how the brain arbitrates between perceptual integration and segregation based on top-down congruency expectations and bottom-up stimulus-bound congruency cues. Participants were presented audiovisual movies of phonologically congruent, incongruent or McGurk syllables that can be integrated into an illusory percept (e.g. "ti" percept for visual «ki» with auditory /pi/). They reported the syllable they perceived. Critically, we manipulated participants' top-down congruency expectations by presenting McGurk stimuli embedded in blocks of congruent or incongruent syllables. Behaviorally, participants were more likely to fuse audiovisual signals into an illusory McGurk percept in congruent than incongruent contexts. At the neural level, the left inferior frontal sulcus (lIFS) showed increased activations for bottom-up incongruent relative to congruent inputs. Moreover, lIFS activations were increased for physically identical McGurk stimuli, when participants segregated the audiovisual signals and reported their auditory percept. Critically, this activation increase for perceptual segregation was amplified when participants expected audiovisually incongruent signals based on prior sensory experience. Collectively, our results demonstrate that the lIFS combines top-down prior (in)congruency expectations with bottom-up (in)congruency cues to arbitrate between multisensory integration and segregation.

  17. The oscillatory activities and its synchronization in auditory-visual integration as revealed by event-related potentials to bimodal stimuli

    Science.gov (United States)

    Guo, Jia; Xu, Peng; Yao, Li; Shu, Hua; Zhao, Xiaojie

    2012-03-01

    The neural mechanism of auditory-visual speech integration is a central topic in the study of multimodal perception. Articulation conveys speech information that helps detect and disambiguate auditory speech. As important characteristics of the EEG, oscillations and their synchronization have been applied to cognition research more and more. This study analyzed EEG data acquired with unimodal and bimodal stimuli using time-frequency and phase-synchrony approaches, and investigated the oscillatory activities and synchrony modes underlying the evoked potentials during auditory-visual integration, in order to reveal the neural integration mechanisms behind these modes. Beta activity and its synchronization differences were related to the gesture N1-P2, which occurs at an earlier stage of coding speech from articulatory action. Alpha oscillations and their synchronization, related to the auditory N1-P2, might be mainly responsible for auditory speech processing driven by anticipation from gesture to sound features. Changes in the visual gesture enhanced the interaction of auditory brain regions. These results offer explanations for the changes in power and connectivity of event-evoked oscillatory activities that accompany ERPs during auditory-visual speech integration.
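    A common phase-synchrony measure behind analyses like this one is the phase-locking value (PLV): the magnitude of the time-averaged unit phasor of the phase difference between two band-limited signals. A minimal sketch, assuming the inputs have already been band-pass filtered (e.g. to the alpha or beta band):

        import numpy as np
        from scipy.signal import hilbert

        def phase_locking_value(x, y):
            # Instantaneous phases via the analytic signal.
            phi_x = np.angle(hilbert(x))
            phi_y = np.angle(hilbert(y))
            # |mean phasor| of the phase difference, in [0, 1]:
            # 1 = perfectly locked, ~0 = no consistent phase relation.
            return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))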

  18. Auditory-visual spatial interaction and modularity

    Science.gov (United States)

    Radeau, M

    1994-02-01

    Results on the conditions for pairing visual and auditory data coming from spatially separate locations argue for cognitive impenetrability and computational autonomy, the pairing rules being the Gestalt principles of common fate and proximity. Other data provide evidence that such pairing has several properties of modular functioning. Arguments for domain specificity are inferred from comparison with audio-visual speech. Innate specification is suggested by developmental data indicating that the grouping of visual and auditory signals is supported very early in life by the same principles that operate in adults. Support for a specific neural architecture comes from neurophysiological studies of the bimodal (auditory-visual) neurons of the cat superior colliculus. Auditory-visual pairing thus seems to present the four main properties of the Fodorian module.

  19. [Approaches to therapy of auditory agnosia].

    Science.gov (United States)

    Fechtelpeter, A; Göddenhenrich, S; Huber, W; Springer, L

    1990-01-01

    In a 41-year-old stroke patient with bitemporal brain damage, we found severe signs of auditory agnosia 6 months after onset. Recognition of environmental sounds was extremely impaired when tested in a multiple-choice sound-picture matching task, whereas auditory discrimination between sounds and picture identification by written names were almost undisturbed. In a therapy experiment, we tried to enhance sound recognition via semantic categorization and association, imitation of sounds, and analysis of auditory features. The stimulation of conscious auditory analysis proved to be increasingly effective over a 4-week period of therapy. We were able to show that the patient's improvement was not merely an effect of practice: it was stable and carried over to untrained items.

  20. Effect of omega-3 on auditory system

    Directory of Open Access Journals (Sweden)

    Vida Rahimi

    2014-01-01

    Full Text Available Background and Aim: Omega-3 fatty acids have structural and biological roles in the body's various systems, and numerous studies have investigated them. The auditory system is affected as well. The aim of this article was to review the research on the effect of omega-3 on the auditory system. Methods: We searched the Medline, Google Scholar, PubMed, Cochrane Library, and SID search engines with the keywords "auditory" and "omega-3" and consulted textbooks on this subject published between 1970 and 2013. Conclusion: Both excess and deficient amounts of dietary omega-3 fatty acids can have harmful effects on fetal and infant growth and on the development of the brain and central nervous system, especially the auditory system. It is important to determine the adequate dosage of omega-3.

  1. On the Physical Development and Plasticity of Auditory Neuro System

    Institute of Scientific and Technical Information of China (English)

    王晓宇; 黄玲玲

    2011-01-01

    Many important functions of the human brain (such as perception, learning, memory, language, consciousness, emotion, and motor control) are closely related to auditory experience. Humans have a highly evolved auditory system, capable of all-round detection, rapid processing, and experiencing of biologically significant acoustic signals (language), and of guiding specific behaviors (such as speech recognition and communication). The neurophysiological mechanisms of auditory experience play a pivotal role in auditory plasticity.

  2. Speaking of reading: The role of basic auditory and speech processing in the manifestation of dyslexia in children at familial risk

    NARCIS (Netherlands)

    Hakvoort, B.E.

    2016-01-01

    Deficits in phonological skills are generally acknowledged to relate to dyslexia. Knowledge about the phonological structure of language is extracted from the speech signal. Auditory and speech perception are thus often thought to be important for the development of apt phonological skills. Therefor

  3. Time course of auditory streaming: Do CI users differ from normal-hearing listeners?

    Directory of Open Access Journals (Sweden)

    Martin Böckmann-Barthel

    2014-07-01

    Full Text Available In a complex acoustic environment with multiple sound sources, the auditory system uses streaming as a tool to organize the incoming sounds into one or more streams depending on the stimulus parameters. Streaming is commonly studied with alternating sequences of signals, often tones with different frequencies. The present study investigates stream segregation in cochlear implant (CI) users, in whom hearing is restored by electrical stimulation of the auditory nerve. CI users listened to 30-s-long sequences of alternating A and B harmonic complexes at four different fundamental frequency separations, ranging from 2 to 14 semitones. They had to indicate, as promptly as possible after sequence onset, whether they perceived one stream or two streams and, in addition, any changes of the percept throughout the rest of the sequence. The conventional view is that the initial percept is always that of a single stream, which may after some time change to a percept of two streams. This general build-up hypothesis has recently been challenged on the basis of a new analysis of data from normal-hearing listeners, which showed a build-up response only for an intermediate frequency separation. Using the same experimental paradigm and analysis, the present study found that the results of CI users agree with those of normal-hearing listeners: (i) the probability of the first decision being a one-stream percept decreased, and that of a two-stream percept increased, as Δf increased; and (ii) a build-up was found only for 6 semitones. Only the time elapsed before the listeners made their first decision about the percept was prolonged compared with normal-hearing listeners. The similarity of the data from CI users and normal-hearing listeners indicates that the quality of stream formation is similar in these groups of listeners.

  4. A critical period for auditory thalamocortical connectivity

    DEFF Research Database (Denmark)

    Rinaldi Barkat, Tania; Polley, Daniel B; Hensch, Takao K

    2011-01-01

    connectivity by in vivo recordings and day-by-day voltage-sensitive dye imaging in an acute brain slice preparation. Passive tone-rearing modified response strength and topography in mouse primary auditory cortex (A1) during a brief, 3-d window, but did not alter tonotopic maps in the thalamus. Gene...... locus of change for the tonotopic plasticity. The evolving postnatal connectivity between thalamus and cortex in the days following hearing onset may therefore determine a critical period for auditory processing....

  5. Changing Perceptions

    Science.gov (United States)

    Mallett, Susanne; Wren, Steve; Dawes, Mark; Blinco, Amy; Haines, Brett; Everton, Jenny; Morgan, Ellen; Barton, Craig; Breen, Debbie; Ellison, Geraldine; Burgess, Danny; Stavrou, Jim; Carre, Catherine; Watson, Fran; Cherry, David; Hawkins, Chris; Stapenhill-Hunt, Maria; Gilderdale, Charlie; Kiddle, Alison; Piggott, Jennifer

    2009-01-01

    A group of teachers involved in embedding NRICH tasks (http://nrich.maths.org) into their everyday practice were keen to challenge common perceptions of mathematics, and of the teaching and learning of mathematics. In this article, the teachers share what they are doing to change these perceptions in their schools.

  6. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have a specific auditory perceptual dysfunction. Objectives: To determine whether brainstem responses to speech stimuli differ between PDS subjects and normally fluent speakers. Methods: Twenty-five subjects with PDS participated in this study. The speech ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results: There were significant group differences for the onset and offset transient peaks: subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions: Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and this abnormal timing may underlie their disfluency.

  7. Auditory processing in fragile x syndrome.

    Science.gov (United States)

    Rotschafer, Sarah E; Razak, Khaleel A

    2014-01-01

    Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral, and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle response is seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning, and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable, and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. The presence of similarly abnormal responses in Fmr1 KO mice suggests that cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional, and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, circuit level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  8. Auditory Processing in Fragile X Syndrome

    Directory of Open Access Journals (Sweden)

    Sarah E Rotschafer

    2014-02-01

    Full Text Available Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle is also seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. The presence of similarly abnormal responses in Fmr1 KO mice suggests that cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, circuit level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  9. Auditory model inversion and its application

    Institute of Scientific and Technical Information of China (English)

    ZHAO Heming; WANG Yongqi; CHEN Xueqin

    2005-01-01

    Auditory models have been applied to several aspects of speech signal processing and appear to be effective. This paper presents the inverse transform of each stage of one widely used auditory model. First, the correlogram is inverted and phase information is reconstructed by repeated iterations in order to obtain the auditory-nerve firing rate. The next step is to recover the negative parts of the signal by reversing the half-wave rectification (HWR). Finally, the inner hair cell/synapse model and the Gammatone filters are inverted. Thus the whole auditory model inversion is achieved. An application to noisy speech enhancement based on the auditory model inversion algorithm is proposed. Many experiments show that this method is effective in reducing noise, especially when the SNR of the noisy speech is low, where it outperforms other methods. The auditory model inversion method given in this paper is therefore applicable to the speech enhancement field.
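    The inversion reverses each stage of a standard analysis model; for orientation, the forward Gammatone stage being inverted has an impulse response of the form g(t) = t^(n-1) e^(-2πbt) cos(2πf_c t), with the bandwidth b tied to the ERB scale. A sketch of that forward filter (the 1.019 bandwidth factor follows the common Patterson convention; the filter order and duration are assumptions):

        import numpy as np

        def gammatone_ir(fc, fs, n=4, duration=0.05):
            # Impulse response of an n-th order gammatone filter at
            # centre frequency fc (Hz), sample rate fs (Hz).
            t = np.arange(int(duration * fs)) / fs
            erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)   # Glasberg & Moore (1990)
            b = 1.019 * erb                            # bandwidth correction factor
            g = t**(n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
            return g / np.max(np.abs(g))               # peak-normalized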

  10. Auditory dysfunction associated with solvent exposure

    Directory of Open Access Journals (Sweden)

    Fuente Adrian

    2013-01-01

    Full Text Available Abstract Background A number of studies have demonstrated that solvents may induce auditory dysfunction. However, there is still little knowledge regarding the main signs and symptoms of solvent-induced hearing loss (SIHL. The aim of this research was to investigate the association between solvent exposure and adverse effects on peripheral and central auditory functioning with a comprehensive audiological test battery. Methods Seventy-two solvent-exposed workers and 72 non-exposed workers were selected to participate in the study. The test battery comprised pure-tone audiometry (PTA, transient evoked otoacoustic emissions (TEOAE, Random Gap Detection (RGD and Hearing-in-Noise test (HINT. Results Solvent-exposed subjects presented with poorer mean test results than non-exposed subjects. A bivariate and multivariate linear regression model analysis was performed. One model for each auditory outcome (PTA, TEOAE, RGD and HINT was independently constructed. For all of the models solvent exposure was significantly associated with the auditory outcome. Age also appeared significantly associated with some auditory outcomes. Conclusions This study provides further evidence of the possible adverse effect of solvents on the peripheral and central auditory functioning. A discussion of these effects and the utility of selected hearing tests to assess SIHL is addressed.

  11. Long Latency Auditory Evoked Potentials during Meditation.

    Science.gov (United States)

    Telles, Shirley; Deepeshwar, Singh; Naveen, Kalkuni Visweswaraiah; Pailoor, Subramanya

    2015-10-01

    The auditory sensory pathway has been studied in meditators using midlatency and short-latency auditory evoked potentials. The present study evaluated long-latency auditory evoked potentials (LLAEPs) during meditation. Sixty male participants, aged between 18 and 31 years (group mean±SD, 20.5±3.8 years), were assessed in 4 mental states based on descriptions in the traditional texts: (a) random thinking, (b) nonmeditative focusing, (c) meditative focusing, and (d) meditation. The order of the sessions was randomly assigned. The LLAEP components studied were P1 (40-60 ms), N1 (75-115 ms), P2 (120-180 ms), and N2 (180-280 ms). For each component, the peak amplitude and peak latency were measured from the prestimulus baseline. There was a significant decrease in the peak latency of the P2 component during and after meditation, suggesting that meditation facilitates the processing of information in the auditory association cortex, whereas the number of neurons recruited was smaller during random thinking and non-meditative focused thinking at the level of the secondary auditory cortex, auditory association cortex, and anterior cingulate cortex.

  12. Few juvenile auditory perceptual skills correlate with adult performance.

    Science.gov (United States)

    Sarro, Emma C; Sanes, Dan H

    2014-02-01

    Measures of human mental development suggest that behavioral skills displayed during early life can predict an individual's subsequent cognitive performance. Support for this draws from longitudinal studies that reveal compelling within-subject correlations during childhood. If this idea applies across the life span, then correlations in performance should persist into adulthood. Here, we address this prediction in juvenile and adult gerbils by evaluating within-subject measures of auditory learning and perception. Animals were trained and tested as juveniles on either an amplitude modulation (AM) or a frequency modulation (FM) detection task. Measures of learning and perception obtained from juveniles were then compared to similar measures obtained when each subject was tested in adulthood on either the same task or the untrained task. For animals trained and tested on the AM detection task as juveniles and adults, there was no correlation between juvenile and adult learning metrics, or perceptual sensitivity. For animals trained and tested on FM detection as juveniles, we observed a significant relationship to their adult performance. Juveniles that performed the best on FM detection were the poorest at AM detection, and the best at FM detection, when tested as adults. Thus, across-age correlations for sensory and cognitive measures, obtained during development and in adulthood, depend heavily on the specific type of developmental experience and the outcome measure.

  13. Interaction of streaming and attention in human auditory cortex.

    Science.gov (United States)

    Gutschalk, Alexander; Rupp, André; Dykstra, Andrew R

    2015-01-01

    Serially presented tones are sometimes segregated into two perceptually distinct streams. An ongoing debate is whether this basic streaming phenomenon reflects automatic processes or requires attention focused to the stimuli. Here, we examined the influence of focused attention on streaming-related activity in human auditory cortex using magnetoencephalography (MEG). Listeners were presented with a dichotic paradigm in which left-ear stimuli consisted of canonical streaming stimuli (ABA_ or ABAA) and right-ear stimuli consisted of a classical oddball paradigm. In phase one, listeners were instructed to attend the right-ear oddball sequence and detect rare deviants. In phase two, they were instructed to attend the left ear streaming stimulus and report whether they heard one or two streams. The frequency difference (ΔF) of the sequences was set such that the smallest and largest ΔF conditions generally induced one- and two-stream percepts, respectively. Two intermediate ΔF conditions were chosen to elicit bistable percepts (i.e., either one or two streams). Attention enhanced the peak-to-peak amplitude of the P1-N1 complex, but only for ambiguous ΔF conditions, consistent with the notion that automatic mechanisms for streaming tightly interact with attention and that the latter is of particular importance for ambiguous sound sequences.

  14. Frequency Selectivity Behaviour in the Auditory Midbrain: Implications of Model Study

    Institute of Scientific and Technical Information of China (English)

    KUANG Shen-Bing; WANG Jia-Fu; ZENG Ting

    2006-01-01

    By numerically simulating the frequency dependence of the spiking threshold, i.e. the critical amplitude of a periodic stimulus required for a neuron to fire, we find that bushy cells in the cochlear nucleus exhibit frequency-selective behaviour. However, the selective frequency band of a bushy cell lies far from the preferred spectral range of human and mammalian auditory perception. The mechanism underlying this neural activity is also discussed. Further studies show that the ion channel densities have little impact on the selective frequency band of bushy cells. These findings suggest that the frequency selectivity of bushy cells, at both the single-cell and population levels, may not be functionally relevant to frequency discrimination. Our results may provide a neural hint for reconsidering the functional role of bushy cells in the auditory processing of sound frequency.

  15. Combined mirror visual and auditory feedback therapy for upper limb phantom pain: a case report

    Directory of Open Access Journals (Sweden)

    Yan Kun

    2011-01-01

    Full Text Available Abstract Introduction Phantom limb sensation and phantom limb pain are very common after amputation. In recent years, accumulating data have implicated 'mirror visual feedback', or 'mirror therapy', as helpful in the treatment of phantom limb sensation and phantom limb pain. Case presentation We present the case of a 24-year-old Caucasian man, a left upper limb amputee, treated with mirror visual feedback combined with auditory feedback, with improved pain relief. Conclusion This case suggests that auditory feedback might enhance the effectiveness of mirror visual feedback and serve as a valuable addition to the complex multi-sensory processing of body perception in patients who are amputees.

  16. Relating auditory attributes of multichannel sound to preference and to physical parameters

    DEFF Research Database (Denmark)

    Choisel, Sylvain; Wickelmaier, Florian Maria

    2006-01-01

    Sound reproduced by multichannel systems is affected by many factors giving rise to various sensations, or auditory attributes. Relating specific attributes to overall preference and to physical measures of the sound field provides valuable information for a better understanding of the parameters...... playing a role in sound quality evaluation. Eight selected attributes are quantified by a panel of 39 listeners using paired-comparison judgments and probabilistic choice models, and related to overall preference. A multiple-regression model predicts preference well, and some similarities are observed...... within and between musical program materials, allowing for a careful generalization regarding the perception of spatial audio reproduction. Finally, a set of objective measures is derived from analysis of the sound field at the listening position in an attempt to predict the auditory attributes....
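    The record mentions probabilistic choice models for paired-comparison data; a standard choice for such data is the Bradley-Terry model, fitted here with the usual minorization-maximization update. This is a generic sketch, not the authors' implementation:

        import numpy as np

        def bradley_terry(wins, n_iter=200):
            # wins[i, j] = number of times item i was preferred over
            # item j in paired comparisons (diagonal assumed zero).
            m = wins.shape[0]
            w = np.ones(m)                   # worth parameters
            for _ in range(n_iter):
                for i in range(m):
                    den = sum((wins[i, j] + wins[j, i]) / (w[i] + w[j])
                              for j in range(m) if j != i)
                    w[i] = wins[i].sum() / den
                w /= w.sum()                 # fix the arbitrary scale
            return w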

  17. Frequency Selectivity Behaviour in the Auditory Midbrain: Implications of Model Study

    Science.gov (United States)

    Kuang, Shen-Bing; Wang, Jia-Fu; Zeng, Ting

    2006-12-01

    By numerically simulating the frequency dependence of the spiking threshold, i.e. the critical amplitude of a periodic stimulus required for a neuron to fire, we find that bushy cells in the cochlear nucleus exhibit frequency-selective behaviour. However, the selective frequency band of a bushy cell lies far from the preferred spectral range of human and mammalian auditory perception. The mechanism underlying this neural activity is also discussed. Further studies show that the ion channel densities have little impact on the selective frequency band of bushy cells. These findings suggest that the frequency selectivity of bushy cells, at both the single-cell and population levels, may not be functionally relevant to frequency discrimination. Our results may provide a neural hint for reconsidering the functional role of bushy cells in the auditory processing of sound frequency.

  18. Pitch perception of complex sounds: nonlinearity revisited

    CERN Document Server

    González, D L; Sportolari, F; Rosso, O; Cartwright, J H E; Piro, O

    1995-01-01

    The ability of the auditory system to perceive the fundamental frequency of a sound even when this frequency is removed from the stimulus is an interesting phenomenon related to the pitch of complex sounds. This capability is known as "residue" or "virtual pitch" perception and was first reported last century in the pioneering work of Seebeck. It is residue perception that allows one to listen to music with small transistor radios, which in general have a very poor and sometimes negligible response to low frequencies. The first attempt, due to Helmholtz, to explain the residue as a nonlinear effect in the ear considered it to originate from difference combination tones. However, later experiments have shown that the residue does not coincide with a difference combination tone. These results and the fact that dichotically presented signals also elicit residue perception have led to nonlinear theories being gradually abandoned in favour of central processor models. In this paper we use recent results from t...
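    The classic demonstration that the residue is not a difference combination tone uses frequency-shifted complexes: shifting all partials of a 200-Hz-spaced complex upward leaves the difference tone at 200 Hz, yet the perceived pitch shifts, roughly following the first-order approximation pitch ~ f_center / round(f_center / spacing). A worked check with illustrative numbers (not taken from the paper):

        partials = [1050.0, 1250.0, 1450.0]   # shifted 200-Hz-spaced complex
        spacing = partials[1] - partials[0]    # difference tone stays at 200 Hz
        f_center = partials[1]
        pitch = f_center / round(f_center / spacing)
        print(spacing, round(pitch, 1))        # 200.0 vs ~208.3 Hz perceived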

  19. Steroid-dependent auditory plasticity for the enhancement of acoustic communication: recent insights from a vocal teleost fish

    OpenAIRE

    Joseph A Sisneros

    2009-01-01

    The vocal plainfin midshipman fish (Porichthys notatus) has become an excellent model for identifying neural mechanisms of auditory perception that may be shared by all vertebrates. Recent neuroethological studies of the midshipman fish have yielded strong evidence for the steroid-dependent modulation of hearing sensitivity that leads to enhanced coupling of sender and receiver in this vocal-acoustic communication system. Previous work shows that non-reproductive females treated with either t...

  20. Auditory function in vestibular migraine

    Directory of Open Access Journals (Sweden)

    John Mathew

    2016-01-01

    Full Text Available Introduction: Vestibular migraine (VM) is a vestibular syndrome seen in patients with migraine, characterized by short spells of spontaneous or positional vertigo lasting from a few seconds to weeks. Migraine and VM are considered to result from chemical abnormalities in the serotonin pathway. Neuhauser's diagnostic criteria for vestibular migraine are widely accepted. Research on VM is still limited, and few studies have been published on this topic. Materials and Methods: This study has two parts. In the first part, we conducted a retrospective chart review of eighty consecutive patients who were diagnosed with vestibular migraine and determined the frequency of auditory dysfunction in these patients. The second part was a prospective case-control study in which we compared the audiological parameters of thirty patients diagnosed with VM with those of thirty normal controls to look for significant differences. Results: The frequency of vestibular migraine in our population is 22%. The frequency of hearing loss in VM is 33%. Conclusion: There is a significant difference between cases and controls with regard to the presence of distortion-product otoacoustic emissions in both ears. This finding suggests that the hearing loss in VM is cochlear in origin.

  1. Assessing the validity of subjective reports in the auditory streaming paradigm.

    Science.gov (United States)

    Farkas, Dávid; Denham, Susan L; Bendixen, Alexandra; Winkler, István

    2016-04-01

    While subjective reports provide a direct measure of perception, their validity is not self-evident. Here, the authors tested three possible biasing effects on perceptual reports in the auditory streaming paradigm: errors due to imperfect understanding of the instructions, voluntary perceptual biasing, and susceptibility to implicit expectations. (1) Analysis of the responses to catch trials separately promoting each of the possible percepts allowed the authors to exclude participants who likely have not fully understood the instructions. (2) Explicit biasing instructions led to markedly different behavior than the conventional neutral-instruction condition, suggesting that listeners did not voluntarily bias their perception in a systematic way under the neutral instructions. Comparison with a random response condition further supported this conclusion. (3) No significant relationship was found between social desirability, a scale-based measure of susceptibility to implicit social expectations, and any of the perceptual measures extracted from the subjective reports. This suggests that listeners did not significantly bias their perceptual reports due to possible implicit expectations present in the experimental context. In sum, these results suggest that valid perceptual data can be obtained from subjective reports in the auditory streaming paradigm.

  2. Musical experience and the aging auditory system: implications for cognitive abilities and hearing speech in noise.

    Directory of Open Access Journals (Sweden)

    Alexandra Parbery-Clark

    Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline.

  3. Cortical plasticity as a mechanism for storing Bayesian priors in sensory perception.

    Directory of Open Access Journals (Sweden)

    Hania Köver

    Human perception of ambiguous sensory signals is biased by prior experiences. It is not known how such prior information is encoded, retrieved and combined with sensory information by neurons. Previous authors have suggested dynamic encoding mechanisms for prior information, whereby top-down modulation of firing patterns on a trial-by-trial basis creates short-term representations of priors. Although such a mechanism may well account for perceptual bias arising in the short-term, it does not account for the often irreversible and robust changes in perception that result from long-term, developmental experience. Based on the finding that more frequently experienced stimuli gain greater representations in sensory cortices during development, we reasoned that prior information could be stored in the size of cortical sensory representations. For the case of auditory perception, we use a computational model to show that prior information about sound frequency distributions may be stored in the size of primary auditory cortex frequency representations, read-out by elevated baseline activity in all neurons and combined with sensory-evoked activity to generate a perception that conforms to Bayesian integration theory. Our results suggest an alternative neural mechanism for experience-induced long-term perceptual bias in the context of auditory perception. They make the testable prediction that the extent of such perceptual prior bias is modulated by both the degree of cortical reorganization and the magnitude of spontaneous activity in primary auditory cortex. Given that cortical over-representation of frequently experienced stimuli, as well as perceptual bias towards such stimuli is a common phenomenon across sensory modalities, our model may generalize to sensory perception, rather than being specific to auditory perception.
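
    A minimal sketch of the read-out this abstract proposes, assuming (hypothetically) that the prior over sound frequency is proportional to the number of cortical units tuned to each frequency and that it is combined with a noisy observation via Bayes' rule; all tuning values below are invented for illustration, not taken from the study:

    ```python
    import numpy as np

    # Hypothetical frequency axis (kHz); the cortical representation size
    # (units tuned to each frequency) plays the role of the stored prior.
    freqs = np.linspace(0.1, 20.0, 500)
    units = np.exp(-0.5 * ((freqs - 4.0) / 2.0) ** 2) + 0.05  # over-representation near 4 kHz
    prior = units / units.sum()

    def percept(observed_freq, sensory_noise_sd=1.5):
        """Combine a noisy sensory observation with the stored prior and
        read out the posterior mean as the perceived frequency."""
        likelihood = np.exp(-0.5 * ((freqs - observed_freq) / sensory_noise_sd) ** 2)
        posterior = prior * likelihood
        posterior /= posterior.sum()
        return np.sum(freqs * posterior)

    print(percept(8.0))  # biased toward the over-represented region, as Bayesian integration predicts
    ```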

  4. Current status of auditory aging and anti-aging research.

    Science.gov (United States)

    Ruan, Qingwei; Ma, Cheng; Zhang, Ruxin; Yu, Zhuowei

    2014-01-01

    The development of presbycusis, or age-related hearing loss, is determined by a combination of genetic and environmental factors. The auditory periphery exhibits a progressive bilateral, symmetrical reduction of auditory sensitivity to sound from high to low frequencies. The central auditory nervous system shows symptoms of decline in age-related cognitive abilities, including difficulties in speech discrimination and reduced central auditory processing, ultimately resulting in auditory perceptual abnormalities. The pathophysiological mechanisms of presbycusis include excitotoxicity, oxidative stress, inflammation, aging and oxidative stress-induced DNA damage that results in apoptosis in the auditory pathway. However, the originating signals that trigger these mechanisms remain unclear. For instance, it is still unknown whether insulin is involved in auditory aging. Auditory aging has preclinical lesions, which manifest as asymptomatic loss of peripheral auditory nerves and changes in the plasticity of the central auditory nervous system. Currently, the diagnosis of preclinical, reversible lesions depends on the detection of auditory impairment by functional imaging, and the identification of physiological and molecular biological markers. However, despite recent improvements in the application of these markers, they remain under-utilized in clinical practice. The application of antisenescent approaches to the prevention of auditory aging has produced inconsistent results. Future research will focus on the identification of markers for the diagnosis of preclinical auditory aging and the development of effective interventions.

  5. Experience and information loss in auditory and visual memory.

    Science.gov (United States)

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  6. On the Role of Crossmodal Prediction in Audiovisual Emotion Perception

    Directory of Open Access Journals (Sweden)

    Sarah eJessen

    2013-07-01

    Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency by which others' emotions are recognized. But how and when exactly do the different modalities interact? One aspect of multisensory perception that has received increasing interest in recent years is the concept of crossmodal prediction. In emotion perception, as in most other settings, visual information precedes the auditory information; leading visual information can thereby facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, it has not been addressed so far in audiovisual emotion perception. Based on the current state of the art in (a) crossmodal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow for a more reliable prediction of auditory information compared to non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 response in the EEG and the duration of visual emotional but not non-emotional information. If the assumption that emotional content allows for more reliable predictions can be corroborated in future studies, crossmodal prediction is a crucial factor in our understanding of multisensory emotion perception.

  7. Effects of Caffeine on Auditory Brainstem Response

    Directory of Open Access Journals (Sweden)

    Saleheh Soleimanian

    2008-06-01

    Background and Aim: Blocking of adenosine receptors in the central nervous system by caffeine can increase the level of neurotransmitters such as glutamate. As adenosine receptors are present in almost all brain areas, including the central auditory pathway, caffeine may change conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2, and 3 mg/kg body weight of caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed with Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared to the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after consumption of 2 and 3 mg/kg caffeine. Wave I latency decreased significantly after consumption of 3 mg/kg caffeine (p < 0.01). Conclusion: The increase in glutamate level resulting from adenosine receptor blockade brings about changes in conduction in the central auditory pathway.

  8. Facilitated auditory detection for speech sounds.

    Science.gov (United States)

    Signoret, Carine; Gaudrain, Etienne; Tillmann, Barbara; Grimault, Nicolas; Perrin, Fabien

    2011-01-01

    If it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudo-words, and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from sub-threshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest a correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudo-words) were better detected than non-phonological stimuli (complex sounds), presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudo-words was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  9. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine eSignoret

    2011-07-01

    If it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudo-words, and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from sub-threshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest a correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudo-words) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudo-words was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.
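
    The two abstracts above do not spell out the matching procedure; one common way to "energetically match" stimuli is to equalize their root-mean-square (RMS) levels. A minimal sketch with stand-in waveforms (the signals below are invented, not the study's stimuli):

    ```python
    import numpy as np

    def rms(x):
        """Root-mean-square level of a waveform."""
        return np.sqrt(np.mean(x ** 2))

    def match_rms(signal, reference):
        """Scale `signal` so its RMS level equals that of `reference`."""
        return signal * (rms(reference) / rms(signal))

    # Stand-in stimuli: a decaying noise burst for a 'word' and a
    # three-component complex for a non-phonological sound.
    sr = 44100
    t = np.arange(sr) / sr
    word = np.random.randn(sr) * np.exp(-3 * t)
    complex_sound = sum(np.sin(2 * np.pi * f * t) for f in (300.0, 750.0, 1900.0))
    complex_sound = match_rms(complex_sound, word)
    print(rms(word), rms(complex_sound))  # now equal
    ```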

  10. Functional Connectivity Studies Of Patients With Auditory Verbal Hallucinations

    Directory of Open Access Journals (Sweden)

    Ralph E Hoffman

    2012-01-01

    Functional connectivity (FC) studies of brain mechanisms leading to auditory verbal hallucinations (AVHs) utilizing functional magnetic resonance imaging (fMRI) data are reviewed. Initial FC studies utilized fMRI data collected during performance of various tasks, which suggested frontotemporal disconnection and/or source-monitoring disturbances. Later FC studies have utilized resting (no-task) fMRI data. These studies have produced a mixed picture of disconnection and hyperconnectivity involving different pathways associated with AVHs. Results of our most recent FC study of AVHs are reviewed in detail. This study suggests that the core mechanism producing AVHs involves not a single pathway, but a more complex functional loop. Components of this loop include Wernicke's area and its right homologue, the left inferior frontal cortex, and the putamen. It is noteworthy that the putamen appears to play a critical role in the generation of spontaneous language, and in determining whether auditory stimuli are registered consciously as percepts. Excessive functional coordination linking this region with the Wernicke's seed region in patients with schizophrenia could therefore generate an overabundance of potentially conscious language representations. In our model, intact FC in the other two legs of the corticostriatal loop (Wernicke's with left IFG, and left IFG with putamen) appeared to allow this disturbance (common to schizophrenia overall) to be expressed as a conscious hallucination of speech. Recommendations for future studies are discussed, including inclusion of multiple methodologies applied to the same subjects in order to compare and contrast different mechanistic hypotheses, utilizing EEG to better parse the time-course of neural synchronization leading to AVHs, and ascertaining experiential subtypes of AVHs that may reflect distinct mechanisms.

  11. Temporal feature perception in cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Lydia Timm

    For the perception of the timbre of a musical instrument, the attack time is known to hold crucial information. The first 50 to 150 ms of sound onset reflect the excitation mechanism, which generates the sound. Since auditory processing, and music perception in particular, is known to be hampered in cochlear implant (CI) users, we conducted an electroencephalography (EEG) study with an oddball paradigm to evaluate the processing of small differences in musical sound onset. The first 60 ms of a cornet sound were manipulated in order to examine whether these differences are detected by CI users and normal-hearing controls (NH controls), as revealed by auditory evoked potentials (AEPs). Our analysis focused on the N1 as an exogenous component known to reflect physical stimulus properties, as well as on the P2 and the Mismatch Negativity (MMN). Our results revealed different N1 latencies as well as P2 amplitudes and latencies for the onset manipulations in both groups. An MMN could be elicited only in the NH control group. Together with additional findings that suggest an impact of musical training on CI users' AEPs, our findings support the view that impaired timbre perception in CI users is partly due to altered sound onset feature detection.
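
    One crude way to manipulate the first tens of milliseconds of a sound's onset, offered only as a hypothetical stand-in for the stimulus editing described above, is to impose a linear amplitude ramp over the attack:

    ```python
    import numpy as np

    def flatten_attack(sound, sr, attack_ms=60):
        """Replace the natural attack transient with a linear amplitude
        ramp over the first `attack_ms` milliseconds."""
        n = int(sr * attack_ms / 1000)
        out = sound.copy()
        out[:n] *= np.linspace(0.0, 1.0, n)  # suppress onset detail
        return out

    sr = 44100
    cornet = np.random.randn(sr)             # stand-in for a recorded cornet tone
    manipulated = flatten_attack(cornet, sr)
    ```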

  12. Acoustic and auditory phonetics: the adaptive design of speech sound systems.

    Science.gov (United States)

    Diehl, Randy L

    2008-03-12

    Speech perception is remarkably robust. This paper examines how acoustic and auditory properties of vowels and consonants help to ensure intelligibility. First, the source-filter theory of speech production is briefly described, and the relationship between vocal-tract properties and formant patterns is demonstrated for some commonly occurring vowels. Next, two accounts of the structure of preferred sound inventories, quantal theory and dispersion theory, are described and some of their limitations are noted. Finally, it is suggested that certain aspects of quantal and dispersion theories can be unified in a principled way so as to achieve reasonable predictive accuracy.

  13. The effect of background music in auditory health persuasion

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    2013-01-01

    In auditory health persuasion, threatening information regarding health is communicated by voice only. One relevant context of auditory persuasion is the addition of background music. There are different mechanisms through which background music might influence persuasion, for example through mood (

  14. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one's attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent manner.

  15. Intradermal melanocytic nevus of the external auditory canal.

    Science.gov (United States)

    Alves, Renato V; Brandão, Fabiano H; Aquino, José E P; Carvalho, Maria R M S; Giancoli, Suzana M; Younes, Eduado A P

    2005-01-01

    Intradermal nevi are common benign pigmented skin tumors. Their occurrence within the external auditory canal is uncommon. The clinical and pathologic features of an intradermal nevus arising within the external auditory canal are presented, and the literature reviewed.

  16. What determines auditory distraction? On the roles of local auditory changes and expectation violations.

    Directory of Open Access Journals (Sweden)

    Jan P Röer

    Both the acoustic variability of a distractor sequence and the degree to which it violates expectations are important determinants of auditory distraction. In four experiments we examined the relative contribution of local auditory changes on the one hand and expectation violations on the other hand in the disruption of serial recall by irrelevant sound. We present evidence for a greater disruption by auditory sequences ending in unexpected steady-state distractor repetitions compared to auditory sequences with expected changing-state endings, even though the former contained fewer local changes. This effect was demonstrated with piano melodies (Experiment 1) and speech distractors (Experiment 2). Furthermore, it was replicated when the expectation violation occurred after the encoding of the target items (Experiment 3), indicating that the items' maintenance in short-term memory was disrupted by attentional capture and not their encoding. This seems to be primarily due to the violation of a model of the specific auditory distractor sequences, because the effect vanishes and even reverses when the experiment provides no opportunity to build up a specific neural model of the distractor sequence (Experiment 4). Nevertheless, the violation of abstract long-term knowledge about auditory regularities seems to cause a small and transient capture effect: disruption decreased markedly over the course of the experiments, indicating that participants habituated to the unexpected distractor repetitions across trials. The overall pattern of results adds to the growing literature that the degree to which auditory distractors violate situation-specific expectations is a more important determinant of auditory distraction than the degree to which a distractor sequence contains local auditory changes.

  17. ABR and auditory P300 findings in children with ADHD

    OpenAIRE

    Schochat Eliane; Scheuer Claudia Ines; Andrade Ênio Roberto de

    2002-01-01

    Auditory processing disorders (APD), also referred to as central auditory processing disorders (CAPD), and attention deficit hyperactivity disorders (ADHD) have become popular diagnostic entities for school-age children. A high incidence of ADHD comorbid with communication disorders and auditory processing disorder has been demonstrated. The aim of this study was to investigate ABR and P300 auditory evoked potentials in children with ADHD, in a double-blind study. Twenty-one children, ages bet...

  18. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning), as they practiced novel melodies, and retrieval (during Recall) of those melodies. Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): higher auditory imagery skill predicted greater temporal regularity during Recall in the...

  19. [Functional neuroimaging of auditory hallucinations in schizophrenia].

    Science.gov (United States)

    Font, M; Parellada, E; Fernández-Egea, E; Bernardo, M; Lomeña, F

    2003-01-01

    The neurobiological bases underlying the generation of auditory hallucinations, a distressing and paradigmatic symptom of schizophrenia, are still unknown in spite of in-depth phenomenological descriptions. This work critically reviews the literature published in recent years, focusing on functional neuroimaging studies (PET, SPECT, fMRI) of auditory hallucinations. The studies are classified according to whether they concern sensory activation, trait, or state. The two main hypotheses proposed to explain the phenomenon, external speech vs. subvocal or inner speech, are also explained. Finally, the latest unitary theory as well as the limitations of the published studies are commented on. The need for continued investigation in this still underdeveloped field is posed in order to better understand the etiopathogenesis of auditory hallucinations in schizophrenia.

  20. The mitochondrial connection in auditory neuropathy.

    Science.gov (United States)

    Cacace, Anthony T; Pinheiro, Joaquim M B

    2011-01-01

    'Auditory neuropathy' (AN), the term used to codify a primary degeneration of the auditory nerve, can be linked directly or indirectly to mitochondrial dysfunction. These observations are based on the expression of AN in known mitochondrial-based neurological diseases (Friedreich's ataxia, Mohr-Tranebjærg syndrome), in conditions where defects in axonal transport, protein trafficking, and fusion processes perturb and/or disrupt mitochondrial dynamics (Charcot-Marie-Tooth disease, autosomal dominant optic atrophy), in a common neonatal condition known to be toxic to mitochondria (hyperbilirubinemia), and where respiratory chain deficiencies produce reductions in oxidative phosphorylation that adversely affect peripheral auditory mechanisms. This body of evidence is solidified by data derived from temporal bone and genetic studies, biochemical, molecular biologic, behavioral, electroacoustic, and electrophysiological investigations.

  1. The auditory hallucination: a phenomenological survey.

    Science.gov (United States)

    Nayani, T H; David, A S

    1996-01-01

    A comprehensive semi-structured questionnaire was administered to 100 psychotic patients who had experienced auditory hallucinations. The aim was to extend the phenomenology of the hallucination into areas of both form and content and also to guide future theoretical development. All subjects heard 'voices' talking to or about them. The location of the voice, its characteristics and the nature of address were described. Precipitants and alleviating factors plus the effect of the hallucinations on the sufferer were identified. Other hallucinatory experiences, thought insertion and insight were examined for their inter-relationships. A pattern emerged of increasing complexity of the auditory-verbal hallucination over time by a process of accretion, with the addition of more voices and extended dialogues, and more intimacy between subject and voice. Such evolution seemed to relate to the lessening of distress and improved coping. These findings should inform both neurological and cognitive accounts of the pathogenesis of auditory hallucinations in psychotic disorders.

  2. Cooperative dynamics in auditory brain response

    CERN Document Server

    Kwapien, J; Liu, L C; Ioannides, A A

    1998-01-01

    Simultaneous estimates of the activity in the left and right auditory cortex of five normal human subjects were extracted from multichannel magnetoencephalography recordings. Left, right, and binaural stimulation were used, in separate runs, for each subject. The resulting time series of left and right auditory cortex activity were analysed using the concept of mutual information. The analysis constitutes an objective method to address the nature of inter-hemispheric correlations in response to auditory stimulation. The results provide clear evidence for the occurrence of such correlations mediated by a direct information transport, with clear laterality effects: as a rule, the contralateral hemisphere leads by 10-20 ms, as can be seen in the average signal. The strength of the inter-hemispheric coupling, which cannot be extracted from the average data, is found to be highly variable from subject to subject, but remarkably stable for each subject.
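
    A histogram-based mutual-information estimate of the kind that could quantify such inter-hemispheric coupling is sketched below; the two traces and the built-in 15-sample lag are synthetic stand-ins, not the MEG data:

    ```python
    import numpy as np

    def mutual_information(x, y, bins=16):
        """Estimate the mutual information (bits) between two time series
        from their joint histogram."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

    # Synthetic left/right cortical traces: the 'right' trace lags by 15 samples.
    left = np.random.randn(5000)
    right = np.roll(left, 15) + 0.5 * np.random.randn(5000)
    lags = range(-30, 31)
    mi = [mutual_information(left, np.roll(right, -lag)) for lag in lags]
    print(max(zip(mi, lags)))  # MI peaks near the true lag, revealing which side leads
    ```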

  3. Auditory temporal processes in the elderly

    Directory of Open Access Journals (Sweden)

    E. Ben-Artzi

    2011-03-01

    Several studies have reported age-related decline in auditory temporal resolution and in working memory. However, earlier studies did not provide evidence as to whether these declines reflect overall changes in the same mechanisms, or reflect age-related changes in two independent mechanisms. In the current study we examined whether the age-related decline in auditory temporal resolution and in working memory would remain significant even after controlling for their shared variance. Eighty-two participants, aged 21-82, performed the dichotic temporal order judgment task and the backward digit span task. The findings indicate that age-related decline in auditory temporal resolution and in working memory are two independent processes.
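
    Controlling for shared variance in this way is typically done with a partial correlation: regress the shared variable out of both measures and correlate the residuals. A minimal sketch on synthetic data (all coefficients invented, not the study's):

    ```python
    import numpy as np

    def partial_corr(x, y, z):
        """Correlation between x and y after linearly regressing z out of both."""
        def residuals(a, b):
            slope, intercept = np.polyfit(b, a, 1)
            return a - (slope * b + intercept)
        return np.corrcoef(residuals(x, z), residuals(y, z))[0, 1]

    # Synthetic data: age independently worsens both measures.
    rng = np.random.default_rng(0)
    age = rng.uniform(21, 82, 200)
    temporal_order = 0.5 * age + rng.normal(0, 10, 200)  # higher = worse resolution
    digit_span = -0.05 * age + rng.normal(0, 1, 200)
    print(np.corrcoef(temporal_order, digit_span)[0, 1])  # correlation driven by shared age
    print(partial_corr(temporal_order, digit_span, age))  # near zero once age is removed
    ```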

  4. Evaluating auditory stream segregation of SAM tone sequences by subjective and objective psychoacoustical tasks, and brain activity

    Directory of Open Access Journals (Sweden)

    Lena-Vanessa eDollezal

    2014-06-01

    Auditory stream segregation refers to a segregated percept of signal streams with different acoustic features. Different approaches have been pursued in studies of stream segregation. In psychoacoustics, stream segregation has mostly been investigated with a subjective task asking the subjects to report their percept. Few studies have applied an objective task in which stream segregation is evaluated indirectly by determining thresholds for a percept that depends on whether auditory streams are segregated or not. Furthermore, both perceptual measures and physiological measures of brain activity have been employed, but only little is known about their relation. How the results from different tasks and measures are related is evaluated in the present study using examples relying on the ABA- stimulation paradigm that apply the same stimuli. We presented A and B signals that were sinusoidally amplitude-modulated (SAM) tones providing purely temporal, spectral, or both types of cues to evaluate perceptual stream segregation and its physiological correlate. Which types of cues are most prominent was determined by the choice of carrier and modulation frequencies (fmod) of the signals. In the subjective task subjects reported their percept, and in the objective task we measured their sensitivity for detecting time-shifts of B signals in an ABA- sequence. As a further measure of processes underlying stream segregation we employed functional magnetic resonance imaging (fMRI). SAM tone parameters were chosen to evoke an integrated (1-stream), a segregated (2-stream) or an ambiguous percept by adjusting the fmod difference between A and B tones (∆fmod). The results of both psychoacoustical tasks are significantly correlated. BOLD responses in fMRI depend on ∆fmod between A and B SAM tones. The effect of ∆fmod, however, differs between auditory cortex and frontal regions, suggesting differences in representation related to the degree of perceptual ambiguity of...
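
    A sketch of how SAM tones and an ABA- triplet sequence of the kind described might be generated; the carrier, modulation rates, and timing below are illustrative values, not the study's parameters:

    ```python
    import numpy as np

    def sam_tone(carrier_hz, fmod_hz, dur_s, sr=44100):
        """Sinusoidally amplitude-modulated (SAM) tone."""
        t = np.arange(int(sr * dur_s)) / sr
        envelope = 0.5 * (1.0 + np.sin(2.0 * np.pi * fmod_hz * t))
        return envelope * np.sin(2.0 * np.pi * carrier_hz * t)

    def aba_sequence(fmod_a, fmod_b, carrier_hz=1000.0, tone_s=0.125, n_triplets=10):
        """ABA- triplets: A and B share a carrier and differ only in
        modulation frequency (a purely temporal cue); '-' is a silent slot."""
        a = sam_tone(carrier_hz, fmod_a, tone_s)
        b = sam_tone(carrier_hz, fmod_b, tone_s)
        gap = np.zeros_like(a)
        triplet = np.concatenate([a, b, a, gap])
        return np.tile(triplet, n_triplets)

    seq = aba_sequence(fmod_a=100.0, fmod_b=200.0)  # a larger ∆fmod favors the 2-stream percept
    ```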

  5. Early and late beta-band power reflect audiovisual perception in the McGurk illusion.

    Science.gov (United States)

    Roa Romero, Yadira; Senkowski, Daniel; Keil, Julian

    2015-04-01

    The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13-30 Hz) at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept.
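
    Beta-band power of the kind analyzed here is commonly computed by band-pass filtering the EEG and squaring the Hilbert envelope; a minimal sketch on a synthetic single-channel trial (sampling rate and epoch length invented):

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def band_power(eeg, sr, band=(13.0, 30.0)):
        """Time-resolved band power: band-pass filter, then square the
        analytic amplitude from the Hilbert transform."""
        b, a = butter(4, [band[0] / (sr / 2), band[1] / (sr / 2)], btype="bandpass")
        return np.abs(hilbert(filtfilt(b, a, eeg))) ** 2

    sr = 500
    eeg = np.random.randn(2 * sr)  # 1 s pre- and 1 s post-stimulus (synthetic)
    power = band_power(eeg, sr)
    baseline = power[:sr].mean()
    change_db = 10 * np.log10(power[sr:] / baseline)  # negative values = beta suppression
    ```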

  6. Do dyslexics have auditory input processing difficulties?

    DEFF Research Database (Denmark)

    Poulsen, Mads

    2011-01-01

    Word production difficulties are well documented in dyslexia, whereas the results are mixed for receptive phonological processing. This asymmetry raises the possibility that the core phonological deficit of dyslexia is restricted to output processing stages. The present study investigated whether a group of dyslexics had word-level receptive difficulties using an auditory lexical decision task with long words and nonsense words. The dyslexics were slower and less accurate than chronological age controls in an auditory lexical decision task, with disproportionately low performance on nonsense words...

  7. The many facets of auditory display

    Science.gov (United States)

    Blattner, Meera M.

    1995-01-01

    In this presentation we will examine some of the ways sound can be used in a virtual world. We make the case that many different types of audio experience are available to us. A full range of audio experiences include: music, speech, real-world sounds, auditory displays, and auditory cues or messages. The technology of recreating real-world sounds through physical modeling has advanced in the past few years allowing better simulation of virtual worlds. Three-dimensional audio has further enriched our sensory experiences.

  8. Transient auditory hallucinations in an adolescent.

    Science.gov (United States)

    Skokauskas, Norbert; Pillay, Devina; Moran, Tom; Kahn, David A

    2010-05-01

    In adolescents, hallucinations can be a transient illness or can be associated with non-psychotic psychopathology, psychosocial adversity, or a physical illness. We present the case of a 15-year-old secondary-school student who presented with a 1-month history of first onset auditory hallucinations, which had been increasing in frequency and severity, and mild paranoid ideation. Over a 10-week period, there was a gradual diminution, followed by a complete resolution, of symptoms. We discuss issues regarding the diagnosis and prognosis of auditory hallucinations in adolescents.

  9. Tactile perception during action observation.

    Science.gov (United States)

    Vastano, Roberta; Inuggi, Alberto; Vargas, Claudia D; Baud-Bovy, Gabriel; Jacono, Marco; Pozzo, Thierry

    2016-09-01

    It has been suggested that tactile perception becomes less acute during movement to optimize motor control and to prevent an overload of afferent information generated during action. This empirical phenomenon, known as the "tactile gating effect," has been associated with mechanisms of sensory feedback prediction. However, less attention has been given to the tactile attenuation effect during the observation of an action. The aim of this study was to investigate whether and how the observation of a goal-directed action influences tactile perception, as it does during overt action. In a first experiment, we recorded vocal reaction times (RTs) of participants to tactile stimulations during the observation of a reach-to-grasp action. The stimulations were delivered on different body parts that could be either congruent or incongruent with the observed effector (the right hand and the right leg, respectively). The tactile stimulation was contrasted with a non-body-related stimulation (an auditory beep). We found increased RTs for tactile congruent stimuli compared to both tactile incongruent and auditory stimuli. This effect was reported only during the observation of the reaching phase, whereas RTs were not modulated during the grasping phase. A tactile two-alternative forced-choice (2AFC) discrimination task was then conducted in order to quantify the changes in tactile sensitivity during the observation of the same goal-directed actions. In agreement with the first experiment, the perceived tactile intensity was reduced only during the reaching phase. These results suggest that tactile processing during action observation relies on a process similar to that occurring during action execution.

  10. Face configuration affects speech perception: Evidence from a McGurk mismatch negativity study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; MacDonald, Ewen; Andersen, Tobias

    2015-01-01

    Face perception is sensitive to facial configuration, as demonstrated by the Thatcher illusion, in which the orientation of the eyes and mouth with respect to the face is inverted (Thatcherization). This gives the face a grotesque appearance, but this is only seen when the face is upright. Thatcherization can likewise disrupt visual speech perception, but only when the face is upright, indicating that facial configuration can be important for visual speech perception. This effect can propagate to auditory speech perception through audiovisual integration, so that Thatcherization disrupts the McGurk illusion, in which visual speech perception alters perception ... perception due to the McGurk illusion without any change in the acoustic stimulus. We found that Thatcherization disrupted a strong McGurk illusion and a correspondingly strong McGurk-MMN only for upright faces. This confirms that facial configuration can be important for audiovisual speech perception...

  11. Altered temporal dynamics of neural adaptation in the aging human auditory cortex.

    Science.gov (United States)

    Herrmann, Björn; Henry, Molly J; Johnsrude, Ingrid S; Obleser, Jonas

    2016-09-01

    Neural response adaptation plays an important role in perception and cognition. Here, we used electroencephalography to investigate how aging affects the temporal dynamics of neural adaptation in human auditory cortex. Younger (18-31 years) and older (51-70 years) normal hearing adults listened to tone sequences with varying onset-to-onset intervals. Our results show long-lasting neural adaptation such that the response to a particular tone is a nonlinear function of the extended temporal history of sound events. Most important, aging is associated with multiple changes in auditory cortex; older adults exhibit larger and less variable response magnitudes, a larger dynamic response range, and a reduced sensitivity to temporal context. Computational modeling suggests that reduced adaptation recovery times underlie these changes in the aging auditory cortex and that the extended temporal stimulation has less influence on the neural response to the current sound in older compared with younger individuals. Our human electroencephalography results critically narrow the gap to animal electrophysiology work suggesting a compensatory release from cortical inhibition accompanying hearing loss and aging.
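
    A toy model in the spirit of the one described (not the authors' implementation): each tone's response scales with a sensitivity that drops after every tone and recovers exponentially, so a shorter recovery time constant yields larger responses and weaker dependence on the stimulation history:

    ```python
    import numpy as np

    def responses(onsets, tau, depth=0.6):
        """Response to each tone given exponential recovery of sensitivity
        with time constant `tau` (s) after adaptation of size `depth`."""
        sensitivity, last_t, out = 1.0, None, []
        for t in onsets:
            if last_t is not None:
                sensitivity = 1.0 - (1.0 - sensitivity) * np.exp(-(t - last_t) / tau)
            out.append(sensitivity)
            sensitivity *= 1.0 - depth  # adaptation caused by the tone just heard
            last_t = t
        return np.array(out)

    onsets = np.cumsum(np.random.uniform(0.2, 2.0, 50))  # varying onset-to-onset intervals
    young = responses(onsets, tau=2.0)  # slow recovery: strong sensitivity to temporal context
    older = responses(onsets, tau=0.5)  # reduced recovery time: larger, less history-dependent responses
    print(young.mean(), older.mean())
    ```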

  12. Adaptation to Delayed Speech Feedback Induces Temporal Recalibration between Vocal Sensory and Auditory Modalities

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    2011-10-01

    We ordinarily perceive our voice as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory and voice sounds under DAF with an adaptation technique. Participants read sentences with specific delay times of DAF (0, 30, 75, 120 ms) for three minutes to induce 'Lag Adaptation'. After the adaptation, they judged the simultaneity between motor sensation and vocal sound given as feedback when producing a simple voice sound, but not speech. We found that speech production with lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  13. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific delay times of DAF (0, 66, 133 ms) during three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between motor sensation and vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by averaging delay times in the adaptation phase. CONCLUSIONS: These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.
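
    Delayed auditory feedback itself is just a delay line; a minimal sketch with an invented sample rate and one of the adaptor delays used above:

    ```python
    import numpy as np

    def delayed_feedback(voice, sr, delay_ms):
        """Return the signal the talker hears under DAF: their own voice
        shifted later in time by `delay_ms` milliseconds."""
        d = int(sr * delay_ms / 1000)
        heard = np.zeros_like(voice)
        heard[d:] = voice[: len(voice) - d]
        return heard

    sr = 16000
    voice = np.random.randn(sr)                       # stand-in for 1 s of vocal sound
    heard = delayed_feedback(voice, sr, delay_ms=66)  # one adaptor delay from the study
    ```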

  14. Brain responses to altered auditory feedback during musical keyboard production: an fMRI study.

    Science.gov (United States)

    Pfordresher, Peter Q; Mantell, James T; Brown, Steven; Zivadinov, Robert; Cox, Jennifer L

    2014-03-27

    Alterations of auditory feedback during piano performance can be profoundly disruptive. Furthermore, different alterations can yield different types of disruptive effects. Whereas alterations of feedback synchrony disrupt performed timing, alterations of feedback pitch contents can disrupt accuracy. The current research tested whether these behavioral dissociations correlate with differences in brain activity. Twenty pianists performed simple piano keyboard melodies while being scanned in a 3-T magnetic resonance imaging (MRI) scanner. In different conditions they experienced normal auditory feedback, altered auditory feedback (asynchronous delays or altered pitches), or control conditions that excluded movement or sound. Behavioral results replicated past findings. Neuroimaging data suggested that asynchronous delays led to increased activity in Broca's area and its right homologue, whereas disruptive alterations of pitch elevated activations in the cerebellum, area Spt, inferior parietal lobule, and the anterior cingulate cortex. Both disruptive conditions increased activations in the supplementary motor area. These results provide the first evidence of neural responses associated with perception/action mismatch during keyboard production.

  15. The impact of a concurrent motor task on auditory and visual temporal discrimination tasks.

    Science.gov (United States)

    Mioni, Giovanna; Grassi, Massimo; Tarantino, Vincenza; Stablum, Franca; Grondin, Simon; Bisiacchi, Patrizia S

    2016-04-01

    Previous studies have shown the presence of an interference effect on temporal perception when participants are required to simultaneously execute a nontemporal task. Such interference likely has an attentional source. In the present work, a temporal discrimination task was performed alone or together with a self-paced finger-tapping task used as concurrent, nontemporal task. Temporal durations were presented in either the visual or the auditory modality, and two standard durations (500 and 1,500 ms) were used. For each experimental condition, the participant's threshold was estimated and analyzed. The mean Weber fraction was higher in the visual than in the auditory modality, but only for the subsecond duration, and it was higher with the 500-ms than with the 1,500-ms standard duration. Interestingly, the Weber fraction was significantly higher in the dual-task condition, but only in the visual modality. The results suggest that the processing of time in the auditory modality is likely automatic, but not in the visual modality.
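
    Thresholds of this kind are often estimated with an adaptive staircase and then expressed as a Weber fraction (threshold divided by the standard duration). A simulated 2-down/1-up track under an invented psychometric function, not the study's procedure:

    ```python
    import numpy as np

    def two_down_one_up(standard_ms, true_jnd_ms, start_delta=200.0, n_trials=80):
        """Simulate a 2-down/1-up staircase (converges near 70.7% correct)
        and return the threshold and the Weber fraction."""
        rng = np.random.default_rng(1)
        delta, step, streak, track = start_delta, 0.8, 0, []
        for _ in range(n_trials):
            p_correct = 1.0 - 0.5 * np.exp(-delta / true_jnd_ms)  # toy psychometric function
            if rng.random() < p_correct:
                streak += 1
                if streak == 2:
                    delta, streak = delta * step, 0  # two correct: make the task harder
            else:
                delta, streak = delta / step, 0      # one wrong: make it easier
            track.append(delta)
        threshold = float(np.mean(track[-20:]))
        return threshold, threshold / standard_ms

    print(two_down_one_up(standard_ms=500.0, true_jnd_ms=75.0))    # subsecond standard
    print(two_down_one_up(standard_ms=1500.0, true_jnd_ms=150.0))  # longer standard, smaller Weber fraction
    ```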

  16. Auditory Cortex tACS and tRNS for Tinnitus: Single versus Multiple Sessions

    Directory of Open Access Journals (Sweden)

    Laura Claes

    2014-01-01

    Tinnitus is the perception of a sound in the absence of an external acoustic source, which often exerts a significant impact on the quality of life. There is currently evidence that neuroplastic changes in both neural pathways are involved in the generation and maintenance of tinnitus. Neuromodulation has been suggested to interfere with these neuroplastic alterations. In this study we aimed to compare the effects of two upcoming forms of transcranial electrical neuromodulation: transcranial alternating current stimulation (tACS) and transcranial random noise stimulation (tRNS), both applied over the auditory cortex. A database of 228 patients with chronic tinnitus who underwent noninvasive neuromodulation was retrospectively analyzed. The results of this study show that a single session of tRNS induces a significant suppressive effect on tinnitus loudness and distress, in contrast to tACS. Multiple sessions of tRNS augment the suppressive effect on tinnitus loudness but have no effect on tinnitus distress. In conclusion, this preliminary study shows a possibly beneficial effect of tRNS on tinnitus and can motivate future randomized placebo-controlled clinical studies with auditory tRNS for tinnitus. Auditory alpha-modulated tACS does not seem to contribute to the treatment of tinnitus.

  17. Large cross-sectional study of presbycusis reveals rapid progressive decline in auditory temporal acuity.

    Science.gov (United States)

    Ozmeral, Erol J; Eddins, Ann C; Frisina, D Robert; Eddins, David A

    2016-07-01

    The auditory system relies on extraordinarily precise timing cues for the accurate perception of speech, music, and object identification. Epidemiological research has documented the age-related progressive decline in hearing sensitivity that is known to be a major health concern for the elderly. Although smaller investigations indicate that auditory temporal processing also declines with age, such measures have not been included in larger studies. Temporal gap detection thresholds (TGDTs; an index of auditory temporal resolution) measured in 1071 listeners (aged 18-98 years) were shown to decline at a minimum rate of 1.05 ms (15%) per decade. Age was a significant predictor of TGDT when controlling for audibility (partial correlation) and when restricting analyses to persons with normal-hearing sensitivity (n = 434). The TGDTs were significantly better for males (3.5 ms; 51%) than females when averaged across the life span. These results highlight the need for indices of temporal processing in diagnostics, as treatment targets, and as factors in models of aging.
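
    The reported decline corresponds to a linear fit of threshold against age; a synthetic illustration with the slope deliberately set to reproduce the ~1.05 ms per decade figure (all data below are invented):

    ```python
    import numpy as np

    # Synthetic (age, gap-detection threshold) data; slope and noise invented.
    rng = np.random.default_rng(2)
    age = rng.uniform(18, 98, 1071)
    tgdt = 5.0 + 0.105 * age + rng.normal(0, 1.5, 1071)  # thresholds in ms; 0.105 ms/year

    slope, intercept = np.polyfit(age, tgdt, 1)
    print(f"decline per decade: {10 * slope:.2f} ms")  # recovers ~1.05 ms per decade
    ```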

  18. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment

    Directory of Open Access Journals (Sweden)

    Christo ePantev

    2012-06-01

    Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for three hours inhibited the neuronal activity in the auditory cortex that corresponded to the center frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus: tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective for tinnitus frequencies ≤ 8 kHz than for tinnitus frequencies > 8 kHz, and that training should be employed over the long term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy are planned. A goal is to transfer this novel, completely non-invasive, and low-cost treatment approach for tonal tinnitus into routine clinical practice.
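
    The notching step amounts to a band-stop filter centered on the individual tinnitus frequency (a one-octave notch here, consistent with the description above); a sketch with scipy, with the tinnitus frequency invented:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def notch_music(audio, sr, tinnitus_hz, octave_width=1.0):
        """Remove the energy in a band `octave_width` octaves wide,
        centered (geometrically) on the tinnitus frequency."""
        low = tinnitus_hz * 2.0 ** (-octave_width / 2)
        high = tinnitus_hz * 2.0 ** (octave_width / 2)
        b, a = butter(4, [low / (sr / 2), high / (sr / 2)], btype="bandstop")
        return filtfilt(b, a, audio)

    sr = 44100
    music = np.random.randn(10 * sr)                      # stand-in for a music excerpt
    notched = notch_music(music, sr, tinnitus_hz=6000.0)  # hypothetical tinnitus pitch
    ```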

  19. Role of bimodal stimulation for auditory-perceptual skills development in children with a unilateral cochlear implant.

    Science.gov (United States)

    Marsella, P; Giannantonio, S; Scorpecci, A; Pianesi, F; Micardi, M; Resca, A

    2015-12-01

    This is a prospective randomised study that evaluated the differences arising from a bimodal stimulation compared to a monaural electrical stimulation in deaf children, particularly in terms of auditory-perceptual skills development. We enrolled 39 children aged 12 to 36 months, suffering from severe-to-profound bilateral sensorineural hearing loss with residual hearing on at least one side. All were unilaterally implanted: 21 wore only the cochlear implant (CI) (unilateral CI group), while the other 18 used the CI and a contralateral hearing aid at the same time (bimodal group). They were assessed with a test battery designed to appraise preverbal and verbal auditory-perceptual skills immediately before and 6 and 12 months after implantation. No statistically significant differences were observed between groups at time 0, while at 6 and 12 months children in the bimodal group had better scores in each test than peers in the unilateral CI group. Therefore, although unilateral deafness/hearing does not undermine hearing acuity in normal listening, the simultaneous use of a CI and a contralateral hearing aid (binaural hearing through a bimodal stimulation) provides an advantage in terms of acquisition of auditory-perceptual skills, allowing children to achieve the basic milestones of auditory perception faster and in greater number than children with only one CI. Thus, "keeping awake" the contralateral auditory pathway, albeit not crucial in determining auditory acuity, guarantees benefits compared with the use of the implant alone. These findings provide initial evidence to establish shared guidelines for better rehabilitation of patients undergoing unilateral cochlear implantation, and add more evidence regarding the correct indications for bilateral cochlear implantation.

  20. Reading and Auditory-Visual Equivalences

    Science.gov (United States)

    Sidman, Murray

    1971-01-01

    A retarded boy, unable to read orally or with comprehension, was taught to match spoken to printed words and was then capable of reading comprehension (matching printed words to pictures) and oral reading (naming printed words aloud), demonstrating that certain learned auditory-visual equivalences are sufficient prerequisites for reading…

  1. Auditory Integration Training: The Magical Mystery Cure.

    Science.gov (United States)

    Tharpe, Anne Marie

    1999-01-01

    This article notes the enthusiastic reception received by auditory integration training (AIT) for children with a wide variety of disorders including autism but raises concerns about this alternative treatment practice. It offers reasons for cautious evaluation of AIT prior to clinical implementation and summarizes current research findings. (DB)

  2. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements to perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  3. Development of Receiver Stimulator for Auditory Prosthesis

    Directory of Open Access Journals (Sweden)

    K. Raja Kumar

    2010-05-01

    The Auditory Prosthesis (AP) is an electronic device that can provide hearing sensations to people who are profoundly deaf by stimulating the auditory nerve via an array of electrodes with an electric current, allowing them to understand speech. The AP system consists of two hardware functional units: the Body Worn Speech Processor (BWSP) and the Receiver Stimulator. The prototype model of the Receiver Stimulator for Auditory Prosthesis (RSAP) consists of a Speech Data Decoder, DAC, ADC, constant current generator, electrode selection logic, switch matrix, and a simulated electrode resistance array. The laboratory model of the speech processor is designed to implement the Continuous Interleaved Sampling (CIS) speech processing algorithm, which generates the information required for electrode stimulation based on the speech/audio data. The Speech Data Decoder receives the encoded speech data via an inductive RF transcutaneous link from the speech processor. Twelve channels of the auditory prosthesis, with eight selectable electrodes for stimulation of the simulated electrode resistance array, are used for testing. The RSAP is validated using test data generated by the laboratory prototype of the speech processor. The experimental results were obtained from specific speech/sound tests using a high-speed data acquisition system and found satisfactory.
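
    A sketch of the front end of a CIS-style strategy (band-pass filterbank followed by envelope extraction; in the device the envelopes then scale interleaved biphasic current pulses). The channel count matches the twelve channels mentioned above, but the corner frequencies and filter orders are illustrative, not the device's actual parameters:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def cis_envelopes(speech, sr, n_channels=12, f_lo=200.0, f_hi=7000.0):
        """Split speech into log-spaced bands and extract each band's
        envelope (rectify + low-pass), one envelope per electrode."""
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)
        b_lp, a_lp = butter(2, 400.0 / (sr / 2))  # envelope smoothing low-pass
        envelopes = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            b, a = butter(2, [lo / (sr / 2), hi / (sr / 2)], btype="bandpass")
            band = filtfilt(b, a, speech)
            envelopes.append(filtfilt(b_lp, a_lp, np.abs(band)))
        return np.array(envelopes)  # shape: (n_channels, n_samples)

    sr = 16000
    speech = np.random.randn(sr)     # stand-in for 1 s of speech
    env = cis_envelopes(speech, sr)  # per-channel stimulation envelopes
    ```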

  4. Auditory Processing Disorder: School Psychologist Beware?

    Science.gov (United States)

    Lovett, Benjamin J.

    2011-01-01

    An increasing number of students are being diagnosed with auditory processing disorder (APD), but the school psychology literature has largely neglected this controversial condition. This article reviews research on APD, revealing substantial concerns with assessment tools and diagnostic practices, as well as insufficient research regarding many…

  5. The Goldilocks Effect in Infant Auditory Attention

    Science.gov (United States)

    Kidd, Celeste; Piantadosi, Steven T.; Aslin, Richard N.

    2014-01-01

    Infants must learn about many cognitive domains (e.g., language, music) from auditory statistics, yet capacity limits on their cognitive resources restrict the quantity that they can encode. Previous research has established that infants can attend to only a subset of available acoustic input. Yet few previous studies have directly examined infant…

  6. Auditory Training with Frequent Communication Partners

    Science.gov (United States)

    Tye-Murray, Nancy; Spehar, Brent; Sommers, Mitchell; Barcroft, Joe

    2016-01-01

    Purpose: Individuals with hearing loss engage in auditory training to improve their speech recognition. They typically practice listening to utterances spoken by unfamiliar talkers but never to utterances spoken by their most frequent communication partner (FCP)--speech they most likely desire to recognize--under the assumption that familiarity…

  7. Affective Priming with Auditory Speech Stimuli

    Science.gov (United States)

    Degner, Juliane

    2011-01-01

    Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In Experiment 2, stimulus onset asynchrony (SOA) was…

  8. Affective priming with auditory speech stimuli

    NARCIS (Netherlands)

    Degner, J.

    2011-01-01

    Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In

  9. Using TMS to study the role of the articulatory motor system in speech perception.

    Science.gov (United States)

    Möttönen, Riikka; Watkins, Kate E

    2012-09-01

    Background: The ability to communicate using speech is a remarkable skill, which requires precise coordination of articulatory movements and decoding of complex acoustic signals. According to the traditional view, speech production and perception rely on motor and auditory brain areas, respectively. However, there is growing evidence that auditory-motor circuits support both speech production and perception. Aims: In this article we provide a review of how transcranial magnetic stimulation (TMS) has been used to investigate the excitability of the motor system during listening to speech and the contribution of the motor system to performance in various speech perception tasks. We also discuss how TMS can be used in combination with brain-imaging techniques to study interactions between motor and auditory systems during speech perception. Main contribution: TMS has proven to be a powerful tool to investigate the role of the articulatory motor system in speech perception. Conclusions: TMS studies have provided support for the view that the motor structures that control the movements of the articulators contribute not only to speech production but also to speech perception.

  10. Auditory pathology in cri-du-chat (5p-) syndrome: phenotypic evidence for auditory neuropathy.

    Science.gov (United States)

    Swanepoel, D

    2007-10-01

    5p- (cri-du-chat syndrome) is a well-defined clinical entity presenting with phenotypic and cytogenetic variability. Despite recognition that abnormalities in audition are common, limited reports on auditory functioning in affected individuals are available. The current study presents a case illustrating the auditory functioning in a 22-month-old patient diagnosed with 5p- syndrome, karyotype 46,XX,del(5)(p13). Auditory neuropathy was diagnosed based on abnormal auditory evoked potentials with neural components suggesting severe to profound hearing loss in the presence of cochlear microphonic responses and behavioral reactions to sound at mild to moderate hearing levels. The current case and a review of available reports indicate that auditory neuropathy or neural dys-synchrony may be another phenotype of the condition, possibly related to abnormal expression of the protein beta-catenin mapped to 5p. Implications are for routine and diagnosis-specific assessments of auditory functioning and for the employment of non-verbal communication methods in early intervention.

  11. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina

    2014-02-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians' subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model in which to study mechanisms of experience-dependent changes in human auditory function.

  12. Representation of Reward Feedback in Primate Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Michael eBrosch

    2011-02-01

    It is well established that auditory cortex is plastic on different time scales and that this plasticity is driven by the reinforcement that is used to motivate subjects to learn or to perform an auditory task. Motivated by these findings, we study in detail properties of neuronal firing in auditory cortex that is related to reward feedback. We recorded from the auditory cortex of two monkeys while they were performing an auditory categorization task. Monkeys listened to a sequence of tones and had to signal when the frequency of adjacent tones stepped in downward direction, irrespective of the tone frequency and step size. Correct identifications were rewarded with either a large or a small amount of water. The size of reward depended on the monkeys' performance in the previous trial: it was large after a correct trial and small after an incorrect trial. The rewards served to maintain task performance. During task performance we found three successive periods of neuronal firing in auditory cortex that reflected (1) the reward expectancy for each trial, (2) the reward size received, and (3) the mismatch between the expected and delivered reward. These results, together with control experiments, suggest that auditory cortex receives reward feedback that could be used to adapt auditory cortex to task requirements. Additionally, the results presented here extend previous observations of non-auditory roles of auditory cortex and show that auditory cortex is even more cognitively influenced than lately recognized.

  13. Representation of reward feedback in primate auditory cortex.

    Science.gov (United States)

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2011-01-01

    It is well established that auditory cortex is plastic on different time scales and that this plasticity is driven by the reinforcement that is used to motivate subjects to learn or to perform an auditory task. Motivated by these findings, we study in detail properties of neuronal firing in auditory cortex that is related to reward feedback. We recorded from the auditory cortex of two monkeys while they were performing an auditory categorization task. Monkeys listened to a sequence of tones and had to signal when the frequency of adjacent tones stepped in downward direction, irrespective of the tone frequency and step size. Correct identifications were rewarded with either a large or a small amount of water. The size of reward depended on the monkeys' performance in the previous trial: it was large after a correct trial and small after an incorrect trial. The rewards served to maintain task performance. During task performance we found three successive periods of neuronal firing in auditory cortex that reflected (1) the reward expectancy for each trial, (2) the reward-size received, and (3) the mismatch between the expected and delivered reward. These results, together with control experiments, suggest that auditory cortex receives reward feedback that could be used to adapt auditory cortex to task requirements. Additionally, the results presented here extend previous observations of non-auditory roles of auditory cortex and show that auditory cortex is even more cognitively influenced than lately recognized.

  14. To what extent do Gestalt grouping principles influence tactile perception?

    Science.gov (United States)

    Gallace, Alberto; Spence, Charles

    2011-07-01

    Since their formulation by the Gestalt movement more than a century ago, the principles of perceptual grouping have primarily been investigated in the visual modality and, to a lesser extent, in the auditory modality. The present review addresses the question of whether the same grouping principles also affect the perception of tactile stimuli. Although, to date, only a few studies have explicitly investigated the existence of Gestalt grouping principles in the tactile modality, we argue that many more studies have indirectly provided evidence relevant to this topic. Reviewing this body of research, we argue that similar principles to those reported previously in visual and auditory studies also govern the perceptual grouping of tactile stimuli. In particular, we highlight evidence showing that the principles of proximity, similarity, common fate, good continuation, and closure affect tactile perception in both unimodal and crossmodal settings. We also highlight that the grouping of tactile stimuli is often affected by visual and auditory information that happen to be presented simultaneously. Finally, we discuss the theoretical and applied benefits that might pertain to the further study of Gestalt principles operating in both unisensory and multisensory tactile perception.

  15. Vocal Learning in Grey Parrots: A Brief Review of Perception, Production, and Cross-Species Comparisons

    Science.gov (United States)

    Pepperberg, Irene M.

    2010-01-01

    This chapter briefly reviews what is known-and what remains to be understood--about Grey parrot vocal learning. I review Greys' physical capacities--issues of auditory perception and production--then discuss how these capacities are used in vocal learning and can be recruited for referential communication with humans. I discuss cross-species…

  16. Time Perception: Modality and Duration Effects in Attention-Deficit/Hyperactivity Disorder (ADHD)

    Science.gov (United States)

    Toplak, Maggie E.; Tannock, Rosemary

    2005-01-01

    Time perception performance was systematically investigated in adolescents with and without attention-deficit/hyperactivity disorder (ADHD). Specifically, the effects of manipulating modality (auditory and visual) and length of duration (200 and 1000 ms) were examined. Forty-six adolescents with ADHD and 44 controls were administered four duration…

  17. Technological, biological, and acoustical constraints to music perception in cochlear implant users.

    Science.gov (United States)

    Limb, Charles J; Roy, Alexis T

    2014-02-01

    Despite advances in technology, the ability to perceive music remains limited for many cochlear implant users. This paper reviews the technological, biological, and acoustical constraints that make music an especially challenging stimulus for cochlear implant users, while highlighting recent research efforts to overcome these shortcomings. The limitations of cochlear implant devices, which have been optimized for speech comprehension, become evident when applied to music, particularly with regards to inadequate spectral, fine-temporal, and dynamic range representation. Beyond the impoverished information transmitted by the device itself, both peripheral and central auditory nervous system deficits are seen in the presence of sensorineural hearing loss, such as auditory nerve degeneration and abnormal auditory cortex activation. These technological and biological constraints to effective music perception are further compounded by the complexity of the acoustical features of music itself that require the perceptual integration of varying rhythmic, melodic, harmonic, and timbral elements of sound. Cochlear implant users not only have difficulty perceiving spectral components individually (leading to fundamental disruptions in perception of pitch, melody, and harmony) but also display deficits with higher perceptual integration tasks required for music perception, such as auditory stream segregation. Despite these current limitations, focused musical training programs, new assessment methods, and improvements in the representation and transmission of the complex acoustical features of music through technological innovation offer the potential for significant advancements in cochlear implant-mediated music perception.

  18. The Content of Imagined Sounds Changes Visual Motion Perception in the Cross-Bounce Illusion

    Science.gov (United States)

    Berger, Christopher C.; Ehrsson, H. Henrik

    2017-01-01

    Can what we imagine hearing change what we see? Whether imagined sensory stimuli are integrated with external sensory stimuli to shape our perception of the world has only recently begun to come under scrutiny. Here, we made use of the cross-bounce illusion in which an auditory stimulus presented at the moment two passing objects meet promotes the perception that the objects bounce off rather than cross by one another to examine whether the content of imagined sound changes visual motion perception in a manner that is consistent with multisensory integration. The results from this study revealed that auditory imagery of a sound with acoustic properties typical of a collision (i.e., damped sound) promoted the bounce-percept, but auditory imagery of the same sound played backwards (i.e., ramped sound) did not. Moreover, the vividness of the participants’ auditory imagery predicted the strength of this imagery-induced illusion. In a separate experiment, we ruled out the possibility that changes in attention (i.e., sensitivity index d′) or response bias (response bias index c) were sufficient to explain this effect. Together, these findings suggest that this imagery-induced multisensory illusion reflects the successful integration of real and imagined cross-modal sensory stimuli, and more generally, that what we imagine hearing can change what we see. PMID:28071707

  19. Moving stimuli facilitate synchronization but not temporal perception

    Directory of Open Access Journals (Sweden)

    Susana Silva

    2016-11-01

    Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static), and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap.

  20. Measuring Auditory Selective Attention using Frequency Tagging

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory results.
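
    Concretely, frequency tagging means each stream's response can be read out at its own modulation rate in the spectrum of the recorded signal. A minimal sketch on a simulated sensor trace follows; the tag frequencies, sampling rate, and amplitudes are illustrative assumptions, not this study's values.

```python
import numpy as np

fs = 1000                               # sensor sampling rate (Hz), assumed
f_attended, f_ignored = 40.0, 35.0      # modulation (tag) rates, assumed
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / fs)            # one 10 s sensor trace, simulated

trace = (0.8 * np.sin(2 * np.pi * f_attended * t)    # enhanced, attended tag
         + 0.4 * np.sin(2 * np.pi * f_ignored * t)   # weaker, ignored tag
         + rng.normal(0, 1, t.size))                 # broadband noise

amplitude = np.abs(np.fft.rfft(trace)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def tag_amplitude(f_tag):
    # Read out the spectral amplitude at the tag bin (0.1 Hz resolution here).
    return amplitude[np.argmin(np.abs(freqs - f_tag))]

print(f"attended tag ({f_attended:g} Hz): {tag_amplitude(f_attended):.3f}")
print(f"ignored tag  ({f_ignored:g} Hz): {tag_amplitude(f_ignored):.3f}")
```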

  1. Comparison of Electrophysiological Auditory Measures in Fishes.

    Science.gov (United States)

    Maruska, Karen P; Sisneros, Joseph A

    2016-01-01

    Sounds provide fishes with important information used to mediate behaviors such as predator avoidance, prey detection, and social communication. How we measure auditory capabilities in fishes, therefore, has crucial implications for interpreting how individual species use acoustic information in their natural habitat. Recent analyses have highlighted differences between behavioral and electrophysiologically determined hearing thresholds, but less is known about how physiological measures at different auditory processing levels compare within a single species. Here we provide one of the first comparisons of auditory threshold curves determined by different recording methods in a single fish species, the soniferous Hawaiian sergeant fish Abudefduf abdominalis, and review past studies on representative fish species with tuning curves determined by different methods. The Hawaiian sergeant is a colonial benthic-spawning damselfish (Pomacentridae) that produces low-frequency, low-intensity sounds associated with reproductive and agonistic behaviors. We compared saccular potentials, auditory evoked potentials (AEP), and single neuron recordings from acoustic nuclei of the hindbrain and midbrain torus semicircularis. We found that hearing thresholds were lowest at low frequencies (~75-300 Hz) for all methods, which matches the spectral components of sounds produced by this species. However, thresholds at best frequency determined via single cell recordings were ~15-25 dB lower than those measured by AEP and saccular potential techniques. While none of these physiological techniques gives us a true measure of the auditory "perceptual" abilities of a naturally behaving fish, this study highlights that different methodologies can reveal similar detectable range of frequencies for a given species, but absolute hearing sensitivity may vary considerably.

  2. Impairments of auditory scene analysis in Alzheimer's disease.

    Science.gov (United States)

    Goll, Johanna C; Kim, Lois G; Ridgway, Gerard R; Hailstone, Julia C; Lehmann, Manja; Buckley, Aisling H; Crutch, Sebastian J; Warren, Jason D

    2012-01-01

    Parsing of sound sources in the auditory environment or 'auditory scene analysis' is a computationally demanding cognitive operation that is likely to be vulnerable to the neurodegenerative process in Alzheimer's disease. However, little information is available concerning auditory scene analysis in Alzheimer's disease. Here we undertook a detailed neuropsychological and neuroanatomical characterization of auditory scene analysis in a cohort of 21 patients with clinically typical Alzheimer's disease versus age-matched healthy control subjects. We designed a novel auditory dual stream paradigm based on synthetic sound sequences to assess two key generic operations in auditory scene analysis (object segregation and grouping) in relation to simpler auditory perceptual, task and general neuropsychological factors. In order to assess neuroanatomical associations of performance on auditory scene analysis tasks, structural brain magnetic resonance imaging data from the patient cohort were analysed using voxel-based morphometry. Compared with healthy controls, patients with Alzheimer's disease had impairments of auditory scene analysis, and segregation and grouping operations were comparably affected. Auditory scene analysis impairments in Alzheimer's disease were not wholly attributable to simple auditory perceptual or task factors; however, the between-group difference relative to healthy controls was attenuated after accounting for non-verbal (visuospatial) working memory capacity. These findings demonstrate that clinically typical Alzheimer's disease is associated with a generic deficit of auditory scene analysis. Neuroanatomical associations of auditory scene analysis performance were identified in posterior cortical areas including the posterior superior temporal lobes and posterior cingulate. This work suggests a basis for understanding a class of clinical symptoms in Alzheimer's disease and for delineating cognitive mechanisms that mediate auditory scene analysis.

  3. IMPAIRED PROCESSING IN THE PRIMARY AUDITORY CORTEX OF AN ANIMAL MODEL OF AUTISM

    Directory of Open Access Journals (Sweden)

    Renata eAnomal

    2015-11-01

    Autism is a neurodevelopmental disorder clinically characterized by deficits in communication, lack of social interaction, and repetitive behaviors with restricted interests. A number of studies have reported that sensory perception abnormalities are common in autistic individuals and might contribute to the complex behavioral symptoms of the disorder. In this context, hearing incongruence is particularly prevalent. Considering that some of this abnormal processing might stem from an imbalance of inhibitory and excitatory drives in brain circuitries, we used an animal model of autism induced by valproic acid (VPA) during pregnancy in order to investigate the tonotopic organization of the primary auditory cortex (AI) and its local inhibitory circuitry. Our results show that VPA rats have distorted primary auditory maps with over-representation of high frequencies, broadly tuned receptive fields, and higher sound intensity thresholds as compared to controls. However, we did not detect differences in the number of parvalbumin-positive interneurons in AI of VPA and control rats. Altogether our findings show that neurophysiological impairments of hearing perception in this autism model occur independently of alterations in the number of parvalbumin-expressing interneurons. These data support the notion that fine circuit alterations, rather than gross cellular modification, could lead to neurophysiological changes in the autistic brain.

  4. Neurodynamics for auditory stream segregation: tracking sounds in the mustached bat's natural environment.

    Science.gov (United States)

    Kanwal, Jagmeet S; Medvedev, Andrei V; Micheyl, Christophe

    2003-08-01

    During navigation and the search phase of foraging, mustached bats emit approximately 25 ms long echolocation pulses (at 10-40 Hz) that contain multiple harmonics of a constant frequency (CF) component followed by a short (3 ms) downward frequency modulation. In the context of auditory stream segregation, therefore, bats may either perceive a coherent pulse-echo sequence (PEPE...), or segregated pulse and echo streams (P-P-P... and E-E-E...). To identify the neural mechanisms for stream segregation in bats, we developed a simple yet realistic neural network model with seven layers and 420 nodes. Our model required recurrent and lateral inhibition to enable output nodes in the network to 'latch-on' to a single tone (corresponding to a CF component in either the pulse or echo), i.e., exhibit differential suppression by the alternating two tones presented at a high rate (> 10 Hz). To test the applicability of our model to echolocation, we obtained neurophysiological data from the primary auditory cortex of awake mustached bats. Event-related potentials reliably reproduced the latching behaviour observed at output nodes in the network. Pulse as well as nontarget (clutter) echo CFs facilitated this latching. Individual single unit responses were erratic, but when summed over several recording sites, they also exhibited reliable latching behaviour even at 40 Hz. On the basis of these findings, we propose that a neural correlate of auditory stream segregation is present within localized synaptic activity in the mustached bat's auditory cortex and this mechanism may enhance the perception of echolocation sounds in the natural environment.
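
    The "latch-on" behaviour this record attributes to recurrent and lateral inhibition can be illustrated with a deliberately minimal two-unit rate model: each unit receives one of the two alternating tones, excites itself, and inhibits its neighbour, so one unit can come to dominate. This toy caricature is not the seven-layer, 420-node model of the paper; all constants below are illustrative assumptions.

```python
import numpy as np

dt, tau = 1.0, 20.0              # time step and membrane constant (ms), assumed
w_self, w_lat = 0.6, -1.2        # recurrent excitation and lateral inhibition
rate = np.zeros(2)               # firing rates of the two tone-tuned units
history = []

def tone_drive(t_ms):
    # The two tones alternate every 25 ms (a 20 Hz two-tone presentation rate).
    drive = np.zeros(2)
    drive[int(t_ms // 25) % 2] = 1.0
    return drive

for step in range(2000):         # simulate 2 s of stimulation
    total_input = tone_drive(step * dt) + w_self * rate + w_lat * rate[::-1]
    rate = rate + (dt / tau) * (-rate + np.maximum(total_input, 0.0))
    history.append(rate.copy())

# The unit whose tone arrives first suppresses its neighbour and stays
# dominant: a simple rate-model analogue of "latching on" to one stream.
history = np.array(history)
print("mean rates over the last second (unit 0, unit 1):",
      history[1000:].mean(axis=0).round(3))
```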

  5. Lipreading and covert speech production similarly modulate human auditory-cortex responses to pure tones.

    Science.gov (United States)

    Kauramäki, Jaakko; Jääskeläinen, Iiro P; Hari, Riitta; Möttönen, Riikka; Rauschecker, Josef P; Sams, Mikko

    2010-01-27

    Watching the lips of a speaker enhances speech perception. At the same time, the 100 ms response to speech sounds is suppressed in the observer's auditory cortex. Here, we used whole-scalp 306-channel magnetoencephalography (MEG) to study whether lipreading modulates human auditory processing already at the level of the most elementary sound features, i.e., pure tones. We further envisioned the temporal dynamics of the suppression to tell whether the effect is driven by top-down influences. Nineteen subjects were presented with 50 ms tones spanning six octaves (125-8000 Hz) (1) during "lipreading," i.e., when they watched video clips of silent articulations of Finnish vowels /a/, /i/, /o/, and /y/, and reacted to vowels presented twice in a row; (2) during a visual control task; (3) during a still-face passive control condition; and (4) in a separate experiment with a subset of nine subjects, during covert production of the same vowels. Auditory-cortex 100 ms responses (N100m) were equally suppressed in the lipreading and covert-speech-production tasks compared with the visual control and baseline tasks; the effects involved all frequencies and were most prominent in the left hemisphere. Responses to tones presented at different times with respect to the onset of the visual articulation showed significantly increased N100m suppression immediately after the articulatory gesture. These findings suggest that the lipreading-related suppression in the auditory cortex is caused by top-down influences, possibly by an efference copy from the speech-production system, generated during both own speech and lipreading.

  6. Stress-Related Functional Connectivity Changes Between Auditory Cortex and Cingulate in Tinnitus.

    Science.gov (United States)

    Vanneste, Sven; De Ridder, Dirk

    2015-08-01

    The question arises whether functional connectivity (FC) changes between the distress and tinnitus loudness network during resting state depend on the amount of distress tinnitus patients experience. Fifty-five patients with constant chronic tinnitus were included in this study. Electroencephalography (EEG) recordings were performed and seed-based (at the auditory cortex) source localized FC (lagged phase synchronization) was computed for the different EEG frequency bands. Results initially demonstrate that the correlation between loudness and distress is nonlinear. Loudness correlates with beta3 and gamma band activity in the auditory cortices, and distress with alpha1 and beta3 changes in the subgenual, dorsal anterior, and posterior cingulate cortex. In comparison to nontinnitus controls, seed-based FC differed between the left auditory cortices for the alpha1 and beta3 bands in a network encompassing the posterior cingulate cortex extending into the parahippocampal area, the anterior cingulate, and insula. Furthermore, distress changes the FC between the auditory cortex, encoding loudness, and different parts of the cingulate, encoding distress: the subgenual anterior, the dorsal anterior, and the posterior cingulate. These changes are specific for the alpha1 and beta3 frequency bands. These results fit with a recently proposed model that states that tinnitus is generated by multiple dynamically active separable but overlapping networks, each characterizing a specific aspect of the unified tinnitus percept, but adds to this concept that the interaction between these networks is a complex interplay of correlations and anti-correlations between areas involved in distress and loudness depending on the distress state of the tinnitus patient.
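
    The connectivity measure named here, lagged phase synchronization, is a source-space metric that specifically discards zero-lag (volume-conduction) coupling. To make the underlying idea concrete, the sketch below computes the simpler, closely related phase-locking value between two band-limited signals on simulated data; the band edges and signals are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 250                                  # EEG sampling rate (Hz), assumed
rng = np.random.default_rng(2)
t = np.arange(0, 60, 1 / fs)
# Two simulated 'region' time courses sharing a weak 9 Hz (alpha1-like) rhythm.
common = np.sin(2 * np.pi * 9 * t)
x = common + rng.normal(0, 1, t.size)
y = np.roll(common, 10) + rng.normal(0, 1, t.size)   # fixed phase lag + noise

# Band-limit to an alpha1-like band, then extract instantaneous phase.
sos = butter(4, [8, 10], btype="bandpass", fs=fs, output="sos")
phase_x = np.angle(hilbert(sosfiltfilt(sos, x)))
phase_y = np.angle(hilbert(sosfiltfilt(sos, y)))

# Phase-locking value: magnitude of the mean phase-difference vector.
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
print(f"PLV (0 = no coupling, 1 = perfect phase locking): {plv:.2f}")
```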

  7. Auditory hallucinations in tinnitus patients: Emotional relationships and depression

    Directory of Open Access Journals (Sweden)

    Santos, Rosa Maria Rodrigues dos

    2012-01-01

    Introduction: Over the last few years, our Tinnitus Research Group has identified an increasing number of patients with tinnitus who also complained of repeated perception of complex sounds, such as music and voices. Such hallucinatory phenomena motivated us to study their possible relation to the patients' psyches. Aims: To assess whether hallucinatory phenomena were related to the patients' psychosis and/or depression, and to clarify their content and function in the patients' psyches. Method: Ten subjects (8 women; mean age = 65.7 years) were selected by otolaryngologists and evaluated by the same psychologists through semi-structured interviews, the Hamilton Depression Rating Scale, and psychoanalysis interviews. Results: We found no association between auditory hallucinations and psychosis; instead, this phenomenon was associated with depressive aspects. The patients' discourse revealed that hallucinatory phenomena played unconscious roles in their emotional life. In all cases, there was a remarkable and strong tendency to recall/repeat unpleasant facts/situations, which tended to exacerbate the distress caused by the tinnitus and hallucinatory phenomena and worsen depressive aspects. Conclusions: There is an important relationship between tinnitus, hallucinatory phenomena, and depression based on persistent recall of facts/situations leading to psychic distress. The knowledge of such findings represents a further step towards the need to adapt the treatment of this particular subgroup of tinnitus patients through interdisciplinary teamwork.

  8. The music of your emotions: neural substrates involved in detection of emotional correspondence between auditory and visual music actions.

    Science.gov (United States)

    Petrini, Karin; Crabbe, Frances; Sheridan, Carol; Pollick, Frank E

    2011-04-29

    In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.

  9. The music of your emotions: neural substrates involved in detection of emotional correspondence between auditory and visual music actions.

    Directory of Open Access Journals (Sweden)

    Karin Petrini

    In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.

  10. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    Science.gov (United States)

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  11. Speech enhancement for listeners with hearing loss based on a model for vowel coding in the auditory midbrain.

    Science.gov (United States)

    Rao, Akshay; Carney, Laurel H

    2014-07-01

    A novel signal-processing strategy is proposed to enhance speech for listeners with hearing loss. The strategy focuses on improving vowel perception based on a recent hypothesis for vowel coding in the auditory system. Traditionally, studies of neural vowel encoding have focused on the representation of formants (peaks in vowel spectra) in the discharge patterns of the population of auditory-nerve (AN) fibers. A recent hypothesis focuses instead on vowel encoding in the auditory midbrain, and suggests a robust representation of formants. AN fiber discharge rates are characterized by pitch-related fluctuations having frequency-dependent modulation depths. Fibers tuned to frequencies near formants exhibit weaker pitch-related fluctuations than those tuned to frequencies between formants. Many auditory midbrain neurons show tuning to amplitude modulation frequency in addition to audio frequency. According to the auditory midbrain vowel encoding hypothesis, the response map of a population of midbrain neurons tuned to modulations near voice pitch exhibits minima near formant frequencies, due to the lack of strong pitch-related fluctuations at their inputs. This representation is robust over the range of noise conditions in which speech intelligibility is also robust for normal-hearing listeners. Based on this hypothesis, a vowel-enhancement strategy has been proposed that aims to restore vowel encoding at the level of the auditory midbrain. The signal processing consists of pitch tracking, formant tracking, and formant enhancement. The novel formant-tracking method proposed here estimates the first two formant frequencies by modeling characteristics of the auditory periphery, such as saturated discharge rates of AN fibers and modulation tuning properties of auditory midbrain neurons. The formant enhancement stage aims to restore the representation of formants at the level of the midbrain by increasing the dominance of a single harmonic near each formant and saturating
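
    The formant tracker described here is built on an auditory-periphery model that cannot be reconstructed from this summary. For orientation, the sketch below shows the classical alternative it departs from: estimating the first two formants as the dominant pole frequencies of a linear-predictive (LPC) fit. The synthetic frame, model order, and root-selection thresholds are all illustrative assumptions.

```python
import numpy as np

def lpc(frame, order):
    # Autocorrelation method with the Levinson-Durbin recursion.
    r = np.correlate(frame, frame, mode="full")[frame.size - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i + 1] += k * a[i - 1::-1][:i]
        err *= 1.0 - k * k
    return a

fs = 8000                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.03, 1 / fs)
# Synthetic vowel-like frame: two damped resonances near 700 and 1200 Hz.
frame = (np.exp(-60 * t) * np.sin(2 * np.pi * 700 * t)
         + 0.8 * np.exp(-80 * t) * np.sin(2 * np.pi * 1200 * t))
frame *= np.hamming(frame.size)

a = lpc(frame, order=8)
roots = np.roots(a)
# Keep poles near the unit circle in the upper half-plane: the resonances.
roots = roots[(roots.imag > 0) & (np.abs(roots) > 0.8)]
formants = np.sort(np.angle(roots) * fs / (2 * np.pi))
print("estimated F1, F2 (Hz):", np.round(formants[:2]))
```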

  12. Effects of frequency and level on auditory stream segregation.

    Science.gov (United States)

    Rose, M M; Moore, B C

    2000-09-01

    This study examined the effect of center frequency and level on the perceptual grouping of rapid tone sequences. The sequence ABA-ABA-... was used, where A and B represent sinusoidal tone bursts (10-ms rise/fall, 80-ms steady state, 20-ms interval between tones) and - represents a silent interval of 120 ms. In experiment 1, tone A was fixed in frequency at 62, 125, 250, 500, 1000, 2000, 4000, 6000, or 8000 Hz. Both tones had a level of approximately 40 dB SL. Tone B started with a frequency well above that of tone A, and its frequency was swept toward that of tone A so that the frequency separation between them decreased in an exponential manner. Subjects were required to indicate when they could no longer hear the tones A and B as two separate streams, but heard only a single stream with a "gallop" rhythm. This changeover point between percepts is called the fission boundary. The separation between tones A and B at the fission boundary was roughly independent of the frequency of tone A when expressed as the difference in number of equivalent rectangular bandwidths (ERBs) between A and B, but varied more with frequency when the difference was expressed in barks or cents. In experiment 2, the center frequency was fixed at 250, 1000, or 4000 Hz, and the level of the A and B tones was 40, 55, 70, or 85 dB SPL. The frequency separation of the A and B tones at the fission boundary tended to increase slightly with increasing level, in a manner consistent with the broadening of the auditory filter with increasing level. The results support the "peripheral channeling" explanation of stream segregation advanced by Hartmann and Johnson [Music Percept. 9, 155-184 (1991)], and indicate that the perception of fusion or fission in alternating-tone sequences depends partly upon the degree of overlap of the excitation patterns evoked by the successive sounds in the cochlea, as assumed in the theory of Beauvois and Meddis [J. Acoust. Soc. Am. 99, 2270-2280 (1996)].
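
    The "difference in number of ERBs" used in this record can be computed directly from the ERB-number (Cam) scale of Glasberg and Moore (1990), which has a standard closed form. A short sketch with illustrative tone frequencies:

```python
import numpy as np

def erb_number(f_hz):
    # ERB-number (Cam) scale of Glasberg & Moore (1990): the number of
    # equivalent rectangular bandwidths below frequency f.
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

f_a = 1000.0                            # tone A frequency (Hz), illustrative
for semitones in (1, 3, 6):             # illustrative A-B separations
    f_b = f_a * 2.0 ** (semitones / 12.0)
    delta = erb_number(f_b) - erb_number(f_a)
    print(f"B {semitones} st above A at {f_a:.0f} Hz -> {delta:.2f} ERBs")
```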

  13. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization, and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin, as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  14. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  15. An Auditory Model with Hearing Loss

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    An auditory model based on the psychophysics of hearing has been developed and tested. The model simulates the normal ear or an impaired ear with a given hearing loss. Based on reviews of the current literature, the frequency selectivity and loudness growth as functions of threshold and stimulus... level have been found and implemented in the model. The auditory model was verified against selected results from the literature, and it was confirmed that the normal spread of masking and loudness growth could be simulated in the model. The effects of hearing loss on these parameters were also... in qualitative agreement with recent findings. The temporal properties of the ear have currently not been included in the model. As an example of a real-world application of the model, loudness spectrograms for a speech utterance were presented. By introducing hearing loss, the speech sounds became less audible...

  16. Anatomy and Physiology of the Auditory Tracts

    Directory of Open Access Journals (Sweden)

    Mohammad hosein Hekmat Ara

    1999-03-01

    Hearing is one of the essential human senses. Sound waves travel through the air, enter the ear canal, and strike the tympanic membrane. The middle ear transfers almost 60-80% of this mechanical energy to the inner ear by means of "impedance matching". The sound energy then becomes a traveling wave that, according to its specific frequency, stimulates a corresponding region of the organ of Corti. Receptors in this organ and their synapses transform the mechanical waves into neural signals and relay them to the brain. The central nervous system tract that conducts auditory signals to the auditory cortex is briefly explained here.

  17. Modeling auditory evoked potentials to complex stimuli

    DEFF Research Database (Denmark)

    Rønne, Filip Munch

    The auditory evoked potential (AEP) is an electrical signal that can be recorded from electrodes attached to the scalp of a human subject when a sound is presented. The signal is considered to reflect neural activity in response to the acoustic stimulation and is a well established clinical... clinically and in research towards using realistic and complex stimuli, such as speech, to electrophysiologically assess the human hearing. However, to interpret the AEP generation to complex sounds, the potential patterns in response to simple stimuli need to be understood. Therefore, the model was used... to simulate auditory brainstem responses (ABRs) evoked by classic stimuli like clicks, tone bursts and chirps. The ABRs to these simple stimuli were compared to literature data and the model was shown to predict the frequency dependence of tone-burst ABR wave-V latency and the level-dependence of ABR wave...

  18. Cognitive mechanisms associated with auditory sensory gating.

    Science.gov (United States)

    Jones, L A; Hills, P J; Dick, K M; Jones, S P; Bright, P

    2016-02-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification.
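
    As background, sensory gating in the paired-stimulus paradigm named here is usually quantified as the ratio of the P50 amplitude evoked by the second click (S2) over that evoked by the first (S1). A minimal sketch on simulated averaged ERPs; the sampling rate, peak window, and arrays are illustrative assumptions, not this study's pipeline.

```python
import numpy as np

fs = 1000                               # EEG sampling rate (Hz), assumed
rng = np.random.default_rng(3)
# Hypothetical trial-averaged ERPs (0-300 ms), time-locked to clicks S1 and S2;
# the simulated S2 bump is smaller, mimicking intact gating.
erp_s1 = rng.normal(0, 0.2, 300); erp_s1[40:60] += np.hanning(20) * 4.0
erp_s2 = rng.normal(0, 0.2, 300); erp_s2[40:60] += np.hanning(20) * 1.2

def p50_amplitude(erp):
    # Peak amplitude in a 40-80 ms post-stimulus window (a common convention).
    return erp[int(0.040 * fs):int(0.080 * fs)].max()

ratio = p50_amplitude(erp_s2) / p50_amplitude(erp_s1)
print(f"S2/S1 gating ratio: {ratio:.2f} (lower = stronger gating)")
```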

  19. Lesions in the external auditory canal

    Directory of Open Access Journals (Sweden)

    Priyank S Chatra

    2011-01-01

    The external auditory canal (EAC) is an S-shaped osseo-cartilaginous structure that extends from the auricle to the tympanic membrane. Congenital, inflammatory, neoplastic, and traumatic lesions can affect the EAC. High-resolution CT is well suited for the evaluation of the temporal bone, which has a complex anatomy with multiple small structures. In this study, we describe the various lesions affecting the EAC.

  20. Midbrain auditory selectivity to natural sounds.

    Science.gov (United States)

    Wohlgemuth, Melville J; Moss, Cynthia F

    2016-03-01

    This study investigated auditory stimulus selectivity in the midbrain superior colliculus (SC) of the echolocating bat, an animal that relies on hearing to guide its orienting behaviors. Multichannel, single-unit recordings were taken across laminae of the midbrain SC of the awake, passively listening big brown bat, Eptesicus fuscus. Species-specific frequency-modulated (FM) echolocation sound sequences with dynamic spectrotemporal features served as acoustic stimuli along with artificial sound sequences matched in bandwidth, amplitude, and duration but differing in spectrotemporal structure. Neurons in dorsal sensory regions of the bat SC responded selectively to elements within the FM sound sequences, whereas neurons in ventral sensorimotor regions showed broad response profiles to natural and artificial stimuli. Moreover, a generalized linear model (GLM) constructed on responses in the dorsal SC to artificial linear FM stimuli failed to predict responses to natural sounds and vice versa, but the GLM produced accurate response predictions in ventral SC neurons. This result suggests that auditory selectivity in the dorsal extent of the bat SC arises through nonlinear mechanisms, which extract species-specific sensory information. Importantly, auditory selectivity appeared only in responses to stimuli containing the natural statistics of acoustic signals used by the bat for spatial orientation-sonar vocalizations-offering support for the hypothesis that sensory selectivity enables rapid species-specific orienting behaviors. The results of this study are the first, to our knowledge, to show auditory spectrotemporal selectivity to natural stimuli in SC neurons and serve to inform a more general understanding of mechanisms guiding sensory selectivity for natural, goal-directed orienting behaviors.
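
    The cross-prediction logic of this record (a GLM fitted to responses to one stimulus class succeeds or fails at predicting responses to the other) can be sketched with a generic Poisson GLM. The feature matrix, simulated spike counts, and regularization below are illustrative stand-ins, not the authors' model.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(4)
# Hypothetical design matrix: each row holds spectrotemporal features of the
# sound preceding one response bin (e.g., recent energy in 12 frequency bands).
n_bins, n_features = 2000, 12
X = rng.normal(0.0, 1.0, (n_bins, n_features))
true_w = rng.normal(0.0, 0.3, n_features)
spike_counts = rng.poisson(np.exp(X @ true_w))    # simulated SC-like responses

# Fit on one data set, evaluate on held-out data; the study's key test was the
# same logic applied across natural versus artificial stimuli.
glm = PoissonRegressor(alpha=1e-3, max_iter=1000)
glm.fit(X[:1500], spike_counts[:1500])
print("held-out D^2 (fraction of deviance explained):",
      round(glm.score(X[1500:], spike_counts[1500:]), 3))
```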

  1. Response recovery in the locust auditory pathway.

    Science.gov (United States)

    Wirtssohn, Sarah; Ronacher, Bernhard

    2016-01-01

    Temporal resolution and the time courses of recovery from acute adaptation of neurons in the auditory pathway of the grasshopper Locusta migratoria were investigated with a response recovery paradigm. We stimulated with a series of single click and click pair stimuli while performing intracellular recordings from neurons at three processing stages: receptors and first and second order interneurons. The response to the second click was expressed relative to the single click response. This allowed the uncovering of the basic temporal resolution in these neurons. The effect of adaptation increased with processing layer. While neurons in the auditory periphery displayed a steady response recovery after a short initial adaptation, many interneurons showed nonlinear effects: most prominently a long-lasting suppression of the response to the second click in a pair, as well as a gain in response if a click was preceded by another click a few milliseconds earlier. Our results reveal a distributed temporal filtering of input at an early auditory processing stage. This set of specified filters is very likely homologous across grasshopper species and thus forms the neurophysiological basis for extracting relevant information from a variety of different temporal signals. Interestingly, in terms of spike timing precision, neurons at all three processing layers recovered very fast, within 20 ms. Spike waveform analysis of several neuron types did not sufficiently explain the response recovery profiles implemented in these neurons, indicating that temporal resolution in neurons located at several processing layers of the auditory pathway is not necessarily limited by the spike duration and refractory period.

  2. Brainstem auditory evoked response: application in neurology

    Directory of Open Access Journals (Sweden)

    Carlos A. M. Guerreiro

    1982-03-01

    The technique that we use for eliciting brainstem auditory evoked responses (BAERs) is described. BAERs are a non-invasive and reliable clinical test when carefully performed. This test is indicated in the evaluation of disorders which may potentially involve the brainstem, such as coma, multiple sclerosis, posterior fossa tumors, and others. Unsuspected lesions with normal radiologic studies (including CT scan) can be revealed by the BAER.

  3. Cognitive mechanisms associated with auditory sensory gating

    OpenAIRE

    Jones, L. A.; Hills, P. J.; Dick, K. M.; Jones, S. P.; Bright, P.

    2015-01-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants addit...

  4. Stroke caused auditory attention deficits in children

    Directory of Open Access Journals (Sweden)

    Karla Maria Ibraim da Freiria Elias

    2013-01-01

    Full Text Available OBJECTIVE: To verify auditory selective attention in children with stroke. METHODS: Dichotic tests of binaural separation (non-verbal and consonant-vowel) and binaural integration (digits and the Staggered Spondaic Words Test, SSW) were applied to 13 children (7 boys), aged 7 to 16 years, with unilateral stroke confirmed by neurological examination and neuroimaging. RESULTS: Attention performance showed significant differences in comparison to the control group in both kinds of tests. In the non-verbal test, identification at the ear opposite the lesion was diminished in the free recall stage and, in the following stages, a difficulty in directing attention was detected. In the consonant-vowel test, a modification in perceptual asymmetry and difficulty in focusing in the attended stages was found. In the digits and SSW tests, ipsilateral, contralateral and bilateral deficits were detected, depending on the characteristics of the lesions and the demands of the task. CONCLUSION: Stroke caused auditory attention deficits when dealing with simultaneous sources of auditory information.

  5. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  6. The simultaneous perception of auditory–tactile stimuli in voluntary movement

    Science.gov (United States)

    Hao, Qiao; Ogata, Taiki; Ogawa, Ken-ichiro; Kwon, Jinhwan; Miyake, Yoshihiro

    2015-01-01

    The simultaneous perception of multimodal information in the environment during voluntary movement is very important for effective reactions to the environment. Previous studies have found that voluntary movement affects the simultaneous perception of auditory and tactile stimuli. However, the results of these experiments are not completely consistent, and the differences may be attributable to methodological differences in the previous studies. In this study, we investigated the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli using a temporal order judgment task with voluntary movement, involuntary movement, and no movement. To eliminate the potential effect of stimulus predictability and the effect of spatial information associated with large-scale movement in the previous studies, we randomized the interval between the start of movement and the first stimulus, and used small-scale movement. As a result, the point of subjective simultaneity (PSS) during voluntary movement shifted from the tactile stimulus being first during involuntary movement or no movement to the auditory stimulus being first. The just noticeable difference (JND), an indicator of temporal resolution, did not differ across the three conditions. These results indicate that voluntary movement itself affects the PSS in auditory–tactile simultaneous perception, but it does not influence the JND. In the discussion of these results, we suggest that simultaneous perception may be affected by the efference copy. PMID:26441799
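
    The PSS and JND reported above are conventionally read off a psychometric function fitted to the temporal order judgments. The sketch below shows that standard procedure, assuming a cumulative Gaussian and made-up response proportions; it is not the authors' own analysis code.

    ```python
    # PSS and JND from temporal order judgments (illustrative sketch).
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def psychometric(soa, pss, sigma):
        """P('auditory first') as a function of SOA (positive = auditory led)."""
        return norm.cdf(soa, loc=pss, scale=sigma)

    soas = np.array([-120, -80, -40, 0, 40, 80, 120])  # ms
    p_aud_first = np.array([0.05, 0.15, 0.35, 0.55, 0.70, 0.85, 0.95])

    (pss, sigma), _ = curve_fit(psychometric, soas, p_aud_first, p0=[0.0, 50.0])
    jnd = sigma * norm.ppf(0.75)  # half the 25%-75% interval of the fitted curve
    print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
    ```

    The PSS is the asynchrony at which both orders are reported equally often; a lateral shift of the fitted curve between movement conditions with an unchanged slope corresponds exactly to the reported pattern of a changed PSS with a constant JND.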

  7. The simultaneous perception of auditory–tactile stimuli in voluntary movement

    Directory of Open Access Journals (Sweden)

    Qiao eHao

    2015-09-01

    Full Text Available The simultaneous perception of multimodal information in the environment during voluntary movement is very important for effective reactions to the environment. Previous studies have found that voluntary movement affects the simultaneous perception of auditory and tactile stimuli. However, the results of these experiments are not completely consistent, and the differences may be attributable to methodological differences in the previous studies. In this study, we investigated the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli using a temporal order judgment task with voluntary movement, involuntary movement, and no movement. To eliminate the potential effect of stimulus predictability and the effect of spatial information associated with large-scale movement in the previous studies, we randomized the interval between the start of movement and the first stimulus, and used small-scale movement. As a result, the point of subjective simultaneity (PSS) during voluntary movement shifted from the tactile stimulus being first during involuntary movement or no movement to the auditory stimulus being first. The just noticeable difference (JND), an indicator of temporal resolution, did not differ across the three conditions. These results indicate that voluntary movement itself affects the PSS in auditory–tactile simultaneous perception, but it does not influence the JND. In the discussion of these results, we suggest that simultaneous perception may be affected by the efference copy.

  8. Auditory evoked potentials in peripheral vestibular disorder individuals

    Directory of Open Access Journals (Sweden)

    Matas, Carla Gentile

    2011-07-01

    Full Text Available Introduction: The auditory and vestibular systems are located in the same peripheral receptor; however, they enter the CNS and follow different pathways, creating a number of connections and reaching a wide area of the encephalon. Despite following different pathways, some disorders can impair both systems. Tests such as auditory evoked potentials can help establish a diagnosis when vestibular alterations are seen. Objective: To describe auditory evoked potential results in individuals complaining of dizziness or vertigo with peripheral vestibular disorders and in normal individuals with the same complaint. Methods: Short-, middle- and long-latency auditory evoked potentials were recorded as a cross-sectional prospective study. Conclusion: Individuals complaining of dizziness or vertigo can show changes in the BAEP (brainstem auditory evoked potential), MLAEP (middle latency auditory evoked potential) and P300.

  9. Millisecond precision spike timing shapes tactile perception.

    Science.gov (United States)

    Mackevicius, Emily L; Best, Matthew D; Saal, Hannes P; Bensmaia, Sliman J

    2012-10-31

    In primates, the sense of touch has traditionally been considered to be a spatial modality, drawing an analogy to the visual system. In this view, stimuli are encoded in spatial patterns of activity over the sheet of receptors embedded in the skin. We propose that the spatial processing mode is complemented by a temporal one. Indeed, the transduction and processing of complex, high-frequency skin vibrations have been shown to play an important role in tactile texture perception, and the frequency composition of vibrations shapes the evoked percept. Mechanoreceptive afferents innervating the glabrous skin exhibit temporal patterning in their responses, but the importance and behavioral relevance of spike timing, particularly for naturalistic stimuli, remain to be elucidated. Based on neurophysiological recordings from Rhesus macaques, we show that spike timing conveys information about the frequency composition of skin vibrations, both for individual afferents and for afferent populations, and that the temporal fidelity varies across afferent classes. Furthermore, the perception of skin vibrations, measured in human subjects, is better predicted when spike timing is taken into account, and the resolution that best predicts perception matches the optimal resolution of the respective afferent classes. In light of these results, the peripheral representation of complex skin vibrations draws a powerful analogy with the auditory and vibrissal systems.
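
    One common way to ask at what temporal resolution spike trains carry frequency information is to bin the trains at several resolutions and test how well a simple classifier recovers the stimulus. The sketch below is a minimal illustration with synthetic phase-locked spike trains of fixed spike count (so firing rate alone is uninformative); it is not the study's decoding analysis.

    ```python
    # Decoding vibration frequency from spike timing at several resolutions
    # (illustrative sketch with synthetic data).
    import numpy as np

    rng = np.random.default_rng(1)

    def phase_locked_train(freq, n_spikes=40, duration=0.2, jitter=0.001):
        """Synthetic afferent: fixed spike count, spikes locked to random stimulus cycles."""
        cycles = rng.integers(0, int(duration * freq), n_spikes)
        return np.sort(cycles / freq + rng.normal(0.0, jitter, n_spikes))

    def binned(train, bin_width, duration=0.2):
        """Bin a spike train into counts at the given temporal resolution."""
        edges = np.arange(0.0, duration + bin_width, bin_width)
        return np.histogram(train, bins=edges)[0]

    freqs = [50, 100, 200]  # Hz; stand-ins for vibration frequencies
    trials = {f: [phase_locked_train(f) for _ in range(20)] for f in freqs}

    for bin_width in (0.001, 0.005, 0.02, 0.05):  # 1 ms to 50 ms resolution
        correct, total = 0, 0
        for f in freqs:
            for i, train in enumerate(trials[f]):
                x = binned(train, bin_width)
                # leave-one-out nearest neighbor over all other trials
                best_f, best_d = None, np.inf
                for g in freqs:
                    for j, other in enumerate(trials[g]):
                        if (g, j) == (f, i):
                            continue
                        d = np.linalg.norm(x - binned(other, bin_width))
                        if d < best_d:
                            best_f, best_d = g, d
                correct += int(best_f == f)
                total += 1
        print(f"bin width {bin_width * 1000:4.0f} ms: accuracy {correct / total:.2f}")
    ```

    Because every train has the same number of spikes, classification accuracy should fall as the bins grow coarser, mirroring the idea that millisecond-scale timing, not overall rate, carries the frequency information.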

  10. Visual and audiovisual effects of isochronous timing on visual perception and brain activity.

    Science.gov (United States)

    Marchant, Jennifer L; Driver, Jon

    2013-06-01

    Understanding how the brain extracts and combines temporal structure (rhythm) information from events presented to different senses remains unresolved. Many neuroimaging beat perception studies have focused on the auditory domain and show that the presence of a highly regular beat (isochrony) in "auditory" stimulus streams enhances neural responses in a distributed brain network and affects perceptual performance. Here, we acquired functional magnetic resonance imaging (fMRI) measurements of brain activity while healthy human participants performed a visual task on isochronous versus randomly timed "visual" streams, with or without concurrent task-irrelevant sounds. We found that visual detection of higher-intensity oddball targets was better for isochronous than for randomly timed streams, extending previous auditory findings to vision. The impact of isochrony on visual target sensitivity correlated positively with fMRI signal changes not only in visual cortex but also in auditory sensory cortex during audiovisual presentations. Visual isochrony activated a timing-related brain network similar to that previously found primarily in auditory beat perception work. Finally, activity in the multisensory left posterior superior temporal sulcus increased specifically during concurrent isochronous audiovisual presentations. These results indicate that regular isochronous timing can modulate visual processing and that this can also involve multisensory audiovisual brain mechanisms.

  11. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well.

  12. A new test of attention in listening (TAIL) predicts auditory performance.

    Directory of Open Access Journals (Sweden)

    Yu-Xuan Zhang

    Full Text Available Attention modulates auditory perception, but there are currently no simple tests that specifically quantify this modulation. To fill the gap, we developed a new, easy-to-use test of attention in listening (TAIL) based on reaction time. On each trial, two clearly audible tones were presented sequentially, either at the same or different ears. The frequency of the tones was also either the same or different (by at least two critical bands). When the task required same/different frequency judgments, presentation at the same ear significantly speeded responses and reduced errors. A same/different ear (location) judgment was likewise facilitated by keeping tone frequency constant. Perception was thus influenced by involuntary orienting of attention along the task-irrelevant dimension. When information in the two stimulus dimensions was congruent (same-frequency same-ear, or different-frequency different-ear), responses were faster and more accurate than when they were incongruent (same-frequency different-ear, or different-frequency same-ear), suggesting the involvement of executive control to resolve conflicts. In total, the TAIL yielded five independent outcome measures: (1) baseline reaction time, indicating information processing efficiency, (2) involuntary orienting of attention to frequency and (3) location, and (4) conflict resolution for frequency and (5) location. Processing efficiency and conflict resolution accounted for up to 45% of individual variances in the low- and high-threshold variants of three psychoacoustic tasks assessing temporal and spectral processing. Involuntary orientation of attention to the irrelevant dimension did not correlate with perceptual performance on these tasks. Given that TAIL measures are unlikely to be limited by perceptual sensitivity, we suggest that the correlations reflect modulation of perceptual performance by attention. The TAIL thus has the power to identify and separate contributions of different components of
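
    The five outcome measures lend themselves to a compact scoring sketch. The example below assumes a trial table with hypothetical column names and toy reaction times; the published scoring procedure may differ in detail.

    ```python
    # Scoring the five TAIL outcome measures from trial-level data (illustrative sketch).
    import pandas as pd

    # Hypothetical trial table: task-relevant dimension, whether the irrelevant
    # dimension repeated, whether the two dimensions were congruent, and RT.
    trials = pd.DataFrame({
        "task": ["frequency"] * 4 + ["location"] * 4,
        "irrelevant_same": [True, False, True, False] * 2,
        "congruent": [True, True, False, False] * 2,
        "rt_ms": [420, 470, 510, 530, 430, 480, 520, 540],
    })

    def rt_cost(df, col):
        """Mean RT when `col` is False minus mean RT when it is True."""
        return df.loc[~df[col], "rt_ms"].mean() - df.loc[df[col], "rt_ms"].mean()

    freq_task = trials[trials["task"] == "frequency"]
    loc_task = trials[trials["task"] == "location"]

    baseline_rt = trials["rt_ms"].mean()                        # (1) processing efficiency
    orienting_location = rt_cost(freq_task, "irrelevant_same")  # orienting to location
    orienting_frequency = rt_cost(loc_task, "irrelevant_same")  # orienting to frequency
    conflict_frequency = rt_cost(freq_task, "congruent")        # conflict, frequency task
    conflict_location = rt_cost(loc_task, "congruent")          # conflict, location task

    print(baseline_rt, orienting_location, orienting_frequency,
          conflict_frequency, conflict_location)
    ```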

  13. Depth-Dependent Temporal Response Properties in Core Auditory Cortex

    OpenAIRE

    Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.

    2011-01-01

    The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, an...

  14. [Auditory guidance systems for the visually impaired people].

    Science.gov (United States)

    He, Jing; Nie, Min; Luo, Lan; Tong, Shanbao; Niu, Jinhai; Zhu, Yisheng

    2010-04-01

    Visually impaired people face many inconveniences because of the loss of vision, and scientists are therefore trying to design various guidance systems to improve their lives. Based on sensory substitution, auditory guidance has become an interesting topic in the field of biomedical engineering. In this paper, we present a state-of-the-art review of auditory guidance systems. Although many technical challenges remain, auditory guidance systems could be a useful alternative for visually impaired people.

  15. Auditory cortex basal activity modulates cochlear responses in chinchillas.

    Directory of Open Access Journals (Sweden)

    Alex León

    Full Text Available BACKGROUND: The auditory efferent system has unique neuroanatomical pathways that connect the cerebral cortex with sensory receptor cells. Pyramidal neurons located in layers V and VI of the primary auditory cortex constitute descending projections to the thalamus, inferior colliculus, and even directly to the superior olivary complex and to the cochlear nucleus. Efferent pathways are connected to the cochlear receptor by the olivocochlear system, which innervates outer hair cells and auditory nerve fibers. The functional role of the cortico-olivocochlear efferent system remains debated. We hypothesized that auditory cortex basal activity modulates cochlear and auditory-nerve afferent responses through the efferent system. METHODOLOGY/PRINCIPAL FINDINGS: Cochlear microphonics (CM), auditory-nerve compound action potentials (CAP) and auditory cortex evoked potentials (ACEP) were recorded in twenty anesthetized chinchillas, before, during and after auditory cortex deactivation by two methods: lidocaine microinjections or cortical cooling with cryoloops. Auditory cortex deactivation induced a transient reduction in ACEP amplitudes in fifteen animals (deactivation experiments) and a permanent reduction in five chinchillas (lesion experiments). We found significant changes in the amplitude of the CM in both types of experiments, the most common effect being a CM decrease, found in fifteen animals. Concomitant with the CM amplitude changes, we found CAP increases in seven chinchillas and CAP reductions in thirteen animals. Although ACEP amplitudes recovered completely after ninety minutes in the deactivation experiments, only partial recovery was observed in the magnitudes of the cochlear responses. CONCLUSIONS/SIGNIFICANCE: These results show that blocking ongoing auditory cortex activity modulates CM and CAP responses, demonstrating that cortico-olivocochlear circuits regulate auditory nerve and cochlear responses through a basal efferent tone. The diversity of the

  16. Using Facebook to Reach People Who Experience Auditory Hallucinations

    OpenAIRE

    Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging...

  17. Myosin VIIA, important for human auditory function, is necessary for Drosophila auditory organ development.

    Directory of Open Access Journals (Sweden)

    Sokol V Todi

    Full Text Available BACKGROUND: Myosin VIIA (MyoVIIA) is an unconventional myosin necessary for vertebrate audition [1]-[5]. Human auditory transduction occurs in sensory hair cells with a staircase-like arrangement of apical protrusions called stereocilia. In these hair cells, MyoVIIA maintains stereocilia organization [6]. Severe mutations in the Drosophila MyoVIIA orthologue, crinkled (ck), are semi-lethal [7] and lead to deafness by disrupting antennal auditory organ (Johnston's Organ, JO) organization [8]. ck/MyoVIIA mutations result in apical detachment of auditory transduction units (scolopidia) from the cuticle that transmits antennal vibrations as mechanical stimuli to JO. PRINCIPAL FINDINGS: Using flies expressing GFP-tagged NompA, a protein required for auditory organ organization in Drosophila, we examined the role of ck/MyoVIIA in JO development and maintenance through confocal microscopy and extracellular electrophysiology. Here we show that ck/MyoVIIA is necessary early in the developing antenna for the initial apical attachment of the scolopidia to the articulating joint. ck/MyoVIIA is also necessary to maintain scolopidial attachment throughout adulthood. Moreover, in the adult JO, ck/MyoVIIA genetically interacts with the non-muscle myosin II (through its regulatory light chain protein) and with the myosin-binding subunit of myosin II phosphatase. Such genetic interactions have not previously been observed in scolopidia. These factors are therefore candidates for modulating MyoVIIA activity in vertebrates. CONCLUSIONS: Our findings indicate that MyoVIIA plays evolutionarily conserved roles in auditory organ development and maintenance in invertebrates and vertebrates, enhancing our understanding of auditory organ development and function, as well as providing significant clues for future research.

  18. Effect of auditory training on the middle latency response in children with (central) auditory processing disorder.

    Science.gov (United States)

    Schochat, E; Musiek, F E; Alonso, R; Ogata, J

    2010-08-01

    The purpose of this study was to determine the middle latency response (MLR) characteristics (latency and amplitude) in children with (central) auditory processing disorder [(C)APD], categorized as such by their performance on the central auditory test battery, and the effects of these characteristics after auditory training. Thirty children with (C)APD, 8 to 14 years of age, were tested using the MLR-evoked potential. This group was then enrolled in an 8-week auditory training program and then retested at the completion of the program. A control group of 22 children without (C)APD, composed of relatives and acquaintances of those involved in the research, underwent the same testing at equal time intervals, but were not enrolled in the auditory training program. Before auditory training, MLR results for the (C)APD group exhibited lower C3-A1 and C3-A2 wave amplitudes in comparison to the control group [C3-A1, 0.84 microV (mean), 0.39 (SD--standard deviation) for the (C)APD group and 1.18 microV (mean), 0.65 (SD) for the control group; C3-A2, 0.69 microV (mean), 0.31 (SD) for the (C)APD group and 1.00 microV (mean), 0.46 (SD) for the control group]. After training, the MLR C3-A1 [1.59 microV (mean), 0.82 (SD)] and C3-A2 [1.24 microV (mean), 0.73 (SD)] wave amplitudes of the (C)APD group significantly increased, so that there was no longer a significant difference in MLR amplitude between (C)APD and control groups. These findings suggest progress in the use of electrophysiological measurements for the diagnosis and treatment of (C)APD.
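
    As a quick plausibility check, the pre-training group difference can be recomputed from the summary statistics quoted above. The sketch below assumes a Welch two-sample t-test on the C3-A1 amplitudes, which may not match the authors' actual statistical analysis.

    ```python
    # Re-checking the reported pre-training group difference from summary statistics
    # (C3-A1 amplitudes, in microvolts); assumes a Welch two-sample t-test.
    from scipy.stats import ttest_ind_from_stats

    t, p = ttest_ind_from_stats(
        mean1=0.84, std1=0.39, nobs1=30,  # (C)APD group, n = 30
        mean2=1.18, std2=0.65, nobs2=22,  # control group, n = 22
        equal_var=False,                  # Welch's correction
    )
    print(f"t = {t:.2f}, p = {p:.4f}")
    ```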

  19. Effect of auditory training on the middle latency response in children with (central) auditory processing disorder

    Directory of Open Access Journals (Sweden)

    E. Schochat

    2010-08-01

    Full Text Available The purpose of this study was to determine the middle latency response (MLR) characteristics (latency and amplitude) in children with (central) auditory processing disorder [(C)APD], categorized as such by their performance on the central auditory test battery, and the effects of these characteristics after auditory training. Thirty children with (C)APD, 8 to 14 years of age, were tested using the MLR-evoked potential. This group was then enrolled in an 8-week auditory training program and then retested at the completion of the program. A control group of 22 children without (C)APD, composed of relatives and acquaintances of those involved in the research, underwent the same testing at equal time intervals, but were not enrolled in the auditory training program. Before auditory training, MLR results for the (C)APD group exhibited lower C3-A1 and C3-A2 wave amplitudes in comparison to the control group [C3-A1, 0.84 µV (mean), 0.39 (SD - standard deviation) for the (C)APD group and 1.18 µV (mean), 0.65 (SD) for the control group; C3-A2, 0.69 µV (mean), 0.31 (SD) for the (C)APD group and 1.00 µV (mean), 0.46 (SD) for the control group]. After training, the MLR C3-A1 [1.59 µV (mean), 0.82 (SD)] and C3-A2 [1.24 µV (mean), 0.73 (SD)] wave amplitudes of the (C)APD group significantly increased, so that there was no longer a significant difference in MLR amplitude between (C)APD and control groups. These findings suggest progress in the use of electrophysiological measurements for the diagnosis and treatment of (C)APD.

  20. Age-related decline of the cytochrome c oxidase subunit expression in the auditory cortex of the mimetic aging rat model associated with the common deletion.

    Science.gov (United States)

    Zhong, Yi; Hu, Yujuan; Peng, Wei; Sun, Yu; Yang, Yang; Zhao, Xueyan; Huang, Xiang; Zhang, Honglian; Kong, Weijia

    2012-12-01

    The age-related deterioration in the central auditory system is well known to impair the abilities of sound localization and speech perception. However, the mechanisms involved in age-related central auditory deficiency remain unclear. Previous studies have demonstrated that mitochondrial DNA (mtDNA) deletions accumulate with age in the auditory system, and a cytochrome c oxidase (CcO) deficiency has been proposed to be a causal factor in the age-related decline in mitochondrial respiratory activity. This study was designed to explore the changes in CcO activity and to investigate the possible relationship between the mtDNA common deletion (CD) and CcO activity, as well as the mRNA expression of CcO subunits, in the auditory cortex of D-galactose (D-gal)-induced mimetic aging rats at different ages. Moreover, we explored whether peroxisome proliferator-activated receptor-γ coactivator 1α (PGC-1α), nuclear respiratory factor 1 (NRF-1) and mitochondrial transcription factor A (TFAM) were involved in the changes of nuclear- and mitochondrial-encoded CcO subunits in the auditory cortex during aging. Our data demonstrated that D-gal-induced mimetic aging rats exhibited an accelerated accumulation of the CD and a gradual decline in CcO activity in the auditory cortex during the aging process. The reduction in CcO activity was correlated with the level of CD load in the auditory cortex. The mRNA expression of CcO subunit III was reduced significantly with age in the D-gal-induced mimetic aging rats. In contrast, the decline in the mRNA expression of subunits I and IV was relatively minor. Additionally, significant increases in the mRNA and protein levels of PGC-1α, NRF-1 and TFAM were observed in the auditory cortex of D-gal-induced mimetic aging rats with aging. These findings suggest that the accelerated accumulation of the CD in the auditory cortex may induce a substantial decline in CcO subunit III and lead to a significant decline in the Cc

  1. Sound objects – Auditory objects – Musical objects

    DEFF Research Database (Denmark)

    Hjortkjær, Jens

    2015-01-01

    The auditory system transforms patterns of sound energy into perceptual objects but the precise definition of an ‘auditory object’ is much debated. In the context of music listening, Pierre Schaeffer argued that ‘sound objects’ are the fundamental perceptual units in ‘musical objects......’. In this paper, I review recent neurocognitive research suggesting that the auditory system is sensitive to structural information about real-world objects. Instead of focusing solely on perceptual sound features as determinants of auditory objects, I propose that real-world object properties are inherent...

  2. Sound objects – Auditory objects – Musical objects

    DEFF Research Database (Denmark)

    Hjortkjær, Jens

    2016-01-01

    The auditory system transforms patterns of sound energy into perceptual objects but the precise definition of an ‘auditory object’ is much debated. In the context of music listening, Pierre Schaeffer argued that ‘sound objects’ are the fundamental perceptual units in ‘musical objects......’. In this paper, I review recent neurocognitive research suggesting that the auditory system is sensitive to structural information about real-world objects. Instead of focusing solely on perceptual sound features as determinants of auditory objects, I propose that real-world object properties are inherent...

  3. Extrinsic sound stimulations and development of periphery auditory synapses

    Institute of Scientific and Technical Information of China (English)

    Kun Hou; Shiming Yang; Ke Liu

    2015-01-01

    The development of auditory synapses is a key process for the maturation of hearing function. However, it is still under debate whether the development of auditory synapses is dominated by acquired sound stimulation. In this review, we summarize relevant publications from recent decades to address this issue. Most reported data suggest that extrinsic sound stimulation does affect, but does not govern, the development of peripheral auditory synapses. Overall, peripheral auditory synapses develop and mature according to intrinsic mechanisms that build up the synaptic connections between sensory neurons and/or interneurons.

  4. Evaluation of peripheral compression and auditory nerve fiber intensity coding using auditory steady-state responses

    DEFF Research Database (Denmark)

    Encina Llamas, Gerard; M. Harte, James; Epp, Bastian

    2015-01-01

    The compressive nonlinearity of the auditory system is assumed to be an epiphenomenon of a healthy cochlea and, particularly, of outer-hair cell function. Another ability of the healthy auditory system is to enable communication in acoustical environments with high-level background noises....... Evaluation of these properties provides information about the health state of the system. It has been shown that a loss of outer hair cells leads to a reduction in peripheral compression. It has also recently been shown in animal studies that noise over-exposure, producing temporary threshold shifts, can...

  5. Participation of the cerebellum in auditory processing

    Directory of Open Access Journals (Sweden)

    Patrícia Maria Sens

    2007-04-01

    Full Text Available The cerebellum was traditionally viewed as an organ coordinating motor function, but it is currently considered an important center for sensory integration and for coordinating several phases of the cognitive process. AIM: To systematize the literature on the participation of the cerebellum in auditory perception. METHODS: Animal studies on the physiology and anatomy of the auditory pathways of the cerebellum were selected, along with studies in humans on various functions of the cerebellum in auditory perception. The literature findings were discussed; there is evidence that the cerebellum participates in the following cognitive functions related to hearing: verbal generation; auditory processing; auditory attention; auditory memory; abstract reasoning; timing; problem solving; sensory discrimination; sensory information; language processing; linguistic operations. CONCLUSION: Information on the structures, functions and auditory pathways of the cerebellum was found to be incomplete.

  6. Human Engineer’s Guide to Auditory Displays. Volume 1. Elements of Perception and Memory Affecting Auditory Displays.

    Science.gov (United States)

    1984-08-01

    performance to a greater extent. Similar results involving melodic transposition with, and without, distortions in interval sizes were obtained by...Dowling and Fujitani (1971). Even the rhythmic information alone in a monotone may account for some melodic recognition, as shown by White (1960) and...1975), timbre disparity (Deutsch, 1978), unidirectional continuance of patterned change (Divenyi and Hirsh, 1974, 1975), and rhythm (Fraisse, 1978

  7. Organization of the auditory brainstem in a lizard, Gekko gecko. I. Auditory nerve, cochlear nuclei, and superior olivary nuclei

    DEFF Research Database (Denmark)

    Tang, Y. Z.; Christensen-Dalsgaard, J.; Carr, C. E.

    2012-01-01

    We used tract tracing to reveal the connections of the auditory brainstem in the Tokay gecko (Gekko gecko). The auditory nerve has two divisions, a rostroventrally directed projection of mid- to high best-frequency fibers to the nucleus angularis (NA) and a more dorsal and caudal projection of lo...... of auditory connections in lizards and archosaurs but also different processing of low- and high-frequency information in the brainstem. J. Comp. Neurol. 520:1784-1799, 2012. (C) 2011 Wiley Periodicals, Inc...

  8. Visual cortex and auditory cortex activation in early binocularly blind macaques: A BOLD-fMRI study using auditory stimuli.

    Science.gov (United States)

    Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin

    2017-04-15

    Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine the activation in the visual and auditory cortices of each macaque while being tested using pure tones as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with the presence of more c-Fos-positive cells in the bilateral visual and auditory cortices of group B compared with group A. These results suggest that the visual cortices of binocularly blind macaques can be reorganized to process auditory stimuli after visual deprivation, and this effect is more obvious in the right than the left visual cortex. These results indicate the establishment of cross-modal plasticity within the visual and auditory cortices.

  9. Use of transcranial direct current stimulation for the treatment of auditory hallucinations of schizophrenia – a systematic review

    Directory of Open Access Journals (Sweden)

    Pondé PH

    2017-02-01

    Full Text Available Pedro H Pondé, Eduardo P de Sena, Joan A Camprodon, Arão Nogueira de Araújo, Mário F Neto, Melany DiBiasi, Abrahão Fontes Baptista, Lidia MVR Moura, Camila Cosmo (affiliations include the Bahiana School of Medicine and Public Health and the Federal University of Bahia, Salvador, Bahia, Brazil; Massachusetts General Hospital and Spaulding Rehabilitation Hospital, Harvard Medical School, Boston, MA, USA; and the Bahia State Health Department (SESAB), Salvador, Bahia, Brazil) Introduction: Auditory hallucinations are defined as experiences of auditory perception in the absence of a provoking external stimulus. They are the most prevalent symptoms of schizophrenia, with a high capacity for chronicity and refractoriness during the course of the disease. Transcranial direct current stimulation (tDCS), a safe, portable, and inexpensive neuromodulation technique, has emerged as a promising treatment for the management of auditory hallucinations. Objective: The aim of this study is to analyze the level of evidence in the literature available for the use of tDCS as a treatment for auditory hallucinations in schizophrenia. Methods: A systematic review was performed

  10. Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders

    Science.gov (United States)

    Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.

    2013-01-01

    Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their typically developing peers. To shed light on possible differences in the maturation of audiovisual speech integration, we tested younger (ages 6-12) and older (ages 13-18) children with and without ASD on a task indexing such multisensory integration. To do this, we used the McGurk effect, in which the pairing of incongruent auditory and visual speech tokens typically results in the perception of a fused percept distinct from the auditory and visual signals, indicative of active integration of the two channels conveying speech information. Whereas little difference was seen in audiovisual speech processing (i.e., reports of McGurk fusion) between the younger ASD and TD groups, there was a significant difference at the older ages. While TD controls exhibited an increased rate of fusion (i.e., integration) with age, children with ASD failed to show this increase. These data suggest arrested development of audiovisual speech integration in ASD. The results are discussed in light of the extant literature and necessary next steps in research. PMID:24218241

  11. Pitch perception beyond the traditional existence region of pitch

    DEFF Research Database (Denmark)

    Oxenham, Andrew J.; Micheyl, Christophe; Keebler, Michael V.

    2011-01-01

    Humans’ ability to recognize musical melodies is generally limited to pure-tone frequencies below 4 or 5 kHz. This limit coincides with the highest notes on modern musical instruments and is widely believed to reflect the upper limit of precise stimulus-driven spike timing in the auditory nerve. We...... tested the upper limits of pitch and melody perception in humans using pure and harmonic complex tones, such as those produced by the human voice and musical instruments, in melody recognition and pitch-matching tasks. We found that robust pitch perception can be elicited by harmonic complex tones...... with fundamental frequencies below 2 kHz, even when all of the individual harmonics are above 6 kHz—well above the currently accepted existence region of pitch and above the currently accepted limits of neural phase locking. The results suggest that the perception of musical pitch at high frequencies...
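
    The stimuli at the heart of this result are easy to construct. Below is a minimal synthesis sketch of a harmonic complex whose fundamental lies below 2 kHz while every component lies above 6 kHz; the specific F0, harmonic numbers, and duration are illustrative choices, not the study's parameters.

    ```python
    # Harmonic complex with F0 below 2 kHz but all components above 6 kHz
    # (illustrative synthesis sketch).
    import numpy as np

    fs = 48000                # sample rate (Hz)
    f0 = 1200.0               # fundamental frequency (Hz), below 2 kHz
    harmonics = range(6, 11)  # components at 7.2-12 kHz, all above 6 kHz
    t = np.arange(int(0.3 * fs)) / fs

    tone = sum(np.sin(2 * np.pi * n * f0 * t) for n in harmonics)
    tone /= np.max(np.abs(tone))  # normalize to unit peak

    # The stimulus contains no energy at f0 itself, so a pitch at 1.2 kHz must be
    # inferred from the spacing of the high harmonics (a "missing fundamental").
    ```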

  12. Effects of sequential streaming on auditory masking using psychoacoustics and auditory evoked potentials.

    Science.gov (United States)

    Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat

    2012-03-01

    The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed that adding precursors comprising only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects, preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands in the streaming conditions may also contribute to the reduction of CMR in the stream conditions; however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in the EEG data using the MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in the CMR experiments. Dipole source analysis indicated that different cortical regions are involved in processing auditory streaming and modulation detection. In particular, neural sources for processing auditory streaming include cortical regions involved in decision-making.

  13. Time computations in anuran auditory systems

    Directory of Open Access Journals (Sweden)

    Gary J Rose

    2014-05-01

    Full Text Available Temporal computations are important in the acoustic communication of anurans. In many cases, calls between closely related species are nearly identical spectrally but differ markedly in temporal structure. Depending on the species, calls can differ in pulse duration, shape and/or rate (i.e., amplitude modulation), direction and rate of frequency modulation, and overall call duration. Also, behavioral studies have shown that anurans are able to discriminate between calls that differ in temporal structure. In the peripheral auditory system, temporal information is coded primarily in the spatiotemporal patterns of activity of auditory-nerve fibers. However, major transformations in the representation of temporal information occur in the central auditory system. In this review I summarize recent advances in understanding how temporal information is represented in the anuran midbrain, with particular emphasis on mechanisms that underlie selectivity for pulse duration and pulse rate (i.e., intervals between onsets of successive pulses). Two types of neurons have been identified that show selectivity for pulse rate: long-interval cells respond well to slow pulse rates but fail to spike or respond phasically to fast pulse rates; conversely, interval-counting neurons respond to intermediate or fast pulse rates, but only after a threshold number of pulses, presented at optimal intervals, have occurred. Duration selectivity is manifest as short-pass, band-pass or long-pass tuning. Whole-cell patch recordings, in vivo, suggest that excitation and inhibition are integrated in diverse ways to generate temporal selectivity. In many cases, activity-related enhancement or depression of excitatory or inhibitory processes appears to contribute to selective responses.
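
    The interval-counting behavior described above can be captured by a very small toy model: a unit that fires only after a threshold number of consecutive inter-pulse intervals falling inside its preferred range, and that resets on any deviant interval. The parameters below are illustrative, not fitted to anuran data.

    ```python
    # Toy model of an interval-counting neuron (illustrative sketch).
    import numpy as np

    def interval_counting_response(pulse_times, lo=8.0, hi=12.0, threshold=3):
        """Return spike times of a model neuron given pulse onset times (ms)."""
        spikes, count = [], 0
        for prev, cur in zip(pulse_times, pulse_times[1:]):
            if lo <= cur - prev <= hi:    # interval within the preferred range
                count += 1
                if count >= threshold:    # enough consecutive optimal intervals
                    spikes.append(cur)
            else:
                count = 0                 # any deviant interval resets the count
        return spikes

    fast_train = list(np.arange(0, 100, 10.0))  # 10 ms intervals: optimal
    slow_train = list(np.arange(0, 100, 25.0))  # 25 ms intervals: too slow

    print(interval_counting_response(fast_train))  # fires from the 4th pulse onward
    print(interval_counting_response(slow_train))  # never reaches threshold -> []
    ```

    A long-interval cell would be the complementary sketch: it responds as long as intervals stay above its preferred minimum and is suppressed when they fall below it.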

  14. Tinnitus suppression by electric stimulation of the auditory nerve

    Directory of Open Access Journals (Sweden)

    Janice Erica Chang

    2012-03-01

    Full Text Available Electric stimulation of the auditory nerve via a cochlear implant (CI) has been observed to suppress tinnitus, but the parameters of an effective electric stimulus remain unexplored. Here we used CI research processors to systematically vary the pulse rate, electrode place, and current amplitude of electric stimuli, and measured their effects on tinnitus loudness and stimulus loudness as a function of stimulus duration. Thirteen tinnitus subjects who used CIs were tested, with 9 (70%) being Responders, who achieved greater than 30% tinnitus loudness reduction in response to at least one stimulation condition, and the remaining 4 (30%) being Non-Responders, who had less than 30% tinnitus loudness reduction in response to any stimulus condition tested. Despite large individual variability, several interesting relationships were observed between stimulation parameters, tinnitus characteristics, and tinnitus suppression. If a subject's tinnitus was suppressed by one stimulus, then it was more likely to be suppressed by another stimulus. If the tinnitus contained a pulsating component, then it was more likely to be suppressed by a given combination of stimulus parameters than tinnitus without such components. There was also a dissociation between the subjects' clinical speech processor and our research processor in terms of their effectiveness in tinnitus suppression. Finally, an interesting dichotomy was observed between loudness adaptation to electric stimuli and their effects on tinnitus loudness, with the Responders exhibiting higher degrees of loudness adaptation than the Non-Responders. Although the mechanisms underlying these observations remain to be resolved, their clinical implications are clear: when using a CI to manage tinnitus, the clinical processor that is optimized for speech perception needs to be customized for optimal tinnitus suppression.
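
    The Responder/Non-Responder split described above reduces to a simple criterion over conditions. The sketch below assumes numeric loudness ratings and hypothetical condition labels; it only illustrates the greater-than-30% reduction rule, not the study's code.

    ```python
    # Responder classification by the >30% tinnitus loudness reduction rule
    # (illustrative sketch; condition names and ratings are hypothetical).
    def is_responder(baseline_loudness, loudness_by_condition, criterion=0.30):
        """True if any condition yields more than `criterion` fractional reduction."""
        return any(
            (baseline_loudness - loudness) / baseline_loudness > criterion
            for loudness in loudness_by_condition.values()
        )

    subject = {"low-rate apical": 6.0, "high-rate basal": 3.5, "mid-rate": 5.5}
    print(is_responder(baseline_loudness=7.0, loudness_by_condition=subject))  # True
    ```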

  15. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Directory of Open Access Journals (Sweden)

    Eric Olivier Boyer

    2013-04-01

    Full Text Available Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements were studied in normal human subjects as they pointed towards unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short durations of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes of the acoustic cues due to changes in head orientation in order to support online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space.
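
    Localization error in this kind of pointing task is typically quantified as the horizontal-plane angle between the target direction and the final hand direction, both measured from the starting position. The sketch below assumes simple 3-D coordinates from the motion tracker; the coordinate values are stand-ins, not the study's data.

    ```python
    # Horizontal-plane pointing error from tracked positions (illustrative sketch).
    import numpy as np

    def azimuth_error(start, hand_end, target):
        """Unsigned angular error (degrees) in the horizontal (x, y) plane."""
        v_hand = np.asarray(hand_end[:2]) - np.asarray(start[:2])
        v_target = np.asarray(target[:2]) - np.asarray(start[:2])
        cos_a = np.dot(v_hand, v_target) / (
            np.linalg.norm(v_hand) * np.linalg.norm(v_target))
        return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

    start = (0.0, 0.0, 0.0)        # hand starting position (m)
    target = (0.3, 0.5, 0.0)       # loudspeaker position
    hand_end = (0.25, 0.52, 0.02)  # measured end point of the pointing movement

    print(f"azimuth error: {azimuth_error(start, hand_end, target):.1f} deg")
    ```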

  16. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Science.gov (United States)

    Boyer, Eric O.; Babayan, Bénédicte M.; Bevilacqua, Frédéric; Noisternig, Markus; Warusfel, Olivier; Roby-Brami, Agnes; Hanneton, Sylvain; Viaud-Delmon, Isabelle

    2013-01-01

    Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements were studied in normal human subjects as they pointed toward unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short durations of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes of the acoustic cues due to changes in head orientation in order to support online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space. PMID:23626532

  17. Multiprofessional committee on auditory health: COMUSA.

    Science.gov (United States)

    Lewis, Doris Ruthy; Marone, Silvio Antonio Monteiro; Mendes, Beatriz C A; Cruz, Oswaldo Laercio Mendonça; Nóbrega, Manoel de

    2010-01-01

    Created in 2007, COMUSA is a multiprofessional committee comprising speech therapy, otology, otorhinolaryngology and pediatrics, with the aim of debating and endorsing auditory health actions for neonates, infants, preschool and school-age children, adolescents, adults and elderly persons. COMUSA includes representatives of the Brazilian Audiology Academy (Academia Brasileira de Audiologia or ABA), the Brazilian Otorhinolaryngology and Cervicofacial Surgery Association (Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico Facial or ABORL), the Brazilian Phonoaudiology Society (Sociedade Brasileira de Fonoaudiologia or SBFa), the Brazilian Otology Society (Sociedade Brasileira de Otologia or SBO), and the Brazilian Pediatrics Society (Sociedade Brasileira de Pediatria or SBP).

  18. Musical and auditory hallucinations: A spectrum.

    Science.gov (United States)

    E Fischer, Corinne; Marchie, Anthony; Norris, Mireille

    2004-02-01

    Musical hallucinosis is a rare and poorly understood clinical phenomenon. While an association appears to exist between this phenomenon and organic brain pathology, aging, and sensory impairment, the precise association remains unclear. The authors present two cases of musical hallucinosis, both in elderly patients with mild-to-moderate cognitive impairment and mild-to-moderate hearing loss, who subsequently developed auditory hallucinations and, in one case, command hallucinations. The literature on musical hallucinosis is reviewed, and a theory of the development of musical hallucinations is proposed.

  19. Cancer of the external auditory canal

    DEFF Research Database (Denmark)

    Nyrop, Mette; Grøntved, Aksel

    2002-01-01

    OBJECTIVE: To evaluate the outcome of surgery for cancer of the external auditory canal and relate this to the Pittsburgh staging system used both on squamous cell carcinoma and non-squamous cell carcinoma. DESIGN: Retrospective case series of all patients who had surgery between 1979 and 2000....... PATIENTS: Ten women and 10 men with previously untreated primary cancer. Median age at diagnosis was 67 years (range, 31-87 years). Survival data included 18 patients with at least 2 years of follow-up or recurrence. INTERVENTION: Local canal resection or partial temporal bone resection. MAIN OUTCOME...

  20. CAVERNOUS HEMANGIOMA OF THE INTERNAL AUDITORY CANAL

    Directory of Open Access Journals (Sweden)

    Mohammad Hossein Hekmatara

    1993-06-01

    Full Text Available Cavernous hemangioma is a rare benign tumor of the internal auditory canal (IAC), of which fourteen cases have been reported so far. Tinnitus and progressive sensorineural hearing loss (SNHL) are the chief complaints of the patients. Audiological and radiological evaluations, CT scan, and magnetic resonance imaging (MRI) studies are helpful in diagnosis. The only choice of treatment is surgery, with an elective transmastoid translabyrinthine approach; if the tumor is very large, the retrosigmoid approach is the method of choice.