WorldWideScience

Sample records for auditory perception

  1. Auditory Spatial Perception: Auditory Localization

    Science.gov (United States)

    2012-05-01

the surrounding space and the location and position of our own body within it. Thus, it is the multisensory awareness of being immersed in a specific...improves situational awareness, speech perception, and sound source identification in the presence of other sound sources (e.g., Bronkhorst, 2000; Kidd et...ventriloquism effect (VE) (Howard and Templeton, 1966), in which the listener perceives the ventriloquist's speech as coming from the ventriloquist's dummy. The

  2. The Perception of Auditory Motion

    Science.gov (United States)

    Leung, Johahn

    2016-01-01

The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  3. Auditory adaptation improves tactile frequency perception

    NARCIS (Netherlands)

    Crommett, L.E.; Pérez Bellido, A.; Yau, J.M.

    2017-01-01

Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals...

  4. Perception of Complex Auditory Patterns.

    Science.gov (United States)

    1987-11-02

and Piercy, M. (1973). Defects of non-verbal auditory perception in children with developmental aphasia. Nature (London), 241, 468-469. Watson, C.S...Hearing and Communication Laboratory, Department of Speech and Hearing Sciences, Indiana University, Bloomington, Indiana 47405. Final Technical Report, Air Force Office of Scientific Research, AFOSR-84-0337, September 1, 1984 to August 31, 1987. Hearing and Communication Laboratory

  5. Auditory event files: integrating auditory perception and action planning.

    Science.gov (United States)

    Zmigrod, Sharon; Hommel, Bernhard

    2009-02-01

The features of perceived objects are processed in distinct neural pathways, which calls for mechanisms that integrate the distributed information into coherent representations (the binding problem). Recent studies of sequential effects have demonstrated feature binding not only in perception, but also across (visual) perception and action planning. We investigated whether comparable effects can be obtained in and across auditory perception and action. The results from two experiments revealed effects indicative of spontaneous integration of auditory features (pitch and loudness, pitch and location), as well as evidence for audio-manual stimulus-response integration. Even though integration takes place spontaneously, features related to task-relevant stimulus or response dimensions are more likely to be integrated. Moreover, integration seems to follow a temporal overlap principle, with features coded close in time being more likely to be bound together. Taken together, the findings are consistent with the idea of episodic event files integrating perception and action plans.

  6. Auditory environmental context affects visual distance perception.

    Science.gov (United States)

    Etchemendy, Pablo E; Abregú, Ezequiel; Calcagno, Esteban R; Eguia, Manuel C; Vechiatti, Nilda; Iasi, Federico; Vergara, Ramiro O

    2017-08-03

In this article, we show that visual distance perception (VDP) is influenced by the auditory environmental context through reverberation-related cues. We performed two VDP experiments in two dark rooms with extremely different reverberation times: an anechoic chamber and a reverberant room. Subjects assigned to the reverberant room perceived the targets as farther away than did subjects assigned to the anechoic chamber. We also found a positive correlation between the maximum perceived distance and the auditorily perceived room size. In a second experiment, the subjects of Experiment 1 were interchanged between rooms. We found that subjects preserved their responses from the previous experiment provided these were compatible with their present perception of the environment; if not, perceived distance was biased towards the auditorily perceived boundaries of the room. The results of both experiments show that the auditory environment can influence VDP, presumably through reverberation cues related to the perception of room size.

  7. McGurk illusion recalibrates subsequent auditory perception

    NARCIS (Netherlands)

    Lüttke, C.S.; Ekman, M.; Gerven, M.A.J. van; Lange, F.P. de

    2016-01-01

Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, in which an auditory /aba/ and a visual /aga/ are merged into the percept 'ada'. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we...

  8. Attention within Auditory Word Perception.

    Science.gov (United States)

    1985-11-01

with two ears. Journal of the Acoustical Society of America, 25, 975-979. Coren, S. & Girgus, J. S. (1972). Differentiation and decrement in the Müller-Lyer...behavior. New York: Appleton-Century-Crofts. Lewis, E. O. (1908). The effect of practice on the perception of the Müller-Lyer illusion. British Journal of...PAR Technology Corp., 7926 Jones Branch Drive, Suite 170, McLean, VA 22102; Dr. Sandra P. Marshall, Dept. of Psychology, San Diego State University, San

  9. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

When one hears footsteps in the hall, one instantly recognises them as made by a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity with three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment, participants made discriminations between pairs of the same stimuli in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  10. Sensorimotor Learning Enhances Expectations During Auditory Perception.

    Science.gov (United States)

    Mathias, Brian; Palmer, Caroline; Perrin, Fabien; Tillmann, Barbara

    2015-08-01

Sounds that have been produced with one's own motor system tend to be remembered better than sounds that have only been perceived, suggesting a role of motor information in memory for auditory stimuli. To address potential contributions of the motor network to the recognition of previously produced sounds, we used event-related potential, electric current density, and behavioral measures to investigate memory for produced and perceived melodies. Musicians performed or listened to novel melodies, and then heard the melodies either in their original version or with single pitch alterations. Production learning enhanced subsequent recognition accuracy and increased amplitudes of N200, P300, and N400 responses to pitch alterations. Premotor and supplementary motor regions showed greater current density during the initial detection of alterations in previously produced melodies than in previously perceived melodies, associated with the N200. Primary motor cortex was more strongly engaged by alterations in previously produced melodies within the P300 and N400 timeframes. Motor memory traces may therefore interface with auditory pitch percepts in premotor regions as early as 200 ms following perceived pitch onsets. Outcomes suggest that auditory-motor interactions contribute to memory benefits conferred by production experience, and support a role of motor prediction mechanisms in the production effect.

  11. [Pathophysiology of auditory and speech perception].

    Science.gov (United States)

    Dauman, René

    2009-05-20

Auditory perception, or hearing, can be defined as the interpretation of sensory evidence, produced by the ears in response to sound, in terms of the events that caused the sound. We do not hear a window, but we may hear a window closing. We do not hear a dog, but we may hear a dog barking. And we do not hear a person, but we may hear a person talking. Hearing impairment can result in anxiety or stress in everyday life. Pure-tone hearing loss (or threshold shift) is a measure of hearing impairment. Aging and excessive noise are the main causes of hearing impairment. Speech perception is a different concept; the difference is best illustrated by the affected individual who declares, "I can hear that someone is talking to me, but I don't understand what she says". Being unable to understand significant others easily and clearly, especially in noisy environments, can have considerable psychosocial and professional consequences (disability). Presbycusis is the decline in hearing sensitivity caused by the aging process at different levels of the auditory system. However, it is difficult to isolate age effects from other contributors to age-related hearing loss, such as noise damage, genetic susceptibility, inflammatory otologic disorders, and ototoxic agents. Therefore, presbycusis and age-related hearing loss are often used synonymously. In this report, pathophysiology is mostly described with regard to presbycusis, and the main peripheral types of presbycusis (sensory or Corti organ-related, strial, and neural) are summarized. An original experimental model of strial presbycusis, based on chronic application of furosemide at the round window, is further described. Central presbycusis is mainly determined by degeneration secondary to peripheral impairment (concept of deafferentation). Central auditory changes typically affect speed of processing and result in poorer speech understanding in noise or with rapid or degraded speech.

  12. Simultanagnosia does not affect processes of auditory Gestalt perception.

    Science.gov (United States)

    Rennig, Johannes; Bleyer, Anna Lena; Karnath, Hans-Otto

    2017-05-01

Simultanagnosia is a neuropsychological deficit of higher visual processes caused by temporo-parietal brain damage. It is characterized by a specific failure to recognize a global visual Gestalt, such as a visual scene or a complex object, consisting of local elements. In this study we investigated to what extent this deficit should be understood as specific to the visual domain or whether it reflects defective Gestalt processing per se. To examine whether simultanagnosia occurs across sensory domains, we designed several auditory experiments sharing typical characteristics of visual tasks that are known to be particularly demanding for patients suffering from simultanagnosia. We also included control tasks for auditory working memory deficits and for auditory extinction. We tested four simultanagnosia patients who suffered from severe symptoms in the visual domain. Two of them indeed showed significant impairments in recognition of simultaneously presented sounds. However, the same two patients also suffered from severe auditory working memory deficits and from symptoms comparable to auditory extinction, both sufficiently explaining the impairments in simultaneous auditory perception. We thus conclude that deficits in auditory Gestalt perception do not appear to be characteristic of simultanagnosia and that the human brain evidently uses independent mechanisms for visual and auditory Gestalt perception.

  13. An interactive model of auditory-motor speech perception.

    Science.gov (United States)

    Liebenthal, Einat; Möttönen, Riikka

    2017-12-18

Mounting evidence indicates a role in perceptual decoding of speech for the dorsal auditory stream connecting temporal auditory and frontal-parietal articulatory areas. The activation time course in auditory, somatosensory, and motor regions during speech processing is seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information and contrast three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 ms. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions and propose directions for future research.

  14. Context, Contrast, and Tone of Voice in Auditory Sarcasm Perception

    Science.gov (United States)

    Voyer, Daniel; Thibodeau, Sophie-Hélène; Delong, Breanna J.

    2016-01-01

    Four experiments were conducted to investigate the interplay between context and tone of voice in the perception of sarcasm. These experiments emphasized the role of contrast effects in sarcasm perception exclusively by means of auditory stimuli whereas most past research has relied on written material. In all experiments, a positive or negative…

  15. Influence of Auditory and Haptic Stimulation in Visual Perception

    Directory of Open Access Journals (Sweden)

    Shunichi Kawabata

    2011-10-01

While many studies have shown that visual information affects perception in other modalities, little is known about how auditory and haptic information affect visual perception. In this study, we investigated how auditory, haptic, or combined auditory and haptic stimulation affects visual perception. We used a behavioral task in which subjects observe two identical visual objects moving toward each other, overlapping, and then continuing their original motion. Subjects may perceive the objects either as streaming past each other or as bouncing and reversing their direction of motion. With the visual motion stimulus alone, subjects usually report the objects as streaming, whereas if a sound or flash is played when the objects touch each other, subjects report the objects as bouncing (the bounce-inducing effect). In this study, auditory stimulation, haptic stimulation, or combined haptic and auditory stimulation was presented at various times relative to the visual overlap of the objects. Our results show that the bouncing rate was highest when haptic and auditory stimulation were presented together. This result suggests that the bounce-inducing effect is enhanced when multiple modalities are presented simultaneously with the visual motion. In the future, neuroscience approaches (e.g., TMS, fMRI) may be required to elucidate the brain mechanisms underlying this effect.

  16. Listener orientation and spatial judgments of elevated auditory percepts

    Science.gov (United States)

    Parks, Anthony J.

How do listener head rotations affect auditory perception of elevation? This investigation addresses this question in the hope that perceptual judgments of elevated auditory percepts may be more thoroughly understood in terms of the dynamic listening cues engendered by listener head rotations, and that this phenomenon can be psychophysically and computationally modeled. Two listening tests were conducted and a psychophysical model was constructed to this end. The first listening test prompted listeners to detect an elevated auditory event produced by a virtual noise source orbiting the median plane via 24-channel ambisonic spatialization. Head rotations were tracked using computer vision algorithms facilitated by camera tracking. The data were used to construct a dichotomous criteria model using a factorial binary logistic regression model. The second auditory test investigated the validity of the historically supported frequency dependence of auditory elevation perception, using narrow-band noise for continuous and brief stimuli under fixed and free-head-rotation conditions. The data were used to construct a multinomial logistic regression model to predict categorical judgments of above, below, and behind. Finally, in light of the psychophysical data from the above studies, a functional model of elevation perception for point sources along the cone of confusion was constructed using physiologically inspired signal processing methods along with top-down processing utilizing principles of memory and orientation. The model is evaluated using white noise bursts for 42 subjects' head-related transfer functions. The investigation concludes with study limitations, possible implications, and speculation on future research trajectories.
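As a loose illustration of the second model described above (not the author's code), the sketch below fits a multinomial logistic regression over the three categorical judgments; the predictors (noise-band center frequency and amount of head rotation) and all data are invented placeholders:

```python
# Hypothetical sketch: multinomial logistic regression over "above" /
# "below" / "behind" judgments. Features and labels are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 300
# Assumed predictors: narrow-band noise center frequency (Hz) and total
# head rotation during the trial (degrees).
X = np.column_stack([rng.uniform(500, 8000, n), rng.uniform(0, 90, n)])
y = rng.choice(["above", "below", "behind"], size=n)  # placeholder labels
model = LogisticRegression(max_iter=1000).fit(X, y)   # multinomial by default
print(model.predict_proba([[6000.0, 45.0]]))          # per-class probabilities
```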

  17. Auditory perception of self-similarity in water sounds.

    Directory of Open Access Journals (Sweden)

    Maria Neimark Geffen

    2011-05-01

Many natural signals, including environmental sounds, exhibit scale-invariant statistics: their structure is repeated at multiple scales. Such scale invariance has been identified separately across spectral and temporal correlations of natural sounds (Clarke and Voss, 1975; Attias and Schreiner, 1997; Escabi et al., 2003; Singh and Theunissen, 2003). Yet the role of scale invariance across the overall spectro-temporal structure of a sound has not been explored directly in auditory perception. Here, we identify that the sound wave of a recording of running water is a self-similar fractal, exhibiting scale invariance not only within spectral channels, but also across the full spectral bandwidth. The auditory perception of the water sound did not change with its scale. We tested the role of scale invariance in perception by using an artificial sound which could be rendered scale-invariant. We generated a random chirp stimulus: an auditory signal controlled by two parameters, Q, controlling the relative, and r, controlling the absolute, temporal structure of the sound. Imposing scale-invariant statistics on the artificial sound was required for its perception as natural and water-like. Further, Q had to be restricted to a specific range for the sound to be perceived as natural. To detect self-similarity in the water sound, and to identify Q, the auditory system needs to process the temporal dynamics of the waveform across spectral bands in terms of the number of cycles, rather than absolute timing. We propose a two-stage neural model implementing this computation, which may be carried out by circuits of neurons in the auditory cortex. The set of auditory stimuli developed in this study is particularly suitable for measuring response properties of neurons in the auditory pathway, allowing for quantification of the effects of varying the spectro-temporal statistical structure of the stimulus.
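The chirp construction lends itself to a compact illustration. The following sketch synthesizes a crude scale-invariant chirp train in the spirit of the abstract; the mapping of Q to cycles per event and r to event rate, the Poisson event timing, the log-uniform frequency draw, and the Hann envelope are all assumptions made for the example, not the published stimulus definition:

```python
# Hypothetical scale-invariant "random chirp" stimulus. Each event lasts a
# fixed number of cycles (Q) of its carrier, so its duration scales as 1/f,
# which keeps the statistics the same in every octave.
import numpy as np

def random_chirp_stimulus(duration=2.0, fs=44100, Q=4.0, r=50.0,
                          f_lo=200.0, f_hi=8000.0, seed=0):
    rng = np.random.default_rng(seed)
    signal = np.zeros(int(duration * fs))
    for _ in range(rng.poisson(r * duration)):   # events at average rate r
        # Log-uniform frequency draw: equal event density per octave.
        f = np.exp(rng.uniform(np.log(f_lo), np.log(f_hi)))
        t = np.arange(int(Q / f * fs)) / fs      # Q cycles at frequency f
        chirp = np.hanning(t.size) * np.sin(2 * np.pi * f * t)
        i = int(rng.uniform(0, duration - Q / f) * fs)
        n = min(chirp.size, signal.size - i)     # guard against edge overrun
        signal[i:i + n] += chirp[:n]
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal # normalize to +/-1
```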

  18. Neural Encoding of Auditory Features during Music Perception and Imagery.

    Science.gov (United States)

    Martin, Stephanie; Mikutta, Christian; Leonard, Matthew K; Hungate, Dylan; Koelsch, Stefan; Shamma, Shihab; Chang, Edward F; Millán, José Del R; Knight, Robert T; Pasley, Brian N

    2017-10-27

Despite many behavioral and neuroimaging investigations, it remains unclear how the human cortex represents spectrotemporal sound features during auditory imagery, and how this representation compares to auditory perception. To assess this, we recorded electrocorticographic signals from an epileptic patient with proficient music ability in two conditions. First, the participant played two piano pieces on an electronic piano with the sound volume of the digital keyboard on. Second, the participant replayed the same piano pieces, but without auditory feedback, and was asked to imagine hearing the music in his mind. In both conditions, the sound output of the keyboard was recorded, thus allowing precise time-locking between the neural activity and the spectrotemporal content of the music imagery. This novel task design provided a unique opportunity to apply receptive field modeling techniques to quantitatively study neural encoding during auditory mental imagery. In both conditions, we built encoding models to predict high gamma neural activity (70-150 Hz) from the spectrogram representation of the recorded sound. We found robust spectrotemporal receptive fields during auditory imagery with substantial, but not complete, overlap in frequency tuning and cortical location compared to receptive fields measured during auditory perception.
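An encoding model of this kind is conceptually simple: regress each electrode's high-gamma envelope on time-lagged copies of the stimulus spectrogram. The sketch below is a minimal assumed implementation using ridge regression in scikit-learn, not the authors' pipeline; the lag count and regularization strength are illustrative:

```python
# Minimal spectrotemporal encoding model: ridge regression from a lagged
# spectrogram to one electrode's high-gamma envelope.
import numpy as np
from sklearn.linear_model import Ridge

def fit_strf(spectrogram, high_gamma, n_lags=30, alpha=1.0):
    """spectrogram: (n_times, n_freqs); high_gamma: (n_times,)."""
    n_times, n_freqs = spectrogram.shape
    X = np.zeros((n_times, n_lags * n_freqs))
    for lag in range(n_lags):                 # stack time-lagged features
        X[lag:, lag * n_freqs:(lag + 1) * n_freqs] = spectrogram[:n_times - lag]
    model = Ridge(alpha=alpha).fit(X, high_gamma)
    # Reshape weights into a (lags x frequencies) receptive field.
    return model.coef_.reshape(n_lags, n_freqs)
```

The fitted weight matrix can then be read as a spectrotemporal receptive field and compared between the perception and imagery conditions.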

  19. Auditory Perception of Statistically Blurred Sound Textures

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; MacDonald, Ewen; Dau, Torsten

Sound textures have been identified as a category of sounds that are processed by the peripheral auditory system and captured with running time-averaged statistics. Although sound textures are temporally homogeneous, they offer a listener enough information to identify and differentiate...... sources. This experiment investigated the ability of the auditory system to identify statistically blurred sound textures and the perceptual relationship between sound textures. Identification performance for statistically blurred sound textures presented at a fixed blur was higher than for those presented...... as a gradual blur. The results suggest that the correct identification of sound textures is influenced by the preceding blurred stimulus. These findings draw parallels to the recognition of blurred images.
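Running time-averaged statistics of the kind mentioned above can be illustrated with a few lines of signal processing. The sketch below computes per-band envelope means and variances over a sliding window; the band edges, window length, and the use of a Hilbert envelope are assumptions for the example, not the study's statistic set:

```python
# Illustrative running time-averaged texture statistics: sliding-window
# mean and variance of each band's envelope.
import numpy as np
from scipy.signal import butter, lfilter, hilbert

def texture_statistics(x, fs=44100, window_s=1.0,
                       bands=((100, 400), (400, 1600), (1600, 6400))):
    w = int(window_s * fs)
    kernel = np.ones(w) / w                       # boxcar averaging window
    stats = []
    for lo, hi in bands:
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        env = np.abs(hilbert(lfilter(b, a, x)))   # band envelope
        mean = np.convolve(env, kernel, mode="valid")
        var = np.convolve(env**2, kernel, mode="valid") - mean**2
        stats.append((mean, np.maximum(var, 0)))  # clamp numerical negatives
    return stats
```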

  20. Modeling auditory perception of individual hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

    Models of auditory signal processing and perception allow us to generate hypotheses that can be quantitatively tested, which in turn helps us to explain and understand the functioning of the auditory system. Here, the perceptual consequences of hearing impairment in individual listeners were...... investigated within the framework of the computational auditory signal processing and perception (CASP) model of Jepsen et al. [ J. Acoust. Soc. Am., in press]. Several parameters of the model were modified according to data from psychoacoustic measurements. Parameters associated with the cochlear stage were...... forward masking. The model may be useful for the evaluation of hearing-aid algorithms, where a reliable simulation of hearing impairment may reduce the need for time-consuming listening tests during development....

  1. Auditory object formation affects modulation perception

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2005-01-01

...the target sound in time determine whether or not across-frequency modulation effects are observed. The results suggest that the binding of sound elements into coherent auditory objects precedes aspects of modulation analysis and imply a cortical locus involving integration times of several hundred...

  2. Auditory Spectral Integration in the Perception of Static Vowels

    Science.gov (United States)

    Fox, Robert Allen; Jacewicz, Ewa; Chang, Chiung-Yun

    2011-01-01

    Purpose: To evaluate potential contributions of broadband spectral integration in the perception of static vowels. Specifically, can the auditory system infer formant frequency information from changes in the intensity weighting across harmonics when the formant itself is missing? Does this type of integration produce the same results in the lower…

  3. Auditory Space Perception in Left- and Right-Handers

    Science.gov (United States)

    Ocklenburg, Sebastian; Hirnstein, Marco; Hausmann, Markus; Lewald, Jorg

    2010-01-01

    Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via…

  4. Concurrent auditory perception difficulties in older adults with right hemisphere cerebrovascular accident

    OpenAIRE

    Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat

    2014-01-01

Background: Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. In the present study, we examined whether these problems are due to impairments of the concurrent auditory segregation procedure, which is the basic level of auditory scene analysis and auditory organization in auditory scenes with competing sounds. Methods: Concurrent auditory segregation using the competing sentence test (CST) and dichotic digits test (DDT) was assessed and compared...

  5. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu eYue

    2016-04-01

Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short-interval condition, but shorter durations in the long-interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, constrained by sensory modality, the valence of the emotional events, and target durations. Biases in time perception could be accounted for by a...

  6. Odors Bias Time Perception in Visual and Auditory Modalities.

    Science.gov (United States)

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short-interval condition, but shorter durations in the long-interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, constrained by sensory modality, the valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of...

  7. Neural Correlates of Realistic and Unrealistic Auditory Space Perception

    Directory of Open Access Journals (Sweden)

    Akiko Callan

    2011-10-01

Binaural recordings can simulate externalized auditory space perception over headphones. However, if the orientation of the recorder's head and the orientation of the listener's head are incongruent, the simulated auditory space is not realistic. For example, if a person lying flat on a bed listens to an environmental sound that was recorded by microphones inserted in the ears of a person in an upright position, the sound simulates an auditory space rotated 90 degrees relative to the real-world horizontal axis. Our question was whether brain activation patterns differ between the unrealistic auditory space (i.e., the orientation of the listener's head and the orientation of the recorder's head are incongruent) and the realistic auditory space (i.e., the orientations are congruent). River sounds binaurally recorded either in a supine or an upright body position served as auditory stimuli. During fMRI experiments, participants listened to the stimuli and pressed one of two buttons indicating the direction of the water flow (horizontal/vertical). Behavioral results indicated that participants could not differentiate between the congruent and incongruent conditions. However, neuroimaging results showed that the congruent condition activated the planum temporale significantly more than the incongruent condition.

  8. The plastic ear and perceptual relearning in auditory spatial perception.

    Directory of Open Access Journals (Sweden)

    Simon eCarlile

    2014-08-01

The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs in the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear moulds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state on auditory localisation, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear moulds or through virtual auditory space stimulation using non-individualised spectral cues. The work with ear moulds demonstrates that a relatively short period of training involving sensory-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide a spatial code but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.

  9. Auditory distance perception in humans : A summary of past and present research

    NARCIS (Netherlands)

    Zahorik, P.; Brungart, D.S.; Bronkhorst, A.W.

    2005-01-01

Although auditory distance perception is a critical component of spatial hearing, it has received substantially less scientific attention than the directional aspects of auditory localization. Here we summarize current knowledge on auditory distance perception, with special emphasis on recent...

  10. Silent articulation modulates auditory and audiovisual speech perception.

    Science.gov (United States)

    Sato, Marc; Troille, Emilie; Ménard, Lucie; Cathiard, Marie-Agnès; Gracco, Vincent

    2013-06-01

The concept of an internal forward model that internally simulates the sensory consequences of an action is a central idea in speech motor control. Consistent with this hypothesis, silent articulation has been shown to modulate activity of the auditory cortex and to improve the auditory identification of concordant speech sounds embedded in white noise. In the present study, we replicated and extended this behavioral finding by showing that silently articulating a syllable in synchrony with the presentation of a concordant auditory and/or visually ambiguous speech stimulus improves its identification. Our results further demonstrate that, even in the case of perfect perceptual identification, concurrent mouthing of a syllable speeds up the perceptual processing of a concordant speech stimulus. These results reflect multisensory-motor interactions during speech perception and provide new behavioral arguments for internally generated sensory predictions during silent speech production.

  11. The effects of speech motor preparation on auditory perception

    Science.gov (United States)

    Myers, John

    Perception and action are coupled via bidirectional relationships between sensory and motor systems. Motor systems influence sensory areas by imparting a feedforward influence on sensory processing termed "motor efference copy" (MEC). MEC is suggested to occur in humans because speech preparation and production modulate neural measures of auditory cortical activity. However, it is not known if MEC can affect auditory perception. We tested the hypothesis that during speech preparation auditory thresholds will increase relative to a control condition, and that the increase would be most evident for frequencies that match the upcoming vocal response. Participants performed trials in a speech condition that contained a visual cue indicating a vocal response to prepare (one of two frequencies), followed by a go signal to speak. To determine threshold shifts, voice-matched or -mismatched pure tones were presented at one of three time points between the cue and target. The control condition was the same except the visual cues did not specify a response and subjects did not speak. For each participant, we measured f0 thresholds in isolation from the task in order to establish baselines. Results indicated that auditory thresholds were highest during speech preparation, relative to baselines and a non-speech control condition, especially at suprathreshold levels. Thresholds for tones that matched the frequency of planned responses gradually increased over time, but sharply declined for the mismatched tones shortly before targets. Findings support the hypothesis that MEC influences auditory perception by modulating thresholds during speech preparation, with some specificity relative to the planned response. The threshold increase in tasks vs. baseline may reflect attentional demands of the tasks.

  12. Speech perception using combinations of auditory, visual, and tactile information.

    Science.gov (United States)

    Blamey, P J; Cowan, R S; Alcantara, J I; Whitford, L A; Clark, G M

    1989-01-01

Four normally-hearing subjects were trained and tested with all combinations of a highly degraded auditory input, a visual input via lipreading, and a tactile input using a multichannel electrotactile speech processor. The speech perception of the subjects was assessed with closed sets of vowels, consonants, and multisyllabic words; with open sets of words and sentences; and with speech tracking. When the visual input was added to any combination of other inputs, a significant improvement occurred for every test. Similarly, the auditory input produced a significant improvement for all tests except closed-set vowel recognition. The tactile input produced scores that were significantly greater than chance in isolation, but combined less effectively with the other modalities. The addition of the tactile input did produce significant improvements for vowel recognition in the auditory-tactile condition, for consonant recognition in the auditory-tactile and visual-tactile conditions, and for open-set word recognition in the visual-tactile condition. Information transmission analysis of the features of vowels and consonants indicated that information from the auditory and visual inputs was integrated much more effectively than information from the tactile input. The less effective combination might be due to lack of training with the tactile input, or to more fundamental limitations in the processing of multimodal stimuli.

  13. Upper limits of auditory rotational motion perception.

    Science.gov (United States)

    Féron, François-Xavier; Frissen, Ilja; Boissinot, Julien; Guastavino, Catherine

    2010-12-01

    Three experiments are reported, which investigated the auditory velocity thresholds beyond which listeners are no longer able to perceptually resolve a smooth circular trajectory. These thresholds were measured for band-limited noises, white noise, and harmonic sounds (HS), and in different acoustical environments. Experiments 1 and 2 were conducted in an acoustically dry laboratory. Observed thresholds varied as a function of stimulus type and spectral content. Thresholds for band-limited noises were unaffected by center frequency and equal to that of white noise. For HS, however, thresholds decreased as the fundamental frequency of the stimulus increased. The third experiment was a replication of the second in a reverberant concert hall, which produced qualitatively similar results except that thresholds were significantly higher than in the acoustically dry laboratory.

  14. Effects of Amplitude Compression on Relative Auditory Distance Perception

    Science.gov (United States)

    2013-10-01

human sound localization (pp. 36-200). Cambridge, MA: The MIT Press. Carmichel, E. L., Harris, F. P., & Story, B. H. (2007). Effects of binaural ...auditory distance perception by reducing the level differences between sounds. The focus of the present study was to investigate the effect of amplitude...create stimuli. Two levels of amplitude compression were applied to the recordings through Adobe Audition sound editing software to simulate military

  15. The relationship of phonological ability, speech perception, and auditory perception in adults with dyslexia

    OpenAIRE

    Law, Jeremy M.; Vandermosten, Maaike; Ghesquiere, Pol; Wouters, Jan

    2014-01-01

    This study investigated whether auditory, speech perception, and phonological skills are tightly interrelated or independently contributing to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal reading adults. Phonological skills were tested by the typical threefold tasks, i.e., rapid automatic naming, verbal short-term memory and phonological awareness. Dynamic auditory processing skills were assessed by means of a frequency m...

  16. The relationship of phonological ability, speech perception and auditory perception in adults with dyslexia.

    OpenAIRE

    Jeremy eLaw; Maaike eVandermosten; Pol eGhesquiere; Jan eWouters

    2014-01-01

    This study investigated whether auditory, speech perception and phonological skills are tightly interrelated or independently contributing to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal reading adults. Phonological skills were tested by the typical threefold tasks, i.e. rapid automatic naming, verbal short term memory and phonological awareness. Dynamic auditory processing skills were assessed by means of a frequency mod...

  17. HEaDS-UP Phase IV Assessment: Headgear Effects on Auditory Perception

    Science.gov (United States)

    2015-02-01

ARL-TR-7203, February 2015. US Army Research Laboratory. HEaDS-UP Phase IV Assessment: Headgear Effects on Auditory Perception, by Angelique A Scharine, Human Research and Engineering Directorate, ARL.

  18. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Directory of Open Access Journals (Sweden)

    Yi-Huang Su

    2016-01-01

Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  19. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

A model of computational auditory signal processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass modulation filterbank, a constant-variance internal noise, and an optimal detector stage. The model was evaluated in experimental conditions that reflect, to different degrees, effects of compression as well as spectral and temporal resolution in auditory processing. The experiments include intensity...
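To make the stage sequence concrete, here is a schematic sketch of two of the stages named above: hair-cell transduction (commonly implemented as half-wave rectification followed by lowpass filtering) and the 150-Hz modulation lowpass. Apart from the 150-Hz cutoff taken from the abstract, the filter orders and the 1-kHz cutoff are illustrative assumptions, not the model's published values:

```python
# Schematic sketch of two early stages of a Dau-style processing chain.
import numpy as np
from scipy.signal import butter, lfilter

def haircell_and_modulation_lowpass(x, fs=44100):
    # Hair-cell transduction: half-wave rectification plus lowpass
    # smoothing (cutoff assumed at 1 kHz for this sketch).
    rectified = np.maximum(x, 0.0)
    b, a = butter(2, 1000 / (fs / 2))
    envelope = lfilter(b, a, rectified)
    # 150-Hz lowpass modulation filter applied to the envelope.
    b2, a2 = butter(1, 150 / (fs / 2))
    return lfilter(b2, a2, envelope)
```

A full implementation would wrap these stages between the outer/middle-ear and basilar-membrane front end and the modulation filterbank, internal noise, and optimal detector stages listed in the abstract.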

  20. The influence of auditory and visual information on the perception of crispy food

    NARCIS (Netherlands)

    Pocztaruk, R.D.; Abbink, J.H.; Wijk, de R.A.; Frasca, L.C.D.; Gaviao, M.B.D.; Bilt, van de A.

    2011-01-01

The influence of auditory and/or visual information on the perception of crispy food and on the physiology of chewing was investigated. Participants chewed biscuits of three different levels of crispness under four experimental conditions: no masking, auditory masking, visual masking, and auditory...

  1. Relating binaural pitch perception to the individual listener's auditory profile.

    Science.gov (United States)

    Santurette, Sébastien; Dau, Torsten

    2012-04-01

    The ability of eight normal-hearing listeners and fourteen listeners with sensorineural hearing loss to detect and identify pitch contours was measured for binaural-pitch stimuli and salience-matched monaurally detectable pitches. In an effort to determine whether impaired binaural pitch perception was linked to a specific deficit, the auditory profiles of the individual listeners were characterized using measures of loudness perception, cognitive ability, binaural processing, temporal fine structure processing, and frequency selectivity, in addition to common audiometric measures. Two of the listeners were found not to perceive binaural pitch at all, despite a clear detection of monaural pitch. While both binaural and monaural pitches were detectable by all other listeners, identification scores were significantly lower for binaural than for monaural pitch. A total absence of binaural pitch sensation coexisted with a loss of a binaural signal-detection advantage in noise, without implying reduced cognitive function. Auditory filter bandwidths did not correlate with the difference in pitch identification scores between binaural and monaural pitches. However, subjects with impaired binaural pitch perception showed deficits in temporal fine structure processing. Whether the observed deficits stemmed from peripheral or central mechanisms could not be resolved here, but the present findings may be useful for hearing loss characterization.

  2. Neural coding and perception of pitch in the normal and impaired human auditory system

    DEFF Research Database (Denmark)

    Santurette, Sébastien

    2011-01-01

...for a variety of basic auditory tasks, indicating that it may be a crucial measure to consider for hearing-loss characterization. In contrast to hearing-impaired listeners, adults with dyslexia showed no deficits in binaural pitch perception, suggesting intact low-level auditory mechanisms. The second part...... into the fundamental auditory mechanisms underlying pitch perception, and may have implications for future pitch-perception models, as well as strategies for auditory-profile characterization and restoration of accurate pitch perception in impaired hearing.......Pitch is an important attribute of hearing that allows us to perceive the musical quality of sounds. Besides music perception, pitch contributes to speech communication, auditory grouping, and perceptual segregation of sound sources. In this work, several aspects of pitch perception in humans were...

  3. The relationship of phonological ability, speech perception and auditory perception in adults with dyslexia.

    Directory of Open Access Journals (Sweden)

    Jeremy eLaw

    2014-07-01

This study investigated whether auditory, speech perception, and phonological skills are tightly interrelated or independently contributing to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal-reading adults. Phonological skills were tested by the typical threefold tasks, i.e., rapid automatic naming, verbal short-term memory, and phonological awareness. Dynamic auditory processing skills were assessed by means of a frequency modulation (FM) and an amplitude rise time (RT) task; an intensity discrimination task (ID) was included as a non-dynamic control task. Speech perception was assessed by means of sentences and words in noise tasks. Group analysis revealed significant group differences in the auditory tasks (i.e., RT and ID) and in the phonological processing measures, yet no differences were found for speech perception. In addition, performance on RT discrimination correlated with reading, but this relation was mediated by phonological processing and not by speech in noise. Finally, inspection of the individual scores revealed that the dyslexic readers showed an increased proportion of deviant subjects on the slow-dynamic auditory and phonological tasks, yet individual dyslexic readers do not display a clear pattern of deficiencies across the levels of processing skills. Although our results support phonological and slow-rate dynamic auditory deficits which relate to literacy, they suggest that at the individual level, problems in reading and writing cannot be explained by the cascading auditory theory. Instead, dyslexic adults seem to vary considerably in the extent to which each of the auditory and phonological factors is expressed and interacts with environmental and higher-order cognitive influences.

  4. The relationship of phonological ability, speech perception, and auditory perception in adults with dyslexia.

    Science.gov (United States)

    Law, Jeremy M; Vandermosten, Maaike; Ghesquiere, Pol; Wouters, Jan

    2014-01-01

This study investigated whether auditory, speech perception, and phonological skills are tightly interrelated or independently contributing to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal reading adults. Phonological skills were tested by the typical threefold tasks, i.e., rapid automatic naming, verbal short-term memory and phonological awareness. Dynamic auditory processing skills were assessed by means of a frequency modulation (FM) and an amplitude rise time (RT); an intensity discrimination task (ID) was included as a non-dynamic control task. Speech perception was assessed by means of sentences and words-in-noise tasks. Group analyses revealed significant group differences in auditory tasks (i.e., RT and ID) and in phonological processing measures, yet no differences were found for speech perception. In addition, performance on RT discrimination correlated with reading but this relation was mediated by phonological processing and not by speech-in-noise. Finally, inspection of the individual scores revealed that the dyslexic readers showed an increased proportion of deviant subjects on the slow-dynamic auditory and phonological tasks, yet individual dyslexic readers do not display a clear pattern of deficiencies across the processing skills. Although our results support phonological and slow-rate dynamic auditory deficits which relate to literacy, they suggest that at the individual level, problems in reading and writing cannot be explained by the cascading auditory theory. Instead, dyslexic adults seem to vary considerably in the extent to which each of the auditory and phonological factors is expressed and interacts with environmental and higher-order cognitive influences.

  5. Effects of Auditory Information on Self-Motion Perception during Simultaneous Presentation of Visual Shearing Motion

    Directory of Open Access Journals (Sweden)

    Shigehito eTanahashi

    2015-06-01

Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of the visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information to determining the direction of perceived self-motion. To examine this, a visual stimulus projected on a hemispheric screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on the experimental condition. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the direction opposite the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction as the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and when it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine the perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli move in opposing directions (around the yaw axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of the visual and auditory information.

  6. Influence of Eye Movements, Auditory Perception, and Phonemic Awareness in the Reading Process

    Science.gov (United States)

    Megino-Elvira, Laura; Martín-Lobo, Pilar; Vergara-Moragues, Esperanza

    2016-01-01

    The authors' aim was to analyze the relationship of eye movements, auditory perception, and phonemic awareness with the reading process. The instruments used were the King-Devick Test (saccade eye movements), the PAF test (auditory perception), the PFC (phonemic awareness), the PROLEC-R (lexical process), the Canals reading speed test, and the…

  7. Relating binaural pitch perception to the individual listener's auditory profile

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Dau, Torsten

    2012-01-01

The ability of eight normal-hearing listeners and fourteen listeners with sensorineural hearing loss to detect and identify pitch contours was measured for binaural-pitch stimuli and salience-matched monaurally detectable pitches. In an effort to determine whether impaired binaural pitch perception...... sensation coexisted with a loss of a binaural signal-detection advantage in noise, without implying reduced cognitive function. Auditory filter bandwidths did not correlate with the difference in pitch identification scores between binaural and monaural pitches. However, subjects with impaired binaural...... pitch perception showed deficits in temporal fine structure processing. Whether the observed deficits stemmed from peripheral or central mechanisms could not be resolved here, but the present findings may be useful for hearing loss characterization.

  8. Vision of tongue movements bias auditory speech perception.

    Science.gov (United States)

    D'Ausilio, Alessandro; Bartoli, Eleonora; Maffongelli, Laura; Berry, Jeffrey James; Fadiga, Luciano

    2014-10-01

    Audiovisual speech perception is likely based on the association between auditory and visual information into stable audiovisual maps. Conflicting audiovisual inputs generate perceptual illusions such as the McGurk effect. Audiovisual mismatch effects could be driven either by the detection of violations of standard audiovisual statistics or via the sensorimotor reconstruction of the distal articulatory event that generated the audiovisual ambiguity. To disambiguate between the two hypotheses, we exploited the fact that the tongue is hidden from vision. For this reason, tongue movement encoding can only be learned via speech production, not via perception of others' speech alone. Here we asked participants to identify speech sounds while they were shown matching or mismatching visual representations of tongue movements. Vision of congruent tongue movements facilitated auditory speech identification with respect to incongruent trials. This result suggests that direct visual experience of an articulator's movement is not necessary for the generation of audiovisual mismatch effects. Furthermore, we suggest that audiovisual integration in speech may benefit from speech production learning. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in the perceptual problems of dyslexics. One contentious research issue in this area has been the nature of the perceptual deficit; another is the causal role of this deficit in dyslexia. Most studies have been carried out in adults and children who already read; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that both the visual and the auditory perception of children at risk for dyslexia is impaired. The comparison between groups shows that the achievement of children at risk was lower than that of children without risk in the temporal tasks, whereas there were no differences between groups in the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. Our conclusion is that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Modulation of Illusory Auditory Perception by Transcranial Electrical Stimulation

    Directory of Open Access Journals (Sweden)

    Giulia Prete

    2017-06-01

    The aim of the present study was to test whether transcranial electrical stimulation can modulate illusory perception in the auditory domain. In two separate experiments we applied transcranial Direct Current Stimulation (anodal/cathodal tDCS, 2 mA; N = 60) and high-frequency transcranial Random Noise Stimulation (hf-tRNS, 1.5 mA, offset 0; N = 45) over the temporal cortex during the presentation of the stimuli eliciting Deutsch's illusion. The illusion arises when two sine tones spaced one octave apart (400 and 800 Hz) are presented dichotically in alternation, one in the left and the other in the right ear, so that when the right ear receives the high tone, the left ear receives the low tone, and vice versa. The majority of the population perceives one high-pitched tone in one ear alternating with one low-pitched tone in the other ear. The results revealed that neither anodal nor cathodal tDCS applied over the left/right temporal cortex modulated the perception of the illusion, whereas hf-tRNS applied bilaterally over the temporal cortex reduced the number of times the sequence of sounds was perceived as Deutsch's illusion with respect to the sham control condition. The stimulation time before the beginning of the task (5 or 15 min) did not influence the perceptual outcome. In accordance with previous findings, we conclude that hf-tRNS can modulate auditory perception more efficiently than tDCS.
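
    The dichotic sequence eliciting the illusion is straightforward to synthesize. Below is a minimal sketch in Python/NumPy; the 250-ms segment duration is an assumption borrowed from Deutsch's classic paradigm, since the abstract does not report segment timing.

```python
import numpy as np

def deutsch_illusion(n_pairs=10, seg_dur=0.25, fs=44100,
                     f_low=400.0, f_high=800.0):
    """Dichotic sequence for Deutsch's octave illusion: on every
    segment the two ears swap the low (400 Hz) and high (800 Hz) tones."""
    t = np.arange(int(seg_dur * fs)) / fs
    low = np.sin(2 * np.pi * f_low * t)
    high = np.sin(2 * np.pi * f_high * t)
    left, right = [], []
    for i in range(2 * n_pairs):
        if i % 2 == 0:        # even segments: high tone to the right ear
            left.append(low)
            right.append(high)
        else:                 # odd segments: the tones swap ears
            left.append(high)
            right.append(low)
    # Shape (2, n_samples): play back as a two-channel, 44.1-kHz signal.
    return np.stack([np.concatenate(left), np.concatenate(right)])

stereo = deutsch_illusion()
```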

  11. [Münchner screening of auditory perception disorders (MAUS)].

    Science.gov (United States)

    Nickisch, A; Heuckmann, C; Burger, T; Massinger, C

    2006-04-01

    The diagnosis of APD (Auditory Perception Disorder) is a time-consuming procedure. At present, no screening test for APD exists in Germany that makes it possible to differentiate between children who are unlikely to suffer from an APD and those who need to be diagnosed in detail. The Munich Auditory Screening of Perception Disorders (MAUS) contains the following subtests: Series of Syllables, Words in Noise, and Identification and Differentiation of Phonemes (test duration: 15 minutes). The MAUS was standardized using 359 primary school children between 6 and 11 years of age. Furthermore, the MAUS was used in addition to the complete, extensive APD diagnostics in testing 52 children (36 with APD and 16 without APD) within the age group mentioned. T-scores for each subtest were established by the standardization of the MAUS. The internal consistency of the test was sufficient, and the intercorrelation between subtests was very low; each subtest therefore seems to play an independent part in defining the construct of APD. Based on the results of the pilot study that formed the basis for the development of the screening instrument, and on the sensitivity scores reached in testing the group of 36 children with diagnosed APD, the MAUS can be expected to show high sensitivity with regard to APD. Using the MAUS, it can be determined whether and to what extent the test results of an individual deviate from those of the normal primary school population. The MAUS can thus identify children at risk of having an APD and differentiate them from those who are unlikely to suffer from an APD.
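
    The abstract does not spell out the norming arithmetic, but T-scores are conventionally standard scores with mean 50 and standard deviation 10:

```latex
T = 50 + 10\,\frac{x - \mu}{\sigma}
```

    where x is a child's raw subtest score and \mu and \sigma are the mean and standard deviation of the normative sample; the distance of T below 50 then quantifies how far an individual deviates from the normal primary school population.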

  12. Direct Contribution of Auditory Motion Information to Sound-Induced Visual Motion Perception

    Directory of Open Access Journals (Sweden)

    Souta Hidaka

    2011-10-01

    We have recently demonstrated that alternating left-right sound sources induce motion perception in static visual stimuli along the horizontal plane (SIVM: sound-induced visual motion perception, Hidaka et al., 2009). The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in a situation where auditory positional information would have little influence on the perceived position of visual stimuli: the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception, so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.

  13. Auditory deficits in amusia extend beyond poor pitch perception.

    Science.gov (United States)

    Whiteford, Kelly L; Oxenham, Andrew J

    2017-05-01

    Congenital amusia is a music perception disorder believed to reflect a deficit in fine-grained pitch perception and/or short-term or working memory for pitch. Because most measures of pitch perception include memory and segmentation components, it has been difficult to determine the true extent of pitch processing deficits in amusia. It is also unclear whether pitch deficits persist at frequencies beyond the range of musical pitch. To address these questions, experiments were conducted with amusics and matched controls, manipulating both the stimuli and the task demands. First, we assessed pitch discrimination at low (500 Hz and 2000 Hz) and high (8000 Hz) frequencies using a three-interval forced-choice task. Amusics exhibited deficits even at the highest frequency, which lies beyond the existence region of musical pitch. Next, we assessed the extent to which frequency coding deficits persist in one- and two-interval frequency-modulation (FM) and amplitude-modulation (AM) detection tasks at 500 Hz at slow (fm = 4 Hz) and fast (fm = 20 Hz) modulation rates. Amusics still exhibited deficits in one-interval FM detection tasks that should not involve memory or segmentation. Surprisingly, amusics were also impaired on AM detection, which should not involve pitch processing. Finally, direct comparisons between the detection of continuous and discrete FM demonstrated that amusics suffer deficits in both coding and segmenting pitch information. Our results reveal auditory deficits in amusia extending beyond pitch perception that are subtle when controlling for memory and segmentation, and are likely exacerbated in more complex contexts such as musical listening. Copyright © 2017 Elsevier Ltd. All rights reserved.
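
    For readers unfamiliar with these stimuli, the sketch below generates a 500-Hz carrier with either sinusoidal frequency modulation or sinusoidal amplitude modulation at the slow (4 Hz) rate. The modulation depths are arbitrary placeholders; in the actual tasks, depth would be varied to estimate each listener's threshold.

```python
import numpy as np

fs = 44100                        # sample rate (Hz)
t = np.arange(int(1.0 * fs)) / fs # 1-s stimulus
fc = 500.0                        # carrier frequency (Hz)
fm = 4.0                          # modulation rate; 20.0 for the fast condition

# FM tone: instantaneous frequency fc + df*sin(2*pi*fm*t); the phase is
# the time integral of instantaneous frequency. df is an illustrative depth.
df = 10.0
fm_tone = np.sin(2 * np.pi * fc * t - (df / fm) * np.cos(2 * np.pi * fm * t))

# AM tone: sinusoidal amplitude envelope with modulation depth m (illustrative).
m = 0.1
am_tone = (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)
```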

  14. Effect of Size Change and Brightness Change of Visual Stimuli on Loudness Perception and Pitch Perception of Auditory Stimuli

    Directory of Open Access Journals (Sweden)

    Syouya Tanabe

    2011-10-01

    People obtain much of their information from visual and auditory sensation in daily life. Regarding the effect of visual stimuli on the perception of auditory stimuli, numerous studies have examined phonological perception and sound localization. This study examined the effect of visual stimuli on the perception of loudness and pitch of auditory stimuli. We used images of figures whose size or brightness changed as visual stimuli, and pure tones whose loudness or pitch changed as auditory stimuli. These visual and auditory stimuli were combined independently to make four types of audio-visual multisensory stimuli for psychophysical experiments. In the experiments, participants judged the change in loudness or pitch of the auditory stimuli while also judging the direction of size change or the kind of figure presented in the visual stimuli, so they could not ignore the visual stimuli while judging the auditory stimuli. As a result, loudness and pitch perception were significantly enhanced near the difference limen when the image was getting bigger or brighter, compared with the case in which the image did not change. This indicates that the perception of loudness and pitch is affected by changes in the size and brightness of visual stimuli.

  15. Auditory feedback affects perception of effort when exercising with a Pulley machine

    DEFF Research Database (Denmark)

    Bordegoni, Monica; Ferrise, Francesco; Grani, Francesco

    2013-01-01

    In this paper we describe an experiment that investigates the role of auditory feedback in affecting the perception of effort when using a physical pulley machine. Specifically, we investigated whether variations in the amplitude and frequency content of the pulley sound affect perception of effort. … Results show that variations in frequency content affect the perception of effort.

  16. Critique: auditory form and gestural topology in the perception of speech.

    Science.gov (United States)

    Remez, R E

    1996-03-01

    Some influential accounts of speech perception have asserted that the goal of perception is to recover the articulatory gestures that create the acoustic signal, while others have proposed that speech perception proceeds by a method of acoustic categorization of signal elements. These accounts have been frustrated by difficulties in identifying a set of primitive articulatory constituents underlying speech production, and a set of primitive acoustic-auditory elements underlying speech perception. An argument by Lindblom favors an account of production and perception based on the auditory form of speech and its cognitive elaboration, rejecting the aim of defining a set of articulatory primitives by appealing to theoretical principle, while recognizing the empirical difficulty of identifying a set of acoustic or auditory primitives. An examination of this thesis found opportunities to defend some of its conclusions with independent evidence, but favors a characterization of the constituents of speech perception as linguistic rather than as articulatory or acoustic.

  17. Influence of anxiety, depression and looming cognitive style on auditory looming perception.

    Science.gov (United States)

    Riskind, John H; Kleiman, Evan M; Seifritz, Erich; Neuhoff, John

    2014-01-01

    Previous studies show that individuals with an anticipatory auditory looming bias overestimate the closeness of an approaching sound source. Our present study bridges clinical-cognitive and perception research, and provides evidence that anxiety symptoms and a particular putative cognitive style that creates vulnerability for anxiety (looming cognitive style, or LCS) are related to how people perceive this ecologically fundamental auditory warning signal. The effect of anxiety symptoms on the anticipatory auditory looming bias depends synergistically on the dimension of perceived personal danger assessed by the LCS (physical or social threat). Depression symptoms, in contrast to anxiety symptoms, predict a diminution of the auditory looming bias. The findings broaden our understanding of the links between cognitive-affective states and auditory perception processes and lend further support to past studies providing evidence that the looming cognitive style is related to bias in threat processing. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Brain-speech alignment enhances auditory cortical responses and speech perception.

    Science.gov (United States)

    Saoud, Houda; Josse, Goulven; Bertasi, Eric; Truy, Eric; Chait, Maria; Giraud, Anne-Lise

    2012-01-04

    Asymmetry in auditory cortical oscillations could play a role in speech perception by fostering the triage of information across the two hemispheres. Due to this asymmetry, fast speech temporal modulations relevant for phonemic analysis could be best perceived by the left auditory cortex, while slower modulations conveying vocal and paralinguistic information would be better captured by the right one. It is unclear, however, whether and how early oscillation-based selection influences speech perception. Using a dichotic listening paradigm in human participants, where we provided different parts of the speech envelope to each ear, we show that word recognition is facilitated when the temporal properties of speech match the rhythmic properties of the auditory cortices. We further show that the interaction between the speech envelope and auditory cortical rhythms translates into their level of neural activity (as measured with fMRI). In the left auditory cortex, the level of neural activity related to the stimulus-brain rhythm interaction predicts speech perception facilitation. These data demonstrate that speech interacts with auditory cortical rhythms differently in the right and left auditory cortex, and that in the latter, the interaction directly impacts speech perception performance.

  19. Prolonged maturation of auditory perception and learning in gerbils.

    Science.gov (United States)

    Sarro, Emma C; Sanes, Dan H

    2010-08-01

    In humans, auditory perception reaches maturity over a broad age range, extending through adolescence. Despite this slow maturation, children are considered to be outstanding learners, suggesting that immature perceptual skills might actually be advantageous to improvement on an acoustic task as a result of training (perceptual learning). Previous non-human studies have not employed an identical task when comparing perceptual performance of young and mature subjects, making it difficult to assess learning. Here, we used an identical procedure on juvenile and adult gerbils to examine the perception of amplitude modulation (AM), a stimulus feature that is an important component of most natural sounds. On average, Adult animals could detect smaller fluctuations in amplitude (i.e., smaller modulation depths) than Juveniles, indicating immature perceptual skills in Juveniles. However, the population variance was much greater for Juveniles, a few animals displaying adult-like AM detection. To determine whether immature perceptual skills facilitated learning, we compared naïve performance on the AM detection task with the amount of improvement following additional training. The amount of improvement in Adults correlated with naïve performance: those with the poorest naïve performance improved the most. In contrast, the naïve performance of Juveniles did not predict the amount of learning. Those Juveniles with immature AM detection thresholds did not display greater learning than Adults. Furthermore, for several of the Juveniles with adult-like thresholds, AM detection deteriorated with repeated testing. Thus, immature perceptual skills in young animals were not associated with greater learning. (c) 2010 Wiley Periodicals, Inc.

  20. Comparison of Auditory Perception in Cochlear Implanted Children with and without Additional Disabilities

    Directory of Open Access Journals (Sweden)

    Seyed Basir Hashemi

    2016-05-01

    Background: The number of children with cochlear implants who have other difficulties such as attention deficits and cerebral palsy has increased dramatically. Despite the need for information on the results of cochlear implantation in this group, the available literature is extremely limited. We, therefore, sought to compare the levels of auditory perception in children with cochlear implants with and without additional disabilities. Methods: A spondee test comprising 20 two-syllable words was performed. The data analysis was done using SPSS, version 19. Results: Thirty-one children who had received cochlear implants 2 years previously and had an average age of 7.5 years were compared via the spondee test. Of the 31 children, 15 had one or more additional disabilities. The data analysis indicated that the mean auditory perception score of this group was approximately 30 points below that of the children with cochlear implants who had no additional disabilities. Conclusion: Although there was an improvement in the auditory perception of all the children with cochlear implants, there was a noticeable difference in the level of auditory perception between those with and without additional disabilities. Deafness combined with additional disabilities made the children dependent on lip reading alongside auditory communication. In addition, the level of auditory perception in the children with cochlear implants who had more than one additional disability was significantly lower than that of the other children with cochlear implants who had one additional disability.

  1. The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis.

    Science.gov (United States)

    Patel, Aniruddh D; Iversen, John R

    2014-01-01

    Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This "action simulation for auditory prediction" (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.

  2. The effect of music on auditory perception in cochlear-implant users and normal-hearing listeners

    OpenAIRE

    Fuller, Christina Diechina

    2016-01-01

    Cochlear implants (CIs) are auditory prostheses for severely deaf people who do not benefit from conventional hearing aids. Speech perception is reasonably good with CIs; the perception of other signals such as music is challenging. First, the perception of music and music-related perception in CI users was tested. Second, the possible positive influence of musical training on auditory perception was investigated. The enjoyment of music in CI users was suboptimal. Identifying vocal emotions (angry...

  3. Auditory capture of visual motion: effects on perception and discrimination.

    Science.gov (United States)

    McCourt, Mark E; Leone, Lynnette M

    2016-09-28

    We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.
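
    The counterphasing-Gabor construction rests on a simple identity: a counterphase (flickering) grating is the sum of two opposite-drifting gratings, so unbalancing their contrasts injects net motion energy. With spatial frequency k and temporal frequency \omega:

```latex
c_L \cos(kx + \omega t) + c_R \cos(kx - \omega t)
  = 2c\,\cos(kx)\cos(\omega t) \quad \text{when } c_L = c_R = c,
```

    while any imbalance (e.g., c_R > c_L) leaves a residual drifting component of contrast c_R - c_L in the corresponding direction.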

  4. A Comparison of Auditory Perception in Hearing-Impaired and Normal-Hearing Listeners: An Auditory Scene Analysis Study

    Science.gov (United States)

    Bayat, Arash; Farhadi, Mohammad; Pourbakht, Akram; Sadjedi, Hamed; Emamdjomeh, Hesam; Kamali, Mohammad; Mirmomeni, Golshan

    2013-01-01

    Background: Auditory scene analysis (ASA) is the process by which the auditory system separates individual sounds in natural-world situations. ASA is a key function of the auditory system and contributes to speech discrimination in noisy backgrounds. It is known that sensorineural hearing loss (SNHL) detrimentally affects auditory function in complex environments, but relatively few studies have focused on the influence of SNHL on the higher-level processes likely involved in auditory perception in different situations. Objectives: The purpose of the current study was to compare the auditory abilities of normally hearing and SNHL subjects using an ASA examination. Materials and Methods: A total of 40 right-handed adults (age range: 18-45 years) participated in this study. The listeners were divided equally into control and mild-to-moderate SNHL groups. ASA ability was measured using an ABA-ABA sequence. The frequency of the "A" tone was kept constant at 500, 1000, 2000, or 4000 Hz, while the frequency of the "B" tone was set 3 to 80 percent above the "A" tone. For ASA threshold detection, the frequency of the B stimulus was decreased until listeners reported that they could no longer hear two separate sounds. Results: ASA performance was significantly better for controls than for the SNHL group; these differences were more obvious at higher frequencies. We found no significant differences in ASA ability as a function of tone duration in either group. Conclusions: The present study indicated that SNHL may reduce the perceptual separation of incoming acoustic information needed to form accurate representations of our acoustic world. PMID:24719695
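
    A sketch of the ABA-ABA streaming stimulus follows. The tone and gap durations are illustrative assumptions (the abstract reports only the frequencies and the 3-80% range); threshold tracking would wrap this generator in an adaptive loop.

```python
import numpy as np

def aba_sequence(f_a=1000.0, delta_pct=40.0, n_triplets=5,
                 tone_dur=0.1, gap_dur=0.02, fs=44100):
    """ABA-ABA streaming sequence: 'A' fixed (500-4000 Hz in the study),
    'B' set delta_pct percent above 'A' (3-80% in the study)."""
    f_b = f_a * (1.0 + delta_pct / 100.0)
    t = np.arange(int(tone_dur * fs)) / fs
    gap = np.zeros(int(gap_dur * fs))
    tone = lambda f: np.sin(2 * np.pi * f * t)
    # One ABA triplet plus a silent slot: the classic galloping rhythm.
    triplet = np.concatenate([tone(f_a), gap, tone(f_b), gap,
                              tone(f_a), gap, np.zeros_like(t), gap])
    return np.tile(triplet, n_triplets)

# Threshold search: decrease delta_pct until the listener no longer
# reports hearing two separate streams.
seq = aba_sequence(delta_pct=20.0)
```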

  5. Task-dependent calibration of auditory spatial perception through environmental visual observation.

    Directory of Open Access Journals (Sweden)

    Alessia eTonelli

    2015-06-01

    Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve the precision of the final multisensory estimate. However, the amount or the temporal extent of visual information that is sufficient to influence auditory perception is still unknown. It is therefore interesting to know whether vision can improve auditory precision through a short-term environmental observation preceding the audio task, and whether this influence is task-specific, environment-specific, or both. To test these issues we investigated possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks (minimum audible angle and space bisection) and two acoustically different environments (normal room and anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task but not in the minimum audible angle task after the observation of a normal room. No improvement was found when performing the same task in an anechoic chamber. In addition, no difference was found between a condition of short environment observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration. Echoes may mediate the transfer of information from the visual to the auditory system.

  6. Task-dependent calibration of auditory spatial perception through environmental visual observation.

    Science.gov (United States)

    Tonelli, Alessia; Brayda, Luca; Gori, Monica

    2015-01-01

    Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve precision of the final multisensory estimate. However, the amount or the temporal extent of visual information, that is sufficient to influence auditory perception, is still unknown. It is therefore interesting to know if vision can improve auditory precision through a short-term environmental observation preceding the audio task and whether this influence is task-specific or environment-specific or both. To test these issues we investigate possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (normal room and anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task but not in the MAA after the observation of a normal room. No improvement was found when performing the same task in an anechoic chamber. In addition, no difference was found between a condition of short environment observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration. Echoes may mediate the transfer of information from the visual to the auditory system.

  7. Speech perception using combinations of auditory, visual, and tactile information

    National Research Council Canada - National Science Library

    Blamey, P J; Cowan, R S; Alcantara, J I; Whitford, L A; Clark, G M

    1989-01-01

    Four normally-hearing subjects were trained and tested with all combinations of a highly-degraded auditory input, a visual input via lipreading, and a tactile input using a multichannel electrotactile speech processor...

  8. Feel what you say: an auditory effect on somatosensory perception.

    Science.gov (United States)

    Champoux, François; Shiller, Douglas M; Zatorre, Robert J

    2011-01-01

    In the present study, we demonstrate an audiotactile effect in which amplitude modulation of auditory feedback during voiced speech induces a throbbing sensation over the lip and laryngeal regions. Control tasks coupled with the examination of speech acoustic parameters allow us to rule out the possibility that the effect may have been due to cognitive factors or motor compensatory effects. We interpret the effect as reflecting the tight interplay between auditory and tactile modalities during vocal production.

  9. Feel what you say: an auditory effect on somatosensory perception.

    Directory of Open Access Journals (Sweden)

    François Champoux

    In the present study, we demonstrate an audiotactile effect in which amplitude modulation of auditory feedback during voiced speech induces a throbbing sensation over the lip and laryngeal regions. Control tasks coupled with the examination of speech acoustic parameters allow us to rule out the possibility that the effect may have been due to cognitive factors or motor compensatory effects. We interpret the effect as reflecting the tight interplay between auditory and tactile modalities during vocal production.

  10. Auditory signal processing in communication: perception and performance of vocal sounds.

    Science.gov (United States)

    Prather, Jonathan F

    2013-11-01

    Learning and maintaining the sounds we use in vocal communication require accurate perception of the sounds we hear performed by others and feedback-dependent imitation of those sounds to produce our own vocalizations. Understanding how the central nervous system integrates auditory and vocal-motor information to enable communication is a fundamental goal of systems neuroscience, and insights into the mechanisms of those processes will profoundly enhance clinical therapies for communication disorders. Gaining the high-resolution insight necessary to define the circuits and cellular mechanisms underlying human vocal communication is presently impractical. Songbirds are the best animal model of human speech, and this review highlights recent insights into the neural basis of auditory perception and feedback-dependent imitation in those animals. Neural correlates of song perception are present in auditory areas, and those correlates are preserved in the auditory responses of downstream neurons that are also active when the bird sings. Initial tests indicate that singing-related activity in those downstream neurons is associated with vocal-motor performance as opposed to the bird simply hearing itself sing. Therefore, action potentials related to auditory perception and action potentials related to vocal performance are co-localized in individual neurons. Conceptual models of song learning involve comparison of vocal commands and the associated auditory feedback to compute an error signal that is used to guide refinement of subsequent song performances, yet the sites of that comparison remain unknown. Convergence of sensory and motor activity onto individual neurons points to a possible mechanism through which auditory and vocal-motor signals may be linked to enable learning and maintenance of the sounds used in vocal communication. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013

  11. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    Science.gov (United States)

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Concurrent auditory perception difficulties in older adults with right hemisphere cerebrovascular accident.

    Science.gov (United States)

    Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat

    2014-01-01

    Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. In the present study, we examined whether these problems are due to impairments of the concurrent auditory segregation procedure, the basic level of auditory scene analysis and auditory organization in auditory scenes with competing sounds. Concurrent auditory segregation was assessed using the competing sentence test (CST) and the dichotic digits test (DDT) and compared in 30 male older adults (15 normal and 15 with right hemisphere CVA) in the same age group (60-75 years old). For the CST, participants were presented with a target message in one ear and a competing message in the other. The task was to listen to the target sentence and repeat it back without attending to the competing sentence. For the DDT, the auditory stimuli were monosyllabic digits presented dichotically, and the task was to repeat them. Comparing the mean CST and DDT scores between the CVA patients with right hemisphere impairment and the normal participants showed statistically significant differences (p=0.001 for CST and p<0.0001 for DDT). The present study revealed that the abnormal CST and DDT scores of participants with right hemisphere CVA could be related to concurrent segregation difficulties. These findings suggest that low-level segregation mechanisms and/or high-level attention mechanisms might contribute to the problems.

  13. The effect of music on auditory perception in cochlear-implant users and normal-hearing listeners

    NARCIS (Netherlands)

    Fuller, Christina Diechina

    2016-01-01

    Cochlear implants (CIs) are auditory prostheses for severely deaf people who do not benefit from conventional hearing aids. Speech perception is reasonably good with CIs; the perception of other signals such as music is challenging. First, the perception of music and music-related perception in CI users

  14. The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis

    Science.gov (United States)

    Patel, Aniruddh D.; Iversen, John R.

    2013-01-01

    Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This “action simulation for auditory prediction” (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi. PMID:24860439

  15. The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis.

    Directory of Open Access Journals (Sweden)

    Aniruddh D. Patel

    2014-05-01

    Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This "action simulation for auditory prediction" (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in nonhuman primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.

  16. Auditory-Visual Perception of Changing Distance by Human Infants.

    Science.gov (United States)

    Walker-Andrews, Arlene S.; Lennon, Elizabeth M.

    1985-01-01

    Examines, in two experiments, 5-month-old infants' sensitivity to auditory-visual specification of distance and direction of movement. One experiment presented two films with soundtracks in either a match or mismatch condition; the second showed the two films side-by-side with a single soundtrack appropriate to one. Infants demonstrated visual…

  17. The influence of presentation method on auditory length perception

    DEFF Research Database (Denmark)

    Kirkwood, Brent Christopher

    2005-01-01

    Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length...

  18. Tactile enhancement of auditory and visual speech perception in untrained perceivers

    OpenAIRE

    Gick, Bryan; Jóhannsdóttir, Kristín M.; Gibraiel, Diana; Mühlbauer, Jeff

    2008-01-01

    A single pool of untrained subjects was tested for interactions across two bimodal perception conditions: audio-tactile, in which subjects heard and felt speech, and visual-tactile, in which subjects saw and felt speech. Identifications of English obstruent consonants were compared in bimodal and no-tactile baseline conditions. Results indicate that tactile information enhances speech perception by about 10 percent, regardless of which other mode (auditory or visual) is active. However, withi...

  19. Biases in Visual, Auditory, and Audiovisual Perception of Space.

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2015-12-01

    Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine the presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. The data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only

  20. The influence of the auditory environment on the emotional perception of speech

    NARCIS (Netherlands)

    Brouwers, M.A.J.

    2008-01-01

    In this thesis the influence of the auditory environment on the emotional perception of speech in mediated communication is addressed. The motivation of this study is the development of techniques that enable suppression of environmental sound, with the goal to increase the signal-to-noise ratio in

  1. General Auditory Processing, Speech Perception and Phonological Awareness Skills in Chinese-English Biliteracy

    Science.gov (United States)

    Chung, Kevin K. H.; McBride-Chang, Catherine; Cheung, Him; Wong, Simpson W. L.

    2013-01-01

    This study focused on the associations of general auditory processing, speech perception, phonological awareness and word reading in Cantonese-speaking children from Hong Kong learning to read both Chinese (first language [L1]) and English (second language [L2]). Children in Grades 2-4 (N = 133) participated and were administered…

  2. The perception of prosody and associated auditory cues in early-implanted children: the role of auditory working memory and musical activities.

    Science.gov (United States)

    Torppa, Ritva; Faulkner, Andrew; Huotilainen, Minna; Järvikivi, Juhani; Lipsanen, Jari; Laasonen, Marja; Vainio, Martti

    2014-03-01

    Objective: To study prosodic perception in early-implanted children in relation to auditory discrimination, auditory working memory, and exposure to music. Design: Word and sentence stress perception, discrimination of fundamental frequency (F0), intensity and duration, and forward digit span were measured twice over approximately 16 months. Musical activities were assessed by questionnaire. Study sample: Twenty-one early-implanted and age-matched normal-hearing (NH) children (4-13 years). Results: Children with cochlear implants (CIs) exposed to music performed better than others in stress perception and F0 discrimination. Only this subgroup of implanted children improved with age in word stress perception and intensity discrimination, and improved over time in digit span. Prosodic perception, F0 discrimination and forward digit span in implanted children exposed to music were equivalent to the NH group, but other implanted children performed more poorly. For children with CIs, word stress perception was linked to digit span and intensity discrimination; sentence stress perception was additionally linked to F0 discrimination. Conclusions: Prosodic perception in children with CIs is linked to auditory working memory and aspects of auditory discrimination. Engagement in music was linked to better performance across a range of measures, suggesting that music is a valuable tool in the rehabilitation of implanted children.

  3. Leftward lateralization of auditory cortex underlies holistic sound perception in Williams syndrome.

    Directory of Open Access Journals (Sweden)

    Martina Wengenroth

    BACKGROUND: Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype, including a strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints, we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. METHODOLOGY/PRINCIPAL FINDINGS: Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, the volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have previously been reported for professional musicians. CONCLUSIONS/SIGNIFICANCE: There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or to innate disposition. In this study, musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.

  4. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan eLuo

    2012-05-01

    Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization, while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
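
    A minimal sketch of such a concatenated-FM stimulus follows; the frequency range and linear sweeps are illustrative assumptions, and the published stimuli control spectral content and segment-boundary transitions more carefully.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100

def fm_segment(dur, f_lo=500.0, f_hi=1500.0):
    """One FM segment sweeping linearly between two random frequencies
    drawn from [f_lo, f_hi] (an illustrative frequency range)."""
    n = int(dur * fs)
    f0, f1 = rng.uniform(f_lo, f_hi, size=2)
    inst_f = np.linspace(f0, f1, n)              # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(inst_f) / fs   # integrate to phase
    return np.sin(phase)

def concatenated_stimulus(seg_dur, total_dur=3.0):
    """Concatenate equal-duration FM segments; seg_dur ~0.025 s targets
    the low-gamma scale and ~0.2 s the theta scale."""
    n_seg = int(total_dur / seg_dur)
    return np.concatenate([fm_segment(seg_dur) for _ in range(n_seg)])

gamma_stim = concatenated_stimulus(0.025)   # ~25-ms segments
theta_stim = concatenated_stimulus(0.200)   # ~200-ms segments
```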

  5. Inconsistent Effect of Arousal on Early Auditory Perception.

    Science.gov (United States)

    Bolders, Anna C; Band, Guido P H; Stallen, Pieter Jan M

    2017-01-01

    Mood has been shown to influence cognitive performance. However, little is known about the influence of mood on sensory processing, specifically in the auditory domain. With the current study, we sought to investigate how auditory processing of neutral sounds is affected by the mood state of the listener. This was tested in two experiments by measuring masked auditory detection thresholds before and after a standard mood-induction procedure. In the first experiment (N = 76), mood was induced by imagining a mood-appropriate event combined with listening to mood-inducing music. In the second experiment (N = 80), imagining was combined with affective picture viewing to exclude any possibility of confounding the results with the acoustic properties of the music. In both experiments, the thresholds were determined by means of an adaptive staircase tracking method in a two-interval forced-choice task. Masked detection thresholds were compared between participants in four different moods (calm, happy, sad, and anxious), which enabled differentiation of mood effects along the dimensions of arousal and pleasure. Results of the two experiments were analyzed both separately and in a combined analysis. The first experiment showed that, while there was no impact of pleasure level on the masked threshold, lower arousal was associated with a lower threshold (higher masked sensitivity). However, as indicated by an interaction effect between experiment and arousal, arousal had a different effect on the threshold in Experiment 2, which showed a trend of arousal in the opposite direction. These results show that the effect of arousal on masked auditory sensitivity may depend on the modality of the mood-inducing stimuli. As clear conclusions regarding the genuineness of the arousal effect on the masked threshold cannot be drawn, suggestions for further research that could clarify this issue are provided.
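
    The abstract does not specify which staircase rule was used, so the sketch below is a generic Levitt-style 2-down/1-up tracker for a two-interval forced-choice task; the simulated listener at the end is purely illustrative.

```python
import random

def two_down_one_up(trial_fn, start_level=60.0, step=2.0, n_reversals=8):
    """Generic 2-down/1-up adaptive staircase for a two-interval
    forced-choice task; it converges on ~70.7% correct (Levitt, 1971).
    trial_fn(level) runs one 2IFC trial and returns True if correct."""
    level, n_correct, direction, reversals = start_level, 0, 0, []
    while len(reversals) < n_reversals:
        if trial_fn(level):
            n_correct += 1
            if n_correct == 2:          # two correct in a row -> harder
                n_correct = 0
                if direction == +1:     # record direction changes
                    reversals.append(level)
                direction = -1
                level -= step
        else:                           # any error -> easier
            n_correct = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals[-6:]) / 6      # mean of the last six reversals

# Simulated listener whose detectability grows with signal level:
threshold = two_down_one_up(
    lambda lv: random.random() < min(0.99, max(0.5, 0.5 + (lv - 50) / 20)))
```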

  6. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    Science.gov (United States)

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners; specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Mandarin Chinese vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese, and it contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody. © The Author(s) 2016.

  7. (A)musicality in Williams syndrome: Examining relationships among auditory perception, musical skill, and emotional responsiveness to music

    Directory of Open Access Journals (Sweden)

    Miriam eLense

    2013-08-01

    Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and typically developing individuals with and without amusia.

  8. Auditory processing disorder and speech perception problems in noise: finding the underlying origin.

    Science.gov (United States)

    Lagacé, Josée; Jutras, Benoît; Gagné, Jean-Pierre

    2010-06-01

    A hallmark listening problem of individuals presenting with auditory processing disorder (APD) is their poor recognition of speech in noise. The underlying perceptual problem of the listening difficulties in unfavorable listening conditions is unknown. The objective of this article was to demonstrate theoretically how to determine whether the speech recognition problems are related to an auditory dysfunction, a language-based dysfunction, or a combination of both. Tests such as the Speech Perception in Noise (SPIN) test allow the exploration of the auditory and language-based functions involved in speech perception in noise, which is not possible with most other speech-in-noise tests. Psychometric functions illustrating results from hypothetical groups of individuals with APD on the SPIN test are presented. This approach makes it possible to postulate about the origin of the speech perception problems in noise. APD is a complex and heterogeneous disorder for which the underlying deficit is currently unclear. Because of their design, SPIN-like tests can potentially be used to identify the nature of the deficits underlying problems with speech perception in noise for this population. A better understanding of the difficulties with speech perception in noise experienced by many listeners with APD should lead to more efficient intervention programs.

  9. A loudspeaker-based room auralization system for auditory perception research

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Favrot, Sylvain Emmanuel

    2009-01-01

    The loudspeaker-based room auralization (LoRA) system provides a flexible research platform for conducting auditory experiments with normal-hearing, hearing-impaired, and aided hearing-impaired listeners in a fully controlled and realistic environment. This includes measures of basic auditory function (e.g., signal detection, distance perception) and measures of speech intelligibility. A battery of objective tests (e.g., reverberation time, clarity, interaural correlation coefficient) and subjective tests (e.g., speech reception thresholds) is presented that demonstrates the applicability of the LoRA system.
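
The objective measures named above (reverberation time, clarity) are standard quantities derived from a measured room impulse response. The sketch below shows the textbook computations (Schroeder backward integration for a T30-based RT60 estimate, and an early-to-late energy ratio for C50) applied to a synthetic impulse response; it is illustrative only and not the LoRA system's own code.

```python
import numpy as np

def schroeder_curve(ir):
    """Backward-integrated energy decay curve (Schroeder integration), in dB."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]
    return 10 * np.log10(energy / energy[0])

def rt60_from_ir(ir, fs):
    """T30 estimate: fit the -5..-35 dB span of the decay, extrapolate to -60 dB."""
    edc = schroeder_curve(ir)
    t = np.arange(len(ir)) / fs
    mask = (edc <= -5) & (edc >= -35)
    slope, _ = np.polyfit(t[mask], edc[mask], 1)   # decay rate in dB/s
    return -60.0 / slope

def clarity_c50(ir, fs):
    """C50: ratio of early (<50 ms) to late energy, in dB."""
    n = int(0.050 * fs)
    return 10 * np.log10(np.sum(ir[:n] ** 2) / np.sum(ir[n:] ** 2))

# Toy impulse response: exponentially decaying noise with ~0.5 s RT60.
fs = 16000
t = np.arange(int(1.0 * fs)) / fs
ir = np.random.default_rng(1).standard_normal(len(t)) * 10 ** (-3 * t / 0.5)
print(f"RT60 ~ {rt60_from_ir(ir, fs):.2f} s, C50 ~ {clarity_c50(ir, fs):.1f} dB")
```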

  10. Auditory Perception and Production of Speech Feature Contrasts by Pediatric Implant Users.

    Science.gov (United States)

    Mahshie, James; Core, Cynthia; Larsen, Michael D

    2015-01-01

    The aim of the present research is to examine the relations between auditory perception and production of specific speech contrasts by children with cochlear implants (CIs) who received their implants before 3 years of age, and to examine the hierarchy of abilities for perception and production of consonant and vowel features. The following features were examined: vowel height, vowel place, consonant place of articulation (front and back), continuance, and consonant voicing. Fifteen children (mean age = 4;0, range 3;2 to 5;11) with a minimum of 18 months of experience with their implants and no additional known disabilities served as participants. Perception of feature contrasts was assessed using a modification of the Online Imitative Speech Pattern Contrast test, which uses imitation to assess speech feature perception. Production was examined by having the children name a series of pictures containing consonant and vowel segments that reflected contrasts of each feature. For five of the six feature contrasts, production accuracy was higher than perception accuracy. There was also a significant and positive correlation between accuracy of production and auditory perception for each consonant feature. This correlation was not found for vowels, owing largely to the overall high perception and production scores attained on the vowel features. The children perceived vowel feature contrasts more accurately than consonant feature contrasts. On average, the children had lower perception scores for Back Place and Continuance feature contrasts than for Anterior Place and Voicing contrasts. For all features, the median production scores were 100%; the majority of the children were able to accurately and consistently produce the feature contrasts. The mean production scores for features reflect greater score variability for consonant feature production than for vowel features. Back Place of articulation for back consonants and Continuance contrasts appeared to be the

  11. Maintaining realism in auditory length-perception experiments

    DEFF Research Database (Denmark)

    Kirkwood, Brent Christopher

    2005-01-01

    Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length-estimation experiment. A comparison of the length-estimation accuracy for the three presentation methods indicates that the choice of presentation method is important for maintaining realism and for maintaining the acoustic cues utilized by listeners in perceiving length.

  12. Tactile-auditory speech perception by unimodally and bimodally trained normal-hearing subjects.

    Science.gov (United States)

    Alcántara, J I; Blamey, P J; Clark, G M

    1993-03-01

    The following study compared the effectiveness of unimodal and bimodal training strategies at improving the perception of speech information under a variety of conditions. Normal-hearing subjects were trained in the perception of vowel and consonant stimuli. Speech information was provided via a multiple-channel electrotactile speech-processing aid (the Tickle Talker), a 200-Hz low-pass filtered auditory signal, or both. Two subjects were trained only in the combined tactile-plus-auditory (TA) condition; the remaining two were trained in both the tactile-alone (T) and auditory-alone (A) conditions, with only one condition used at any single time. All subjects were evaluated in the TA, T, and A conditions, both at the beginning of the study, prior to training, and at the completion of training, on closed-set vowel and consonant confusion tests and on an open-set word test. Results indicated that whilst statistically significant improvements occurred from one evaluation period to the next in both groups of subjects, the improvements per condition were not dependent on the type of training received. The results provide a preliminary indication that the provision of unimodal training does not impair the perception of speech information under bimodal perception conditions.

  13. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

    We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.

  14. A psychophysical imaging method evidencing auditory cue extraction during speech perception: a group analysis of auditory classification images.

    Science.gov (United States)

    Varnet, Léo; Knoblauch, Kenneth; Serniclaes, Willy; Meunier, Fanny; Hoen, Michel

    2015-01-01

    Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique that allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses on the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization with two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.
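
To make the classification-image idea concrete, here is a toy sketch in which a penalized logistic regression recovers a hypothetical listener template from trial-by-trial noise fields. A plain L2 penalty stands in for the smoothness priors used in the actual technique, and all data, dimensions, and labels are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_freq, n_time = 2000, 16, 20

# Hypothetical "true" template: the listener weights one time-frequency
# region (e.g., a formant-onset region) when deciding between two phonemes.
template = np.zeros((n_freq, n_time))
template[9:12, 5:9] = 1.0

# Each trial adds a random noise field to the stimulus; the simulated
# response follows the noise whenever it resembles the template,
# plus some internal noise.
noise = rng.standard_normal((n_trials, n_freq, n_time))
drive = (noise * template).sum(axis=(1, 2)) + 0.5 * rng.standard_normal(n_trials)
responses = (drive > 0).astype(int)

# The classification image is the weight map of a penalized GLM predicting
# the response from the trial-by-trial noise (L2 here, not smoothness priors).
glm = LogisticRegression(C=0.05)
glm.fit(noise.reshape(n_trials, -1), responses)
cimage = glm.coef_.reshape(n_freq, n_time)
print("peak weight at (freq, time) bin:",
      np.unravel_index(np.abs(cimage).argmax(), cimage.shape))
```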

  15. Auditory Perceptual Learning for Speech Perception Can Be Enhanced by Audiovisual Training

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2013-03-01

    Speech perception under audiovisual conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how audiovisual training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures in a protocol with a fixed number of trials. In Experiment 1, paired-associates (PA) audiovisual (AV) training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early audiovisual speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.

  16. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training.

    Science.gov (United States)

    Bernstein, Lynne E; Auer, Edward T; Eberhardt, Silvio P; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.

  18. Age differences in visual-auditory self-motion perception during a simulated driving task

    Directory of Open Access Journals (Sweden)

    Robert eRamkhalawansingh

    2016-04-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.
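
Findings like these are often discussed against the standard maximum-likelihood cue-combination benchmark, in which cues are weighted by their reliabilities and the fused estimate has lower variance than either cue alone. A minimal sketch of that benchmark with illustrative numbers follows; the paper itself reports speed variability and does not fit this model.

```python
# Illustrative single-cue variances of a speed estimate (arbitrary units).
var_visual, var_auditory = 4.0, 9.0

# Reliability-weighted fusion: weights proportional to inverse variance.
w_visual = (1 / var_visual) / (1 / var_visual + 1 / var_auditory)
w_auditory = 1 - w_visual

# Predicted variance of the fused estimate is below either single cue's.
var_combined = 1 / (1 / var_visual + 1 / var_auditory)

print(f"visual weight = {w_visual:.2f}, auditory weight = {w_auditory:.2f}")
print(f"combined variance = {var_combined:.2f} "
      f"(vs. best single cue = {min(var_visual, var_auditory):.2f})")
```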

  19. Direct-location versus verbal report methods for measuring auditory distance perception in the far field.

    Science.gov (United States)

    Etchemendy, Pablo E; Spiousas, Ignacio; Calcagno, Esteban R; Abregú, Ezequiel; Eguia, Manuel C; Vergara, Ramiro O

    2017-08-07

    In this study, we evaluated whether a direct-location method is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL), since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located 1 to 6 m from the listener. The first compared the perceived distances obtained using either the CMDL device or verbal report (VR), the response method most frequently used for reporting auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants reported visual distance estimates to the visual marker, which were found to be highly accurate. We then asked the same group of participants to report VR estimates of auditory distance and found that the spatial visual information obtained from the previous task did not influence their reports. Finally, Experiment 3 compared the same responses as Experiment 1 but with the methods interleaved, showing a weak but complex mutual influence. However, the estimates obtained with each method remained statistically different. Our results show that the auditory distance psychophysical functions obtained with the CMDL method are less susceptible to the underestimation previously reported for distances over 2 m.
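
Response compression of this kind is commonly summarized by fitting a power function, r = k * d**a, to perceived versus physical distance, with an exponent a < 1 indicating compression. The sketch below performs such a fit on made-up numbers for the two response methods; it illustrates the analysis style, not the paper's data.

```python
import numpy as np

# Made-up perceived distances (m) for physical source distances of 1-6 m;
# a compressive pattern like the verbal-report one is typical of far-field data.
d = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
r_verbal = np.array([1.1, 1.9, 2.5, 3.0, 3.3, 3.6])
r_cmdl = np.array([1.0, 2.0, 2.9, 3.7, 4.6, 5.3])

def fit_power_law(d, r):
    """Fit r = k * d**a by linear regression in log-log coordinates."""
    a, log_k = np.polyfit(np.log(d), np.log(r), 1)
    return np.exp(log_k), a

for name, r in [("verbal report", r_verbal), ("CMDL", r_cmdl)]:
    k, a = fit_power_law(d, r)
    print(f"{name}: k = {k:.2f}, exponent a = {a:.2f} (a < 1 => compression)")
```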

  20. [Are visual and auditory perception modified in psychopathological assessment of expressive markers of psychiatric patients?].

    Science.gov (United States)

    Polzer, U; Gaebel, W

    1993-03-01

    The assessment of nonverbal expression (e.g., facial action, speech, body movements) is an important aspect of the diagnostic and prognostic process in psychiatric patients. By means of observer rating scales, expression is usually assessed on different observation levels. It appears that visual and auditory perception of expression interfere with one another. In the present study, it was demonstrated that ratings of certain attributes of expression were significantly more inconsistent in schizophrenic than in depressed patients, provided information was simultaneously displayed to both the visual and auditory channels of perception. A "disintegration" of the components of expression in schizophrenics may explain why raters get differing impressions of the patient's overall expression. Moreover, the description of expressive behaviors seems to be influenced by diagnostic stereotypes. The development of a more objective method of assessment would therefore be promising.

  1. Auditory processing and speech perception in children with specific language impairment: relations with oral language and literacy skills.

    Science.gov (United States)

    Vandewalle, Ellen; Boets, Bart; Ghesquière, Pol; Zink, Inge

    2012-01-01

    This longitudinal study investigated temporal auditory processing (frequency modulation and between-channel gap detection) and speech perception (speech-in-noise and categorical perception) in three groups of children, aged 6 years 3 months to 6 years 8 months, attending grade 1: (1) children with specific language impairment (SLI) and literacy delay (n = 8), (2) children with SLI and normal literacy (n = 10), and (3) typically developing children (n = 14). Moreover, the relations between these auditory processing and speech perception skills and oral language and literacy skills in grade 1 and grade 3 were analyzed. The SLI group with literacy delay scored significantly lower than both other groups on speech perception, but not on temporal auditory processing. The two normal reading groups did not differ in terms of speech perception or auditory processing. Speech perception was significantly related to reading and spelling in grades 1 and 3 and made a unique predictive contribution to reading growth in grade 3, even after controlling for reading level, phonological ability, auditory processing, and oral language skills in grade 1. These findings indicated that speech perception also had a unique direct impact upon reading development, and not only through its relation with phonological awareness. Moreover, speech perception seemed to be more associated with the development of literacy skills and less with oral language ability. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Auditory Sensitivity, Speech Perception, and Reading Development and Impairment

    Science.gov (United States)

    Zhang, Juan; McBride-Chang, Catherine

    2010-01-01

    While the importance of phonological sensitivity for understanding reading acquisition and impairment across orthographies is well documented, what underlies deficits in phonological sensitivity is not well understood. Some researchers have argued that speech perception underlies variability in phonological representations. Others have…

  3. Beat Gestures Modulate Auditory Integration in Speech Perception

    Science.gov (United States)

    Biau, Emmanuel; Soto-Faraco, Salvador

    2013-01-01

    Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words…

  4. Auditory and cognitive effects of aging on perception of environmental sounds in natural auditory scenes.

    Science.gov (United States)

    Gygi, Brian; Shafiro, Valeriy

    2013-10-01

    Previously, Gygi and Shafiro (2011) found that when environmental sounds are semantically incongruent with the background scene (e.g., a horse galloping in a restaurant), they can be identified more accurately by young normal-hearing listeners (YNH) than sounds congruent with the scene (e.g., a horse galloping at a racetrack). This study investigated how age and high-frequency audibility affect this Incongruency Advantage (IA) effect. In Experiments 1a and 1b, elderly listeners (N = 18 for 1a; N = 10 for 1b) with age-appropriate hearing (EAH) were tested on target sounds and auditory scenes at 5 sound-to-scene ratios (So/Sc) between -3 and -18 dB. Experiment 2 tested 11 YNH listeners on the same sound-scene pairings lowpass-filtered at 4 kHz (YNH-4k). The EAH and YNH-4k groups exhibited an almost identical pattern of significant IA effects, but both at approximately 3.9 dB higher So/Sc than the previously tested YNH listeners. However, the psychometric functions revealed a shallower slope for EAH listeners compared with YNH listeners for the congruent stimuli only, suggesting a greater difficulty for the EAH listeners in attending to sounds expected to occur in a scene. These findings indicate that semantic relationships between environmental sounds in soundscapes are mediated by both audibility and cognitive factors and suggest a method for dissociating these factors.
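
Setting a sound-to-scene ratio (So/Sc) of, for example, -18 dB amounts to scaling the target so that its RMS level sits 18 dB below the scene's RMS before mixing. A minimal sketch, with toy signals standing in for the recorded environmental sounds and scenes:

```python
import numpy as np

def mix_at_ratio(target, scene, so_sc_db):
    """Scale target so its RMS sits so_sc_db dB relative to the scene RMS, then mix."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = 10 ** (so_sc_db / 20) * rms(scene) / rms(target)
    return gain * target + scene

fs = 16000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440 * t)                  # toy "target sound"
scene = np.random.default_rng(2).standard_normal(fs)  # toy "auditory scene"

for so_sc in (-3, -9, -18):
    mixed = mix_at_ratio(target, scene, so_sc)
    print(f"So/Sc = {so_sc:+d} dB -> mixture RMS = {np.sqrt(np.mean(mixed**2)):.2f}")
```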

  5. Auditory-tactile speech perception in congenitally blind and sighted adults.

    Science.gov (United States)

    Sato, Marc; Cavé, Christian; Ménard, Lucie; Brasseur, Annie

    2010-10-01

    The present study investigated whether manual tactile information from a speaker's face modulates the intelligibility of speech when audio-tactile perception is compared with audio-only perception. Since more elaborate auditory and tactile skills have been reported in the blind, two groups of congenitally blind and sighted adults were compared. Participants performed a forced-choice syllable decision task across three conditions: audio-only and congruent/incongruent audio-tactile conditions. For the auditory modality, the syllables were either embedded in noise or presented in quiet, while for the tactile modality, participants felt a syllable being mouthed, in synchrony with the audio, by placing a hand on the face of a talker. In the absence of acoustic noise, syllables were almost perfectly recognized in all conditions. On the contrary, with syllables embedded in acoustic noise, more correct responses were reported in the case of congruent mouthing compared to no mouthing, and in the case of no mouthing compared to incongruent mouthing. Interestingly, no perceptual differences were observed between blind and sighted adults. These findings demonstrate that manual tactile information relevant to recovering speech gestures modulates auditory speech perception in the case of degraded acoustic information, and that audio-tactile interactions occur similarly in blind and sighted untrained listeners. Copyright © 2010 Elsevier Ltd. All rights reserved.

  6. An auditory neural correlate suggests a mechanism underlying holistic pitch perception.

    Directory of Open Access Journals (Sweden)

    Daryl Wile

    Current theories of auditory pitch perception propose that cochlear place (spectral) and activity timing pattern (temporal) information are somehow combined within the brain to produce holistic pitch percepts, yet the neural mechanisms for integrating these two kinds of information remain obscure. To examine this process in more detail, stimuli made up of three pure tones, whose components are individually resolved by the peripheral auditory system but that nonetheless elicit a holistic, "missing fundamental" pitch percept, were played to human listeners. A technique was used to separate neural timing activity related to individual components of the tone complexes from timing activity related to an emergent feature of the complex (the envelope), and the region of the tonotopic map where information could originate from was simultaneously restricted by masking noise. Pitch percepts were mirrored to a very high degree by a simple combination of component-related and envelope-related neural responses with similar timing that originate within higher-frequency regions of the tonotopic map where stimulus components interact. These results suggest a coding scheme for holistic pitches whereby limited regions of the tonotopic map (spectral places) carrying envelope- and component-related activity with similar timing patterns selectively provide a key source of neural pitch information. A similar mechanism of integration between local and emergent object properties may contribute to holistic percepts in a variety of sensory systems.
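
A missing-fundamental stimulus of the kind described, three resolved pure-tone components with the fundamental itself absent, is straightforward to synthesize. The component numbers below are illustrative and not those used in the study:

```python
import numpy as np

fs = 44100
t = np.arange(int(0.5 * fs)) / fs

# Harmonics 4-6 of a 200-Hz fundamental (800, 1000, 1200 Hz): each component
# is resolved by the cochlea, yet the fundamental itself is absent and
# listeners typically report a 200-Hz "missing fundamental" pitch.
f0 = 200.0
harmonics = [4, 5, 6]
complex_tone = sum(np.sin(2 * np.pi * n * f0 * t) for n in harmonics)
complex_tone /= np.max(np.abs(complex_tone))   # normalize to +/-1

# The envelope of the summed waveform beats at the component spacing, which
# equals f0; this periodicity is one of the temporal cues the study relates
# to holistic pitch.
print(f"component spacing / envelope periodicity: {f0:.0f} Hz")
```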

  7. The grain size of auditory mismatch response in speech perception

    Science.gov (United States)

    Zhang, Yang; Kuhl, Patricia; Imada, Toshiaki; Kotani, Makoto

    2005-09-01

    This phonetic study examined neural encoding of within- and cross-category information as a function of language experience. Behavioral and magnetoencephalography (MEG) measures for synthetic /ba-wa/ and /ra-la/ stimuli were obtained from ten American and ten Japanese subjects. The MEG experiments employed the oddball paradigm in two conditions. One condition used single exemplars to represent the phonetic categories, and the other introduced within-category variations for both the standard and deviant stimuli. Behavioral results showed three major findings: (a) a robust phonetic boundary effect was observed only in the native listeners; (b) all listeners were able to detect within-category differences on an acoustic basis; and (c) both within- and cross-category discriminations were strongly influenced by language experience. Consistent with the behavioral findings, American listeners had larger mismatch field (MMF) responses for /ra-la/ in both conditions but not for /ba-wa/ in either. Moreover, American listeners showed a significant MMF reduction in encoding within-category variations for /ba-wa/ but not for /ra-la/, and Japanese listeners had MMF reductions for both. These results strongly suggest that the grain size of the auditory mismatch response is determined not only by experience-dependent phonetic knowledge, but also by the specific characteristics of the speech stimuli. [Work supported by NIH.]

  8. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm.

    Science.gov (United States)

    Jenson, David; Harkrider, Ashley W; Thornton, David; Bowers, Andrew L; Saltuklaroglu, Tim

    2015-01-01

    Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < 0.05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.
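
For orientation, here is a hedged sketch of an ICA-plus-time-frequency pipeline of the general kind described, written with the MNE-Python library. The file name, event coding, and every parameter are placeholders, and this is an analogous workflow rather than the authors' own analysis code.

```python
import numpy as np
import mne

# 'speech_task-raw.fif' is a placeholder for an assumed preprocessed
# 68-channel recording with trigger events marking stimulus onsets.
raw = mne.io.read_raw_fif("speech_task-raw.fif", preload=True)
events = mne.find_events(raw)

# 1) ICA to isolate component activity (e.g., putative "auditory" alpha
# components that source analysis might localize to pSTG/pMTG).
ica = mne.preprocessing.ICA(n_components=30, random_state=0)
ica.fit(raw.copy().filter(1.0, 40.0))
sources = ica.get_sources(raw)

# 2) ERSP-style time-frequency analysis around stimulus onset: event-related
# increases (ERS) or decreases (ERD) in alpha-band power across time.
epochs = mne.Epochs(raw, events, tmin=-0.5, tmax=1.5,
                    baseline=(-0.5, 0.0), preload=True)
freqs = np.arange(6.0, 15.0, 1.0)               # covers the alpha band
power = mne.time_frequency.tfr_morlet(epochs, freqs=freqs,
                                      n_cycles=freqs / 2.0, return_itc=False)
power.plot(baseline=(-0.5, 0.0), mode="logratio")  # change vs. baseline
```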

  9. Using auditory classification images for the identification of fine acoustic cues used in speech perception.

    Directory of Open Access Journals (Sweden)

    Léo eVarnet

    2013-12-01

    An essential step in understanding the processes underlying the general mechanism of perceptual categorization is to identify which portions of a physical stimulation modulate the behavior of our perceptual system. More specifically, in the context of speech comprehension, it is still a major open challenge to understand which information is used to categorize a speech stimulus as one phoneme or another, the auditory primitives relevant for the categorical perception of speech being still unknown. Here we propose to adapt a technique relying on a Generalized Linear Model (GLM) with smoothness priors, already used in the visual domain for the estimation of so-called classification images, to auditory experiments. This statistical model offers a rigorous framework for dealing with non-Gaussian noise, as is often the case in the auditory modality, and limits the amount of noise in the estimated template by enforcing a smoother solution. By applying this technique to a two-alternative forced-choice experiment between the stimuli 'aba' and 'ada' in noise with an adaptive SNR, we confirm that the second formant transition is key for classifying phonemes into /b/ or /d/ in noise, and that its estimation by the auditory system is a relative measurement across spectral bands and in relation to the perceived height of the second formant in the preceding syllable. Through this example, we show how the GLM with smoothness priors approach can be applied to the identification of fine functional acoustic cues in speech perception. Finally, we discuss some assumptions of the model in the specific case of speech perception.

  10. Training Level Does Not Affect Auditory Perception of The Magnitude of Ball Spin in Table Tennis.

    Science.gov (United States)

    Santos, Daniel P R; Barbosa, Roberto N; Vieira, Luiz H P; Santiago, Paulo R P; Zagatto, Alessandro M; Gomes, Matheus M

    2017-01-01

    Identifying the trajectory and spin of the ball with speed and accuracy is critical for good performance in table tennis. The aim of this study was to analyze the ability of table tennis players presenting different levels of training/experience to identify the magnitude of the ball spin from the sound produced when the racket hit the ball. Four types of "forehand" contact sounds were collected in the laboratory, defined as: Fast Spin (spinning ball forward at 140 r/s); Medium Spin (105 r/s); Slow Spin (84 r/s); and Flat Hit (less than 60 r/s). Thirty-four table tennis players of both sexes (24 men and 10 women) aged 18-40 years listened to the sounds and tried to identify the magnitude of the ball spin. The results revealed that in 50.9% of the cases the table tennis players were able to identify the ball spin, and the observed number of correct answers (10.2) was significantly higher (χ² = 270.4, p < 0.05) than the number of correct answers that could occur by chance. On the other hand, the results did not show any relationship between the level of training/experience and auditory perception of the ball spin. This indicates that auditory information contributes to identification of the magnitude of the ball spin; however, it also reveals that, in table tennis, the level of training does not interfere with the auditory perception of the ball spin.
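
The comparison against chance can be reproduced in outline. With four response alternatives, chance is 25%; the counts below are inferred from the reported mean of 10.2 correct answers under an assumed 20 trials per listener, so they are assumptions and will not reproduce the paper's exact statistic.

```python
from scipy.stats import chisquare, binomtest

# Assumed design: 34 listeners, 20 presentations each, 4 spin categories
# (chance = 25%). Counts are illustrative, inferred from the reported
# mean of 10.2 correct answers out of an assumed 20 trials.
n_listeners, n_trials, p_chance = 34, 20, 0.25
n_total = n_listeners * n_trials

observed_correct = round(0.509 * n_total)        # 50.9% correct overall
observed = [observed_correct, n_total - observed_correct]
expected = [p_chance * n_total, (1 - p_chance) * n_total]

result = chisquare(observed, expected)
print(f"chi-square = {result.statistic:.1f}, p = {result.pvalue:.3g}")

# An exact binomial test asks the same question about the correct-answer count.
print(binomtest(observed_correct, n_total, p_chance, alternative="greater"))
```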

  12. Sources of variability in consonant perception and their auditory correlates

    DEFF Research Database (Denmark)

    Zaar, Johannes; Dau, Torsten

    2015-01-01

    Responses obtained in consonant perception experiments typically show a large variability across stimuli of the same phonetic identity. The present study investigated the influence of different potential sources of this response variability. A distinction was made between source-induced variability, referring to perceptual differences caused by acoustical differences in the speech tokens and/or the masking noise tokens, and receiver-related variability, referring to perceptual differences caused by within- and across-listener uncertainty. Two experiments were conducted with normal-hearing listeners using consonant-vowel combinations (CVs) in white noise. The responses were analyzed with respect to the different sources of variability based on a measure of perceptual distance. The speech-induced variability across and within talkers and the across-listener variability were substantial.

  13. Auditory Cortical Deactivation during Speech Production and following Speech Perception: An EEG investigation of the temporal dynamics of the auditory alpha rhythm

    Directory of Open Access Journals (Sweden)

    David E Jenson

    2015-10-01

    Sensorimotor integration within the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of EEG data to describe anterior sensorimotor (e.g., premotor cortex; PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required 'active' discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral 'auditory' alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < .05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions also temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.

  15. Hierarchical organization of speech perception in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Colin eHumphries

    2014-12-01

    Human speech consists of a variety of articulated sounds that vary dynamically in spectral composition. We investigated the neural activity associated with the perception of two types of speech segments: (a) the period of rapid spectral transition occurring at the beginning of a stop-consonant vowel (CV) syllable and (b) the subsequent spectral steady-state period occurring during the vowel segment of the syllable. Functional magnetic resonance imaging (fMRI) was recorded while subjects listened to series of synthesized CV syllables and non-phonemic control sounds. Adaptation to specific sound features was measured by varying either the transition or steady-state periods of the synthesized sounds. Two spatially distinct brain areas in the superior temporal cortex were found that were sensitive to either the type of adaptation or the type of stimulus. In a relatively large section of the bilateral dorsal superior temporal gyrus (STG), activity varied as a function of adaptation type regardless of whether the stimuli were phonemic or non-phonemic. Immediately adjacent to this region, in a more limited area of the ventral STG, increased activity was observed for phonemic trials compared to non-phonemic trials; however, no adaptation effects were found. In addition, a third area in the bilateral medial superior temporal plane showed increased activity to non-phonemic compared to phonemic sounds. The results suggest a multi-stage hierarchical stream for speech sound processing extending ventrolaterally from the superior temporal plane to the superior temporal sulcus. At successive stages in this hierarchy, neurons code for increasingly more complex spectrotemporal features. At the same time, these representations become more abstracted from the original acoustic form of the sound.

  16. Sensory Entrainment Mechanisms in Auditory Perception: Neural Synchronization Cortico-Striatal Activation

    Science.gov (United States)

    Sameiro-Barbosa, Catia M.; Geiser, Eveline

    2016-01-01

    The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations, and potentially, neural activation in the cortico-striatal system. PMID:27559306

  17. Influence of rhythmic grouping on duration perception: a novel auditory illusion.

    Directory of Open Access Journals (Sweden)

    Eveline Geiser

    This study investigated a potential auditory illusion in duration perception induced by rhythmic temporal contexts. Listeners with or without musical training performed a duration discrimination task for a silent period in a rhythmic auditory sequence. The critical temporal interval was presented either within a perceptual group or between two perceptual groups. We report the just-noticeable difference (difference limen, DL) for temporal intervals and the point of subjective equality (PSE) derived from individual psychometric functions based on performance of a two-alternative forced-choice task. In musically untrained individuals, equal temporal intervals were perceived as significantly longer when presented between perceptual groups than within a perceptual group (109.25% versus 102.5% of the standard duration). Only the perceived duration of the between-group interval was significantly longer than its objective duration. Musically trained individuals did not show this effect. However, in both musically trained and untrained individuals, the relative difference limens for discriminating the comparison interval from the standard interval were larger in the between-groups condition than in the within-group condition (7.3% vs. 5.6% of the standard duration). Thus, rhythmic grouping affected sensitivity to duration changes in all listeners, with duration differences being harder to detect at boundaries of rhythm groups than within rhythm groups. Our results show for the first time that temporal Gestalt induces auditory duration illusions in typical listeners, but that musical experts are not susceptible to this effect of rhythmic grouping.
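
The DL and PSE are read off a fitted psychometric function. A small sketch with made-up two-alternative forced-choice proportions, fitting a cumulative Gaussian and deriving the PSE (the 50% point) and a DL based on the 25-75% points:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Made-up 2AFC data: proportion of "comparison longer" responses as a
# function of comparison duration, as a percentage of the standard interval.
duration = np.array([90.0, 95.0, 100.0, 105.0, 110.0, 115.0, 120.0])
p_longer = np.array([0.05, 0.15, 0.35, 0.60, 0.80, 0.93, 0.98])

def psychometric(x, pse, sigma):
    """Cumulative Gaussian: PSE is the 50% point, sigma sets the slope."""
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, duration, p_longer, p0=(105.0, 5.0))

# One common DL definition: half the 25%-75% spread, i.e., sigma * z(0.75).
dl = sigma * norm.ppf(0.75)
print(f"PSE = {pse:.1f}% of the standard, DL = {dl:.1f}%")
```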

  18. Speech perception in users of hearing aid with auditory neuropathy spectrum disorder.

    Science.gov (United States)

    Fernandes, Nayara Freitas; Yamaguti, Elisabete Honda; Morettin, Marina; Costa, Orozimbo Alves

    2016-01-01

    To analyze speech perception in children with prelingual hearing loss and auditory neuropathy spectrum disorder who use bilateral hearing aids. This is a descriptive and exploratory study carried out at the Audiological Research Center (HRAC/USP). The study included four children aged between 8 years and 3 months and 12 years and 2 months. Lists of monosyllabic words, disyllabic words, nonsense words, and sentences; the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS); the Meaningful Use of Speech Scale (MUSS); and hearing and language categories were used. All lists were presented in an acoustic booth through loudspeakers, in free field, in quiet. The results showed an average of 69.5% for the list of monosyllabic words, 87.75% for the list of disyllabic words, 89.92% for the list of nonsense syllables, and 92.5% for the list of sentences. The therapeutic process that includes the use of bilateral hearing aids was highly satisfactory, since it allowed the maximum development of auditory skills.

  19. The complementary roles of auditory and motor information evaluated in a Bayesian perceptuo-motor model of speech perception.

    Science.gov (United States)

    Laurent, Raphaël; Barnaud, Marie-Lou; Schwartz, Jean-Luc; Bessière, Pierre; Diard, Julien

    2017-10-01

    There is a consensus concerning the view that both auditory and motor representations intervene in the perceptual processing of speech units. However, the question of the functional role of each of these systems is seldom addressed and remains poorly understood. We capitalized on the formal framework of Bayesian Programming to develop COSMO (Communicating Objects using Sensory-Motor Operations), an integrative model that allows principled comparisons of purely motor or purely auditory implementations of a speech perception task and tests the gain in efficiency provided by their Bayesian fusion. Here, we show 3 main results: (a) In a set of precisely defined "perfect conditions," auditory and motor theories of speech perception are indistinguishable; (b) When a learning process that mimics speech development is introduced into COSMO, it departs from these perfect conditions. Auditory recognition then becomes more efficient than motor recognition in dealing with learned stimuli, while motor recognition is more efficient in adverse conditions. We interpret this result as a general "auditory-narrowband versus motor-wideband" property; and (c) Simulations of plosive-vowel syllable recognition reveal possible cues from motor recognition for the invariant specification of the place of plosive articulation in context that are lacking in the auditory pathway. This provides COSMO with a second property, whereby auditory cues would be more efficient for vowel decoding and motor cues for plosive articulation decoding. These simulations provide several predictions, which are in good agreement with experimental data and suggest that there is a natural complementarity between auditory and motor processing within a perceptuo-motor theory of speech perception. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
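
The Bayesian fusion at the heart of COSMO can be illustrated with a toy discrete example: each branch supplies a likelihood over candidate speech objects, and fusion multiplies the likelihoods with the prior. The numbers below are invented, and the real model is considerably richer:

```python
import numpy as np

objects = ["ba", "da", "ga"]
prior = np.array([1 / 3, 1 / 3, 1 / 3])

# Invented likelihoods P(sensory input | object) for each branch: the
# auditory branch is sharp for familiar, clean input ("auditory-narrowband"),
# the motor branch broader but more robust ("motor-wideband").
p_auditory = np.array([0.60, 0.30, 0.10])
p_motor = np.array([0.40, 0.35, 0.25])

def fuse(*likelihoods):
    """Naive Bayesian fusion: posterior proportional to prior times likelihoods."""
    posterior = prior * np.prod(likelihoods, axis=0)
    return posterior / posterior.sum()

for name, posterior in [("auditory alone", fuse(p_auditory)),
                        ("motor alone", fuse(p_motor)),
                        ("fused", fuse(p_auditory, p_motor))]:
    print(name, dict(zip(objects, np.round(posterior, 3))))
```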

  20. Perception of visual apparent motion is modulated by a gap within concurrent auditory glides, even when it is illusory

    Directory of Open Access Journals (Sweden)

    Qingcui eWang

    2015-05-01

    Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether and how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed the auditory gap transfer illusion with the visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: 'element motion' or 'group motion'. In element motion, the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; in group motion, both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides cross at the temporal middle point. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide imposed a remarkable effect on separating visual events, and led to a dominant perception of group motion as well. The auditory configuration with the gap transfer illusion triggered the same auditory capture effect. Further investigations showed that the visual interval which coincided with the gap interval (50-230 ms) in the long glide was perceived to be shorter than that within both the short-glide and 'gap-transfer' auditory configurations at the same physical intervals (gaps). The results indicated that auditory temporal perceptual grouping takes priority over the cross-modal interaction in determining the final readout of the visual perception, and that the mechanism of selective attention to auditory events also plays a role.

  1. Stable individual characteristics in the perception of multiple embedded patterns in multistable auditory stimuli

    Directory of Open Access Journals (Sweden)

    Susan eDenham

    2014-02-01

    The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the 'ABA-' auditory streaming paradigm, we trained listeners until they could reliably recognise all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which, by competing for dominance, give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in

  2. The influence of tactile cognitive maps on auditory space perception in sighted persons.

    Directory of Open Access Journals (Sweden)

    Alessia Tonelli

    2016-11-01

    We have recently shown that vision is important for improving spatial auditory cognition. In this study, we investigate whether touch is as effective as vision in creating a cognitive map of a soundscape. In particular, we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people – one experimental and one control group – in an auditory space bisection task. In the first group, the bisection task was performed three times: the participants explored with their hands the 3D tactile model of the room and were led along the perimeter of the room between the first and second executions of the space bisection, and were then allowed to remove the blindfold for a few minutes and look at the room between the second and third executions. The control group instead performed the space bisection task twice in a row without any environmental exploration in between. Considering the first execution as a baseline, we found an improvement in precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that visual cues provided no additional benefit once spatial tactile cues were internalized. No improvement was found between the first and second executions of the space bisection in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing auditory space task just as visual information does. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound

  3. Short-term plasticity in the auditory system: differential neural responses to perception and imagery of speech and music.

    Science.gov (United States)

    Meyer, Martin; Elmer, Stefan; Baumann, Simon; Jancke, Lutz

    2007-01-01

In this EEG study we sought to examine the neuronal underpinnings of short-term plasticity as a top-down guided auditory learning process. We hypothesized that (i) auditory imagery should elicit proper auditory evoked effects (N1/P2 complex) and a late positive component (LPC). Generally, based on recent human brain mapping studies we expected (ii) to observe the involvement of different temporal and parietal lobe areas in imagery and in perception of acoustic stimuli. Furthermore we predicted (iii) that temporal regions show an asymmetric trend due to the different specialization of the temporal lobes in processing speech and non-speech sounds. Finally we sought evidence supporting the notion that short-term training is sufficient to drive top-down activity in brain regions that are not normally recruited by sensory induced bottom-up processing. Eighteen non-musicians took part in a 30-channel EEG session that investigated the spatio-temporal dynamics of auditory imagery of "consonant-vowel" (CV) syllables and piano triads. To control for conditioning effects, we split the volunteers into two matched groups comprising the same conditions (visual, auditory or bimodal stimulation) presented in a slightly different serial order. Furthermore the study presents electromagnetic source localization (LORETA) of perception and imagery of CV and piano stimuli. Our results imply that auditory imagery elicited similar electrophysiological effects at an early stage (N1/P2) as auditory stimulation. However, we found an additional LPC following the N1/P2 for auditory imagery only. Source estimation evinced bilateral engagement of anterior temporal cortex, which was generally stronger for imagery of music relative to imagery of speech. While we did not observe lateralized activity for the imagery of syllables we noted significantly increased rightward activation over the anterior supratemporal plane for musical imagery. Thus, we conclude that short-term top-down training based …

  4. Segregated in perception, integrated for action: immunity of rhythmic sensorimotor coordination to auditory stream segregation.

    Science.gov (United States)

    Repp, Bruno H

    2009-03-01

    Auditory stream segregation can occur when tones of different pitch (A, B) are repeated cyclically: The larger the pitch separation and the faster the tempo, the more likely perception of two separate streams is to occur. The present study assessed stream segregation in perceptual and sensorimotor tasks, using identical ABBABB ... sequences. The perceptual task required detection of single phase-shifted A tones; this was expected to be facilitated by the presence of B tones unless segregation occurred. The sensorimotor task required tapping in synchrony with the A tones; here the phase correction response (PCR) to shifted A tones was expected to be inhibited by B tones unless segregation occurred. Two sequence tempi and three pitch separations (2, 10, and 48 semitones) were used with musically trained participants. Facilitation of perception occurred only at the smallest pitch separation, whereas the PCR was reduced equally at all separations. These results indicate that auditory action control is immune to perceptual stream segregation, at least in musicians. This may help musicians coordinate with diverse instruments in ensemble playing.
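The phase correction response quoted above reduces to a simple ratio: the fraction of a perturbation-induced tap-tone asynchrony that is eliminated on the tap following the shifted tone. The estimator below is a simplified illustration, not the exact measure used in the paper.

```python
def phase_correction_response(a_perturbed_ms: float, a_next_ms: float) -> float:
    """Fraction of the perturbation-induced tap-tone asynchrony corrected
    on the next tap (1.0 = full correction, 0.0 = none). Toy estimator."""
    return (a_perturbed_ms - a_next_ms) / a_perturbed_ms

# A tone shifted 30 ms late leaves the tap 30 ms early (asynchrony -30 ms);
# if the following asynchrony is back to -9 ms, 70% of the error was corrected.
print(phase_correction_response(-30.0, -9.0))  # 0.7
```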

5. Training Level Does Not Affect Auditory Perception of the Magnitude of Ball Spin in Table Tennis

    Directory of Open Access Journals (Sweden)

    Santos Daniel P. R.

    2017-01-01

Full Text Available Identifying the trajectory and spin of the ball with speed and accuracy is critical for good performance in table tennis. The aim of this study was to analyze the ability of table tennis players presenting different levels of training/experience to identify the magnitude of the ball spin from the sound produced when the racket hit the ball. Four types of “forehand” contact sounds were collected in the laboratory, defined as: Fast Spin (spinning ball forward at 140 r/s), Medium Spin (105 r/s), Slow Spin (84 r/s), and Flat Hit (less than 60 r/s). Thirty-four table tennis players of both sexes (24 men and 10 women), aged 18-40 years, listened to the sounds and tried to identify the magnitude of the ball spin. The results revealed that in 50.9% of the cases the table tennis players were able to identify the ball spin, and the observed mean number of correct answers (10.2) was significantly higher (χ² = 270.4, p < 0.05) than the number of correct answers that could occur by chance. On the other hand, the results did not show any relationship between the level of training/experience and auditory perception of the ball spin. This indicates that auditory information contributes to identification of the magnitude of the ball spin; however, it also reveals that, in table tennis, the level of training does not interfere with the auditory perception of the ball spin.
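For readers wanting to check the logic of the chance comparison: with four response categories, chance is 25% correct. The sketch below re-runs a goodness-of-fit test on those proportions; the per-player trial count is an assumed placeholder, so the statistic will not match the reported χ² exactly.

```python
from scipy.stats import chisquare

# Hypothetical reconstruction: 34 players, 20 trials each, 4 spin categories.
n_trials = 34 * 20
observed_correct = round(0.509 * n_trials)
observed = [observed_correct, n_trials - observed_correct]
expected = [0.25 * n_trials, 0.75 * n_trials]  # chance = 1 in 4

chi2, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")  # performance far above chance
```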

  6. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  7. Auditory-visual speech perception in three- and four-year-olds and its relationship to perceptual attunement and receptive vocabulary.

    Science.gov (United States)

    Erdener, Doğu; Burnham, Denis

    2017-06-06

    Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception - lip-reading and visual influence in auditory-visual integration; (ii) the development of auditory speech perception and native language perceptual attunement; and (iii) the relationship between these and a language skill relevant at this age, receptive vocabulary. Visual speech perception skills improved even over this relatively short time period. However, regression analyses revealed that vocabulary was predicted by auditory-only speech perception, and native language attunement, but not by visual speech perception ability. The results suggest that, in contrast to infants and schoolchildren, in three- to four-year-olds the relationship between speech perception and language ability is based on auditory and not visual or auditory-visual speech perception ability. Adding these results to existing findings allows elaboration of a more complete account of the developmental course of auditory-visual speech perception.

  8. The role of temporal structure in the investigation of sensory memory, auditory scene analysis, and speech perception: a healthy-aging perspective.

    Science.gov (United States)

    Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David

    2015-02-01

    Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals--over a range of time scales from milliseconds to seconds--renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant for hearing in background noise, and focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied in an independent manner, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities typically not recognized. Copyright © 2014 Elsevier B.V. All rights reserved.

9. [Development of early auditory and speech perception skills within one year after cochlear implantation in prelingual deaf children].

    Science.gov (United States)

    Fu, Ying; Chen, Yuan; Xi, Xin; Hong, Mengdi; Chen, Aiting; Wang, Qian; Wong, Lena

    2015-04-01

To investigate the development of early auditory capability and speech perception in prelingual deaf children after cochlear implantation, and to study the feasibility of currently available Chinese assessment instruments for the evaluation of early auditory skills and speech perception in hearing-impaired children. A total of 83 children with severe-to-profound prelingual hearing impairment participated in this study. Participants were divided into four groups according to age at surgery: A (1-2 years), B (2-3 years), C (3-4 years) and D (4-5 years). The auditory skills and speech perception ability of CI children were evaluated by trained audiologists using the infant-toddler/meaningful auditory integration scale (IT-MAIS/MAIS) questionnaire, the Mandarin Early Speech Perception (MESP) test and the Mandarin Pediatric Speech Intelligibility (MPSI) test. The questionnaires were used in face-to-face interviews with the parents or guardians. Each child was assessed before the operation and 3, 6, and 12 months after switch-on. After cochlear implantation, early auditory development and speech perception gradually improved. All MAIS/IT-MAIS scores showed a similar increasing trend with rehabilitation duration (F=5.743, P=0.007). Preoperative and postoperative MAIS/IT-MAIS scores of children in age group C (3-4 years) were higher than those of the other groups. Children who had longer hearing aid experience before the operation demonstrated higher MAIS/IT-MAIS scores than those with little or no hearing aid experience (F=4.947, P=0.000). The MESP test showed that children were not able to perceive speech as well as they could detect speech signals. However, as the duration of CI use increased, speech perception ability improved substantially. Still, only about 40% of the subjects could be evaluated using the most difficult subtest of the MPSI in quiet at 12 months after switch-on. As the MCR decreased, the proportion of children who could be tested …

  10. Vocal development and auditory perception in CBA/CaJ mice

    Science.gov (United States)

    Radziwon, Kelly E.

Mice are useful laboratory subjects because of their small size, their modest cost, and the fact that researchers have created many different strains to study a variety of disorders. In particular, researchers have found nearly 100 naturally occurring mouse mutations with hearing impairments. For these reasons, mice have become an important model for studies of human deafness. Although much is known about the genetic makeup and physiology of the laboratory mouse, far less is known about mouse auditory behavior. To fully understand the effects of genetic mutations on hearing, it is necessary to determine the hearing abilities of these mice. Two experiments here examined various aspects of mouse auditory perception using CBA/CaJ mice, a commonly used mouse strain. The frequency difference limen experiment tested the mouse's ability to discriminate one tone from another based solely on the frequency of the tone. The mice had thresholds similar to those of wild mice and gerbils but needed a larger change in frequency than humans and cats. The second psychoacoustic experiment sought to determine which cue, frequency or duration, was more salient when the mice had to identify various tones. In this identification task, the mice overwhelmingly classified the tones based on frequency instead of duration, suggesting that mice are using frequency when differentiating one mouse vocalization from another. The other two experiments were more naturalistic and involved both auditory perception and mouse vocal production. Interest in mouse vocalizations is growing because of the potential for mice to become a model of human speech disorders. These experiments traced mouse vocal development from infant to adult, and they tested the mouse's preference for various vocalizations. This was the first known study to analyze the vocalizations of individual mice across development. Results showed large variation in calling rates among the three cages of adult mice but results were highly …
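Frequency difference limens of this kind are typically estimated with an adaptive staircase. The sketch below is a generic 2-down/1-up procedure with a toy simulated listener; the actual operant procedure used with the mice is not specified here, and the psychometric function is invented.

```python
import random

def staircase_dl(start_delta_hz: float = 800.0, step_factor: float = 0.8,
                 max_reversals: int = 8) -> float:
    """2-down/1-up adaptive staircase estimating a frequency difference
    limen (converges near 70.7% correct). The 'listener' is a toy
    psychometric function standing in for a behaving mouse."""
    def listener_correct(delta: float) -> bool:
        p = 0.5 + 0.5 * min(1.0, delta / 400.0)  # toy psychometric function
        return random.random() < p

    delta, n_correct, last_dir, reversals = start_delta_hz, 0, None, []
    while len(reversals) < max_reversals:
        if listener_correct(delta):
            n_correct += 1
            if n_correct == 2:                   # two correct -> make harder
                n_correct = 0
                if last_dir == "up":
                    reversals.append(delta)
                delta *= step_factor
                last_dir = "down"
        else:                                     # one error -> make easier
            n_correct = 0
            if last_dir == "down":
                reversals.append(delta)
            delta /= step_factor
            last_dir = "up"
    return sum(reversals) / len(reversals)        # DL estimate in Hz

print(f"estimated difference limen: {staircase_dl():.0f} Hz")
```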

  11. Beta-Band Oscillations Represent Auditory Beat and Its Metrical Hierarchy in Perception and Imagery.

    Science.gov (United States)

    Fujioka, Takako; Ross, Bernhard; Trainor, Laurel J

    2015-11-11

Dancing to music involves synchronized movements, which can be at the basic beat level or higher hierarchical metrical levels, as in a march (groups of two basic beats, one-two-one-two …) or waltz (groups of three basic beats, one-two-three-one-two-three …). Our previous human magnetoencephalography studies revealed that the subjective sense of meter influences auditory evoked responses phase locked to the stimulus. Moreover, the timing of metronome clicks was represented in periodic modulation of induced (non-phase locked) β-band (13-30 Hz) oscillation in bilateral auditory and sensorimotor cortices. Here, we further examine whether acoustically accented and subjectively imagined metric processing in march and waltz contexts during listening to isochronous beats was reflected in neuromagnetic β-band activity recorded from young adult musicians. First, we replicated previous findings of beat-related β-power decrease at 200 ms after the beat followed by a predictive increase toward the onset of the next beat. Second, we showed that the β decrease was significantly influenced by the metrical structure, as reflected by differences across beat type for both perception and imagery conditions. Specifically, the β-power decrease associated with imagined downbeats (the count "one") was larger than that for both the upbeat (preceding the count "one") in the march, and for the middle beat in the waltz. Moreover, beamformer source analysis for the whole brain volume revealed that the metric contrasts involved auditory and sensorimotor cortices; frontal, parietal, and inferior temporal lobes; and cerebellum. We suggest that the observed β-band activities reflect a translation of timing information to auditory-motor coordination. With magnetoencephalography, we examined β-band oscillatory activities around 20 Hz while participants listened to metronome beats and imagined musical meters such as a march and waltz. We demonstrated that β-band event …
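A generic single-channel version of the induced β-power analysis described here: band-pass the signal at 13-30 Hz, take the analytic amplitude, and average it in beat-locked epochs. Sampling rate, beat period, and window are assumed placeholders; the study itself used MEG with beamformer source analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
x = np.random.randn(t.size)                  # stand-in for one MEG channel

# Band-pass 13-30 Hz and take squared analytic amplitude as induced power.
b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")
beta_power = np.abs(hilbert(filtfilt(b, a, x))) ** 2

# Average power in epochs locked to isochronous beats (every 600 ms).
beat_onsets = np.arange(0.6, 9.0, 0.6)
win = int(0.5 * fs)                          # -100..+400 ms around each beat
epochs = np.stack([beta_power[int(round((on - 0.1) * fs)):][:win]
                   for on in beat_onsets])
print("beat-locked beta-power curve:", epochs.mean(axis=0).shape)  # (300,)
```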

  12. Physiological activation of the human cerebral cortex during auditory perception and speech revealed by regional increases in cerebral blood flow

    DEFF Research Database (Denmark)

    Lassen, N A; Friberg, L

    1988-01-01

Specific types of brain activity, such as sensory perception (auditory, somato-sensory, or visual) or the performance of movements, are accompanied by increases of blood flow and oxygen consumption in the cortical areas involved with performing the respective tasks. The activation patterns observed by mea...

  13. Auditory Verbal Working Memory as a Predictor of Speech Perception in Modulated Maskers in Listeners with Normal Hearing

    Science.gov (United States)

    Millman, Rebecca E.; Mattys, Sven L.

    2017-01-01

    Purpose: Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the…

  14. Auditory Sensitivity, Speech Perception, L1 Chinese, and L2 English Reading Abilities in Hong Kong Chinese Children

    Science.gov (United States)

    Zhang, Juan; McBride-Chang, Catherine

    2014-01-01

    A 4-stage developmental model, in which auditory sensitivity is fully mediated by speech perception at both the segmental and suprasegmental levels, which are further related to word reading through their associations with phonological awareness, rapid automatized naming, verbal short-term memory and morphological awareness, was tested with…

  15. The Development and Validation of an Auditory Perception Test in Spanish for Hispanic Children Receiving Reading Instruction in Spanish.

    Science.gov (United States)

    Morrison, James A.; Michael, William B.

    1982-01-01

    A Spanish auditory perception test, La Prueba de Analisis Auditivo, was developed and administered to 158 Spanish-speaking Latino children, kindergarten through grade 3. Psychometric data for the test are presented, including its relationship to SOBER, a criterion-referenced Spanish reading measure. (Author/BW)

  16. Echoes of the spoken past: how auditory cortex hears context during speech perception.

    Science.gov (United States)

    Skipper, Jeremy I

    2014-09-19

    What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC is least active and sometimes deactivated when participants listened to meaningful speech compared to less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism where speech production (SP) regions are neurally re-used to predict auditory objects associated with available context. By this model, more AC activity for less meaningful sounds occurs because predictions are less successful from context, requiring further hypotheses be tested. This also explains the large overlap of AC co-activity for less meaningful sounds with meta-analyses of SP. An experiment showed a similar pattern of results for non-verbal context. Specifically, words produced less activity in AC and SP regions when preceded by co-speech gestures that visually described those words compared to those words without gestures. Results collectively suggest that what we 'hear' during real-world speech perception may come more from the brain than our ears and that the function of AC is to confirm or deny internal predictions about the identity of sounds.

  17. Deriving content-specific measures of room acoustic perception using a binaural, nonlinear auditory model.

    Science.gov (United States)

    van Dorp Schuitman, Jasper; de Vries, Diemer; Lindau, Alexander

    2013-03-01

    Acousticians generally assess the acoustic qualities of a concert hall or any other room using impulse response-based measures such as the reverberation time, clarity index, and others. These parameters are used to predict perceptual attributes related to the acoustic qualities of the room. Various studies show that these physical measures are not able to predict the related perceptual attributes sufficiently well under all circumstances. In particular, it has been shown that physical measures are dependent on the state of occupation, are prone to exaggerated spatial fluctuation, and suffer from lacking discrimination regarding the kind of acoustic stimulus being presented. Accordingly, this paper proposes a method for the derivation of signal-based measures aiming at predicting aspects of room acoustic perception from content specific signal representations produced by a binaural, nonlinear model of the human auditory system. Listening tests were performed to test the proposed auditory parameters for both speech and music. The results look promising; the parameters correlate with their corresponding perceptual attributes in most cases.
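For contrast with the model-based parameters proposed here, the two classic impulse-response measures named above (reverberation time and clarity index) can be computed in a few lines. This is a textbook sketch on a synthetic impulse response, not the authors' implementation.

```python
import numpy as np

def clarity_c80(ir: np.ndarray, fs: float) -> float:
    """Clarity index C80 (dB): early (0-80 ms) vs late energy ratio."""
    k = int(0.080 * fs)
    early, late = np.sum(ir[:k] ** 2), np.sum(ir[k:] ** 2)
    return 10 * np.log10(early / late)

def schroeder_decay_db(ir: np.ndarray) -> np.ndarray:
    """Backward-integrated energy decay curve in dB (Schroeder integration)."""
    edc = np.cumsum(ir[::-1] ** 2)[::-1]
    return 10 * np.log10(edc / edc[0])

# Toy exponentially decaying noise as a stand-in for a measured room IR.
fs = 48000
t = np.arange(0, 1.5, 1 / fs)
ir = np.random.randn(t.size) * np.exp(-t / 0.25)

print(f"C80 = {clarity_c80(ir, fs):.1f} dB")
decay = schroeder_decay_db(ir)
# RT60 estimated from the -5..-25 dB range of the decay and extrapolated (T20).
i5, i25 = np.argmax(decay <= -5), np.argmax(decay <= -25)
slope = (decay[i25] - decay[i5]) / ((i25 - i5) / fs)   # dB per second
print(f"RT60 (from T20) = {-60 / slope:.2f} s")
```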

  18. Hybrid fNIRS-EEG based classification of auditory and visual perception processes

    Directory of Open Access Journals (Sweden)

    Felix ePutze

    2014-11-01

Full Text Available For multimodal Human-Computer Interaction (HCI), it is very useful to identify the modalities on which the user is currently processing information. This would enable a system to select complementary output modalities to reduce the user's workload. In this paper, we develop a hybrid Brain-Computer Interface (BCI) which uses Electroencephalography (EEG) and functional Near Infrared Spectroscopy (fNIRS) to discriminate and detect visual and auditory stimulus processing. We describe the experimental setup we used for collection of our data corpus with 12 subjects. We present cross-validation evaluation results for different classification conditions. We show that our subject-dependent systems achieved a classification accuracy of 97.8% for discriminating visual and auditory perception processes from each other and a classification accuracy of up to 94.8% for detecting modality-specific processes independently of other cognitive activity. The same classification conditions could also be discriminated in a subject-independent fashion with accuracy of up to 94.6% and 86.7%, respectively. We also look at the contributions of the two signal types and show that the fusion of classifiers using different features significantly increases accuracy.
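A generic sketch of subject-dependent cross-validation with late fusion of per-modality classifiers, as reported above, on toy features; the paper's actual feature extraction and classifier choices may differ.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 120                                           # trials for one subject
y = rng.integers(0, 2, n)                         # 0 = visual, 1 = auditory
X_eeg = rng.normal(y[:, None], 1.0, (n, 32))      # toy EEG features
X_nirs = rng.normal(y[:, None], 1.5, (n, 16))     # toy fNIRS features

# Late fusion: average the cross-validated class probabilities per modality.
p_eeg = cross_val_predict(SVC(probability=True), X_eeg, y, cv=5,
                          method="predict_proba")
p_nirs = cross_val_predict(SVC(probability=True), X_nirs, y, cv=5,
                           method="predict_proba")
fused = (p_eeg + p_nirs) / 2
acc = np.mean(fused.argmax(axis=1) == y)
print(f"fused cross-validated accuracy: {acc:.3f}")
```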

  19. Hybrid fNIRS-EEG based classification of auditory and visual perception processes

    Science.gov (United States)

    Putze, Felix; Hesslinger, Sebastian; Tse, Chun-Yu; Huang, YunYing; Herff, Christian; Guan, Cuntai; Schultz, Tanja

    2014-01-01

    For multimodal Human-Computer Interaction (HCI), it is very useful to identify the modalities on which the user is currently processing information. This would enable a system to select complementary output modalities to reduce the user's workload. In this paper, we develop a hybrid Brain-Computer Interface (BCI) which uses Electroencephalography (EEG) and functional Near Infrared Spectroscopy (fNIRS) to discriminate and detect visual and auditory stimulus processing. We describe the experimental setup we used for collection of our data corpus with 12 subjects. On this data, we performed cross-validation evaluation, of which we report accuracy for different classification conditions. The results show that the subject-dependent systems achieved a classification accuracy of 97.8% for discriminating visual and auditory perception processes from each other and a classification accuracy of up to 94.8% for detecting modality-specific processes independently of other cognitive activity. The same classification conditions could also be discriminated in a subject-independent fashion with accuracy of up to 94.6 and 86.7%, respectively. We also look at the contributions of the two signal types and show that the fusion of classifiers using different features significantly increases accuracy. PMID:25477777

  1. (A)musicality in Williams syndrome: examining relationships among auditory perception, musical skill, and emotional responsiveness to music.

    Science.gov (United States)

    Lense, Miriam D; Shivers, Carolyn M; Dykens, Elisabeth M

    2013-01-01

    Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia.

  2. Enhanced speech perception in noise and cortical auditory evoked potentials in professional musicians.

    Science.gov (United States)

    Meha-Bettison, Kiriana; Sharma, Mridula; Ibrahim, Ronny K; Mandikal Vasuki, Pragati Rao

    2018-01-01

The current research investigated whether professional musicians outperformed non-musicians on auditory processing and speech-in-noise perception as assessed using behavioural and electrophysiological tasks. Spectro-temporal processing skills were assessed using a psychoacoustic test battery. Speech-in-noise perception was measured using the Listening in Spatialised Noise - Sentences (LiSN-S) test and Cortical Auditory Evoked Potentials (CAEPs) recorded to the speech syllable /da/ presented in quiet and in 8-talker babble noise at 0, 5, and 10 dB signal-to-noise ratios (SNRs). Ten professional musicians and 10 non-musicians participated in this study. Musicians significantly outperformed non-musicians in the frequency discrimination task and low-cue condition of the LiSN-S test. Musicians' N1 amplitude showed no difference between 5 dB and 0 dB SNR conditions while non-musicians showed significantly lower N1 amplitude at 0 dB SNR compared to 5 dB SNR. Brain-behaviour correlation for musicians showed a significant association between CAEPs at 5 dB SNR and the low-cue condition of the LiSN-S test at 30-70 ms. Time-frequency analysis indicated musicians had significantly higher alpha power desynchronisation in the 0 dB SNR condition indicating involvement of attention. Through the use of behavioural and electrophysiological data, the results provide converging evidence for improved speech recognition in noise in musicians.

  3. Sound Spectrum Influences Auditory Distance Perception of Sound Sources Located in a Room Environment

    Directory of Open Access Journals (Sweden)

    Ignacio Spiousas

    2017-06-01

Full Text Available Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on the physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1–6 m) influences the perception of the distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) with frequencies of 0.5, 1, 2, and 4 kHz. Then, we performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (center frequencies of 0.5, 1.5, and 4 kHz; bandwidths of 1/12, 1/3, and 1.5 octaves) pink-noise clips. Our results showed an effect of the stimulus frequency on the perceived distance both for pure tones and filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by the presence of sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results obtained in the second experiment suggest that ADP can also be affected by stimulus bandwidth, but in a less straightforward way (i.e., depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, the analysis of the acoustical cues suggests that listeners judged source distance using mainly changes in the overall intensity of the auditory stimulus with distance rather than the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce a high amount of reverberation). The results obtained in this study show that, depending on …
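The two distance cues contrasted in this abstract have simple idealized forms: overall level falls 6 dB per doubling of distance in the free field, while the direct-to-reverberant ratio falls similarly beyond the critical distance. A sketch under those textbook assumptions (the 2 m critical distance is an invented placeholder):

```python
import numpy as np

def level_drop_db(d: np.ndarray, d_ref: float = 1.0) -> np.ndarray:
    """Free-field intensity cue: level falls 6 dB per doubling of distance."""
    return -20 * np.log10(d / d_ref)

def direct_to_reverberant_db(d: np.ndarray, r_c: float = 2.0) -> np.ndarray:
    """D/R ratio under a simple diffuse-field model: 0 dB at the critical
    distance r_c, falling 6 dB per doubling beyond it (reverberant level
    assumed constant across the room)."""
    return 20 * np.log10(r_c / d)

d = np.array([1.0, 2.0, 4.0, 6.0])   # the study's intermediate-distance range
print("overall level re 1 m (dB):", level_drop_db(d).round(1))
print("direct-to-reverberant (dB):", direct_to_reverberant_db(d).round(1))
```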

  4. Gap detection measured with electrically evoked auditory event-related potentials and speech-perception abilities in children with auditory neuropathy spectrum disorder.

    Science.gov (United States)

    He, Shuman; Grose, John H; Teagle, Holly F B; Woodard, Jennifer; Park, Lisa R; Hatch, Debora R; Buchman, Craig A

    2013-01-01

This study aimed (1) to investigate the feasibility of recording the electrically evoked auditory event-related potential (eERP), including the onset P1-N1-P2 complex and the electrically evoked auditory change complex (EACC) in response to temporal gaps, in children with auditory neuropathy spectrum disorder (ANSD); and (2) to evaluate the relationship between these measures and speech-perception abilities in these subjects. Fifteen ANSD children who are Cochlear Nucleus device users participated in this study. For each subject, the speech-processor microphone was bypassed and the eERPs were elicited by direct stimulation of one mid-array electrode (electrode 12). The stimulus was a train of biphasic current pulses 800 msec in duration. Two basic stimulation conditions were used to elicit the eERP. In the no-gap condition, the entire pulse train was delivered uninterrupted to electrode 12, and the onset P1-N1-P2 complex was measured relative to the stimulus onset. In the gapped condition, the stimulus consisted of two pulse train bursts, each being 400 msec in duration, presented sequentially on the same electrode and separated by one of five gaps (i.e., 5, 10, 20, 50, and 100 msec). Open-set speech-perception ability of these subjects with ANSD was assessed using the phonetically balanced kindergarten (PBK) word lists presented at 60 dB SPL, using monitored live voice in a sound booth. The eERPs were recorded from all subjects with ANSD who participated in this study. There were no significant differences in test-retest reliability, root mean square amplitude or P1 latency for the onset P1-N1-P2 complex between subjects with good (>70% correct on PBK words) and poorer speech-perception performance. In general, the EACC showed less mature morphological characteristics than the onset P1-N1-P2 response recorded from the same subject. There was a robust correlation between the PBK word scores and the EACC thresholds for gap detection. Subjects with poorer speech-perception …
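The gapped condition is easy to picture as a timeline: two 400 ms pulse-train bursts separated by a silent gap. The sketch below builds such a timeline; the 900 pps rate and the sampling grid are assumed placeholders, and no implant-specific pulse shape is modeled.

```python
import numpy as np

def gapped_pulse_train(gap_ms: float, fs: int = 10000,
                       burst_ms: float = 400.0, rate_hz: float = 900.0):
    """Timeline (1 = pulse, 0 = silence) of two pulse-train bursts separated
    by a silent gap, as in the EACC gap-detection paradigm. The pulse rate
    is an assumed placeholder."""
    burst = np.zeros(int(fs * burst_ms / 1000))
    burst[::int(fs / rate_hz)] = 1.0
    gap = np.zeros(int(fs * gap_ms / 1000))
    return np.concatenate([burst, gap, burst])

for gap in (5, 10, 20, 50, 100):          # the study's five gap durations
    stim = gapped_pulse_train(gap)
    print(f"gap {gap:>3} ms -> {stim.size / 10000 * 1000:.0f} ms total")
```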

  5. Hierarchical Organization of Auditory and Motor Representations in Speech Perception: Evidence from Searchlight Similarity Analysis

    Science.gov (United States)

    Evans, Samuel; Davis, Matthew H.

    2015-01-01

    How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked participants to listen to spoken syllables that differed considerably in their surface acoustic form by changing speaker and degrading surface acoustics using noise-vocoding and sine wave synthesis while we recorded neural responses with functional magnetic resonance imaging. We found evidence for a graded hierarchy of abstraction across the brain. At the peak of the hierarchy, neural representations in somatomotor cortex encoded syllable identity but not surface acoustic form, at the base of the hierarchy, primary auditory cortex showed the reverse. In contrast, bilateral temporal cortex exhibited an intermediate response, encoding both syllable identity and the surface acoustic form of speech. Regions of somatomotor cortex associated with encoding syllable identity in perception were also engaged when producing the same syllables in a separate session. These findings are consistent with a hierarchical account of how variable acoustic signals are transformed into abstract representations of the identity of speech sounds. PMID:26157026
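The core of representational similarity analysis at a single searchlight is a rank correlation between a neural dissimilarity matrix and a model matrix coding, for example, syllable identity. A toy sketch with simulated patterns (no fMRI or searchlight specifics):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_stim, n_voxels = 16, 50
syllable = np.repeat(np.arange(4), 4)        # 4 syllables x 4 acoustic forms

# Toy activation patterns for one searchlight sphere: identity + noise.
patterns = rng.normal(0, 1, (n_stim, n_voxels)) + syllable[:, None] * 0.5

neural_rdm = pdist(patterns, metric="correlation")       # 1 - r dissimilarity
model_rdm = pdist(syllable[:, None], metric="hamming")   # same syllable? 0 : 1
rho, _ = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RDM correlation (Spearman rho): {rho:.2f}")
```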

7. Development and application of a /bAk/-/dAk/ continuum for testing auditory perception within the Dutch longitudinal dyslexia study

    NARCIS (Netherlands)

    van Beinum, F.J.; Schwippert, C.E.; Been, P.H.; van Leeuwen, T.H.; Kuijpers, C.T.L.

    2005-01-01

A national longitudinal research program on developmental dyslexia was started in The Netherlands, including auditory perception and processing as an important research component. New test materials had to be developed, to be used for measuring the auditory sensitivity of the subjects to speech-like …
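A /bAk/-/dAk/ contrast is carried largely by the onset frequency of the second formant, so a perceptual continuum can be defined by interpolating that onset across steps. The endpoint frequencies and step count below are illustrative placeholders, not the Dutch materials' actual synthesis parameters.

```python
import numpy as np

def f2_onset_continuum(n_steps: int = 7, f2_b: float = 900.0,
                       f2_d: float = 1700.0) -> np.ndarray:
    """Second-formant onset frequencies (Hz) for an n-step /bAk/-/dAk/
    continuum; endpoints are hypothetical placeholder values."""
    return np.linspace(f2_b, f2_d, n_steps)

print(f2_onset_continuum().round(0))
# -> [ 900. 1033. 1167. 1300. 1433. 1567. 1700.]
```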

  10. Spectro-temporal interactions in auditory-visual perception: How the eyes modulate what the ears hear

    Science.gov (United States)

    Grant, Ken W.; van Wassenhove, Virginie

    2004-05-01

Auditory-visual speech perception has been shown repeatedly to be both more accurate and more robust than auditory speech perception. Attempts to explain these phenomena usually treat acoustic and visual speech information (i.e., accessed via speechreading) as though they were derived from independent processes. Recent electrophysiological (EEG) studies, however, suggest that visual speech processes may play a fundamental role in modulating the way we hear. For example, both the timing and amplitude of auditory-specific event-related potentials as recorded by EEG are systematically altered when speech stimuli are presented audiovisually as opposed to auditorily. In addition, the detection of a speech signal in noise is more readily accomplished when accompanied by video images of the speaker's production, suggesting that the influence of vision on audition occurs quite early in the perception process. But the impact of visual cues on what we ultimately hear is not limited to speech. Our perceptions of loudness, timbre, and sound source location can also be influenced by visual cues. Thus, for speech and nonspeech stimuli alike, predicting a listener's response to sound based on acoustic engineering principles alone may be misleading. Examples of acoustic-visual interactions will be presented which highlight the multisensory nature of our hearing experience.

  11. Giving speech a hand: gesture modulates activity in auditory cortex during speech perception.

    Science.gov (United States)

    Hubbard, Amy L; Wilson, Stephen M; Callan, Daniel E; Dapretto, Mirella

    2009-03-01

Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture, a fundamental type of hand gesture that marks speech prosody, might impact speech perception at the neural level. Subjects underwent fMRI while listening to spontaneously-produced speech accompanied by beat gesture, nonsense hand movement, or a still body; as additional control conditions, subjects also viewed beat gesture, nonsense hand movement, or a still body all presented without speech. Validating behavioral evidence that gesture affects speech perception, bilateral nonprimary auditory cortex showed greater activity when speech was accompanied by beat gesture than when speech was presented alone. Further, the left superior temporal gyrus/sulcus showed stronger activity when speech was accompanied by beat gesture than when speech was accompanied by nonsense hand movement. Finally, the right planum temporale was identified as a putative multisensory integration site for beat gesture and speech (i.e., here activity in response to speech accompanied by beat gesture was greater than the summed responses to speech alone and beat gesture alone), indicating that this area may be pivotally involved in synthesizing the rhythmic aspects of both speech and gesture. Taken together, these findings suggest a common neural substrate for processing speech and gesture, likely reflecting their joint communicative role in social interactions.

  12. Auditory/Verbal hallucinations, speech perception neurocircuitry, and the social deafferentation hypothesis.

    Science.gov (United States)

    Hoffman, Ralph E

    2008-04-01

Auditory/verbal hallucinations (AVHs) consist of spoken conversational speech seeming to arise from specific, nonself speakers. One hertz repetitive transcranial magnetic stimulation (rTMS) reduces excitability in the brain region stimulated. Studies utilizing 1-Hz rTMS delivered to the left temporoparietal cortex, a brain area critical to speech perception, have demonstrated statistically significant improvements in AVHs relative to sham stimulation. A novel mechanism of AVHs is proposed whereby dramatic pre-psychotic social withdrawal prompts neuroplastic reorganization by the "social brain" to produce spurious social meaning via hallucinations of conversational speech. Preliminary evidence supporting this hypothesis includes a very high rate of social withdrawal emerging prior to the onset of frank psychosis in patients who develop schizophrenia and AVHs. Moreover, reduced AVHs elicited by temporoparietal 1-Hz rTMS are likely to reflect enhanced long-term depression. Some evidence suggests a loss of long-term depression following experimentally-induced deafferentation. Finally, abnormal cortico-cortical coupling is associated with AVHs and also is a common outcome of deafferentation. Auditory/verbal hallucinations of spoken speech or "voices" are reported by 60-80% of persons with schizophrenia at various times during the course of illness. AVHs are associated with high levels of distress, functional disability, and can lead to violent acts. Among patients with AVHs, these symptoms remain poorly or incompletely responsive to currently available treatments in approximately 25% of cases. For patients with AVHs who do respond to antipsychotic drugs, there is a very high likelihood that these experiences will recur in subsequent episodes. A more precise characterization of underlying pathophysiology may lead to more efficacious treatments.

  13. Coordinated plasticity in brainstem and auditory cortex contributes to enhanced categorical speech perception in musicians.

    Science.gov (United States)

    Bidelman, Gavin M; Weiss, Michael W; Moreno, Sylvain; Alain, Claude

    2014-08-01

Musicianship is associated with neuroplastic changes in brainstem and cortical structures, as well as improved acuity for behaviorally relevant sounds including speech. However, further advance in the field depends on characterizing how neuroplastic changes in brainstem and cortical speech processing relate to one another and to speech-listening behaviors. Here, we show that subcortical and cortical neural plasticity interact to yield the linguistic advantages observed with musicianship. We compared brainstem and cortical neuroelectric responses elicited by a series of vowels that differed along a categorical speech continuum in amateur musicians and non-musicians. Musicians obtained steeper identification functions and classified speech sounds more rapidly than non-musicians. Behavioral advantages coincided with more robust and temporally coherent brainstem phase-locking to salient speech cues (voice pitch and formant information) coupled with increased amplitude in cortical-evoked responses, implying an overall enhancement in the nervous system's responsiveness to speech. Musicians' subcortical and cortical neural enhancements (but not behavioral measures) were correlated with their years of formal music training. Associations between multi-level neural responses were also stronger in musically trained listeners, and were better predictors of speech perception than in non-musicians. Results suggest that musicianship modulates speech representations at multiple tiers of the auditory pathway, and strengthens the correspondence of processing between subcortical and cortical areas to allow neural activity to carry more behaviorally relevant information. We infer that musicians have a refined hierarchy of internalized representations for auditory objects at both pre-attentive and attentive levels that supplies more faithful phonemic templates to decision mechanisms governing linguistic operations.

  14. Beyond colour perception: auditory-visual synaesthesia induces experiences of geometric objects in specific locations.

    Science.gov (United States)

    Chiou, Rocco; Stelter, Marleen; Rich, Anina N

    2013-06-01

Our brain constantly integrates signals across different senses. Auditory-visual synaesthesia is an unusual form of cross-modal integration in which sounds evoke involuntary visual experiences. Previous research primarily focuses on synaesthetic colour, but little is known about non-colour synaesthetic visual features. Here we studied a group of synaesthetes for whom sounds elicit consistent visual experiences of coloured 'geometric objects' located at specific spatial locations. Changes in auditory pitch alter the brightness, size, and spatial height of synaesthetic experiences in a systematic manner resembling the cross-modal correspondences of non-synaesthetes, implying synaesthesia may recruit cognitive/neural mechanisms for 'normal' cross-modal processes. To objectively assess the impact of synaesthetic objects on behaviour, we devised a multi-feature cross-modal synaesthetic congruency paradigm and asked participants to perform speeded colour or shape discrimination. We found irrelevant sounds influenced performance, as quantified by congruency effects, demonstrating that synaesthetes were not able to suppress their synaesthetic experiences even when these were irrelevant for the task. Furthermore, we found some evidence for task-specific effects consistent with feature-based attention acting on the constituent features of synaesthetic objects: synaesthetic colours appeared to have a stronger impact on performance than synaesthetic shapes when synaesthetes attended to colour, and vice versa when they attended to shape. We provide the first objective evidence that visual synaesthetic experience can involve multiple features forming object-like percepts and suggest that each feature can be selected by attention despite it being internally generated. These findings suggest theories of the brain mechanisms of synaesthesia need to incorporate a broader neural network underpinning multiple visual features, perceptual knowledge, and feature integration, rather than …

  15. Auditory, Visual, and Auditory-Visual Perception of Emotions by Individuals with Cochlear Implants, Hearing Aids, and Normal Hearing

    Science.gov (United States)

    Most, Tova; Aviner, Chen

    2009-01-01

    This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust.…

16. The effect of auditory perception training on reading performance of 8-9-year-old female students with dyslexia: A preliminary study

    Directory of Open Access Journals (Sweden)

    Nafiseh Vatandoost

    2014-01-01

Full Text Available Background and Aim: Dyslexia is the most common learning disability. One of the main factors playing a role in this disability is impaired auditory perception, which causes many problems in education. We aimed to study the effect of auditory perception training on the reading performance of female students with dyslexia in the third grade of elementary school. Methods: Thirty-eight female third-grade students from elementary schools of Khomeinishahr City, Iran, were selected by multistage cluster random sampling. Of them, 20 students who were diagnosed as dyslexic by a reading test and the Wechsler test were randomly divided into two equal groups, experimental and control. The experimental group received auditory perception training during ten 45-minute sessions, while no intervention was given to the control group. All participants were re-assessed with the reading test after the intervention (pre- and post-test method). Data were analyzed by analysis of covariance. Results: The effect of auditory perception training on reading performance (81%) was significant (p<0.0001) for all subtests except the separate compound-word test. Conclusion: Findings of our study confirm the hypothesis that auditory perception training affects students' functional reading. Thus, auditory perception training seems to be necessary for students with dyslexia.
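The pre/post two-group design with a covariance analysis reported above corresponds to a standard ANCOVA: post-test reading score regressed on group with the pre-test score as covariate. A toy sketch with simulated data (group sizes follow the study; the score scale and effect size are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 10                                           # per group, as in the study
pre = rng.normal(50, 8, 2 * n)                   # invented pre-test scores
group = np.repeat(["control", "training"], n)
post = pre + (group == "training") * 12 + rng.normal(0, 4, 2 * n)

df = pd.DataFrame({"pre": pre, "post": post, "group": group})
# ANCOVA: post-test reading score by group, adjusting for pre-test score.
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(model.summary().tables[1])
```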

  17. Irritable bowel syndrome patients show enhanced modulation of visceral perception by auditory stress.

    Science.gov (United States)

    Dickhaus, Britta; Mayer, Emeran A; Firooz, Nazanin; Stains, Jean; Conde, Francisco; Olivas, Teresa I; Fass, Ronnie; Chang, Lin; Mayer, Minou; Naliboff, Bruce D

    2003-01-01

    Symptoms in irritable bowel syndrome (IBS) patients are sensitive to psychological stressors. These effects may operate through an enhanced responsiveness of the emotional motor system, a network of brain circuits that modulate arousal, viscerosomatic perception, and autonomic responses associated with emotional responses, including anxiety and anger. The aim of this study was to test the primary hypothesis that IBS patients show altered perceptual responses to rectal balloon distention during experimentally induced psychological stress compared with healthy control subjects. A total of 15 IBS patients (nine women and six men) and 14 healthy controls (seven women and seven men) were studied during two laboratory sessions: 1) a mild stress condition (dichotomous listening to two conflicting types of music), and 2) a control condition (relaxing nature sounds). The stress and relaxation auditory stimuli were delivered over a 10-min listening period preceding rectal distentions and during the rectal distentions but not during the distention rating process. Ratings of intensity and unpleasantness of the visceral sensations, subjective emotional responses, heart rate, and neuroendocrine measures (norepinephrine, cortisol, adrenocorticotropic hormone [ACTH], and prolactin) were obtained during the study. IBS patients, but not healthy controls, rated the 45-mm Hg visceral stimulus significantly higher in terms of intensity and unpleasantness during the stress condition compared with the relaxation condition. IBS patients also reported higher ratings of stress, anger, and anxiety during the stress compared with the relaxing condition, whereas controls had smaller and nonsignificant subjective responses. Heart rate measurements, but not other neuroendocrine stress measures, were increased under the stress condition in both groups. These findings confirm the hypothesis of altered stress-induced modulation of visceral perception in IBS patients.

  18. GRM7 variants associated with age-related hearing loss based on auditory perception

    Science.gov (United States)

    Newman, Dina L.; Fisher, Laurel M.; Ohmen, Jeffrey; Parody, Robert; Fong, Chin-To; Frisina, Susan T.; Mapes, Frances; Eddins, David A.; Frisina, D. Robert; Frisina, Robert D.; Friedman, Rick A.

    2012-01-01

    Age-related hearing impairment (ARHI), or presbycusis, is a common condition of the elderly that results in significant communication difficulties in daily life. Clinically, it has been defined as a progressive loss of sensitivity to sound, starting at the high frequencies, inability to understand speech, lengthening of the minimum discernable temporal gap in sounds, and a decrease in the ability to filter out background noise. The causes of presbycusis are likely a combination of environmental and genetic factors. Previous research into the genetics of presbycusis has focused solely on hearing as measured by pure-tone thresholds. A few loci have been identified, based on a best ear pure-tone average phenotype, as having a likely role in susceptibility to this type of hearing loss; and GRM7 is the only gene that has achieved genome-wide significance. We examined the association of GRM7 variants identified from the previous study, which used an European cohort with Z-scores based on pure-tone thresholds, in a European–American population from Rochester, NY (N = 687), and used novel phenotypes of presbycusis. In the present study mixed modeling analyses were used to explore the relationship of GRM7 haplotype and SNP genotypes with various measures of auditory perception. Here we show that GRM7 alleles are associated primarily with peripheral measures of hearing loss, and particularly with speech detection in older adults. PMID:23102807

  19. Cross-modal Association between Auditory and Visuospatial Information in Mandarin Tone Perception in Noise by Native and Non-native Perceivers

    Directory of Open Access Journals (Sweden)

    Beverly Hannah

    2017-12-01

Full Text Available Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-Facial-Gestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers’ performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning.

  20. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    Directory of Open Access Journals (Sweden)

    Georg Berding

    Full Text Available Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.

  1. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    Science.gov (United States)

    Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M; Lenarz, Thomas; Lim, Hubert H

    2015-01-01

    Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.

  2. Musical Experience, Auditory Perception and Reading-Related Skills in Children

    Science.gov (United States)

    Banai, Karen; Ahissar, Merav

    2013-01-01

    Background The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differs between children with different amounts of musical experience. Methodology/Principal Findings Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. Conclusions/Significance Participants’ previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and if so, why this is the case.

  3. Musical experience, auditory perception and reading-related skills in children.

    Science.gov (United States)

    Banai, Karen; Ahissar, Merav

    2013-01-01

    The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differs between children with different amounts of musical experience. Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and if so, why this is the case.

  4. Changes in auditory perceptions and cortex resulting from hearing recovery after extended congenital unilateral hearing loss

    OpenAIRE

    Firszt, Jill B.; Reeder, Ruth M.; Holden, Timothy A.; Harold eBurton; Chole, Richard A.

    2013-01-01

    Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g. cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, ...

  5. Musical experience, auditory perception and reading-related skills in children.

    Directory of Open Access Journals (Sweden)

    Karen Banai

    Full Text Available BACKGROUND: The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differs between children with different amounts of musical experience. METHODOLOGY/PRINCIPAL FINDINGS: Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. CONCLUSIONS/SIGNIFICANCE: Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and if so, why this is the case.

  6. Auditory-Visual Speech Perception in Three- and Four-Year-Olds and Its Relationship to Perceptual Attunement and Receptive Vocabulary

    Science.gov (United States)

    Erdener, Dogu; Burnham, Denis

    2018-01-01

    Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception--lip-reading and visual…

  7. Effect of 24 Hours of Sleep Deprivation on Auditory and Linguistic Perception: A Comparison among Young Controls, Sleep-Deprived Participants, Dyslexic Readers, and Aging Adults

    Science.gov (United States)

    Fostick, Leah; Babkoff, Harvey; Zukerman, Gil

    2014-01-01

    Purpose: To test the effects of 24 hr of sleep deprivation on auditory and linguistic perception and to assess the magnitude of this effect by comparing such performance with that of aging adults on speech perception and with that of dyslexic readers on phonological awareness. Method: Fifty-five sleep-deprived young adults were compared with 29…

  8. Auditory, Visual, and Auditory-Visual Speech Perception by Individuals with Cochlear Implants versus Individuals with Hearing Aids

    Science.gov (United States)

    Most, Tova; Rothem, Hilla; Luntz, Michal

    2009-01-01

    The researchers evaluated the contribution of cochlear implants (CIs) to speech perception by a sample of prelingually deaf individuals implanted after age 8 years. This group was compared with a group with profound hearing impairment (HA-P), and with a group with severe hearing impairment (HA-S), both of which used hearing aids. Words and…

  9. Auditory, Visual, and Auditory-Visual Perceptions of Emotions by Young Children with Hearing Loss versus Children with Normal Hearing

    Science.gov (United States)

    Most, Tova; Michaelis, Hilit

    2012-01-01

    Purpose: This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. Method: A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify…

  10. An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex.

    Science.gov (United States)

    Okada, Kayoko; Venezia, Jonathan H; Matchin, William; Saberi, Kourosh; Hickok, Gregory

    2013-01-01

    Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in the posterior superior temporal sulcus (pSTS) and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases the activity in auditory cortex.

  11. An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Kayoko Okada

    Full Text Available Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in the posterior superior temporal sulcus (pSTS) and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases the activity in auditory cortex.

  12. Visual and auditory socio-cognitive perception in unilateral temporal lobe epilepsy in children and adolescents: a prospective controlled study.

    Science.gov (United States)

    Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania

    2014-12-01

    A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children or adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as a comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level compared to the control group with regard to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate rehabilitation.

  13. Changes in auditory perceptions and cortex resulting from hearing recovery after extended congenital unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Jill B Firszt

    2013-12-01

    Full Text Available Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g., cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately-severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after, surgery. Measurements were of sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominant monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. Results support future research of unilateral auditory deprivation effects and plasticity, with consideration for length of deprivation, age at hearing correction, and degree and type of hearing loss.

  14. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
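
    The subadditivity criterion above (AV − V < A) can be computed directly from condition-averaged ERPs. The sketch below is a minimal illustration with placeholder data; the sampling rate and the N1 window bounds are assumptions, not the study's parameters.

```python
import numpy as np

fs = 512  # Hz, assumed sampling rate
t = np.arange(-0.1, 0.5, 1 / fs)  # epoch time axis, sound onset at t = 0
rng = np.random.default_rng(0)
# erp_a, erp_v, erp_av: averaged ERP waveforms (volts) at one electrode for the
# auditory-only, visual-only, and audiovisual conditions (placeholder data here).
erp_a, erp_v, erp_av = (rng.normal(0, 1e-6, t.size) for _ in range(3))

# Additive-model residual: if audition and vision were processed independently,
# AV would equal A + V, so AV - V should equal A.
residual = erp_av - erp_v

# N1 amplitude as the mean in a 90-130 ms window (illustrative bounds).
win = (t >= 0.09) & (t <= 0.13)
n1_a = erp_a[win].mean()
n1_residual = residual[win].mean()

# N1 is a negative deflection, so suppression means the residual is less negative.
suppression = n1_residual - n1_a
print(f"N1 suppression (AV - V vs. A): {suppression * 1e6:.2f} uV")
```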

  15. Underlying structure of auditory-visual consonant perception by hearing-impaired children and the influences of syllabic compression.

    Science.gov (United States)

    Busby, P A; Tong, Y C; Clark, G M

    1988-06-01

    The identification of consonants in (a)-C-(a) nonsense syllables, using a fourteen-alternative forced-choice procedure, was examined in 4 profoundly hearing-impaired children under five conditions: audition alone using hearing aids in free-field (A), vision alone (V), auditory-visual using hearing aids in free-field (AV1), auditory-visual with linear amplification (AV2), and auditory-visual with syllabic compression (AV3). In the AV2 and AV3 conditions, acoustic signals were binaurally presented by magnetic or acoustic coupling to the subjects' hearing aids. The syllabic compressor had a compression ratio of 10:1, and attack and release times were 1.2 ms and 60 ms. The confusion matrices were subjected to two analysis methods: hierarchical clustering and information transmission analysis using articulatory features. The same general conclusions were drawn on the basis of results obtained from either analysis method. The results indicated better performance in the V condition than in the A condition. In the three AV conditions, the subjects predominantly combined the acoustic parameter of voicing with the visual signal. No consistent differences were recorded across the three AV conditions. Syllabic compression did not, therefore, appear to have a significant influence on AV perception for these children. A high degree of subject variability was recorded for the A and three AV conditions, but not for the V condition.

  16. A case of auditory agnosia with impairment of perception and expression of music: cognitive processing of tonality.

    Science.gov (United States)

    Satoh, Masayuki; Takeda, Katsuhiko; Kuzuhara, Shigeki

    2007-01-01

    There is fairly general agreement that melody and rhythm are independent components of the perception of music. In music theory, the melody and harmony determine to which tonality a piece belongs. It remains an unsettled question whether tonality is also an independent component of music perception or a by-product of melody and harmony. We describe a patient with auditory agnosia and expressive amusia that developed after a bilateral infarction of the temporal lobes. We carried out a detailed examination of musical ability in the patient and in control subjects. Compared with a control population, we identified the following impairments in music perception: (a) discrimination of familiar melodies; (b) discrimination of unfamiliar phrases; and (c) discrimination of isolated chords. His performance in pitch discrimination and tonality was within normal limits. Although intrasubject statistical analysis revealed a significant difference only between the tonality task and unfamiliar-phrase performance, comparison with control subjects suggested a dissociation between preserved tonality analysis and impaired perception of melody and chords. Comparing the results of our patient with those in the literature suggests a double dissociation between tonality and the other components. Thus, it seems reasonable to suppose that tonality is an independent component of music perception. Based on our present and previous studies, we propose a revised version of the cognitive model of musical processing in the brain. Copyright 2007 S. Karger AG, Basel.

  17. Operator auditory perception and spectral quantification of umbilical artery Doppler ultrasound signals.

    Directory of Open Access Journals (Sweden)

    Ann Thuring

    Full Text Available OBJECTIVE: An experienced sonographer can, by listening to the Doppler audio signals, perceive various timbres that distinguish different types of umbilical artery flow despite an unchanged pulsatility index (PI). Our aim was to develop an objective measure of the Doppler audio signals recorded from the fetoplacental circulation in a sheep model. METHODS: Various degrees of pathological flow velocity waveforms in the umbilical artery, similar to those in complicated human pregnancies, were induced by microsphere embolization of the placental bed (embolization model, 7 lamb fetuses, 370 Doppler recordings) or by fetal hemodilution (anemia model, 4 lamb fetuses, 184 recordings). A subjective 11-step operator auditory scale (OAS) was related to conventional Doppler parameters, PI and time-averaged mean velocity (TAM), and to sound frequency analysis of the Doppler signals (the sound frequency with the maximum energy content [MAXpeak] and the frequency band at the maximum level minus 15 dB [MAXpeak-15 dB]) over several heart cycles. RESULTS: We found a negative correlation between the OAS and PI: median Rho −0.73 (range −0.35 to −0.94) and −0.68 (range −0.57 to −0.78) in the two lamb models, respectively. There was a positive correlation between OAS and TAM in both models: median Rho 0.80 (range 0.58-0.95) and 0.90 (range 0.78-0.95), respectively. A strong correlation was found between TAM and the results of the sound spectrum analysis; in the embolization model the median r was 0.91 (range 0.88-0.97) for MAXpeak and 0.91 (range 0.82-0.98) for MAXpeak-15 dB. In the anemia model, the corresponding values were 0.92 (range 0.78-0.96) and 0.96 (range 0.89-0.98), respectively. CONCLUSION: Audio-spectrum analysis reflects the subjective perception of Doppler sound signals in the umbilical artery and correlates strongly with TAM velocity. This information might be of importance for the clinical management of complicated pregnancies as an addition to conventional Doppler parameters.
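
    The two spectral indices defined above (MAXpeak and MAXpeak-15 dB) can be estimated from a Doppler audio segment with a standard power-spectrum routine. The sketch below is a minimal illustration under assumed processing choices (Welch averaging, segment length) and a synthetic test signal; it is not the study's actual processing chain.

```python
import numpy as np
from scipy.signal import welch

def doppler_spectral_indices(audio, fs):
    """Estimate MAXpeak (frequency with maximum energy) and the MAXpeak-15 dB
    band (frequencies within 15 dB of the peak) from a Doppler audio segment
    spanning several heart cycles. Welch parameters here are assumptions."""
    f, pxx = welch(audio, fs=fs, nperseg=4096)
    pxx_db = 10 * np.log10(pxx)
    i_peak = np.argmax(pxx_db)
    max_peak = f[i_peak]
    above = f[pxx_db >= pxx_db[i_peak] - 15]   # band within 15 dB of the peak
    return max_peak, (above.min(), above.max())

# Toy usage: a synthetic 1 kHz tone in noise standing in for a Doppler recording.
fs = 8000
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
sig = np.sin(2 * np.pi * 1000 * t) + 0.1 * rng.normal(size=t.size)
peak, band = doppler_spectral_indices(sig, fs)
print(f"MAXpeak ~ {peak:.0f} Hz, MAXpeak-15 dB band ~ {band[0]:.0f}-{band[1]:.0f} Hz")
```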

  18. Gated auditory speech perception: effects of listening conditions and cognitive capacity.

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Saremi, Amin; Rönnberg, Jerker

    2014-01-01

    This study aimed to measure the initial portion of signal required for the correct identification of auditory speech stimuli (or isolation points, IPs) in silence and noise, and to investigate the relationships between auditory and cognitive functions in silence and noise. Twenty-one university students were presented with auditory stimuli in a gating paradigm for the identification of consonants, words, and final words in highly predictable and low predictable sentences. The Hearing in Noise Test (HINT), the reading span test, and the Paced Auditory Serial Addition Test were also administered to measure speech-in-noise ability, working memory and attentional capacities of the participants, respectively. The results showed that noise delayed the identification of consonants, words, and final words in highly predictable and low predictable sentences. HINT performance correlated with working memory and attentional capacities. In the noise condition, there were correlations between HINT performance, cognitive task performance, and the IPs of consonants and words. In the silent condition, there were no correlations between auditory and cognitive tasks. In conclusion, a combination of hearing-in-noise ability, working memory capacity, and attention capacity is needed for the early identification of consonants and words in noise.
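
    A gating paradigm presents successively longer onset portions of a stimulus until the listener identifies it; the isolation point is the shortest gate supporting correct identification. The sketch below shows the stimulus-truncation step of such a paradigm; the gate step size and the offset ramp are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def gate_stimulus(stimulus, fs, gate_ms, ramp_ms=5.0):
    """Return the initial gate_ms milliseconds of a stimulus for one gate of a
    gating paradigm; a short linear offset ramp (an assumption) avoids a click."""
    n = int(fs * gate_ms / 1000)
    gated = np.array(stimulus[:n], dtype=float)
    n_ramp = min(int(fs * ramp_ms / 1000), n)
    gated[-n_ramp:] *= np.linspace(1.0, 0.0, n_ramp)
    return gated

# The isolation point (IP) is the shortest gate at which responses become
# (and remain) correct; here gates grow in 40-ms steps over a 1-s "word".
fs = 44100
word = np.random.default_rng(2).normal(size=fs)  # placeholder waveform
gates = [gate_stimulus(word, fs, g) for g in range(40, 1001, 40)]
```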

  19. Deep band modulated phrase perception in quiet and noise in individuals with auditory neuropathy spectrum disorder and sensorineural hearing loss

    Directory of Open Access Journals (Sweden)

    Hemanth Narayan Shetty

    2017-01-01

    Full Text Available Context: Deep band modulation (DBM) improves speech perception in individuals with learning disability and in older adults, populations with underlying temporal processing impairments. However, it is unclear how DBM phrases are perceived in quiet and in noise by individuals with auditory neuropathy spectrum disorder (ANSD) and sensorineural hearing loss (SNHL), as these individuals also suffer from temporal impairment. Aim: To study the effect of DBM and noise on phrase perception in individuals with normal hearing, SNHL, and ANSD. Settings and Design: A factorial design was used to study deep-band-modulated phrase perception in quiet and in noise. Materials and Methods: Twenty participants in each group (normal, SNHL, and ANSD) were included to assess phrase perception on four lists each of unprocessed (UP) and DBM phrases at different signal-to-noise ratios (SNRs: −1, −3, and −5 dB), presented at the most comfortable level. In addition, temporal processing was assessed with a gap detection threshold test. Statistical Analysis: A mixed analysis of variance was used to investigate the main and interaction effects of condition, noise, and group. Further, a Pearson product-moment correlation was used to document the relationship between phrase perception and temporal processing among the study participants in each experimental condition. Results: In each group, a significant improvement was observed for DBM phrase perception over UP phrase recognition in quiet and in noise. Although a significant improvement was observed, the benefit of DBM over UP was negligible at −5 dB SNR in both the SNHL and ANSD groups. In addition, as expected, phrase perception in each condition was significantly better in the normal-hearing group than in the SNHL group, followed by the ANSD group. Further, in both atypical groups, a strong negative correlation was found between phrase perception and gap detection threshold in each experimental condition. Conclusion: This
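
    Presenting phrases at a fixed SNR requires scaling the noise relative to the speech before mixing. The sketch below shows one common (RMS-based) way to build the −1, −3, and −5 dB conditions; it is an illustration under that assumption, not the study's procedure.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db, then
    mix. RMS-based level equating is an assumption about the method used."""
    noise = noise[: speech.size]
    rms_s = np.sqrt(np.mean(speech ** 2))
    rms_n = np.sqrt(np.mean(noise ** 2))
    target_rms_n = rms_s / (10 ** (snr_db / 20))
    return speech + noise * (target_rms_n / rms_n)

rng = np.random.default_rng(3)
phrase = rng.normal(size=16000)   # placeholder phrase waveform (1 s at 16 kHz)
babble = rng.normal(size=16000)   # placeholder noise
mixes = {snr: mix_at_snr(phrase, babble, snr) for snr in (-1, -3, -5)}
```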

  20. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

    Full Text Available Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is, in essence, the perception of an absent stimulus. Here we discuss four definitions of hallucination: (1) perception of a stimulus without the presence of any object; (2) hallucination proper, i.e., false perceptions that are not falsifications of real perceptions but manifest as new objects, occurring alongside and synchronously with real perceptions; (3) an out-of-body perception that has no correspondence with a real object; and (4), in a stricter sense, perceptions in a conscious and awake state, in the absence of external stimuli, that have the qualities of real perception, in that they are vivid, substantial, and located in external objective space. We discuss these in detail here.

  1. Echoes of the spoken past: how auditory cortex hears context during speech perception

    National Research Council Canada - National Science Library

    Skipper, Jeremy I

    2014-01-01

    What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded...

  2. The Central Role of Recognition in Auditory Perception: A Neurobiological Model

    Science.gov (United States)

    McLachlan, Neil; Wilson, Sarah

    2010-01-01

    The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior…

  3. Hearing a voice in the noise : auditory hallucinations and speech perception

    NARCIS (Netherlands)

    Vercammen, A.; de Haan, E. H. F.; Aleman, A.

    Background. It has recently been suggested that auditory hallucinations are the result of a criterion shift when deciding whether or not a meaningful signal has emerged. The approach proposes that a liberal criterion may result in increased false-positive identifications, without additional

  4. Bird brains and songs : Neural mechanisms of auditory memory and perception in zebra finches

    NARCIS (Netherlands)

    Gobes, S.M.H.

    2009-01-01

    Songbirds, such as zebra finches, learn their songs from a ‘tutor’ (usually the father), early in life. There are strong parallels between the behavioural, cognitive and neural processes that underlie vocal learning in humans and songbirds. In both cases there is a sensitive period for auditory

  5. Auditory Spatial Perception: Auditory Localization

    Science.gov (United States)

    2012-05-01

    the presence of primacy and recency effects, resulting in a large number of errors in which listeners erroneously selected the loudspeaker that had...the sound source that produced this sound. As in the previous studies mentioned, pronounced primacy and recency effects were found. Further research...

  6. Auditory-Acoustic Basis of Consonant Perception. Attachments A thru I

    Science.gov (United States)

    1991-01-22

    DEC Screen Management routines (SMG$ Run Time Library). Finally, we developed a program to assist the user in editing a file which contains a list of...demisyllable unit over the whole syllable is a large reduction in the size of the reference inventory. One study (46) shows that a reduction by about a...perceptual aspect is implied. It is within the broad framework described above that the auditory-perceptual theory will be considered. But before beginning

  7. An investigation of the auditory perception of western lowland gorillas in an enrichment study.

    Science.gov (United States)

    Brooker, Jake S

    2016-09-01

    Previous research has highlighted the varied effects of auditory enrichment on different captive animals. This study investigated how manipulating musical components can influence the behavior of a group of captive western lowland gorillas (Gorilla gorilla gorilla) at Bristol Zoo. The gorillas were observed during exposure to classical music, rock-and-roll music, and rainforest sounds. The two music conditions were modified to create five further conditions: unmanipulated, decreased pitch, increased pitch, decreased tempo, and increased tempo. We compared the prevalence of activity, anxiety, and social behaviors between the standard conditions. We also compared the prevalence of each of these behaviors across the manipulated conditions of each type of music independently and collectively. Control observations with no sound exposure were regularly scheduled between the observations of the 12 auditory conditions. The results suggest that naturalistic rainforest sounds had no influence on the anxiety of captive gorillas, contrary to past research. The tempo of music appears to be significantly associated with activity levels among this group, and social behavior may be affected by pitch. Low tempo music also may be effective at reducing anxiety behavior in captive gorillas. Regulated auditory enrichment may provide effective means of calming gorillas, or for facilitating active behavior. Zoo Biol. 35:398-408, 2016. © 2016 Wiley Periodicals, Inc.

  8. Temporal order perception of auditory stimuli is selectively modified by tonal and non-tonal language environments.

    Science.gov (United States)

    Bao, Yan; Szymaszek, Aneta; Wang, Xiaoying; Oron, Anna; Pöppel, Ernst; Szelag, Elzbieta

    2013-12-01

    The close relationship between temporal perception and speech processing is well established. The present study focused on the specific question whether the speech environment could influence temporal order perception in subjects whose language backgrounds are distinctively different, i.e., Chinese (tonal language) vs. Polish (non-tonal language). Temporal order thresholds were measured for both monaurally presented clicks and binaurally presented tone pairs. Whereas the click experiment showed similar order thresholds for the two language groups, the experiment with tone pairs resulted in different observations: while Chinese demonstrated better performance in discriminating the temporal order of two "close frequency" tone pairs (600 Hz and 1200 Hz), Polish subjects showed a reversed pattern, i.e., better performance for "distant frequency" tone pairs (400 Hz and 3000 Hz). These results indicate on the one hand a common temporal mechanism for perceiving the order of two monaurally presented stimuli, and on the other hand neuronal plasticity for perceiving the order of frequency-related auditory stimuli. We conclude that the auditory brain is modified with respect to temporal processing by long-term exposure to a tonal or a non-tonal language. As a consequence of such an exposure different cognitive modes of operation (analytic vs. holistic) are selected: the analytic mode is adopted for "distant frequency" tone pairs in Chinese and for "close frequency" tone pairs in Polish subjects, whereas the holistic mode is selected for "close frequency" tone pairs in Chinese and for "distant frequency" tone pairs in Polish subjects, reflecting a double dissociation of function. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
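
    Temporal order thresholds like those above are commonly estimated with an adaptive staircase that shortens the stimulus onset asynchrony (SOA) after correct responses and lengthens it after errors. The sketch below implements a generic 2-down/1-up staircase with a toy response model; the study's actual psychophysical procedure is not specified here, so every parameter is illustrative.

```python
import random

def temporal_order_trial(soa_ms):
    """Placeholder for one trial: present two stimuli separated by soa_ms and
    return True if the listener reports their order correctly (toy model)."""
    return random.random() < min(0.99, 0.5 + soa_ms / 120)

def staircase(start_ms=120.0, step_ms=10.0, floor_ms=5.0, n_reversals=8):
    """2-down/1-up staircase converging near 70.7% correct; the threshold is
    taken as the mean SOA over the last few reversals."""
    soa, correct_run, direction, reversals = start_ms, 0, -1, []
    while len(reversals) < n_reversals:
        if temporal_order_trial(soa):
            correct_run += 1
            if correct_run == 2:               # two correct -> harder (shorter SOA)
                correct_run = 0
                if direction != -1:
                    reversals.append(soa)
                direction = -1
                soa = max(floor_ms, soa - step_ms)
        else:                                  # one error -> easier (longer SOA)
            correct_run = 0
            if direction != +1:
                reversals.append(soa)
            direction = +1
            soa += step_ms
    return sum(reversals[-4:]) / 4

print(f"Estimated temporal order threshold: {staircase():.1f} ms")
```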

  9. Auditory Processing in Specific Language Impairment (SLI): Relations with the Perception of Lexical and Phrasal Stress

    Science.gov (United States)

    Richards, Susan; Goswami, Usha

    2015-01-01

    Purpose: We investigated whether impaired acoustic processing is a factor in developmental language disorders. The amplitude envelope of the speech signal is known to be important in language processing. We examined whether impaired perception of amplitude envelope rise time is related to impaired perception of lexical and phrasal stress in…

  10. Impaired Pitch Perception and Memory in Congenital Amusia: The Deficit Starts in the Auditory Cortex

    Science.gov (United States)

    Albouy, Philippe; Mattout, Jeremie; Bouet, Romain; Maby, Emmanuel; Sanchez, Gaetan; Aguera, Pierre-Emmanuel; Daligault, Sebastien; Delpuech, Claude; Bertrand, Olivier; Caclin, Anne; Tillmann, Barbara

    2013-01-01

    Congenital amusia is a lifelong disorder of music perception and production. The present study investigated the cerebral bases of impaired pitch perception and memory in congenital amusia using behavioural measures, magnetoencephalography and voxel-based morphometry. Congenital amusics and matched control subjects performed two melodic tasks (a…

  11. The Role of Sensory Perception, Emotionality and Lifeworld in Auditory Word Processing: Evidence from Congenital Blindness and Synesthesia.

    Science.gov (United States)

    Papadopoulos, Judith; Domahs, Frank; Kauschke, Christina

    2017-12-01

    Although it has been established that human beings process concrete and abstract words differently, it is still a matter of debate what factors contribute to this difference. Since concrete concepts are closely tied to sensory perception, perceptual experience seems to play an important role in their processing. The present study investigated the processing of nouns during an auditory lexical decision task. Participants came from three populations differing in their visual-perceptual experience: congenitally blind persons, word-color synesthetes, and sighted non-synesthetes. Specifically, three features with potential relevance to concreteness were manipulated: sensory perception, emotionality, and Husserlian lifeworld, a concept related to the inner versus the outer world of the self. In addition to a classical concreteness effect, our results revealed a significant effect of lifeworld: words that are closely linked to the internal states of humans were processed faster than words referring to the outside world. When lifeworld was introduced as predictor, there was no effect of emotionality. Concerning participants' perceptual experience, an interaction between participant group and item characteristics was found: the effects of both concreteness and lifeworld were more pronounced for blind compared to sighted participants. We will discuss the results in the context of embodied semantics, and we will propose an approach to concreteness based on the individual's bodily experience and the relatedness of a given concept to the self.

  12. Functional associations at global brain level during perception of an auditory illusion by applying maximal information coefficient

    Science.gov (United States)

    Bhattacharya, Joydeep; Pereda, Ernesto; Ioannou, Christos

    2018-02-01

    Maximal information coefficient (MIC) is a recently introduced information-theoretic measure of functional association with promising potential for application to high-dimensional complex data sets. Here, we applied MIC to reveal the nature of the functional associations between different brain regions during the perception of binaural beat (BB); BB is an auditory illusion occurring when two sinusoidal tones of slightly different frequency are presented separately to each ear and an illusory beat at the difference frequency is perceived. We recorded 64-channel EEG from two groups of participants, musicians and non-musicians, during the presentation of BB, and systematically varied the frequency difference from 1 Hz to 48 Hz. Participants were also presented non-binaural beat (NBB) stimuli, in which the same frequency was presented to both ears. Across groups, as compared to NBB, (i) BB conditions produced the most robust changes in the MIC values at the whole brain level when the frequency differences were in the classical alpha range (8-12 Hz), and (ii) the number of electrode pairs showing nonlinear associations decreased gradually with increasing frequency difference. Between groups, significant effects were found for BBs in the broad gamma frequency range (34-48 Hz), but such effects were not observed between groups during NBB. Altogether, these results revealed the nature of functional associations at the whole brain level during binaural beat perception and demonstrated the usefulness of MIC in characterizing interregional neural dependencies.
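
    Two computational pieces of this study are easy to sketch: generating a binaural-beat stimulus (two tones differing by the beat frequency, one per ear) and computing MIC between a pair of channels. The example below uses the minepy package for MIC; the tone frequencies, duration, and MIC parameters are illustrative assumptions, and the "EEG" channels are synthetic.

```python
import numpy as np
from minepy import MINE  # a MIC implementation; availability of minepy is assumed

# Binaural-beat stimulus: two sine tones of slightly different frequency, one
# per ear; an illusory beat at the difference frequency (here 10 Hz) is heard.
fs, dur, base_f, delta_f = 44100, 5.0, 250.0, 10.0   # 10-Hz beat (alpha range)
t = np.arange(0, dur, 1 / fs)
left = np.sin(2 * np.pi * base_f * t)
right = np.sin(2 * np.pi * (base_f + delta_f) * t)
stereo = np.stack([left, right], axis=1)             # write to a stereo WAV to present

# MIC between two (here synthetic) channels; in the study this was computed for
# all electrode pairs. Parameter values are minepy defaults, not the paper's.
rng = np.random.default_rng(4)
chan_a = rng.normal(size=2000)
chan_b = chan_a ** 2 + 0.5 * rng.normal(size=2000)   # a purely nonlinear dependence
mine = MINE(alpha=0.6, c=15)
mine.compute_score(chan_a, chan_b)
print(f"MIC = {mine.mic():.2f}")                     # high despite near-zero linear trend
```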

  13. Giving Speech a Hand: Gesture Modulates Activity in Auditory Cortex During Speech Perception

    OpenAIRE

    Hubbard, Amy L; Wilson, Stephen M.; Callan, Daniel E; Dapretto, Mirella

    2009-01-01

    Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture – a fundamental type of hand gesture that marks speech prosody – might impact speech perception at the neu...

  14. Dissociable neural imprints of perception and grammar in auditory functional imaging.

    Science.gov (United States)

    Herrmann, Björn; Obleser, Jonas; Kalberlah, Christian; Haynes, John-Dylan; Friederici, Angela D

    2012-03-01

    In language processing, the relative contribution of early sensory and higher cognitive brain areas is still an open issue. A recent controversial hypothesis proposes that sensory cortices show sensitivity to syntactic processes, whereas other studies suggest a wider neural network outside sensory regions. The goal of the current event-related fMRI study is to clarify the contribution of sensory cortices in auditory syntactic processing in a 2 × 2 design. Two-word utterances were presented auditorily and varied both in perceptual markedness (presence or absence of an overt word category marking "-t"), and in grammaticality (syntactically correct or incorrect). A multivariate pattern classification approach was applied to the data, flanked by conventional cognitive subtraction analyses. The combination of methods and the 2 × 2 design revealed a clear picture: The cognitive subtraction analysis found initial syntactic processing signatures in a neural network including the left IFG, the left aSTG, the left superior temporal sulcus (STS), as well as the right STS/STG. Classification of local multivariate patterns indicated the left-hemispheric regions in IFG, aSTG, and STS to be more syntax-specific than the right-hemispheric regions. Importantly, auditory sensory cortices were only sensitive to the overt perceptual marking, but not to the grammaticality, speaking against syntax-inflicted sensory cortex modulations. Instead, our data provide clear evidence for a distinction between regions involved in pure perceptual processes and regions involved in initial syntactic processes. Copyright © 2011 Wiley Periodicals, Inc.

  15. The use of auditory and visual context in speech perception by listeners with normal hearing and listeners with cochlear implants

    Directory of Open Access Journals (Sweden)

    Matthew eWinn

    2013-11-01

    Full Text Available There is a wide range of acoustic and visual variability across different talkers and different speaking contexts. Listeners with normal hearing accommodate that variability in ways that facilitate efficient perception, but it is not known whether listeners with cochlear implants can do the same. In this study, listeners with normal hearing (NH) and listeners with cochlear implants (CIs) were tested for accommodation to auditory and visual phonetic contexts created by gender-driven speech differences as well as vowel coarticulation and lip rounding in both consonants and vowels. Accommodation was measured as the shifting of perceptual boundaries between /s/ and /ʃ/ sounds in various contexts, as modeled by mixed-effects logistic regression. Owing to the spectral contrasts thought to underlie these context effects, CI listeners were predicted to perform poorly, but showed considerable success. Listeners with cochlear implants not only showed sensitivity to auditory cues to gender, they were also able to use visual cues to gender (i.e., faces) as a supplement or proxy for information in the acoustic domain, in a pattern that was not observed for listeners with normal hearing. Spectrally-degraded stimuli heard by listeners with normal hearing generally did not elicit strong context effects, underscoring the limitations of noise vocoders and/or the importance of experience with electric hearing. Visual cues for consonant lip rounding and vowel lip rounding were perceived in a manner consistent with coarticulation and were generally used more heavily by listeners with CIs. Results suggest that listeners with cochlear implants are able to accommodate various sources of acoustic variability either by attending to appropriate acoustic cues or by inferring them via the visual signal.
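
    Boundary shifts like those above are typically read off a logistic fit: the boundary is the continuum step at which the model predicts a 50% response rate. The sketch below fits a simplified fixed-effects logistic regression to simulated data (the study used mixed-effects models with listener random effects); all variable names and values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: a 10-step /s/-/sh/ continuum, a binary context
# factor, and binary responses (1 = heard "s"); the simulated context effect
# shifts the category boundary by 0.8 continuum steps.
rng = np.random.default_rng(5)
step = np.tile(np.arange(1, 11), 40)
context = np.repeat([0, 1], 200)                      # e.g., two talker-gender contexts
p = 1 / (1 + np.exp(-(step - 5.5 - 0.8 * context)))   # simulated psychometric function
df = pd.DataFrame({"step": step, "context": context,
                   "resp_s": rng.binomial(1, p)})

fit = smf.logit("resp_s ~ step + context", data=df).fit(disp=False)
b = fit.params
# Category boundary = continuum step where P("s") = 0.5, per context level.
for ctx in (0, 1):
    boundary = -(b["Intercept"] + b["context"] * ctx) / b["step"]
    print(f"context={ctx}: boundary at step {boundary:.2f}")
```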

  16. Compensation for Coarticulation: Disentangling Auditory and Gestural Theories of Perception of Coarticulatory Effects in Speech

    Science.gov (United States)

    Viswanathan, Navin; Magnuson, James S.; Fowler, Carol A.

    2010-01-01

    According to one approach to speech perception, listeners perceive speech by applying general pattern matching mechanisms to the acoustic signal (e.g., Diehl, Lotto, & Holt, 2004). An alternative is that listeners perceive the phonetic gestures that structured the acoustic signal (e.g., Fowler, 1986). The two accounts have offered different…

  17. Adaptive Sex Differences in Auditory Motion Perception: Looming Sounds Are Special

    Science.gov (United States)

    Neuhoff, John G.; Planisek, Rianna; Seifritz, Erich

    2009-01-01

    In 4 experiments, the authors examined sex differences in audiospatial perception of sounds that moved toward and away from the listener. Experiment 1 showed that both men and women underestimated the time-to-arrival of full-cue looming sounds. However, this perceptual bias was significantly stronger among women than among men. In Experiment 2,…

  18. Hearing Aid-Induced Plasticity in the Auditory System of Older Adults: Evidence from Speech Perception

    Science.gov (United States)

    Lavie, Limor; Banai, Karen; Karni, Avi; Attias, Joseph

    2015-01-01

    Purpose: We tested whether using hearing aids can improve unaided performance in speech perception tasks in older adults with hearing impairment. Method: Unaided performance was evaluated in dichotic listening and speech-in-noise tests in 47 older adults with hearing impairment; 36 participants in 3 study groups were tested before hearing aid…

  19. Individual Differences in Pseudohomophony Effect Relates to Auditory Categorical Perception Skills

    Science.gov (United States)

    Luque, David; Luque, Juan L.; Lopez-Zamora, Miguel

    2011-01-01

    The study examined whether individual differences in the quality of phonological representations, measured by a categorical perception task (CP), are related with the use of phonological information in a lexical decision pseudohomophone task. In addition, the lexical frequency of the stimuli was manipulated. The sample consisted of…

  20. A loudspeaker-based room auralisation (LoRA) system for auditory perception research

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Favrot, Sylvain Emmanuel

    of the LoRA processing is first presented, followed by a battery of objective and subjective tests to demonstrate the applicability of the different components of the system. In the objective evaluation, monaural and binaural room acoustic measures (e.g., reverberation time, clarity, interaural cross-correlation coefficient) were considered. The subjective evaluation included speech intelligibility and distance perception measures.

  1. Evaluating auditory perception and communication demands required to carry out work tasks and complimentary hearing resources and skills for older workers with hearing loss.

    Science.gov (United States)

    Jennings, M B; Shaw, L; Hodgins, H; Kuchar, D A; Bataghva, L Poost-Foroosh

    2010-01-01

    For older workers with acquired hearing loss, this loss, as well as the changing nature of work and the workforce, may lead to difficulties and disadvantages in obtaining and maintaining employment. Currently there are very few instruments that can assist workplaces, employers and workers to prepare for older workers with hearing loss, or with the evaluation of the auditory perception demands of work, especially those relevant to communication and to safety-sensitive workplaces that require high levels of communication. This paper introduces key theoretical considerations that informed the development of a new framework, the Audiologic Ergonomic (AE) Framework, to guide audiologists, work rehabilitation professionals and workers in developing tools to support the identification and evaluation of auditory perception demands in the workplace, the challenges to communication, and the subsequent productivity and safety in the performance of work duties by older workers with hearing loss. The theoretical concepts underpinning this framework are discussed, along with next steps in developing tools, such as the Canadian Hearing Demands Tool (C-HearD Tool), to advance approaches for evaluating auditory perception and communication demands in the workplace.

  2. Visual prosody and speech intelligibility: head movement improves auditory speech perception.

    Science.gov (United States)

    Munhall, K G; Jones, Jeffery A; Callan, Daniel E; Kuratate, Takaaki; Vatikiotis-Bateson, Eric

    2004-02-01

    People naturally move their heads when they speak, and our study shows that this rhythmic head motion conveys linguistic information. Three-dimensional head and face motion and the acoustics of a talker producing Japanese sentences were recorded and analyzed. The head movement correlated strongly with the pitch (fundamental frequency) and amplitude of the talker's voice. In a perception study, Japanese subjects viewed realistic talking-head animations based on these movement recordings in a speech-in-noise task. The animations allowed the head motion to be manipulated without changing other characteristics of the visual or acoustic speech. Subjects correctly identified more syllables when natural head motion was present in the animation than when it was eliminated or distorted. These results suggest that nonverbal gestures such as head movements play a more direct role in the perception of speech than previously known.
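
    The reported coupling between head motion and voice acoustics amounts to a frame-wise correlation between a head-rotation trace and the F0 (and amplitude) contour. The sketch below computes such a correlation on synthetic traces; real inputs would come from motion capture and a pitch tracker resampled to a common frame rate, and the coupling strength here is invented.

```python
import numpy as np

# Frame-by-frame correlation between one head-rotation trace and the voice F0
# contour, restricted to voiced frames. Both inputs are placeholders.
rng = np.random.default_rng(6)
f0 = 120 + 20 * np.sin(np.linspace(0, 8 * np.pi, 500)) + rng.normal(0, 2, 500)
head_pitch = 0.05 * (f0 - 120) + rng.normal(0, 0.5, 500)   # degrees, toy coupling
voiced = f0 > 50                                           # keep voiced frames only

r = np.corrcoef(head_pitch[voiced], f0[voiced])[0, 1]
print(f"head-motion / F0 correlation: r = {r:.2f}")
```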

  3. Neural correlates of auditory perception in Williams syndrome: an fMRI study.

    Science.gov (United States)

    Levitin, Daniel J; Menon, Vinod; Schmitt, J Eric; Eliez, Stephan; White, Christopher D; Glover, Gary H; Kadis, Jay; Korenberg, Julie R; Bellugi, Ursula; Reiss, Allan L

    2003-01-01

    Williams syndrome (WS), a neurogenetic developmental disorder, is characterized by a rare fractionation of higher cortical functioning: selective preservation of certain complex faculties (language, music, face processing, and sociability) in contrast to marked and severe deficits in nearly every other cognitive domain (reasoning, spatial ability, motor coordination, arithmetic, problem solving). WS people are also known to suffer from hyperacusis and to experience heightened emotional reactions to music and certain classes of noise. We used functional magnetic resonance imaging to examine the neural basis of auditory processing of music and noise in WS patients and age-matched controls and found strikingly different patterns of neural organization between the groups. Those regions supporting music and noise processing in normal subjects were found not to be consistently activated in the WS participants (e.g., superior temporal and middle temporal gyri). Instead, the WS participants showed significantly reduced activation in the temporal lobes coupled with significantly greater activation in the right amygdala. In addition, WS participants (but not controls) showed a widely distributed network of activation in cortical and subcortical structures, including the brain stem, during music processing. Taken together with previous ERP and cytoarchitectonic studies, this first published report of WS using fMRI provides additional evidence that the neurofunctional organization of WS people differs from that of normal people, which may help to explain their atypical reactions to sound. These results constitute an important first step in drawing out the links between genes, brain, cognition, and behavior in Williams syndrome.

  4. The effects of auditory perception and musical preference on anxiety in naive human subjects.

    Science.gov (United States)

    Salamon, Elliott; Bernstein, Steven R; Kim, Seung-A; Kim, Minsun; Stefano, George B

    2003-09-01

    The use of music as a method of relieving anxiety has been studied extensively by researchers from varying disciplines. Most of these reports focused on which genre of music best aided in the relief of stress. Little work has been performed on auditory preference, to ascertain whether an individual's preferred type of music reduces anxiety more than music for which they have little or no propensity. In the present report we seek to determine whether naive human subjects exposed to music of their preference show a decrease in anxiety, as measured by systolic and diastolic blood pressure values. We furthermore contrast these values with those obtained during non-preferred music listening. We found a statistically significant reduction of anxiety levels only when subjects were exposed to their preferred musical selections. Students participating in the study already had knowledge of what genre of music would best relax them. It is our belief that, within the general population, many people do not have this self-understanding. We conclude that music therapy may provide a mechanism for this self-understanding and subsequently help alleviate anxiety and stress.

  5. Auditory perception of non-sense and familiar Bengali rhyming words in children with and without SLD.

    Science.gov (United States)

    Sinha, Anisha; Rout, Nachiketa

    2015-12-01

    Rhyming ability is among the earliest metaphonological skills to be acquired during the process of speech and language acquisition. Metalinguistic skills, particularly metaphonological skills, greatly influence language learning during the early school grades, and children with learning disorders are reportedly poor at these skills. The aims were to develop and validate a Bengali rhyming checklist and to study the auditory perception of non-sense and familiar Bengali rhyming words in children with and without specific learning disability (SLD). 60 children, age range 8-11 years, participated in two groups; group-A included children with SLD and group-B typically developing children (TDC). All participants were native Bengali speakers, attending regular school, with hearing sensitivity less than 25 dB HL, no history of ear discharge, and a middle socioeconomic background. A rhyming checklist was developed in Bengali, consisting of familiar (section-A) and non-sense (section-B) words. Test-retest reliability and validity measures were obtained. The items on the checklist were audio recorded and presented to the participants in a rhyming judgment task in a one-to-one setup. Scores were obtained and statistically analyzed using SPSS software (version 11.0). Children with SLD scored significantly lower on the rhyming judgment task than TDC (p < .05). Semantic content influences rhyming perception in children with SLD but has no significant effect on TDC. The developed rhyming checklist may be used as a screening tool for children at risk of SLD in primary school grades. Rhyming activities may be utilized by teachers and parents to promote language learning in young learners. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Parents' perception of the auditory attention skills of their children with cleft lip and palate: a retrospective study

    Directory of Open Access Journals (Sweden)

    Mondelli, Maria Fernanda Capoani Garcia

    2012-01-01

    Full Text Available Introduction: Cognitive and neurophysiological mechanisms are necessary to process and decode acoustic stimulation. Hearing is influenced by higher-level cognitive factors such as memory, attention and learning. The sensory deprivation caused by conductive hearing loss, frequent in the population with cleft lip and palate, can affect many cognitive functions, among them attention, and can also harm school, linguistic and interpersonal performance. Objective: To verify the perception of parents of children with cleft lip and palate regarding their children's auditory attention. Method: Retrospective study of children with any type of cleft lip and palate, without any associated genetic syndrome, whose parents answered a questionnaire about auditory attention skills. Results: 44 of the children were male and 26 female; 35.71% of the answers were affirmative for hearing loss and 71.43% for otologic infections. Conclusion: Most of the interviewed parents pointed to at least one of the attention-related behaviors contained in the questionnaire, indicating that the presence of cleft lip and palate can be related to difficulties in auditory attention.

  7. Computational Auditory Scene Analysis Based Perceptual and Neural Principles

    National Research Council Canada - National Science Library

    Wang, DeLiang

    2004-01-01

    ... This fundamental process of auditory perception is called auditory scene analysis. Of particular importance in auditory scene analysis is the separation of speech from interfering sounds, or speech segregation...

  8. Auditory-prosodic processing in bipolar disorder; from sensory perception to emotion.

    Science.gov (United States)

    Van Rheenen, Tamsyn E; Rossell, Susan L

    2013-12-01

    Accurate emotion processing is critical to understanding the social world. Despite growing evidence of facial emotion processing impairments in patients with bipolar disorder (BD), comprehensive investigations of emotional prosodic processing are limited. The existing (albeit sparse) literature is inconsistent at best, and confounded by failures to control for the effects of gender or low-level sensory-perceptual impairments. The present study sought to address this paucity of research by utilizing a novel behavioural battery to comprehensively investigate the auditory-prosodic profile of BD. Fifty BD patients and 52 healthy controls completed tasks assessing emotional and linguistic prosody, and sensitivity for discriminating tones that deviate in amplitude, duration and pitch. BD patients were less sensitive than their control counterparts in discriminating amplitude and durational cues, but not pitch cues or linguistic prosody. They also demonstrated an impaired ability to recognize happy intonations, although this was specific to males with the disorder. The recognition of happy in the patient group was correlated with pitch and amplitude sensitivity in female patients only. The small sample size of patients after stratification by current mood state prevented us from conducting subgroup comparisons between symptomatic, euthymic and control participants to explicitly examine the effects of mood. Our findings indicate the existence of a female advantage for the processing of emotional prosody in BD, specifically for the processing of happy. Although male BD patients were impaired in their ability to recognize happy prosody, this was unrelated to reduced tone discrimination sensitivity. This study indicates the importance of examining both gender and low-order sensory-perceptual capacity when examining emotional prosody. © 2013 Elsevier B.V. All rights reserved.

  9. Visual and Auditory Components in the Perception of Asynchronous Audiovisual Speech.

    Science.gov (United States)

    García-Pérez, Miguel A; Alcalá-Quintana, Rocío

    2015-12-01

    Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8/16 observers, and was found to be nonidentifiable, which renders its parameter estimates uninterpretable. The independent-channels model captured asymmetric data, was rejected for only 1/16 observers, and identified how sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal.
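
    To make the general modeling approach concrete, the following minimal sketch (not either of the published models; the data and parameter values are hypothetical) fits a simple synchrony-window model by maximum likelihood: the probability of a "synchronous" judgment is the probability that a noisy perceived offset falls within a decision window around a point of subjective simultaneity.

        # Minimal sketch of a synchrony-judgment model (illustrative only).
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def p_sync(soa, mu, sigma, half_width):
            # P('synchronous') = P(perceived offset within +/- half_width of mu),
            # assuming Gaussian sensory noise with s.d. sigma.
            return (norm.cdf((half_width - (soa - mu)) / sigma)
                    - norm.cdf((-half_width - (soa - mu)) / sigma))

        # Hypothetical data: SOAs in ms (audio lead negative) and response counts.
        soas = np.array([-300, -200, -100, 0, 100, 200, 300])
        n_sync = np.array([2, 8, 22, 28, 25, 12, 3])
        n_tot = np.full_like(n_sync, 30)

        def neg_log_lik(params):
            mu, log_sigma, log_hw = params
            p = np.clip(p_sync(soas, mu, np.exp(log_sigma), np.exp(log_hw)),
                        1e-6, 1 - 1e-6)
            return -np.sum(n_sync * np.log(p) + (n_tot - n_sync) * np.log(1 - p))

        fit = minimize(neg_log_lik, x0=[0.0, np.log(80.0), np.log(150.0)])
        mu, sigma, hw = fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2])
        print("bias %.1f ms, noise sd %.1f ms, window +/- %.1f ms" % (mu, sigma, hw))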

  10. Dopamine modulates attentional control of auditory perception: DARPP-32 (PPP1R1B) genotype effects on behavior and cortical evoked potentials.

    Science.gov (United States)

    Li, Shu-Chen; Passow, Susanne; Nietfeld, Wilfried; Schröder, Julia; Bertram, Lars; Heekeren, Hauke R; Lindenberger, Ulman

    2013-07-01

    Using a specific variant of the dichotic listening paradigm, we studied the influence of dopamine on attentional modulation of auditory perception by assessing effects of allelic variation of a single-nucleotide polymorphism (SNP) rs907094 in the DARPP-32 gene (dopamine and adenosine 3', 5'-monophosphate-regulated phosphoprotein 32 kilodaltons; also known as PPP1R1B) on behavior and cortical evoked potentials. A frequent DARPP-32 haplotype that includes the A allele of this SNP is associated with higher mRNA expression of DARPP-32 protein isoforms, striatal dopamine receptor function, and frontal-striatal connectivity. As we hypothesized, behaviorally the A homozygotes were more flexible in selectively attending to auditory inputs than G-allele carriers. Moreover, this genotype also affected auditory evoked cortical potentials that reflect early sensory and late attentional processes. Specifically, analyses of event-related potentials (ERPs) revealed that amplitudes of an early component of sensory selection (N1) and a late component (N450) reflecting attentional deployment for conflict resolution were larger in A homozygotes than in G-allele carriers. Taken together, our data lend support for dopamine's role in modulating auditory attention both during the early sensory selection and late conflict resolution stages. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. The effect of amblyopia on visual-auditory speech perception: why mothers may say "Look at me when I'm talking to you".

    Science.gov (United States)

    Burgmeier, Robert; Desai, Rajen U; Farner, Katherine C; Tiano, Benjamin; Lacey, Ryan; Volpe, Nicholas J; Mets, Marilyn B

    2015-01-01

    Children with a history of amblyopia, even if resolved, exhibit impaired visual-auditory integration and perceive speech differently. To determine whether a history of amblyopia is associated with abnormal visual-auditory speech integration. Retrospective observational study at an academic pediatric ophthalmologic clinic with an average of 4 years of follow-up. Participants were at least 3 years of age and without any history of neurologic or hearing disorders. Of 39 children originally in our study, 6 refused to participate. The remaining 33 participants completed the study. Twenty-four participants (mean [SD] age, 7.0 [1.5] years) had a history of amblyopia in 1 eye, with a visual acuity of at least 20/20 in the nonamblyopic eye. Nine controls (mean [SD] age, 8.0 [3.4] years) were recruited from referrals for visually insignificant etiologies or through preschool-screening eye examinations; all had 20/20 in both eyes. Participants were presented with a video demonstrating the McGurk effect (ie, a stimulus presenting an audio track playing the sound /pa/ and a separate video track of a person articulating /ka/). Normal visual-auditory integration produces the perception of hearing a fusion sound /ta/. Participants were asked to report which sound was perceived, /ka/, /pa/, or /ta/. Prevalence of perception of the fusion /ta/ sound. Prior to the study, amblyopic children were hypothesized to less frequently perceive /ta/. The McGurk effect was perceived by 11 of the 24 participants with amblyopia (45.8%) and all 9 controls (100%) (adjusted odds ratio, 22.3 [95% CI, 1.2-426.0]; P = .005). The McGurk effect was perceived by 100% of participants with amblyopia that was resolved by 5 years of age and by 100% of participants whose onset at amblyopia developed at or after 5 years of age. However, only 18.8% of participants with amblyopia that was unresolved by 5 years of age (n = 16) perceived the McGurk effect (adjusted odds ratio, 27.0 [95% CI, 1.1-654.0]; P = .02

  12. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    ... in a diagnostic rhyme test. The framework was constructed such that discrimination errors originating from the front-end and the back-end were separated. The front-end was fitted to individual listeners with cochlear hearing loss according to non-speech data, and speech data were obtained in the same listeners. ... It was shown that an accurate simulation of cochlear input-output functions, in addition to the audiogram, played a major role in accounting both for sensitivity and for supra-threshold processing. Finally, the model was used as a front-end in a framework developed to predict consonant discrimination ... and reduced speech perception performance in the listeners with cochlear hearing loss. Overall, this work suggests a possible explanation of the variability in the consequences of cochlear hearing loss. The proposed model might be an interesting tool for, e.g., the evaluation of hearing-aid signal processing...

  13. Motor cortex compensates for lack of sensory and motor experience during auditory speech perception.

    Science.gov (United States)

    Schmitz, Judith; Bartoli, Eleonora; Maffongelli, Laura; Fadiga, Luciano; Sebastian-Galles, Nuria; D'Ausilio, Alessandro

    2018-01-06

    Listening to speech has been shown to activate motor regions, as measured by corticobulbar excitability. In this experiment, we explored whether motor regions are also recruited during listening to non-native speech, for which we lack both sensory and motor experience. By administering Transcranial Magnetic Stimulation (TMS) over the left motor cortex, we recorded corticobulbar excitability of the lip muscles while Italian participants listened to native-like and non-native German vowels. Results showed that lip corticobulbar excitability increased with the combination of lip use during articulation and the non-nativeness of the vowels. Lip corticobulbar excitability was further related to measures obtained in perception and production tasks, showing a negative relationship with nativeness ratings and a positive relationship with the uncertainty of lip movement during production of the vowels. These results suggest an active and compensatory role of the motor system during listening to perceptually and articulatorily unfamiliar phonemes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Auditory Perception and Word Recognition in Cantonese-Chinese Speaking Children with and without Specific Language Impairment

    Science.gov (United States)

    Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.

    2017-01-01

    Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…

  15. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  16. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Directory of Open Access Journals (Sweden)

    Sonja Schall

    Full Text Available It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  17. Auditory Display

    DEFF Research Database (Denmark)

    ... volume. The conference's topics include auditory exploration of data via sonification and audification; real-time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting auditory display creation; data handling for auditory display systems; applications of auditory display.

  18. Early prelingual auditory development and speech perception at 1-year follow-up in Mandarin-speaking children after cochlear implantation.

    Science.gov (United States)

    Zheng, Yun; Soli, Sigfrid D; Tao, Yong; Xu, Ke; Meng, Zhaoli; Li, Gang; Wang, Kai; Zheng, Hong

    2011-11-01

    The primary purpose of the current study was to evaluate early prelingual auditory development (EPLAD) and early speech perception longitudinally over the first year after cochlear implantation in Mandarin-speaking pediatric cochlear implant (CI) recipients. Outcome measures were designed to allow comparisons of outcomes with those of English-speaking pediatric CI recipients reported in previous research. A hierarchical outcome assessment battery designed to measure EPLAD and early speech perception was used to evaluate 39 pediatric CI recipients implanted between the ages of 1 and 6 years at baseline and at 3, 6, and 12 months after implantation. The battery consists of the Mandarin Infant-Toddler Meaningful Auditory Integration Scale (ITMAIS), the Mandarin Early Speech Perception (MESP) test, and the Mandarin Pediatric Speech Intelligibility (MPSI) test. The effects of age at implantation, duration of pre-implant hearing aid use, and Mandarin dialect exposure on performance were evaluated. EPLAD results were compared with the normal developmental trajectory and with results for English-speaking pediatric CI recipients. MESP and MPSI measures of early speech perception were compared with results for English-speaking recipients obtained with comparable measures. EPLAD, as measured with the ITMAIS/MAIS, was comparable in Mandarin- and English-speaking pediatric CI recipients. Both groups exceeded the normal developmental trajectory when hearing age in CI recipients and chronological age in normal-hearing children were equated. Evidence of significant EPLAD during pre-implant hearing aid use was observed, although at a more gradual rate than after implantation. Early development of speech perception, as measured with the MESP and MPSI tests, was also comparable for Mandarin- and English-speaking CI recipients throughout the first 12 months after implantation. Both Mandarin dialect exposure and the duration of pre-implant hearing aid use significantly affected measures of early speech

  19. [Auditory fatigue].

    Science.gov (United States)

    Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar

    2015-01-01

    Given the relevance of possible hearing losses due to sound overloads and the short list of references of objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound pressure overload stimuli, as well as to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of the cochlear microphonic response, plus a function generator that provided us with stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure-tone acoustic stimulation, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB of the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. By means of complex random tone maskers or white noise, no fatigue was caused to the sensory receptors, even at levels of 100 dB and with over an hour of overstimulation. No fatigue was observed in terms of sensory receptors. Deterioration of peripheral perception through intense overstimulation may be due to biochemical changes of desensitisation due to exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue results obtained here are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  20. The Identification and Remediation of Auditory Problems

    Science.gov (United States)

    Kottler, Sylvia B.

    1972-01-01

    Procedures and sample activities are provided for both identifying and training children with auditory perception problems related to sound localization, sound discrimination, and sound sequencing. (KW)

  1. The musical centers of the brain: Vladimir E. Larionov (1857-1929) and the functional neuroanatomy of auditory perception.

    Science.gov (United States)

    Triarhou, Lazaros C; Verina, Tatyana

    2016-11-01

    In 1899 a landmark paper entitled "On the musical centers of the brain" was published in Pflügers Archiv, based on work carried out in the Anatomo-Physiological Laboratory of the Neuropsychiatric Clinic of Vladimir M. Bekhterev (1857-1927) in St. Petersburg, Imperial Russia. The author of that paper was Vladimir E. Larionov (1857-1929), a military doctor and devoted brain scientist, who pursued the problem of the localization of function in the canine and human auditory cortex. His data detailed the existence of tonotopy in the temporal lobe and further demonstrated centrifugal auditory pathways emanating from the auditory cortex and directed to the opposite hemisphere and lower brain centers. Larionov's discoveries have been largely considered as findings of the Bekhterev school. Perhaps this is why there are limited resources on Larionov, especially keeping in mind his military medical career and the fact that after 1917 he just seems to have practiced otorhinolaryngology in Odessa. Larionov died two years after Bekhterev's mysterious death of 1927. The present study highlights the pioneering contributions of Larionov to auditory neuroscience, trusting that the life and work of Vladimir Efimovich will finally, and deservedly, emerge from the shadow of his celebrated master, Vladimir Mikhailovich. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Mathematical model for space perception to explain auditory horopter curves; Chokaku horopter wo setsumeisuru kukan ichi chikaku model

    Energy Technology Data Exchange (ETDEWEB)

    Okura, M. [Dynax Co., Tokyo (Japan); Maeda, T.; Tachi, S. [The University of Tokyo, Tokyo (Japan). Faculty of Engineering

    1998-10-31

    For binocular visual space, the horizontal line seen as a straight line on the subjective frontoparallel plane does not always agree with the physically straight line, and its shape depends on distance from the observer. This phenomenon is known as Helmholtz's horopter. The same phenomenon may occur also in binaural space, depending on distance to an acoustic source. This paper formulates a scalar addition model that explains the auditory horopter by using two items of information: sound pressure and interaural time difference. Furthermore, this model was used to perform simulations on different learning domains, and the following results were obtained. It was verified that the distance dependence of the auditory horopter can be explained by using the above scalar addition model, and differences in horopter shape among subjects may be explained by individual differences in the learning domains of spatial position recognition. In addition, the auditory horopter model was shown not to cover distances as short as those in the learning domain. 21 refs., 6 figs.
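
    The record does not give the model's exact form; as an assumed illustration of what a scalar addition model over these two cues could look like, the sketch below estimates perceived position as a weighted sum of a sound-pressure term and an interaural-time-difference term, with weights fitted by least squares over a learning domain. All transforms, weights and data are hypothetical.

        # Generic scalar-addition sketch (not the paper's exact formulation):
        # perceived position ~ w1 * (SPL cue) + w2 * (ITD cue) + bias.
        import numpy as np

        rng = np.random.default_rng(0)
        spl = rng.uniform(50, 80, 200)           # sound pressure level (dB), hypothetical
        itd = rng.uniform(-0.6, 0.6, 200)        # interaural time difference (ms)
        # Hypothetical 'true' perceived positions used as training targets.
        true_pos = 0.8 * itd - 0.05 * (spl - 65) + rng.normal(0, 0.05, 200)

        # Least-squares fit of the scalar-addition weights over the learning domain.
        X = np.column_stack([spl - 65, itd, np.ones_like(spl)])
        w, *_ = np.linalg.lstsq(X, true_pos, rcond=None)
        print("fitted weights (SPL, ITD, bias):", np.round(w, 3))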

  3. Communication, Listening, Cognitive and Speech Perception Skills in Children with Auditory Processing Disorder (APD) or Specific Language Impairment (SLI)

    Science.gov (United States)

    Ferguson, Melanie A.; Hall, Rebecca L.; Riley, Alison; Moore, David R.

    2011-01-01

    Purpose: Parental reports of communication, listening, and behavior in children receiving a clinical diagnosis of specific language impairment (SLI) or auditory processing disorder (APD) were compared with direct tests of intelligence, memory, language, phonology, literacy, and speech intelligibility. The primary aim was to identify whether there…

  4. [Anesthesia with flunitrazepam/fentanyl and isoflurane/fentanyl. Unconscious perception and mid-latency auditory evoked potentials].

    Science.gov (United States)

    Schwender, D; Kaiser, A; Klasing, S; Faber-Züllig, E; Golling, W; Pöppel, E; Peter, K

    1994-05-01

    There is a high incidence of intraoperative awareness during cardiac surgery. Mid-latency auditory evoked potentials (MLAEP) reflect the primary cortical processing of auditory stimuli. In the present study, we investigated MLAEP and explicit and implicit memory for information presented during cardiac anaesthesia. PATIENTS AND METHODS. Institutional approval and informed consent were obtained for 30 patients scheduled for elective cardiac surgery. Anaesthesia was induced in group I (n = 10) with flunitrazepam/fentanyl (0.01 mg/kg) and maintained with flunitrazepam/fentanyl (1.2 mg/h). The patients in group II (n = 10) received etomidate (0.25 mg/kg) and fentanyl (0.005 mg/kg) for induction and isoflurane (0.6-1.2 vol%)/fentanyl (1.2 mg/h) for maintenance of general anaesthesia. Group III (n = 10) served as a control, and its patients were anaesthetized as in I or II. After sternotomy, an audiotape that included an implicit memory task was presented to the patients in groups I and II. The story of Robinson Crusoe was told, and it was suggested to the patients that they remember Robinson Crusoe when asked, 3-5 days postoperatively, what they associated with the word Friday. Auditory evoked potentials were recorded awake and during general anaesthesia, before and after the audiotape presentation, at the vertex (positive) and both mastoids (negative). Auditory clicks were presented binaurally at 70 dBnHL at a rate of 9.3 Hz. Using the electrodiagnostic system Pathfinder I (Nicolet), 1000 successive stimulus responses were averaged over a 100 ms poststimulus interval and analyzed off-line. Latencies of the peaks V, Na, and Pa were measured. Peak V belongs to the brainstem-generated potentials and demonstrates that auditory stimuli were correctly transduced. Na and Pa are generated in the primary auditory cortex of the temporal lobe and are the electrophysiological correlate of the primary cortical processing of the auditory stimuli. RESULTS. None of the patients had an explicit memory
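
    The averaging step described here is generic evoked-potential signal averaging; the sketch below (hypothetical parameters and a toy waveform, not the Pathfinder system's pipeline) shows how 1000 stimulus-locked epochs over a 100-ms poststimulus window are averaged so that the evoked response emerges from uncorrelated EEG noise.

        # Sketch of stimulus-locked epoch averaging for evoked potentials.
        import numpy as np

        fs = 10_000                          # sampling rate (Hz), hypothetical
        n_epochs, win_ms = 1000, 100
        n_samp = int(fs * win_ms / 1000)
        t = np.arange(n_samp) / fs * 1000    # time axis in ms

        rng = np.random.default_rng(1)
        # Toy evoked response: a damped oscillation in the mid-latency range.
        evoked = 0.5 * np.sin(2 * np.pi * 30 * t / 1000) * np.exp(-t / 40)
        # Each epoch = evoked response + much larger uncorrelated EEG noise.
        epochs = evoked + rng.normal(0, 5, size=(n_epochs, n_samp))

        avg = epochs.mean(axis=0)            # noise shrinks by ~1/sqrt(n_epochs)
        # Read a Pa-like peak latency (~25-40 ms) off the averaged waveform:
        pa_idx = slice(int(0.025 * fs), int(0.040 * fs))
        print("Pa-like peak latency: %.1f ms" % t[pa_idx][np.argmax(avg[pa_idx])])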

  5. Auditory Temporal Processing as a Specific Deficit among Dyslexic Readers

    Science.gov (United States)

    Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit

    2012-01-01

    The present study focuses on examining the hypothesis that an auditory temporal perception deficit is a basic cause of reading disabilities among dyslexics. This hypothesis maintains that reading impairment is caused by a fundamental perceptual deficit in processing rapid auditory or visual stimuli. Since auditory perception involves a number of…

  6. Auditory distance coding in rabbit midbrain neurons and human perception: monaural amplitude modulation depth as a cue.

    Science.gov (United States)

    Kim, Duck O; Zahorik, Pavel; Carney, Laurel H; Bishop, Brian B; Kuwada, Shigeyuki

    2015-04-01

    Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1-octave, 4-kHz noise source sounds presented at distances of 35-200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined. Copyright © 2015 the authors.
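
    The cue itself is easy to demonstrate offline. The sketch below (an assumed illustration, not the study's virtual-auditory-space processing) measures envelope modulation depth before and after convolution with a toy reverberant tail, showing how reverberation attenuates AM:

        # Reverberation reduces amplitude-modulation depth (illustrative sketch).
        import numpy as np
        from scipy.signal import hilbert, fftconvolve

        fs = 16_000
        t = np.arange(0, 1.0, 1 / fs)
        rng = np.random.default_rng(2)
        carrier = rng.normal(size=t.size)                       # noise carrier
        dry = (1 + 1.0 * np.sin(2 * np.pi * 8 * t)) * carrier   # 100% AM at 8 Hz

        ir = np.exp(-np.arange(0, 0.4, 1 / fs) / 0.1)           # toy reverb tail
        wet = fftconvolve(dry, ir)[: dry.size]                  # 'reverberant' sound

        def am_depth(x):
            # Normalized fluctuation of the Hilbert envelope as a depth proxy.
            env = np.abs(hilbert(x))
            return (env / env.mean()).std()

        print("AM depth dry: %.2f, reverberant: %.2f" % (am_depth(dry), am_depth(wet)))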

  7. Comparison of auditory deficits associated with neglect and auditory cortex lesions.

    Science.gov (United States)

    Gutschalk, Alexander; Brandt, Tobias; Bartsch, Andreas; Jansen, Claudia

    2012-04-01

    In contrast to lesions of the visual and somatosensory cortex, lesions of the auditory cortex are not associated with self-evident contralesional deficits. Only when two or more stimuli are presented simultaneously to the left and right has contralesional extinction been observed after unilateral lesions of the auditory cortex. Because auditory extinction is also considered a sign of neglect, clinical separation of auditory neglect from deficits caused by lesions of the auditory cortex is challenging. Here, we directly compared a number of tests previously used for either auditory-cortex lesions or neglect in 29 controls and 27 patients suffering from unilateral auditory-cortex lesions, neglect, or both. The results showed that a dichotic-speech test revealed similar amounts of extinction for both auditory cortex lesions and neglect. Similar results were obtained for words lateralized by inter-aural time differences. Consistent extinction after auditory cortex lesions was also observed in a dichotic detection task. Neglect patients showed more general problems with target detection but no consistent extinction in the dichotic detection task. In contrast, auditory lateralization perception was biased toward the right in neglect but showed considerably less disruption by auditory cortex lesions. Lateralization of auditory-evoked magnetic fields in auditory cortex was highly correlated with extinction in the dichotic target-detection task. Moreover, activity in the right primary auditory cortex was somewhat reduced in neglect patients. The results confirm that auditory extinction is observed with lesions of the auditory cortex and auditory neglect. A distinction can nevertheless be made with dichotic target-detection tasks, auditory-lateralization perception, and magnetoencephalography. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Functional asymmetry and effective connectivity of the auditory system during speech perception is modulated by the place of articulation of the consonant- A 7T fMRI study

    Directory of Open Access Journals (Sweden)

    Karsten eSpecht

    2014-06-01

    Full Text Available To differentiate between stop-consonants, the auditory system has to detect subtle place of articulation (PoA) and voice onset time (VOT) differences between stop-consonants. How this differential processing is represented on the cortical level remains unclear. The present functional magnetic resonance imaging (fMRI) study takes advantage of the superior spatial resolution and high sensitivity of ultra-high-field 7T MRI. Subjects attentively listened to consonant-vowel syllables with an alveolar or bilabial stop-consonant and either a short or long voice onset time. The results showed an overall bilateral activation pattern in the posterior temporal lobe during the processing of the consonant-vowel syllables. This was, however, modulated most strongly by place of articulation, such that syllables with an alveolar stop-consonant showed stronger left-lateralized activation. In addition, analysis of the underlying functional and effective connectivity revealed an inhibitory effect of the left planum temporale onto the right auditory cortex during the processing of alveolar consonant-vowel syllables. Furthermore, the connectivity results indicated a directed information flow from the right to the left auditory cortex, and further to the left planum temporale, for all syllables. These results indicate that auditory speech perception relies on an interplay between the left and right auditory cortex, with the left planum temporale as modulator. Furthermore, the degree of functional asymmetry is determined by the acoustic properties of the consonant-vowel syllables.

  9. Modelling auditory attention.

    Science.gov (United States)

    Kaya, Emine Merve; Elhilali, Mounya

    2017-02-19

    Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information, a phenomenon referred to as the 'cocktail party problem'. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by 'bottom-up' sensory-driven factors, as well as 'top-down' task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape, with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  10. The Role of Sensory Perception, Emotionality and "Lifeworld" in Auditory Word Processing: Evidence from Congenital Blindness and Synesthesia

    Science.gov (United States)

    Papadopoulos, Judith; Domahs, Frank; Kauschke, Christina

    2017-01-01

    Although it has been established that human beings process concrete and abstract words differently, it is still a matter of debate what factors contribute to this difference. Since concrete concepts are closely tied to sensory perception, perceptual experience seems to play an important role in their processing. The present study investigated the…

  11. Effects of multitasking on operator performance using computational and auditory tasks.

    Science.gov (United States)

    Fasanya, Bankole K

    2016-09-01

    This study investigated the effects of multiple cognitive tasks on human performance. Twenty-four students at North Carolina A&T State University participated in the study. The primary task was auditory signal change perception and the secondary task was a computational task. Results showed that participants' performance in a single task was statistically significantly different from their performance in combined tasks: (a) algebra problems (algebra problems primary and auditory perception secondary); (b) auditory perception tasks (auditory perception primary and algebra problems secondary); and (c) mean false-alarm score in auditory perception (auditory detection primary and algebra problems secondary). Using signal detection theory (SDT), participants' sensitivity was calculated as -0.54 for the combined tasks with algebra problems as the primary task and -0.53 with auditory perception as the primary task. For auditory perception tasks alone, sensitivity was found to be 2.51. Performance was 83% in a single task compared to 17% when tasks were combined.
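
    For reference, sensitivity in signal detection theory is computed from hit and false-alarm rates as d' = z(H) - z(FA). The snippet below applies this standard formula with illustrative rates (not the study's raw data) chosen to land near the reported values.

        # Standard SDT sensitivity (d') from hit and false-alarm rates.
        from scipy.stats import norm

        def d_prime(hit_rate, fa_rate, n=None):
            # Optional log-linear correction so z(0) or z(1) never occurs.
            if n is not None:
                hit_rate = (hit_rate * n + 0.5) / (n + 1)
                fa_rate = (fa_rate * n + 0.5) / (n + 1)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # A near-zero or negative d' (as in the combined-task conditions)
        # means hits barely exceed, or fall below, false alarms:
        print(d_prime(0.35, 0.55))   # ~ -0.51, near the reported -0.54 / -0.53
        print(d_prime(0.85, 0.08))   # ~ 2.4, near the single-task value of 2.51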

  12. Gaze Patterns in Auditory-Visual Perception of Emotion by Children with Hearing Aids and Hearing Children

    Directory of Open Access Journals (Sweden)

    Yifang Wang

    2017-12-01

    Full Text Available This study investigated eye-movement patterns during emotion perception for children with hearing aids and hearing children. Seventy-eight participants aged 3 to 7 years were asked to watch videos with a facial expression followed by an oral statement, and these two cues were either congruent or incongruent in emotional valence. Results showed that while hearing children paid more attention to the upper part of the face, children with hearing aids paid more attention to the lower part of the face after the oral statement was presented, especially for the neutral facial expression/neutral oral statement condition. These results suggest that children with hearing aids have an altered eye-contact pattern with others and difficulty in matching visual and voice cues in emotion perception. The negative effects of these gaze patterns should be addressed in early rehabilitation of hearing-impaired children with assistive devices.

  13. Gated auditory speech perception in elderly hearing aid users and elderly normal-hearing individuals: effects of hearing impairment and cognitive capacity.

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Hällgren, Mathias; Rönnberg, Jerker

    2014-07-31

    This study compared elderly hearing aid (EHA) users and elderly normal-hearing (ENH) individuals on identification of auditory speech stimuli (consonants, words, and final word in sentences) that differed in their linguistic properties. We measured the accuracy with which the target speech stimuli were identified, as well as the isolation points (IPs: the shortest duration, from onset, required to correctly identify the speech target). The relationships between working memory capacity, the IPs, and speech accuracy were also measured. Twenty-four EHA users (with mild to moderate hearing impairment) and 24 ENH individuals participated in the present study. Despite the use of their regular hearing aids, the EHA users had delayed IPs and were less accurate in identifying consonants and words compared with the ENH individuals. The EHA users also had delayed IPs for final word identification in sentences with lower predictability; however, no significant between-group difference in accuracy was observed. Finally, there were no significant between-group differences in terms of IPs or accuracy for final word identification in highly predictable sentences. Our results also showed that, among EHA users, greater working memory capacity was associated with earlier IPs and improved accuracy in consonant and word identification. Together, our findings demonstrate that the gated speech perception ability of EHA users was not at the level of ENH individuals, in terms of IPs and accuracy. In addition, gated speech perception was more cognitively demanding for EHA users than for ENH individuals in the absence of semantic context. © The Author(s) 2014.
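
    As an assumed reconstruction of the gating logic (the exact scoring rules are not given in this record), the isolation point can be computed as the shortest gate duration from which the listener's identification is correct and remains correct through all longer gates:

        # Illustrative isolation-point computation for a gating paradigm.
        def isolation_point(gate_durations_ms, responses, target):
            # responses[i] is the identification given after gate i.
            ip = None
            for dur, resp in zip(gate_durations_ms, responses):
                if resp == target:
                    if ip is None:
                        ip = dur        # first correct gate
                else:
                    ip = None           # reset: must stay correct from here on
            return ip                   # None if the target is never isolated

        gates = [50, 100, 150, 200, 250, 300]
        print(isolation_point(gates, ["b", "b", "d", "d", "d", "d"], "d"))  # 150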

  14. ERP correlates of auditory goal-directed behavior of younger and older adults in a dynamic speech perception task.

    Science.gov (United States)

    Getzmann, Stephan; Falkenstein, Michael; Wascher, Edmund

    2015-02-01

    The ability to understand speech under adverse listening conditions deteriorates with age. In addition to genuine hearing deficits, age-related declines in attentional and inhibitory control are assumed to contribute to these difficulties. Here, the impact of task-irrelevant distractors on speech perception was studied in 28 younger and 24 older participants in a simulated "cocktail party" scenario. In a two-alternative forced-choice word discrimination task, the participants responded to a rapid succession of short speech stimuli ("on" and "off") that was presented at a frequent standard location or at a rare deviant location in silence or with a concurrent distractor speaker. Behavioral responses and event-related potentials (mismatch negativity MMN, P3a, and reorienting negativity RON) were analyzed to study the interplay of distraction, orientation, and refocusing in the presence of changes in target location. While shifts in target location decreased performance of both age groups, this effect was more pronounced in the older group. Especially in the distractor condition, the electrophysiological measures indicated a delayed attention capture and a delayed refocusing of attention toward the task-relevant stimulus feature in the older group, relative to the younger group. In sum, the results suggest that a delay in the attention-switching mechanism contributes to the age-related difficulties in speech perception in dynamic listening situations with multiple speakers. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Environment for Auditory Research Facility (EAR)

    Data.gov (United States)

    Federal Laboratory Consortium — EAR is an auditory perception and communication research center enabling state-of-the-art simulation of various indoor and outdoor acoustic environments. The heart...

  16. Metaphoric Gestures Facilitate Perception of Intonation More than Length in Auditory Judgments of Non-Native Phonemic Contrasts

    Directory of Open Access Journals (Sweden)

    Spencer Kelly

    2017-03-01

    Full Text Available It is well established that hand gestures affect comprehension and learning of semantic aspects of a foreign language (FL. However, much less is known about the role of hand gestures in lower-level language processes, such as perception of phonemes. To address this gap, we explored the role that metaphoric gestures play in perceiving FL speech sounds that varied on two dimensions: length and intonation. English speaking adults listened to Japanese length contrasts and sentence-final intonational distinctions in the context of congruent, incongruent and no gestures. For intonational contrasts, identification was more accurate for congruent gestures and less accurate for incongruent gestures relative to the baseline no gesture condition. However, for the length contrasts, there was no such clear and consistent pattern, and in fact, congruent gestures made speech processing more effortful. We conclude that metaphoric gestures help with some—but not all—novel speech sounds in a FL, suggesting that gesture and speech are phonemically integrated to differing extents depending on the nature of the gesture and/or speech sound.

  17. Fingers Phrase Music Differently: Trial-to-Trial Variability in Piano Scale Playing and Auditory Perception Reveal Motor Chunking.

    Science.gov (United States)

    van Vugt, Floris Tijmen; Jabusch, Hans-Christian; Altenmüller, Eckart

    2012-01-01

    We investigated how musical phrasing and motor sequencing interact to yield timing patterns in conservatory students' playing of piano scales. We propose a novel analysis method that compared the measured note onsets to an objectively regular scale fitted to the data. Subsequently, we segment the timing variability into (i) systematic deviations from objective evenness that are perhaps residuals of expressive timing or of perceptual biases and (ii) non-systematic deviations that can be interpreted as motor execution errors, perhaps due to noise in the nervous system. The former, systematic deviations reveal that the two-octave scales are played as a single musical phrase. The latter, trial-to-trial variabilities reveal that pianists' timing was less consistent at the boundaries between the octaves, providing evidence that the octave is represented as a single motor sequence. These effects cannot be explained by low-level properties of the motor task such as the thumb passage and also did not show up in simulated scales with temporal jitter. Intriguingly, this instability in motor production around the octave boundary is mirrored by an impairment in the detection of timing deviations at those positions, suggesting that chunks overlap between perception and action. We conclude that the octave boundary instability in the scale playing motor program provides behavioral evidence that our brain chunks musical sequences into octave units that do not coincide with musical phrases. Our results indicate that trial-to-trial variability is a novel and meaningful indicator of this chunking. The procedure can readily be extended to a variety of tasks to help understand how movements are divided into units and what processing occurs at their boundaries.
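
    A rough numerical sketch of this analysis (an assumed reconstruction with synthetic data, not the authors' code): fit an objectively even scale to each trial's note onsets, then split the residuals into a systematic component (mean deviation per note across trials) and trial-to-trial variability (standard deviation per note).

        # Splitting scale-timing residuals into systematic and trial-to-trial parts.
        import numpy as np

        rng = np.random.default_rng(3)
        n_trials, n_notes, ioi = 20, 29, 125.0      # two octaves, nominal ms/note
        idx = np.arange(n_notes)
        # Synthetic onsets: even scale + a phrase-level timing arc + motor noise
        # that is larger at the octave boundary (note 14).
        noise_sd = np.where(idx == 14, 12.0, 5.0)
        onsets = (idx * ioi + 0.02 * (idx - n_notes / 2) ** 2
                  + rng.normal(0, noise_sd, size=(n_trials, n_notes)))

        resid = np.empty_like(onsets)
        for tr in range(n_trials):
            slope, intercept = np.polyfit(idx, onsets[tr], 1)  # best even scale
            resid[tr] = onsets[tr] - (slope * idx + intercept)

        systematic = resid.mean(axis=0)       # phrasing-like deviations
        trial_to_trial = resid.std(axis=0)    # motor-chunking signature
        print("largest variability at note:", int(np.argmax(trial_to_trial)))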

  18. Fingers phrase music differently: trial-to-trial variability in piano scale playing and auditory perception reveal motor chunking

    Directory of Open Access Journals (Sweden)

    Floris Tijmen Van Vugt

    2012-11-01

    Full Text Available We investigated how musical phrasing and motor sequencing interact to yield timing patterns in the conservatory students' playing piano scales. We propose a novel analysis method that compared the measured note onsets to an objectively regular scale fitted to the data. Subsequently, we segment the timing variability into (i) systematic deviations from objective evenness that are perhaps residuals of expressive timing or of perceptual biases and (ii) non-systematic deviations that can be interpreted as motor execution errors, perhaps due to noise in the nervous system. The former, systematic deviations, reveal that the two-octave scales are played as a single musical phrase. The latter, trial-to-trial variabilities reveal that pianists' timing was less consistent at the boundaries between the octaves, providing evidence that the octave is represented as a single motor sequence. These effects cannot be explained by low-level properties of the motor task such as the thumb passage and also did not show up in simulated scales with temporal jitter. Intriguingly, this instability in motor production around the octave boundary is mirrored by an impairment in the detection of timing deviations at those positions, suggesting that chunks overlap between perception and action. We conclude that the octave boundary instability in the scale playing motor program provides behavioural evidence that our brain chunks musical sequences into octave units that do not coincide with musical phrases. Our results indicate that trial-to-trial variability is a novel and meaningful indicator of this chunking. The procedure can readily be extended to a variety of tasks to help understand how movements are divided into units and what processing occurs at their boundaries.

  19. Auditory-Visual Speech Integration by Adults with and without Language-Learning Disabilities

    Science.gov (United States)

    Norrix, Linda W.; Plante, Elena; Vance, Rebecca

    2006-01-01

    Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…

  20. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, representing a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  1. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  2. Auditory learning: a developmental method.

    Science.gov (United States)

    Zhang, Yilu; Weng, Juyang; Hwang, Wey-Shiuan

    2005-05-01

    Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, the self-organizing, autonomous, incremental learner (SAIL), are presented with emphasis on its audition perception and audition-related action generation. In particular, the SAIL robot conducts auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or the phoneme models. Nor are the actions that the robot is expected to perform available before learning starts. SAIL learns the auditory commands and the desired actions from physical contacts with the environment, including the trainers.

  3. Looming biases in monkey auditory cortex.

    Science.gov (United States)

    Maier, Joost X; Ghazanfar, Asif A

    2007-04-11

    Looming signals (signals that indicate the rapid approach of objects) are behaviorally relevant signals for all animals. Accordingly, studies in primates (including humans) reveal attentional biases for detecting and responding to looming versus receding signals in both the auditory and visual domains. We investigated the neural representation of these dynamic signals in the lateral belt auditory cortex of rhesus monkeys. By recording local field potential and multiunit spiking activity while the subjects were presented with auditory looming and receding signals, we show here that auditory cortical activity was biased in magnitude toward looming versus receding stimuli. This directional preference was not attributable to the absolute intensity of the sounds nor can it be attributed to simple adaptation, because white noise stimuli with identical amplitude envelopes did not elicit the same pattern of responses. This asymmetrical representation of looming versus receding sounds in the lateral belt auditory cortex suggests that it is an important node in the neural network correlate of looming perception.

  4. Neural mechanisms of auditory categorization: from across brain areas to within local microcircuits

    Directory of Open Access Journals (Sweden)

    Joji Tsunada

    2014-06-01

    Categorization enables listeners to efficiently encode and respond to auditory stimuli. Behavioral evidence for auditory categorization has been well documented across a broad range of human and non-human animal species. Moreover, neural correlates of auditory categorization have been documented in a variety of different brain regions in the ventral auditory pathway, which is thought to underlie auditory-object processing and auditory perception. Here, we review and discuss how neural representations of auditory categories are transformed across different scales of neural organization in the ventral auditory pathway: from across different brain areas to within local microcircuits. We propose different neural transformations across different scales of neural organization in auditory categorization. Along the ascending auditory system in the ventral pathway, there is a progression in the encoding of categories from simple acoustic categories to categories for abstract information. On the other hand, in local microcircuits, different classes of neurons differentially compute categorical information.

  5. Audiovisual Speech Perception in Infancy: The Influence of Vowel Identity and Infants' Productive Abilities on Sensitivity to (Mis)Matches between Auditory and Visual Speech Cues

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-01-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…

  6. Investigating bottom-up auditory attention

    Directory of Open Access Journals (Sweden)

    Emine Merve Kaya

    2014-05-01

    Bottom-up attention is a sensory-driven selection mechanism that directs perception towards a subset of the stimulus that is considered salient, or attention-grabbing. Most studies of bottom-up auditory attention have adapted frameworks similar to visual attention models whereby local or global contrast is a central concept in defining salient elements in a scene. In the current study, we take a more fundamental approach to modeling auditory attention; providing the first examination of the space of auditory saliency spanning pitch, intensity and timbre; and shedding light on complex interactions among these features. Informed by psychoacoustic results, we develop a computational model of auditory saliency implementing a novel attentional framework, guided by processes hypothesized to take place in the auditory pathway. In particular, the model tests the hypothesis that perception tracks the evolution of sound events in a multidimensional feature space, and flags any deviation from background statistics as salient. Predictions from the model corroborate the relationship between bottom-up auditory attention and statistical inference, and argue for a potential role of predictive coding as a mechanism for saliency detection in acoustic scenes.
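
    The core idea, flagging deviations from running background statistics as salient, can be sketched as follows; this is a minimal stand-in for the authors' model, with illustrative learning rate, threshold, and feature set (pitch, intensity, and a timbre proxy).

    ```python
    import numpy as np

    def salient_frames(frames, lr=0.05, thresh=3.0):
        """Flag frames deviating from running background statistics."""
        mu = frames[0].astype(float)
        var = np.ones_like(mu)
        flags = []
        for x in frames:
            z = np.abs(x - mu) / np.sqrt(var)       # per-feature deviation in SDs
            flags.append(bool(np.max(z) > thresh))  # salient if any feature pops out
            mu += lr * (x - mu)                     # background statistics adapt, so a
            var += lr * ((x - mu) ** 2 - var)       # sustained change stops being salient
        return flags

    # feature rows: [pitch_Hz, intensity_dB, spectral_centroid_Hz]
    background = np.tile([220.0, 60.0, 1000.0], (50, 1))
    deviant = np.array([[220.0, 75.0, 1000.0]])     # sudden intensity jump
    stream = np.vstack([background, deviant, background])
    print([i for i, s in enumerate(salient_frames(stream)) if s])   # -> [50]
    ```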

  7. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    Science.gov (United States)

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually developed hearing loss, first in her right ear at 19 years of age and then in her left. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained the ability to communicate fluently. The clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  8. Abnormal connectivity between attentional, language and auditory networks in schizophrenia

    NARCIS (Netherlands)

    Liemburg, Edith J.; Vercammen, Ans; Ter Horst, Gert J.; Curcic-Blake, Branislava; Knegtering, Henderikus; Aleman, Andre

    Brain circuits involved in language processing have been suggested to be compromised in patients with schizophrenia. This does not only include regions subserving language production and perception, but also auditory processing and attention. We investigated resting state network connectivity of

  9. A Factorial Study of the Carrow Auditory-Visual Abilities Test with Normal and Clinical Children.

    Science.gov (United States)

    Woodward, Paul J.; And Others

    1987-01-01

    A factor analysis of the Carrow Auditory-Visual Abilities Test identified common factors in a population of 1,032 nondisabled 4- through 10-year-olds and a clinical population of language-disordered or learning-disabled peers with auditory and/or visual perception problems. Most subtests fell into factors attributed to auditory or visual…

  10. Neuromechanistic Model of Auditory Bistability.

    Directory of Open Access Journals (Sweden)

    James Rankin

    2015-11-01

    Sequences of higher frequency A and lower frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory feature dependent inputs that mimic the pulsatile A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA-mediated recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant a larger fraction of time, more than those of the weaker percept, a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition.
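
    A generic version of the competition mechanism named above (mutual inhibition, adaptation, and noise between two percept units) can be sketched as follows; it is far simpler than the paper's A1-driven network, and every constant is illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    dt, T = 0.01, 600.0                       # step and total time (s)
    beta, g, tau_a, sigma = 4.0, 3.0, 20.0, 0.15
    u, a = np.array([0.6, 0.4]), np.zeros(2)  # percept units and their adaptation
    drive_in = np.array([1.0, 1.0])           # equal input to both percepts

    dominant, t_last, durations = 0, 0.0, []
    for step in range(int(T / dt)):
        inhib = beta * u[::-1]                # mutual inhibition (cross-coupling)
        noise = sigma * rng.normal(size=2) / np.sqrt(dt)
        drive = np.clip(drive_in - inhib - g * a + noise, 0.0, None)
        u += dt * (-u + drive)                # rate dynamics with rectified drive
        a += dt * (u - a) / tau_a             # slow adaptation of the active unit
        if u[1 - dominant] > u[dominant] + 0.05:   # percept switch (small hysteresis)
            t = step * dt
            durations.append(t - t_last)
            dominant, t_last = 1 - dominant, t

    print(f"{len(durations)} switches, mean dominance {np.mean(durations):.1f} s")
    ```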

  11. Functional imaging of auditory scene analysis.

    Science.gov (United States)

    Gutschalk, Alexander; Dykstra, Andrew R

    2014-01-01

    Our auditory system is constantly faced with the task of decomposing the complex mixture of sound arriving at the ears into perceptually independent streams constituting accurate representations of individual sound sources. This decomposition, termed auditory scene analysis, is critical for both survival and communication, and is thought to underlie both speech and music perception. The neural underpinnings of auditory scene analysis have been studied utilizing invasive experiments with animal models as well as non-invasive (MEG, EEG, and fMRI) and invasive (intracranial EEG) studies conducted with human listeners. The present article reviews human neurophysiological research investigating the neural basis of auditory scene analysis, with emphasis on two classical paradigms termed streaming and informational masking. Other paradigms - such as the continuity illusion, mistuned harmonics, and multi-speaker environments - are briefly addressed thereafter. We conclude by discussing the emerging evidence for the role of auditory cortex in remapping incoming acoustic signals into a perceptual representation of auditory streams, which are then available for selective attention and further conscious processing. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Auditory motion capturing ambiguous visual motion

    Directory of Open Access Journals (Sweden)

    Arjen Alink

    2012-01-01

    In this study, it is demonstrated that moving sounds affect the direction in which one sees visual stimuli move. During the main experiment, sounds were presented consecutively at four speaker locations, inducing left- or rightward auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left- or rightward) more often as moving in the same direction as the auditory apparent motion than in the opposite direction. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate, without affecting eye movements.

  13. Enhanced auditory temporal gap detection in listeners with musical training.

    Science.gov (United States)

    Mishra, Srikanta K; Panda, Manas R; Herbert, Carolyn

    2014-08-01

    Many features of auditory perception are positively altered in musicians. Traditionally, auditory mechanisms in musicians have been investigated using the Western-classical musician model. The objective of the present study was to adopt an alternative model, Indian-classical music, to further investigate auditory temporal processing in musicians. This study shows that musicians have significantly lower across-channel gap detection thresholds than nonmusicians. Use of the South Indian musician model provides increased external validity for the prediction, from studies on Western-classical musicians, that auditory temporal coding is enhanced in musicians.

  14. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting

    Directory of Open Access Journals (Sweden)

    Kongmeng Liew

    2018-02-01

    Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds and collectively voted to create the shape of a visual graphic, presented as part of the audio–visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface.

  15. Auditory comprehension: from the voice up to the single word level

    OpenAIRE

    Jones, Anna Barbara

    2016-01-01

    Auditory comprehension, the ability to understand spoken language, consists of a number of different auditory processing skills. In the five studies presented in this thesis I investigated both intact and impaired auditory comprehension at different levels: voice versus phoneme perception, as well as single word auditory comprehension in terms of phonemic and semantic content. In the first study, using sounds from different continua of ‘male’-/pæ/ to ‘female’-/tæ/ and ‘male’...

  16. Revisiting the 'enigma' of musicians with dyslexia: auditory sequencing and speech abilities

    OpenAIRE

    Zuk, J.; Bishop-Liebler, P.; Ozernov-Palchik, O.; Moore, E.; Overy, K.; Welch, G.; Gaab, N.

    2017-01-01

    Previous research has suggested a link between musical training and auditory processing skills. Musicians have shown enhanced perception of auditory features critical to both music and speech, suggesting that this link extends beyond basic auditory processing. It remains unclear to what extent musicians who also have dyslexia show these specialized abilities, considering often-observed persistent deficits that coincide with reading impairments. The present study evaluated auditory sequencing ...

  17. Perceptual Plasticity for Auditory Object Recognition

    Science.gov (United States)

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples

  18. Perceptual Plasticity for Auditory Object Recognition

    Directory of Open Access Journals (Sweden)

    Shannon L. M. Heald

    2017-05-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we

  19. Auditory Midbrain Implant: A Review

    Science.gov (United States)

    Lim, Hubert H.; Lenarz, Minoo; Lenarz, Thomas

    2009-01-01

    The auditory midbrain implant (AMI) is a new hearing prosthesis designed for stimulation of the inferior colliculus in deaf patients who cannot sufficiently benefit from cochlear implants. The authors have begun clinical trials in which five patients have been implanted with a single shank AMI array (20 electrodes). The goal of this review is to summarize the development and research that has led to the translation of the AMI from a concept into the first patients. This study presents the rationale and design concept for the AMI as well as a summary of the animal safety and feasibility studies that were required for clinical approval. The authors also present the initial surgical, psychophysical, and speech results from the first three implanted patients. Overall, the results have been encouraging in terms of the safety and functionality of the implant. All patients obtain improvements in hearing capabilities on a daily basis. However, performance varies dramatically across patients depending on the implant location within the midbrain, with the best performer still not able to achieve open-set speech perception without lip-reading cues. Stimulation of the auditory midbrain provides a wide range of level, spectral, and temporal cues, all of which are important for speech understanding, but they do not appear to sufficiently fuse together to enable open-set speech perception with the currently used stimulation strategies. Finally, several issues and hypotheses for why current patients obtain limited speech perception, along with several feasible solutions for improving AMI implementation, are presented. PMID:19762428

  20. Auditory abnormalities in autism: toward functional distinctions among findings.

    Science.gov (United States)

    Kellerman, Gabriella R; Fan, Jin; Gorman, Jack M

    2005-09-01

    Recently, findings on a wide range of auditory abnormalities among individuals with autism have been reported. To date, functional distinctions among these varied findings are poorly established. Such distinctions should be of interest to clinicians and researchers alike given their potential therapeutic and experimental applications. This review suggests three general trends among these findings as a starting point for future analyses. First, studies of auditory perception of linguistic and social auditory stimuli among individuals with autism generally have found impaired perception versus normal controls. Such findings may correlate with impaired language and communication skills and social isolation observed among individuals with autism. Second, studies of auditory perception of pitch and music among individuals with autism generally have found enhanced perception versus normal controls. These findings may correlate with the restrictive and highly focused behaviors observed among individuals with autism. Third, findings on the auditory perception of non-linguistic, non-musical stimuli among autism patients resist any generalized conclusions. Ultimately, as some researchers have already suggested, the distinction between impaired global processing and enhanced local processing may prove useful in making sense of apparently discordant findings on auditory abnormalities among individuals with autism.

  1. Effective Connectivity Hierarchically Links Temporoparietal and Frontal Areas of the Auditory Dorsal Stream with the Motor Cortex Lip Area during Speech Perception

    Science.gov (United States)

    Murakami, Takenobu; Restle, Julia; Ziemann, Ulf

    2012-01-01

    A left-hemispheric cortico-cortical network involving areas of the temporoparietal junction (Tpj) and the posterior inferior frontal gyrus (pIFG) is thought to support sensorimotor integration of speech perception into articulatory motor activation, but how this network links with the lip area of the primary motor cortex (M1) during speech…

  2. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  3. Auditory brainstem implant program development.

    Science.gov (United States)

    Schwartz, Marc S; Wilkinson, Eric P

    2017-08-01

    Auditory brainstem implants (ABIs), which have previously been used to restore auditory perception to deaf patients with neurofibromatosis type 2 (NF2), are now being utilized in other situations, including treatment of congenitally deaf children with cochlear malformations or cochlear nerve deficiencies. Concurrent with this expansion of indications, the number of centers placing and expressing interest in placing ABIs has proliferated. Because ABI placement involves posterior fossa craniotomy in order to access the site of implantation on the cochlear nucleus complex of the brainstem and is not without significant risk, we aim to highlight issues important in developing and maintaining successful ABI programs that would be in the best interests of patients. Especially with pediatric patients, the ultimate benefits of implantation will be known only after years of growth and development. These benefits have yet to be fully elucidated and continue to be an area of controversy. The limited number of publications in this area was reviewed. Disease processes, risk/benefit analyses, degrees of evidence, and U.S. Food and Drug Administration approvals differ among the various categories of patients in whom auditory brainstem implantation could be considered for use. We suggest sets of criteria necessary for the development of successful and sustainable ABI programs, including programs for NF2 patients, postlingually deafened adult non-NF2 patients, and congenitally deaf pediatric patients. Laryngoscope, 127:1909-1915, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  4. Central auditory processing. Are the emotional perceptions of those listening to classical music inherent in the composition or acquired by the listeners?

    Science.gov (United States)

    Goycoolea, Marcos; Levy, Raquel; Ramírez, Carlos

    2013-04-01

    There is seemingly some inherent component in selected musical compositions that elicits specific emotional perceptions, feelings, and physical conduct. The purpose of the study was to determine whether the emotional perceptions of those listening to classical music are inherent in the composition or acquired by the listeners. Fifteen kindergarten students, aged 5 years, from three different sociocultural groups were evaluated. They were exposed to portions of five purposefully selected classical compositions and asked to describe their emotions when listening to these musical pieces. All were instrumental compositions without human voices or spoken language. In addition, the compositions were played to children of an age at which they were capable of describing their perceptions and who supposedly had no significant previous experience of classical music. Regardless of their sociocultural background, the children in the three groups consistently identified similar emotions (e.g. fear, happiness, sadness), feelings (e.g. love), and mental images (e.g. giants or dangerous animals walking) when listening to specific compositions. In addition, the musical compositions generated physical conduct reflected in the children's corporal expressions. Although the sensations were similar, the way of expressing them differed according to background.

  5. A virtual auditory environment for investigating the auditory signal processing of realistic sounds

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2008-01-01

    During the VAE development, special care was taken in order to achieve a realistic auditory percept and to avoid “artifacts” such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29-loudspeaker setup using both objective and subjective measurement techniques.

  6. Haptic and visual information speed up the neural processing of auditory speech in live dyadic interactions.

    Science.gov (United States)

    Treille, Avril; Cordeboeuf, Camille; Vilain, Coriandre; Sato, Marc

    2014-05-01

    Speech can be perceived not only by the ear and by the eye but also by the hand, with speech gestures felt from manual tactile contact with the speaker's face. In the present electro-encephalographic study, early cross-modal interactions were investigated by comparing auditory evoked potentials during auditory, audio-visual and audio-haptic speech perception in dyadic interactions between a listener and a speaker. In line with previous studies, early auditory evoked responses were attenuated and speeded up during audio-visual compared to auditory speech perception. Crucially, shortened latencies of early auditory evoked potentials were also observed during audio-haptic speech perception. Altogether, these results suggest early bimodal interactions during live face-to-face and hand-to-face speech perception in dyadic interactions. Copyright © 2014. Published by Elsevier Ltd.

  7. BAER - brainstem auditory evoked response

    Science.gov (United States)

    Alternative names: auditory potentials; brainstem auditory evoked potentials; evoked response audiometry; auditory brainstem response; ABR; BAEP. Normal results vary and depend on the person and the instruments used to perform the test.

  8. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    Auditory cohesion problems arise when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels.

  9. Auditory Neuroimaging with fMRI and PET

    Science.gov (United States)

    Talavage, Thomas M.; Gonzalez-Castillo, Javier; Scott, Sophie K.

    2013-01-01

    For much of the past 30 years, investigations of auditory perception and language have been enhanced or even driven by the use of functional neuroimaging techniques that specialize in localization of central responses. Beginning with investigations using positron emission tomography (PET) and gradually shifting primarily to functional magnetic resonance imaging (fMRI), auditory neuroimaging has greatly advanced our understanding of the organization and response properties of brain regions critical to the perception of and communication with the acoustic world in which we live. As the complexity of the questions being addressed has increased, the techniques, experiments and analyses applied have also become more nuanced and specialized. A brief review of the history of these investigations sets the stage for an overview and analysis of how these neuroimaging modalities are becoming ever more effective tools for understanding the auditory brain. We conclude with a brief discussion of open methodological issues as well as potential clinical applications for auditory neuroimaging. PMID:24076424

  10. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Background: Auditory sustained responses have recently been suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity, but the relationship between the amplitudes of auditory evoked sustained responses and the repetition rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results: Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5, 10, 50, 200 and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to the 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions: The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low frequency temporal modulations in the speech envelope.
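
    The stimulus construction is straightforward to sketch: a repeating frozen noise at rate r tiles a single noise snippet of duration 1/r, whereas white noise never repeats. Sampling rate and durations below are illustrative.

    ```python
    import numpy as np

    def frozen_noise(rate_hz, duration_s, fs=44100, seed=0):
        """Tile one frozen noise snippet of duration ~1/rate_hz."""
        rng = np.random.default_rng(seed)
        snippet = rng.normal(size=int(round(fs / rate_hz)))   # one frozen cycle
        n = int(np.ceil(duration_s * rate_hz)) + 1
        return np.tile(snippet, n)[: int(duration_s * fs)]

    def white_noise(duration_s, fs=44100, seed=0):
        return np.random.default_rng(seed).normal(size=int(duration_s * fs))

    for rate in (5, 10, 50, 200, 500):        # repetition rates from the study
        print(rate, "Hz ->", frozen_noise(rate, 1.0).shape[0], "samples")
    ```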

  11. Neurodynamics, tonality, and the auditory brainstem response.

    Science.gov (United States)

    Large, Edward W; Almonte, Felix V

    2012-04-01

    Tonal relationships are foundational in music, providing the basis upon which musical structures, such as melodies, are constructed and perceived. A recent dynamic theory of musical tonality predicts that networks of auditory neurons resonate nonlinearly to musical stimuli. Nonlinear resonance leads to stability and attraction relationships among neural frequencies, and these neural dynamics give rise to the perception of relationships among tones that we collectively refer to as tonal cognition. Because this model describes the dynamics of neural populations, it makes specific predictions about human auditory neurophysiology. Here, we show how predictions about the auditory brainstem response (ABR) are derived from the model. To illustrate, we derive a prediction about population responses to musical intervals that has been observed in the human brainstem. Our modeled ABR shows qualitative agreement with important features of the human ABR. This provides a source of evidence that fundamental principles of auditory neurodynamics might underlie the perception of tonal relationships, and forces reevaluation of the role of learning and enculturation in tonal cognition. © 2012 New York Academy of Sciences.

  12. Caveat Emptor: The Meaning of Perception and Integration in Speech Perception

    OpenAIRE

    Dominic Massaro

    2009-01-01

    A recent letter^1^ claimed integration of auditory and tactile information in speech perception. Although I have been an advocate of multisensory integration, neither perception nor integration was sufficiently formalized, operationalized, and tested to support this claim.

  13. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Directory of Open Access Journals (Sweden)

    Juan San Juan

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to

  14. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Science.gov (United States)

    San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom
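
    The connectivity measure described, computed between all channel pairs before and after stimulation, reduces in its simplest form to a pairwise correlation matrix over hemodynamic time series; the sketch below uses simulated data and omits the filtering and artifact correction a real fNIRS pipeline would need.

    ```python
    import numpy as np

    def connectivity(data):
        """data: (n_channels, n_samples) time series -> pairwise correlations."""
        return np.corrcoef(data)

    rng = np.random.default_rng(3)
    n_ch, n_t = 8, 600                             # e.g. a 60 s baseline sampled at 10 Hz
    pre = rng.normal(size=(n_ch, n_t))
    post = pre + 0.5 * rng.normal(size=(1, n_t))   # a shared signal raises coupling

    delta = connectivity(post) - connectivity(pre)
    iu = np.triu_indices(n_ch, k=1)                # unique channel pairs
    print(f"mean connectivity change over {len(iu[0])} pairs: {delta[iu].mean():+.2f}")
    ```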

  15. Music lessons improve auditory perceptual and cognitive performance in deaf children

    Directory of Open Access Journals (Sweden)

    Françoise Rochette

    2014-07-01

    Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5 to 4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically-trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  16. Music lessons improve auditory perceptual and cognitive performance in deaf children.

    Science.gov (United States)

    Rochette, Françoise; Moussard, Aline; Bigand, Emmanuel

    2014-01-01

    Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5-4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  17. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer's work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory

  18. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  19. Brain metabolism during hallucination-like auditory stimulation in schizophrenia.

    Directory of Open Access Journals (Sweden)

    Guillermo Horga

    Auditory verbal hallucinations (AVH) in schizophrenia are typically characterized by rich emotional content. Despite the prominent role of emotion in regulating normal perception, the neural interface between emotion-processing regions such as the amygdala and auditory regions involved in perception remains relatively unexplored in AVH. Here, we studied brain metabolism using FDG-PET in 9 remitted patients with schizophrenia who had previously reported severe AVH during an acute psychotic episode and 8 matched healthy controls. Participants were scanned twice: (1) at rest and (2) during the perception of aversive auditory stimuli mimicking the content of AVH. Compared to controls, remitted patients showed an exaggerated response to the AVH-like stimuli in limbic and paralimbic regions, including the left amygdala. Furthermore, patients displayed abnormally strong connections between the amygdala and auditory regions of the cortex and thalamus, along with abnormally weak connections between the amygdala and medial prefrontal cortex. These results suggest that abnormal modulation of the auditory cortex by limbic-thalamic structures might be involved in the pathophysiology of AVH and may potentially account for the emotional features that characterize hallucinatory percepts in schizophrenia.

  20. Cross-Modal Dynamic Capture: Congruency Effects in the Perception of Motion Across Sensory Modalities

    Science.gov (United States)

    Soto-Faraco, Salvador; Spence, Charles; Kingstone, Alan

    2004-01-01

    This study investigated multisensory interactions in the perception of auditory and visual motion. When auditory and visual apparent motion streams are presented concurrently in opposite directions, participants often fail to discriminate the direction of motion of the auditory stream, whereas perception of the visual stream is unaffected by the…

  1. Mode-locking neurodynamics predict human auditory brainstem responses to musical intervals.

    Science.gov (United States)

    Lerud, Karl D; Almonte, Felix V; Kim, Ji Chul; Large, Edward W

    2014-02-01

    The auditory nervous system is highly nonlinear. Some nonlinear responses arise through active processes in the cochlea, while others may arise in neural populations of the cochlear nucleus, inferior colliculus and higher auditory areas. In humans, auditory brainstem recordings reveal nonlinear population responses to combinations of pure tones, and to musical intervals composed of complex tones. Yet the biophysical origin of central auditory nonlinearities, their signal processing properties, and their relationship to auditory perception remain largely unknown. Both stimulus components and nonlinear resonances are well represented in auditory brainstem nuclei due to neural phase-locking. Recently mode-locking, a generalization of phase-locking that implies an intrinsically nonlinear processing of sound, has been observed in mammalian auditory brainstem nuclei. Here we show that a canonical model of mode-locked neural oscillation predicts the complex nonlinear population responses to musical intervals that have been observed in the human brainstem. The model makes predictions about auditory signal processing and perception that are different from traditional delay-based models, and may provide insight into the nature of auditory population responses. We anticipate that the application of dynamical systems analysis will provide the starting point for generic models of auditory population dynamics, and lead to a deeper understanding of nonlinear auditory signal processing possibly arising in excitatory-inhibitory networks of the central auditory nervous system. This approach has the potential to link neural dynamics with the perception of pitch, music, and speech, and lead to dynamical models of auditory system development. Copyright © 2013 Elsevier B.V. All rights reserved.
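
    To illustrate the flavor of such nonlinear responses (though with a single truncated Hopf-style unit rather than the paper's canonical mode-locked model), the sketch below drives an oscillator tuned to 20 Hz with a 40 + 60 Hz "fifth"; the cubic nonlinearity produces a small but nonzero component at 20 Hz, a frequency absent from the stimulus. All parameters are illustrative.

    ```python
    import numpy as np

    fs, T_settle, T_win = 4000, 5.0, 10.0
    t = np.arange(0, T_settle + T_win, 1 / fs)
    f0, alpha, beta, F = 20.0, -5.0, -1.0, 3.0
    stim = F * (np.exp(2j*np.pi*40*t) + np.exp(2j*np.pi*60*t))  # 3:2 interval, no 20 Hz

    z = 0j
    zs = np.empty(len(t), dtype=complex)
    for i, x in enumerate(stim):
        # dz/dt = z*(alpha + i*2*pi*f0 + beta*|z|^2) + x  (truncated normal form),
        # integrated with an exponential-Euler step for numerical stability
        z = z * np.exp((alpha + 2j*np.pi*f0 + beta*abs(z)**2) / fs) + x / fs
        zs[i] = z

    steady = zs[int(T_settle * fs):]              # drop the onset transient
    spec = np.abs(np.fft.fft(steady)) / len(steady)
    freqs = np.fft.fftfreq(len(steady), 1 / fs)
    for f in (20.0, 40.0, 60.0):                  # 20 Hz arises only in the response
        print(f"{f:.0f} Hz component: {spec[np.argmin(np.abs(freqs - f))]:.2e}")
    ```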

  2. A case of generalized auditory agnosia with unilateral subcortical brain lesion.

    Science.gov (United States)

    Suh, Hyee; Shin, Yong-Il; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-12-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially of the transverse gyri, are responsible for auditory agnosia. Subcortical lesions without cortical damage rarely cause auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat or take dictation, but his speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia.

  3. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  4. Review: Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Ja'fari

    2003-01-01

    Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  5. A study on the influence of headphones in auditory perceptual function.

    Science.gov (United States)

    Horie, Yoshinori; Toriizuka, Takashi

    2012-01-01

    The focus of this study is the human ability to make full use of listening and hearing, which involves separating auditory information into signal and noise. To evaluate the risk of using headphones, the study investigated auditory perception when a warning sound is presented in the presence of environmental noise.

  6. Intact Spectral but Abnormal Temporal Processing of Auditory Stimuli in Autism

    Science.gov (United States)

    Groen, Wouter B.; van Orsouw, Linda; ter Huurne, Niels; Swinkels, Sophie; van der Gaag, Rutger-Jan; Buitelaar, Jan K.; Zwiers, Marcel P.

    2009-01-01

    The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. 23 children with high-functioning autism and 23 matched controls…

  7. Neural correlates of auditory scale illusion.

    Science.gov (United States)

    Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku

    2016-09-01

    The auditory illusory perception "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. Resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears separated in higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed have not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced scale illusion (ILL) and those that mimicked the percept of scale illusion (PCP), and we compared the activation responses evoked by those stimuli by region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation by the illusion process. Results of the activation difference of the two stimuli, measured at varied tempi of tone presentation, in the superior temporal auditory cortex were not explained by adaptation. Instead, excess activation of the ILL stimulus from the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in the illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation. Copyright © 2016 Elsevier B.V. All rights reserved.
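
    The dichotic stimulus itself is easy to reconstruct: an ascending and a descending scale are presented simultaneously, with the ear assignment swapping on every tone so that each ear actually receives a zigzag sequence. A minimal sketch, with an illustrative tone duration and rounded C-major frequencies:

    ```python
    import numpy as np

    fs, tone_s = 44100, 0.25
    asc = [262, 294, 330, 349, 392, 440, 494, 523]   # ascending C-major scale (Hz)
    desc = asc[::-1]                                 # simultaneous descending scale

    def tone(freq):
        t = np.arange(int(fs * tone_s)) / fs
        return np.sin(2 * np.pi * freq * t)

    left, right = [], []
    for i in range(len(asc)):
        # at each instant one ear gets the ascending tone and the other the
        # descending tone, with the assignment alternating on every step
        if i % 2 == 0:
            left.append(tone(asc[i])); right.append(tone(desc[i]))
        else:
            left.append(tone(desc[i])); right.append(tone(asc[i]))

    stereo = np.stack([np.concatenate(left), np.concatenate(right)], axis=1)
    print(stereo.shape)   # (samples, 2); scale and write to a WAV file to listen
    ```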

  8. Activating teaching in lecture halls (Aktiverende undervisning i auditorier)

    DEFF Research Database (Denmark)

    Parus, Judith

    Workshop on experiences with and the use of activating teaching methods in lecture halls and with large classes. Which methods have worked well and which poorly? What considerations should one make?

  9. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki Ito

    2014-11-01

    Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we further addressed the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation, compared to unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.

  10. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.

  11. Temporal prediction errors in visual and auditory cortices.

    Science.gov (United States)

    Lee, Hweeling; Noppeney, Uta

    2014-04-14

    To form a coherent percept of the environment, the brain needs to bind sensory signals emanating from a common source, but to segregate those from different sources [1]. Temporal correlations and synchrony act as prominent cues for multisensory integration [2-4], but the neural mechanisms by which such cues are identified remain unclear. Predictive coding suggests that the brain iteratively optimizes an internal model of its environment by minimizing the errors between its predictions and the sensory inputs [5,6]. This model enables the brain to predict the temporal evolution of natural audiovisual inputs and their statistical (for example, temporal) relationship. A prediction of this theory is that asynchronous audiovisual signals violating the model's predictions induce an error signal that depends on the directionality of the audiovisual asynchrony. As the visual system generates the dominant temporal predictions for visual leading asynchrony, the delayed auditory inputs are expected to generate a prediction error signal in the auditory system (and vice versa for auditory leading asynchrony). Using functional magnetic resonance imaging (fMRI), we measured participants' brain responses to synchronous, visual leading and auditory leading movies of speech, sinewave speech or music. In line with predictive coding, auditory leading asynchrony elicited a prediction error in visual cortices and visual leading asynchrony in auditory cortices. Our results reveal predictive coding as a generic mechanism to temporally bind signals from multiple senses into a coherent percept. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. The relation between working memory capacity and auditory lateralization in children with auditory processing disorders.

    Science.gov (United States)

    Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; sajedi, Hamed

    2014-11-01

    Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using interaural time differences (ITDs) and interaural intensity differences (IIDs) with two stimuli (high-pass and low-pass noise) at nine perceived positions. Working memory capacity was evaluated using non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and lateralization performance in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. The results showed that working memory capacity had a significantly negative correlation with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
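
    The reported association can be pictured with a simple regression; the values below are hypothetical, chosen only to show the direction of the reported effect (larger memory span, smaller ITD error).

        import numpy as np
        from scipy import stats

        # hypothetical per-child data: backward digit span (items) and ITD
        # lateralization error for the high-pass noise stimulus (arbitrary units)
        digit_span = np.array([3, 4, 4, 5, 5, 6, 6, 7, 7, 8])
        itd_error = np.array([38, 35, 31, 30, 26, 24, 21, 19, 16, 14])

        fit = stats.linregress(digit_span, itd_error)
        print(f"slope={fit.slope:.2f}, r={fit.rvalue:.2f}, p={fit.pvalue:.4f}")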

  13. Absolute Pitch-Functional Evidence of Speech-Relevant Auditory Acuity

    National Research Council Canada - National Science Library

    Oechslin, Mathias S; Meyer, Martin; Jäncke, Lutz

    2010-01-01

    Absolute pitch (AP) has been shown to be associated with morphological changes and neurophysiological adaptations in the planum temporale, a cortical area involved in higher-order auditory and speech perception processes...

  14. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  15. [Auditory processing in specific language disorder].

    Science.gov (United States)

    Idiazábal-Aletxa, M A; Saperas-Rodríguez, M

    2008-01-01

    Specific language impairment (SLI) is diagnosed when a child has difficulty producing or understanding spoken language for no apparent reason. The diagnosis is made when language development is out of keeping with other aspects of development and possible explanatory causes have been excluded. In recent years, neuroscience has turned to the study of SLI. The ability to process two or more rapidly presented, successive auditory stimuli is believed to underlie successful language acquisition, and it has been proposed that SLI is the consequence of low-level abnormalities in auditory perception. Children with SLI also show a specific deficit in the automatic discrimination of syllables. Electrophysiological methods may reveal underlying immaturity or other abnormality of auditory processing even when behavioural thresholds look normal. There is much controversy about the role of such deficits in causing the language problems, and it has been difficult to establish solid, replicable findings in this area because of the heterogeneity of the population and because insufficient attention has been paid to maturational aspects of auditory processing.

  16. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    Science.gov (United States)

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-11-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli.
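
    The headphone-reproduction technique the study relies on is binaural rendering: convolving a dry source signal with the binaural room impulse response (BRIR) measured at the listener's ear canals. A minimal sketch, assuming the BRIRs are already available as arrays:

        import numpy as np
        from scipy.signal import fftconvolve

        def binaural_render(mono, brir_left, brir_right):
            """Convolve a dry mono signal with left/right BRIRs; played over
            headphones, this recreates the eardrum pressure the source would
            have produced in the recording room."""
            left = fftconvolve(mono, brir_left)
            right = fftconvolve(mono, brir_right)
            out = np.stack([left, right], axis=1)
            return out / np.max(np.abs(out))  # normalize to avoid clipping

    A mismatch between the room encoded in the BRIRs and the room the listener sits in is exactly the incongruency the study manipulated.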

  17. Effect of conductive hearing loss on central auditory function

    Directory of Open Access Journals (Sweden)

    Arash Bayat

    Introduction: It has been demonstrated that long-term conductive hearing loss (CHL) may influence the precise detection of the temporal features of acoustic signals, or auditory temporal processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities, such as speech comprehension or sound localization. Little is known about the consequences of CHL on temporal aspects of central auditory processing. Objective: This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. Methods: During this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (controls), aged between 18 and 45 years, were recruited. In order to evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. Results: The average of GIN thresholds was significantly smaller for the control group than for the CHL group in both ears (right: p = 0.004; left: p < 0.05). Conclusion: The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal-hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended.
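
    A GIN-style stimulus is simply broadband noise with an embedded silent gap; the threshold is the shortest gap detected reliably. A minimal sketch (parameter values are illustrative, not the clinical test's):

        import numpy as np

        FS = 44100

        def gap_in_noise(total_dur=6.0, gap_ms=5.0, gap_at=3.0, seed=0):
            """Broadband noise of total_dur seconds with a silent gap of
            gap_ms milliseconds starting at gap_at seconds."""
            rng = np.random.default_rng(seed)
            noise = rng.uniform(-1.0, 1.0, int(FS * total_dur))
            start = int(FS * gap_at)
            noise[start:start + int(FS * gap_ms / 1000.0)] = 0.0
            return noise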

  18. Listening to another sense: somatosensory integration in the auditory system.

    Science.gov (United States)

    Wu, Calvin; Stefanescu, Roxana A; Martel, David T; Shore, Susan E

    2015-07-01

    Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body and the auditory cortex. In this review, we explore the process of multisensory integration from (1) anatomical (inputs and connections), (2) physiological (cellular responses), (3) functional and (4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing and offers a multisensory perspective regarding the understanding of sensory disorders.

  19. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues... of training, which seemed to be specific to the stimuli containing resolved harmonics. Finally, a functional magnetic resonance imaging paradigm was used to examine the response of the auditory cortex to resolved and unresolved harmonics in musicians and non-musicians. The neural responses in musicians were enhanced relative to the non-musicians for both resolved and unresolved harmonics in the right auditory cortex, right frontal regions and inferior colliculus. However, the increase in neural activation in the right auditory cortex of musicians was predictive of the increased pitch...

  20. Screening Test for Auditory Processing (STAP): a preliminary report.

    Science.gov (United States)

    Yathiraj, Asha; Maggu, Akshay Raj

    2013-10-01

    The presence of auditory processing disorder in school-age children has been documented (Katz and Wilde, 1985; Chermak and Musiek, 1997; Jerger and Musiek, 2000; Muthuselvi and Yathiraj, 2009). In order to identify these children early, there is a need for a screening test that is not very time-consuming. The present study aimed to evaluate the independence of the four subsections of the Screening Test for Auditory Processing (STAP) developed by Yathiraj and Maggu (2012). The test was designed to assess auditory separation/closure, binaural integration, temporal resolution, and auditory memory in school-age children. The study also aimed to examine the number of children at risk for deficits in different auditory processes. A factor-analysis research design was used. Four hundred school-age children (218 males and 182 females) were randomly selected from 2400 children attending three schools. The children, aged 8 to 13 yr, were in grade three to eight class placements. DATA COLLECTION AND ANALYSES: The children were evaluated on the four subsections of the STAP (speech perception in noise, dichotic consonant-vowel [CV], gap detection, and auditory memory) in a quiet room within their school. The responses were analyzed using principal component analysis (PCA) and confirmatory factor analysis (CFA). In addition, the data were analyzed to determine the number of children at risk for an auditory processing disorder (APD). Based on the PCA, three components with eigenvalues greater than 1 were extracted. The orthogonal rotation of the variables using the Varimax technique revealed that component 1 consisted of binaural integration, component 2 consisted of temporal resolution, and component 3 was shared by auditory separation/closure and auditory memory. These findings were confirmed using CFA, where the predicted model displayed a good fit with or without the inclusion of the auditory memory subsection. It was determined that 16
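
    The component-extraction step (retaining principal components with eigenvalues greater than 1, the Kaiser criterion) can be sketched as follows; the score matrix here is random placeholder data, not the study's.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        scores = rng.standard_normal((400, 4))  # children x (SPIN, dichotic CV,
                                                # gap detection, auditory memory)

        pca = PCA().fit(scores)
        eigenvalues = pca.explained_variance_
        print(eigenvalues, eigenvalues > 1.0)   # Kaiser criterion: retain > 1
        # An orthogonal Varimax rotation of the retained components could then
        # be applied, e.g. via sklearn FactorAnalysis(rotation="varimax").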

  1. The auditory and non-auditory brain areas involved in tinnitus. An emergent property of multiple parallel overlapping subnetworks.

    Directory of Open Access Journals (Sweden)

    Sven eVanneste

    2012-05-01

    Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components, such as the perceived loudness, the lateralization and the tinnitus type (pure tone, noise-like), and associated emotional components, such as distress and mood changes. Source localization of qEEG data demonstrates the involvement of auditory brain areas as well as several non-auditory brain areas, such as the anterior cingulate cortex (dorsal and subgenual), auditory cortex (primary and secondary), dorsolateral prefrontal cortex, insula, supplementary motor area, orbitofrontal cortex (including the inferior frontal gyrus), parahippocampus, posterior cingulate cortex and the precuneus, in different aspects of tinnitus. Explaining these non-auditory brain areas as constituents of separable subnetworks, each reflecting a specific aspect of the tinnitus percept, increases the explanatory power of the non-auditory brain areas' involvement in tinnitus. Thus the unified percept of tinnitus can be considered an emergent property of multiple parallel, dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature.

  2. Auditory hallucinations induced by trazodone

    Science.gov (United States)

    Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji

    2014-01-01

    A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048

  3. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual

  4. Simulating Auditory Hallucinations in a Video Game

    DEFF Research Database (Denmark)

    Weinel, Jonathan; Cunningham, Stuart

    2017-01-01

    In previous work the authors have proposed the concept of 'ASC Simulations': audio-visual installations and experiences, as well as interactive video game systems, which simulate altered states of consciousness (ASCs) such as dreams and hallucinations. Building on the discussion of the authors' previous paper, in which a large-scale qualitative study explored the changes to auditory perception that users of various intoxicating substances report, here the authors present three prototype audio mechanisms for simulating hallucinations in a video game. These were designed in the Unity video game engine as an early proof of concept. The first mechanism simulates 'selective auditory attention' to different sound sources, by attenuating the amplitude of unattended sources. The second simulates 'enhanced sounds', by adjusting perceived brightness through filtering. The third simulates...

  5. Tuned with a tune: Talker normalization via general auditory processes

    Directory of Open Access Journals (Sweden)

    Erika J C Laing

    2012-06-01

    Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker's speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents, such as the long-term average spectrum (LTAS) of a talker's speech, similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences' LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect held both for speech targets preceded by spoken sentence contexts and for targets preceded by nonspeech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results, suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.
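
    The LTAS referent at the center of this study is straightforward to estimate, for example with a Welch periodogram; a minimal sketch (the window length is an arbitrary choice):

        import numpy as np
        from scipy.signal import welch

        def ltas(signal, fs, nperseg=1024):
            """Long-term average spectrum of a speech signal: power spectral
            density averaged over the whole utterance (Welch estimate)."""
            freqs, psd = welch(signal, fs=fs, nperseg=nperseg)
            return freqs, psd

        # Comparing ltas() of two context sentences within the frequency region
        # of the target contrast is the kind of difference the study manipulated.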

  6. Influence of Syllable Structure on L2 Auditory Word Learning

    Science.gov (United States)

    Hamada, Megumi; Goya, Hideki

    2015-01-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a…

  7. Auditory perception of motor vehicle travel paths.

    Science.gov (United States)

    Ashmead, Daniel H; Grantham, D Wesley; Maloff, Erin S; Hornsby, Benjamin; Nakamura, Takabun; Davis, Timothy J; Pampel, Faith; Rushing, Erin G

    2012-06-01

    These experiments address concerns that motor vehicles in electric engine mode are so quiet that they pose a risk to pedestrians, especially those with visual impairments. The "quiet car" issue has focused on hybrid and electric vehicles, although it also applies to internal combustion engine vehicles. Previous research has focused on detectability of vehicles, mostly in quiet settings. Instead, we focused on the functional ability to perceive vehicle motion paths. Participants judged whether simulated vehicles were traveling straight or turning, with emphasis on the impact of background traffic sound. In quiet, listeners made the straight-or-turn judgment soon enough in the vehicle's path to be useful for deciding whether to start crossing the street. This judgment is based largely on sound level cues rather than the spatial direction of the vehicle. With even moderate background traffic sound, the ability to tell straight from turn paths is severely compromised. The signal-to-noise ratio needed for the straight-or-turn judgment is much higher than that needed to detect a vehicle. Although a requirement for a minimum vehicle sound level might enhance detection of vehicles in quiet settings, it is unlikely that this requirement would contribute to pedestrian awareness of vehicle movements in typical traffic settings with many vehicles present. The findings are relevant to deliberations by government agencies and automobile manufacturers about standards for minimum automobile sounds and, more generally, for solutions to pedestrians' needs for information about traffic, especially for pedestrians with sensory impairments.

  8. Auditory Perception in Open Field: Distance Estimation

    Science.gov (United States)

    2013-07-01


  9. Auditory reafferences: The influence of real-time feedback on movement control

    Directory of Open Access Journals (Sweden)

    Christian eKennel

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined whether step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white-noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with nonartificial auditory cues. Our results support the existing theoretical understanding of action-perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in motor learning processes.
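
    The delayed-feedback manipulation amounts to inserting a fixed latency into the subject's own movement sounds; a minimal offline sketch (the 200 ms value is an illustrative assumption, not the study's parameter):

        import numpy as np

        def delay_feedback(signal, fs, delay_ms=200.0):
            """Return the signal delayed by delay_ms milliseconds, as heard
            in a delayed auditory feedback condition."""
            pad = np.zeros(int(fs * delay_ms / 1000.0))
            return np.concatenate([pad, signal])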

  10. Audiovisual integration in speech perception: a multi-stage process

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    Integration of speech signals from ear and eye is a well-known feature of speech perception. This is evidenced by the McGurk illusion in which visual speech alters auditory speech perception and by the advantage observed in auditory speech detection when a visual signal is present. Here we invest...

  11. Octave effect in auditory attention

    National Research Council Canada - National Science Library

    Tobias Borra; Huib Versnel; Chantal Kemner; A. John van Opstal; Raymond van Ee

    2013-01-01

    ... tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone...

  12. Perceptual-auditory and orthographic performance of fricative consonants in writing acquisition.

    Science.gov (United States)

    Schier, Ana Cândida; Berti, Larissa Cristina; Chacon, Lourenço

    2013-01-01

    To investigate the perceptual-auditory and orthographic performance of students in identifying contrasts among the fricatives of Brazilian Portuguese, and to investigate the extent to which these two types of performance are related. Data from the perceptual-auditory and orthographic performances of 20 children attending the first two grades of elementary education at a public school in Mallet (PR), Brazil, were analyzed. Data collection regarding auditory perception was based on the Assessment Tool in Speech Perception (PERCEFAL), using the software Perceval. Data collection regarding orthography was carried out through dictation of the same words used in PERCEFAL. We observed: greater accuracy in perceptual-auditory than in orthographic skills; a tendency toward shorter response times and less variability for perceptual-auditory hits than for errors; and a mismatch between orthographic and auditory-perception errors, since, in perception, the highest percentage of errors involved the place of articulation of fricatives, whereas in orthography the highest percentage involved voicing. Although related, perceptual-auditory and orthographic performance do not match term by term. Therefore, in clinical practice, attention should focus not only on the aspects that bring these two performances together, but also on the aspects that differentiate them.

  13. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
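
    The spatial half of Ando's model rests on the interaural cross-correlation function; the sketch below computes its peak magnitude (IACC) within the conventional ±1 ms lag range, offered as one plausible reading of the model's measure rather than the book's exact formulation.

        import numpy as np

        def iacc(left, right, fs, max_lag_ms=1.0):
            """Peak of the normalized interaural cross-correlation within
            +/- max_lag_ms, a standard correlate of perceived diffuseness
            and envelopment (assumes equal-length ear signals)."""
            max_lag = int(fs * max_lag_ms / 1000.0)
            denom = np.sqrt(np.dot(left, left) * np.dot(right, right))
            vals = []
            for lag in range(-max_lag, max_lag + 1):
                a = left[max(0, -lag):len(left) - max(0, lag)]
                b = right[max(0, lag):len(right) - max(0, -lag)]
                vals.append(np.dot(a, b) / denom)
            return max(vals)

    The temporal sensations in the model would analogously be read off an autocorrelation of the same signals.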

  14. Predicting Future Reading Problems Based on Pre-reading Auditory Measures: A Longitudinal Study of Children with a Familial Risk of Dyslexia

    OpenAIRE

    Law, Jeremy M.; Vandermosten, Maaike; Ghesquière, Pol; Wouters, Jan

    2017-01-01

    Purpose: This longitudinal study examines measures of temporal auditory processing in pre-reading children with a family risk of dyslexia. Specifically, it attempts to ascertain whether pre-reading auditory processing, speech perception, and phonological awareness (PA) reliably predict later literacy achievement. Additionally, this study retrospectively examines the presence of pre-reading auditory processing, speech perception, and PA impairments in children later found to be literacy impair...

  15. Effect of age at cochlear implantation on auditory and speech development of children with auditory neuropathy spectrum disorder.

    Science.gov (United States)

    Liu, Yuying; Dong, Ruijuan; Li, Yuling; Xu, Tianqiu; Li, Yongxin; Chen, Xueqing; Gong, Shusheng

    2014-12-01

    To evaluate the auditory and speech abilities of children with auditory neuropathy spectrum disorder (ANSD) after cochlear implantation (CI) and determine the role of age at implantation. Ten children participated in this retrospective case series study. All children had evidence of ANSD. No subject had cochlear nerve deficiency on magnetic resonance imaging, and all had used their cochlear implants for 12-84 months. We divided the children into two groups: those implanted before 24 months of age and those implanted after 24 months of age. Their auditory and speech abilities were evaluated using the following: behavioral audiometry, the Categories of Auditory Performance (CAP), the Meaningful Auditory Integration Scale (MAIS), the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), the Standard-Chinese version of the Monosyllabic Lexical Neighborhood Test (LNT), the Multisyllabic Lexical Neighborhood Test (MLNT), the Speech Intelligibility Rating (SIR) and the Meaningful Use of Speech Scale (MUSS). All children showed progress in their auditory and language abilities. The 4-frequency average hearing level (500 Hz, 1000 Hz, 2000 Hz and 4000 Hz) of aided hearing thresholds ranged from 17.5 to 57.5 dB HL. All children developed auditory perception and speech skills over time. Scores of children with ANSD who received cochlear implants before 24 months tended to be better than those of children implanted after 24 months. Seven children completed the Mandarin Lexical Neighborhood Test, and approximately half showed improved open-set speech recognition. Cochlear implantation is helpful for children with ANSD and may be a good treatment option for many of them. In addition, children with ANSD fitted with cochlear implants before 24 months tended to acquire auditory and speech skills better than children fitted after 24 months. Copyright © 2014

  16. An examination of auditory processing and affective prosody in relatives of patients with auditory hallucinations

    Directory of Open Access Journals (Sweden)

    Rachel eTucker

    2013-09-01

    Research on auditory verbal hallucinations (AVHs) indicates that AVH schizophrenia patients show greater abnormalities on tasks requiring recognition of affective prosody (AP) than non-AVH patients. Detecting AP requires accurate perception of manipulations in pitch, amplitude and duration. Schizophrenia patients with AVHs also experience difficulty detecting these acoustic manipulations, with a number of theorists speculating that difficulties in pitch, amplitude and duration discrimination underlie AP abnormalities. This study examined whether both AP and these aspects of auditory processing are also impaired in first-degree relatives of persons with AVHs. It also examined whether pitch, amplitude and duration discrimination were related to AP, and to hallucination proneness. Unaffected relatives of AVH schizophrenia patients (N=19) and matched healthy controls (N=33) were compared using tone discrimination tasks, an AP task, and clinical measures. Relatives were slower at identifying emotions on the AP task (p = .002), with secondary analysis showing this was especially so for happy (p = .014) and neutral (p = .001) sentences. There was a significant interaction effect for pitch between tone deviation level and group (p = .019), and relatives performed worse than controls on amplitude discrimination and duration discrimination. AP performance for happy and neutral sentences was significantly correlated with amplitude perception. Lastly, AVH proneness in the entire sample was significantly correlated with pitch discrimination (r = .44), and pitch perception was shown to predict AVH proneness in the sample (p = .005). These results suggest basic impairments in auditory processing are present in relatives of AVH patients; they potentially underlie processing speed in AP tasks, and predict AVH proneness. This indicates auditory processing deficits may be a core feature of AVHs in schizophrenia, and are worthy of further study as a potential endophenotype for

  17. Phonological and Phonetic Biases in Speech Perception

    Science.gov (United States)

    Key, Michael Parrish

    2012-01-01

    This dissertation investigates how knowledge of phonological generalizations influences speech perception, with a particular focus on evidence that phonological processing is autonomous from (rather than interactive with) auditory processing. A model is proposed in which auditory cue constraints and markedness constraints interact to determine a…

  18. Hearing suppression induced by electrical stimulation of human auditory cortex.

    Science.gov (United States)

    Fenoy, Albert J; Severson, Meryl A; Volkov, Igor O; Brugge, John F; Howard, Matthew A

    2006-11-06

    In the course of performing electrical stimulation functional mapping (ESFM) in neurosurgery patients, we identified three subjects who experienced hearing suppression during stimulation of sites within the superior temporal gyrus (STG). One of these patients had long standing tinnitus that affected both ears. In all subjects, auditory event related potentials (ERPs) were recorded from chronically implanted intracranial electrodes and the results were used to localize auditory cortical fields within the STG. Hearing suppression sites were identified within anterior lateral Heschl's gyrus (HG) and posterior lateral STG, in what may be auditory belt and parabelt fields. Cortical stimulation suppressed hearing in both ears, which persisted beyond the period of electrical stimulation. Subjects experienced other stimulation-evoked perceptions at some of these same sites, including symptoms of vestibular activation and alteration of audio-visual speech processing. In contrast, stimulation of presumed core auditory cortex within posterior medial HG evoked sound perceptions, or in one case an increase in tinnitus intensity, that affected the contralateral ear and did not persist beyond the period of stimulation. The current results confirm a rarely reported experimental observation, and correlate the cortical sites associated with hearing suppression with physiologically identified auditory cortical fields.

  19. Vocal behavior during the menstrual cycle: perceptual-auditory, acoustic and self-perception analysis

    Directory of Open Access Journals (Sweden)

    Luciane C. de Figueiredo

    2004-06-01

    Dysphonia often occurs during the premenstrual period, yet few women are aware of this voice variation within the menstrual cycle (Quinteiro, 1989). AIM: To verify whether there are differences in women's vocal patterns between the ovulation period and the first day of the menstrual cycle, using perceptual-auditory analysis, spectrography and acoustic parameters, and, when such a difference is present, whether it is perceived by the women themselves. STUDY DESIGN: Case-control. MATERIAL AND METHOD: The sample comprised 30 speech-language pathology students, aged 18 to 25 years, non-smokers, with regular menstrual cycles and not using oral contraceptives. Voices were recorded on the first day of menstruation and on the thirteenth day after menstruation (ovulation) for later comparison. RESULTS: During the menstrual period, voices were mildly to moderately hoarse-breathy and unstable, without voice breaks, with adequate pitch and loudness and balanced resonance. Harmonics were less well defined, with more noise between them and a smaller extension of the upper harmonics. We found a higher f0, increased jitter and shimmer, and a decreased harmonics-to-noise ratio (PHR). CONCLUSION: During the menstrual period there are changes in vocal quality, in the behavior of the harmonics and in the vocal parameters (f0, jitter, shimmer and PHR). Moreover, most of the speech-language pathology students did not perceive the voice variation during the menstrual cycle.

  20. Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.

    Science.gov (United States)

    Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva

    2016-01-01

    Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month, 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediately postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the Listening in Spatialized Noise test that assesses sentence perception in

  1. Incidental auditory category learning.

    Science.gov (United States)

    Gabay, Yafit; Dick, Frederic K; Zevin, Jason D; Holt, Lori L

    2015-08-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in 1 of 4 possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from 1 of 4 distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
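
    The core of the SMART design is a fixed many-to-one mapping from sound categories to screen locations. A schematic trial generator, with hypothetical names standing in for the actual categories and exemplar sets:

        import random

        CATEGORY_TO_LOCATION = {"cat_A": 0, "cat_B": 1, "cat_C": 2, "cat_D": 3}

        def next_trial(exemplars, n_sounds=5):
            """Draw a brief sound sequence from one category; the category
            (never announced to the participant) predicts which of the four
            locations the upcoming visual target appears in."""
            category = random.choice(list(CATEGORY_TO_LOCATION))
            sounds = random.sample(exemplars[category], k=n_sounds)
            return sounds, CATEGORY_TO_LOCATION[category]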

  2. Auditory Channel Problems.

    Science.gov (United States)

    Mann, Philip H.; Suiter, Patricia A.

    This teacher's guide contains a list of general auditory problem areas where students have the following problems: (a) inability to find or identify source of sound; (b) difficulty in discriminating sounds of words and letters; (c) difficulty with reproducing pitch, rhythm, and melody; (d) difficulty in selecting important from unimportant sounds;…

  3. Auditory/visual distance estimation: accuracy and variability

    Directory of Open Access Journals (Sweden)

    Paul Wallace Anderson

    2014-10-01

    Past research has shown that auditory distance estimation improves when listeners are given the opportunity to see all possible sound sources, compared to no visual input. It has also been established that distance estimation is more accurate in vision than in audition. The present study investigates the degree to which auditory distance estimation is improved when matched with a congruent visual stimulus. Virtual sound sources based on binaural room impulse response (BRIR) measurements made at distances ranging from approximately 0.3 to 9.8 m in a concert hall were used as auditory stimuli. Visual stimuli were photographs taken from the listener's perspective at each distance in the impulse-response measurement setup, presented on a large HDTV monitor. Listeners were asked to estimate egocentric distance to the sound source in each of three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Each condition was presented within its own block. Sixty-two listeners were tested in order to quantify the response variability inherent in auditory distance perception. Distance estimates from both the V and A+V conditions were found to be considerably more accurate and less variable than estimates from the A condition.

  4. Entrainment to an auditory signal: Is attention involved?

    Science.gov (United States)

    Kunert, Richard; Jongman, Suzanne R

    2017-01-01

    Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhythmic salience. In support, 2 experiments reported here show reduced response times to visual letter strings shown at auditory rhythm peaks, compared with rhythm troughs. However, we argue that an account invoking the entrainment of general attention should further predict rhythm entrainment to also influence memory for visual stimuli. In 2 pseudoword memory experiments we find evidence against this prediction. Whether a pseudoword is shown during an auditory rhythm peak or not is irrelevant for its later recognition memory in silence. Other attention manipulations, dividing attention and focusing attention, did result in a memory effect. This raises doubts about the suggested attentional nature of rhythm entrainment. We interpret our findings as support for auditory rhythm perception being based on auditory-motor entrainment, not general attention entrainment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Neural responses to complex auditory rhythms: the role of attending

    Directory of Open Access Journals (Sweden)

    Heather L Chapin

    2010-12-01

    The aim of this study was to explore the role of attention in pulse and meter perception using complex rhythms. We used a selective attention paradigm in which participants attended to either a complex auditory rhythm or a visually presented word list. Performance on a reproduction task was used to gauge whether participants were attending to the appropriate stimulus. We hypothesized that attention to complex rhythms – which contain no energy at the pulse frequency – would lead to activations in motor areas involved in pulse perception. Moreover, because multiple repetitions of a complex rhythm are needed to perceive a pulse, activations in pulse-related areas would be seen only after sufficient time had elapsed for pulse perception to develop. Selective attention was also expected to modulate activity in sensory areas specific to the modality. We found that selective attention to rhythms led to increased BOLD responses in the basal ganglia, and basal ganglia activity was observed only after the rhythms had cycled enough times for a stable pulse percept to develop. These observations suggest that attention is needed to recruit motor activations associated with the perception of pulse in complex rhythms. Moreover, attention to the auditory stimulus enhanced activity in an attentional sensory network including primary auditory, insula, anterior cingulate, and prefrontal cortex, and suppressed activity in sensory areas associated with attending to the visual stimulus.

  6. The Influence of Auditory Information on Visual Size Adaptation

    Directory of Open Access Journals (Sweden)

    Alessia Tonelli

    2017-10-01

    Size perception can be influenced by several visual cues, such as spatial cues (e.g., depth or vergence) and temporal contextual cues (e.g., adaptation to steady visual stimulation). Nevertheless, perception is generally multisensory, and other sensory modalities, such as audition, can contribute to the functional estimation of the size of objects. In this study, we investigate whether auditory stimuli at different sound pitches can influence visual size perception after visual adaptation. To this aim, we used an adaptation paradigm (Pooresmaeili et al., 2013) in three experimental conditions: visual-only, visual-sound at 100 Hz and visual-sound at 9,000 Hz. We asked participants to judge the size of a test stimulus in a size discrimination task. First, we obtained a baseline for all conditions. In the visual-sound conditions, the auditory stimulus was concurrent with the test stimulus. Second, we repeated the task, presenting an adapter (twice as big as the reference stimulus) before the test stimulus. We replicated the size aftereffect in the visual-only condition: the test stimulus was perceived as smaller than its physical size. The new finding is that the auditory stimuli had an effect on the perceived size of the test stimulus after visual adaptation: the low-frequency sound decreased the effect of visual adaptation, making the stimulus appear bigger compared with the visual-only condition, while the high-frequency sound had the opposite effect, making the test size appear even smaller.

  7. Visual speech gestures modulate efferent auditory system.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Wong, Wing Yiu Stephanie; Sharma, Dinaay; van Lieshout, Pascal

    2015-03-01

    Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of vowels /a/ and /u/ and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results are based on 7 individuals whose data met strict recording criteria and indicated a significant difference in TEOAE suppression between observing speech gestures relative to the non-speech gestures, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.

  8. Auditory Discrimination Learning: Role of Working Memory.

    Directory of Open Access Journals (Sweden)

    Yu-Xuan Zhang

    Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretical framework for interactions between cognitive and sensory plasticity during perceptual experience.
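
    Discrimination thresholds of the kind correlated with WM here are typically measured adaptively; the following 2-down/1-up staircase (converging on about 70.7% correct) is a generic sketch, not necessarily the authors' exact procedure.

        import numpy as np

        def two_down_one_up(respond, start=50.0, step=1.122, trials=60):
            """Track a frequency difference (Hz): shrink it after two correct
            responses in a row, raise it after any error; the threshold is the
            mean of the last reversals. respond(delta) -> True if correct."""
            delta, streak, last_dir, reversals = start, 0, 0, []
            for _ in range(trials):
                if respond(delta):
                    streak += 1
                    if streak == 2:
                        streak = 0
                        if last_dir == +1:
                            reversals.append(delta)  # direction reversed
                        delta /= step
                        last_dir = -1
                else:
                    streak = 0
                    if last_dir == -1:
                        reversals.append(delta)
                    delta *= step
                    last_dir = +1
            return float(np.mean(reversals[-6:]))  # nan if too few reversals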

  9. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the use of vision and of proprioceptive information derived from muscles and joints. Disruption of these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through the use of augmented visual and auditory biofeedback to stimulate neuroplasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 'training' steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration, corresponding to training) with all feedback removed. Visual cues yielded a mean percentage error of 11.5% (SD ± 7.0%); auditory cues, 12.9% (SD ± 11.8%). Visual cues elicit a high degree of accuracy both in training and in follow-up un-cued tasks; despite the novelty of the auditory cues, subjects' mean accuracy approached that for visual cues, and initial results suggest that a limited amount of practice using auditory cues can improve performance.
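
    The auditory cue described (a 950 Hz tone whose bell-shaped gain envelope spans the target step duration) is easy to synthesize; a minimal sketch:

        import numpy as np

        FS = 44100

        def auditory_cue(duration, freq=950.0):
            """950 Hz tone whose bell-shaped (Hanning) gain envelope spans
            the target step duration in seconds."""
            t = np.arange(int(FS * duration)) / FS
            return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)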

  10. Quadri-stability of a spatially ambiguous auditory illusion

    Directory of Open Access Journals (Sweden)

    Constance May Bainbridge

    2015-01-01

    In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound's motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual
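
    The approach/withdraw component of the illusion is carried purely by level change; a minimal sketch of a nonspatial noise whose intensity rises and then falls (the 20 dB range is an illustrative assumption):

        import numpy as np

        FS = 44100

        def approach_withdraw(dur=2.0, db_range=20.0, seed=0):
            """Noise ramped up then down in level, heard as an object
            approaching and withdrawing; front/back stays ambiguous over
            headphones, which is what makes the percept quadri-stable."""
            rng = np.random.default_rng(seed)
            x = rng.uniform(-1.0, 1.0, int(FS * dur))
            half = x.size // 2
            gain_db = np.concatenate([np.linspace(-db_range, 0.0, half),
                                      np.linspace(0.0, -db_range, x.size - half)])
            return x * 10.0 ** (gain_db / 20.0)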

  11. Auditory pathways: anatomy and physiology.

    Science.gov (United States)

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream and a dorsal, mainly pattern-recognition, stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features from the auditory stimulus at the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from the cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.

  12. Auditory object cognition in dementia

    Science.gov (United States)

    Goll, Johanna C.; Kim, Lois G.; Hailstone, Julia C.; Lehmann, Manja; Buckley, Aisling; Crutch, Sebastian J.; Warren, Jason D.

    2011-01-01

    The cognition of nonverbal sounds in dementia has been relatively little explored. Here we undertook a systematic study of nonverbal sound processing in patient groups with canonical dementia syndromes comprising clinically diagnosed typical amnestic Alzheimer's disease (AD; n = 21), progressive nonfluent aphasia (PNFA; n = 5), logopenic progressive aphasia (LPA; n = 7) and aphasia in association with a progranulin gene mutation (GAA; n = 1), and in healthy age-matched controls (n = 20). Based on a cognitive framework treating complex sounds as ‘auditory objects’, we designed a novel neuropsychological battery to probe auditory object cognition at early perceptual (sub-object), object representational (apperceptive) and semantic levels. All patients had assessments of peripheral hearing and general neuropsychological functions in addition to the experimental auditory battery. While a number of aspects of auditory object analysis were impaired across patient groups and were influenced by general executive (working memory) capacity, certain auditory deficits had some specificity for particular dementia syndromes. Patients with AD had a disproportionate deficit of auditory apperception but preserved timbre processing. Patients with PNFA had salient deficits of timbre and auditory semantic processing, but intact auditory size and apperceptive processing. Patients with LPA had a generalised auditory deficit that was influenced by working memory function. In contrast, the patient with GAA showed substantial preservation of auditory function, but a mild deficit of pitch direction processing and a more severe deficit of auditory apperception. The findings provide evidence for separable stages of auditory object analysis and separable profiles of impaired auditory object cognition in different dementia syndromes. PMID:21689671

  13. Sonic morphology: Aesthetic dimensional auditory spatial awareness

    Science.gov (United States)

    Whitehouse, Martha M.

    The sound and ceramic sculpture installation, "Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrète, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.

  14. Auditory Reserve and the Legacy of Auditory Experience

    OpenAIRE

    Skoe, Erika; Kraus, Nina

    2014-01-01

    Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence o...

  15. Early hominin auditory capacities.

    Science.gov (United States)

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G; Thackeray, J Francis; Arsuaga, Juan Luis

    2015-09-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats.

  16. Early hominin auditory capacities

    Science.gov (United States)

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J.; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G.; Thackeray, J. Francis; Arsuaga, Juan Luis

    2015-01-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats. PMID:26601261

  17. A Transient Auditory Signal Shifts the Perceived Offset Position of a Moving Visual Object

    Directory of Open Access Journals (Sweden)

    Sung-En eChien

    2013-02-01

    Information received from different sensory modalities profoundly influences human perception. For example, changes in the auditory flutter rate induce changes in the apparent flicker rate of a flashing light (Shipley, 1964). In the present study, we investigated whether auditory information would affect the perceived offset position of a moving object. In Experiment 1, a visual object moved toward the center of the computer screen and disappeared abruptly. A transient auditory signal was presented at different times relative to the moment when the object disappeared. The results showed that if the auditory signal was presented before the abrupt offset of the moving object, the perceived final position was shifted backward, implying that the perceived offset position was affected by the transient auditory information. In Experiment 2, we presented the transient auditory signal to either the left or the right ear. The results showed that the perceived offset shifted backward more strongly when the auditory signal was presented to the same side from which the moving object originated. In Experiment 3, we found that the perceived timing of the visual offset was not affected by the spatial relation between the auditory signal and the visual offset. The present results are interpreted as indicating that an auditory signal may influence the offset position of a moving object through both spatial and temporal processes.

  18. Impairments in musical abilities reflected in the auditory brainstem: evidence from congenital amusia.

    Science.gov (United States)

    Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina

    2015-07-01

    Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  19. Tinnitus, Diminished Sound-Level Tolerance, and Elevated Auditory Activity in Humans With Clinically Normal Hearing Sensitivity

    OpenAIRE

    Gu, Jianwen Wendy; Halpin, Christopher F; Nam, Eui-Cheol; Levine, Robert A.; Melcher, Jennifer R.

    2010-01-01

    Phantom sensations and sensory hypersensitivity are disordered perceptions that characterize a variety of intractable conditions involving the somatosensory, visual, and auditory modalities. We report physiological correlates of two perceptual abnormalities in the auditory domain: tinnitus, the phantom perception of sound, and hyperacusis, a decreased tolerance of sound based on loudness. Here, subjects with and without tinnitus, all with clinically normal hearing thresholds, underwent 1) beh...

  20. Perception of Audio-Visual Speech Synchrony in Spanish-Speaking Children with and without Specific Language Impairment

    Science.gov (United States)

    Pons, Ferran; Andreu, Llorenc; Sanz-Torrent, Monica; Buil-Legaz, Lucia; Lewkowicz, David J.

    2013-01-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the…

  1. Development of visuo-auditory integration in space and time

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2012-09-01

    Adults integrate multisensory information optimally (e.g., Ernst & Banks, 2002), while children are unable to integrate multisensory visual-haptic cues until 8-10 years of age (e.g., Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflicting or non-conflicting) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task in both perceived time and precision thresholds. Conversely, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time might suggest a cross-sensory comparison with vision in the spatial visuo-auditory task and with audition in the temporal visuo-auditory task.
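
    As an illustrative aside (editorial, not part of the record): the Bayesian predictions referred to above are the standard maximum-likelihood cue-combination formulas, in which each cue is weighted by its reliability (inverse variance). A minimal sketch, with hypothetical numbers chosen so that audition is the more reliable temporal cue:

```python
import numpy as np

def mle_integration(sigma_a, sigma_v, est_a, est_v):
    """Maximum-likelihood (Bayesian-optimal) combination of two cues.

    sigma_a, sigma_v : unimodal discrimination thresholds (std. devs.)
    est_a, est_v     : unimodal estimates (e.g., PSEs under cue conflict)
    Returns the predicted bimodal estimate and bimodal threshold.
    """
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)   # weight given to audition
    w_v = 1.0 - w_a                                # weight given to vision
    est_av = w_a * est_a + w_v * est_v
    sigma_av = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))
    return est_av, sigma_av

# Hypothetical temporal task: audition is three times more precise than vision.
est, sig = mle_integration(sigma_a=20.0, sigma_v=60.0, est_a=0.0, est_v=30.0)
print(f"predicted bimodal PSE = {est:.1f} ms, threshold = {sig:.1f} ms")
```

    With these numbers the predicted bimodal estimate lies close to the auditory one, mirroring the auditory dominance in time reported above; the predicted bimodal threshold is always at or below the better unimodal threshold.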

  2. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks on which musicians had proved superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in the native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task, the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant discrimination.

  3. Auditory Temporal Processing Abilities in Early Azari-Persian Bilinguals

    Directory of Open Access Journals (Sweden)

    Roya Sanayi

    2013-10-01

    Introduction: Auditory temporal resolution and auditory temporal ordering are two major components of auditory temporal processing that contribute to speech perception and language development. They can be evaluated by the gaps-in-noise (GIN) and pitch-pattern-sequence (PPS) tests, respectively. In this survey, the effect of bilingualism as a potential confounding factor on auditory temporal processing abilities was investigated in early Azari-Persian bilinguals. Materials and Methods: In this cross-sectional, non-interventional study, the GIN and PPS tests were performed on 24 (12 men and 12 women) early Azari-Persian bilinguals and 24 (12 men and 12 women) Persian monolinguals aged 18-30 years, with a mean age of 24.57 years in the bilingual and 24.68 years in the monolingual group. Data were analyzed with the t-test using SPSS software version 16. Results: There was no statistically significant difference between early Azari-Persian bilinguals and Persian monolinguals in the mean gap threshold or mean percentage of correct responses on the GIN test, or in the average percentage of correct responses on the PPS test (P≥0.05). Conclusion: According to the findings of this study, bilingualism did not have a notable effect on auditory temporal processing abilities.

  4. Multi-Regional Adaptation in Human Auditory Association Cortex

    Directory of Open Access Journals (Sweden)

    Urszula Malinowska

    2017-05-01

    In auditory cortex, neural responses decrease with stimulus repetition, a phenomenon known as adaptation. Adaptation is thought to facilitate the detection of novel sounds and to improve perception in noisy environments. Although it is well established that adaptation occurs in primary auditory cortex, it is not known whether adaptation also occurs in higher auditory areas involved in processing complex sounds, such as speech. Resolving this issue is important for understanding the neural bases of adaptation and for avoiding potential post-operative deficits after temporal lobe surgery for the treatment of focal epilepsy. Intracranial electrocorticographic recordings were acquired simultaneously from electrodes implanted in primary and association auditory areas of the right (non-dominant) temporal lobe in a patient with complex partial seizures originating from the inferior parietal lobe. Simple and complex sounds were presented in a passive oddball paradigm. We measured changes in single-trial high-gamma power (70–150 Hz) and in regional and inter-regional network-level activity indexed by cross-frequency coupling. Repetitive tones elicited the greatest adaptation and corresponding increases in cross-frequency coupling in primary auditory cortex. Conversely, auditory association cortex showed stronger adaptation for complex sounds, including speech. This first report of multi-regional adaptation in human auditory cortex highlights the role of the non-dominant temporal lobe in suppressing neural responses to repetitive background sounds (noise). These results underscore the clinical utility of functional mapping to avoid potential post-operative deficits, including increased listening difficulties in noisy, real-world environments.

  5. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    Science.gov (United States)

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  6. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular…

  7. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    Science.gov (United States)

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with the auditory skill areas most commonly addressed in…

  8. Visual Distance Cues Amplify Neuromagnetic Auditory N1m Responses

    Directory of Open Access Journals (Sweden)

    Christian F Altmann

    2011-10-01

    Ranging of auditory objects relies on several acoustic cues and is possibly modulated by additional visual information. Sound pressure level can serve as a cue for distance perception because it decreases with increasing distance. In this magnetoencephalography (MEG) experiment, we tested whether psychophysical loudness judgments and N1m MEG responses are modulated by visual distance cues. To this end, we paired noise bursts at different sound pressure levels with synchronous visual cues at different distances. We hypothesized that noise bursts paired with far visual cues would be perceived as louder and result in increased N1m amplitudes compared with a pairing with close visual cues. The rationale behind this was that listeners might compensate for the visually induced object distance when processing loudness. Psychophysically, we observed no significant modulation of loudness judgments by visual cues. However, N1m MEG responses at about 100 ms after stimulus onset were significantly stronger for far versus close visual cues in the left auditory cortex. N1m responses in the right auditory cortex increased with increasing sound pressure level, but were not modulated by visual distance cues. Thus, our results suggest an audio-visual interaction in the left auditory cortex that is possibly related to cue integration for auditory distance processing.
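
    As an illustrative aside (editorial, not part of the record): in a free field, sound pressure level falls by about 6 dB per doubling of distance, which is why level can serve as a distance cue in the first place. A minimal sketch of this standard relation:

```python
import numpy as np

def spl_at_distance(spl_ref, d_ref, d):
    # Free-field inverse-square law: level drops 20*log10(d/d_ref) dB,
    # i.e., about 6 dB per doubling of distance from the source.
    return spl_ref - 20.0 * np.log10(d / d_ref)

# A source measured at 70 dB SPL from 1 m away:
for d in (1.0, 2.0, 4.0, 8.0):
    print(f"{d:4.1f} m -> {spl_at_distance(70.0, 1.0, d):5.1f} dB SPL")
```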

  9. Functional changes between seasons in the male songbird auditory forebrain

    Science.gov (United States)

    De Groof, Geert; Poirier, Colline; George, Isabelle; Hausberger, Martine; Van der Linden, Annemie

    2013-01-01

    Songbirds are an excellent model for investigating the perception of learned complex acoustic communication signals. Male European starlings (Sturnus vulgaris) sing distinct types of song throughout the year that bear either social or individual information. Although the relative importance of social and individual information changes seasonally, evidence of functional seasonal changes in the neural response to these songs remains elusive. We thus decided to use in vivo functional magnetic resonance imaging (fMRI) to examine auditory responses of male starlings that were exposed to songs conveying different levels of information (species-specific and group identity, or individual identity), both during the breeding season (when mate recognition is particularly important) and outside it (when group recognition is particularly important). We report three main findings: (1) the auditory area caudomedial nidopallium (NCM), an auditory region that is analogous to the mammalian auditory cortex, is clearly involved in the processing/categorization of conspecific songs; (2) season-related change in differential song processing is limited to a caudal part of NCM; in the more rostral parts, songs bearing individual information induce higher BOLD responses than songs bearing species and group information, regardless of the season; (3) the differentiation between songs bearing species and group information and songs bearing individual information seems to be biased toward the right hemisphere. This study provides evidence that auditory processing of behaviorally-relevant (conspecific) communication signals changes seasonally, even when the spectro-temporal properties of these signals do not change. PMID:24391561

  10. Transcranial Random Noise Stimulation (tRNS) Shapes the Processing of Rapidly Changing Auditory Information

    Directory of Open Access Journals (Sweden)

    Katharina S. Rufener

    2017-06-01

    Neural oscillations in the gamma range are the dominant rhythmic activation pattern in the human auditory cortex. These gamma oscillations are functionally relevant for the processing of rapidly changing acoustic information in both speech and non-speech sounds. Accordingly, there is a tight link between the temporal resolution ability of the auditory system and inherent neural gamma oscillations. Transcranial random noise stimulation (tRNS) has been demonstrated to specifically increase gamma oscillations in the human auditory cortex. However, neither the physiological mechanisms of tRNS nor the behavioral consequences of this intervention are completely understood. In the present study we stimulated the human auditory cortex bilaterally with tRNS while EEG was continuously measured. Modulations in the participants' temporal and spectral resolution abilities were investigated by means of a gap detection task and a pitch discrimination task. Compared to sham, auditory tRNS increased the detection rate for near-threshold stimuli in the temporal domain only, while no such effect was present for the discrimination of spectral features. Behavioral findings were paralleled by reduced peak latencies of the P50 and N1 components of the auditory event-related potential (ERP), indicating an impact on early sensory processing. The facilitating effect of tRNS was limited to the processing of near-threshold stimuli, while stimuli clearly below and above the individual perception threshold were not affected by tRNS. This non-linear relationship between the signal-to-noise level of the presented stimuli and the effect of stimulation further qualifies stochastic resonance (SR) as the underlying mechanism of tRNS on auditory processing. Our results demonstrate a tRNS-related improvement in the perception of time-critical auditory information and thus provide further evidence that auditory tRNS can amplify the resonance frequency of the auditory system.
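
    As an illustrative aside (editorial, not part of the record): stochastic resonance is the effect whereby an intermediate, non-zero amount of noise makes a sub-threshold signal detectable, while too little noise leaves it undetected and too much noise swamps it. A toy threshold-detector simulation of this inverted-U pattern (all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 5000)
signal = 0.8 * np.sin(2.0 * np.pi * 10.0 * t)   # sub-threshold 10 Hz signal
threshold = 1.0                                  # detector never fires on the signal alone

def mean_signal_output_correlation(noise_sd, n_trials=200):
    # Correlate the binary detector output with the hidden signal,
    # counting a constant (silent) output as zero correlation.
    cs = []
    for _ in range(n_trials):
        out = (signal + rng.normal(0.0, noise_sd, t.size) > threshold).astype(float)
        cs.append(0.0 if out.std() == 0.0 else np.corrcoef(out, signal)[0, 1])
    return float(np.mean(cs))

for sd in (0.05, 0.4, 3.0):   # too little, intermediate, too much noise
    print(f"noise SD {sd:4.2f}: signal/output correlation = "
          f"{mean_signal_output_correlation(sd):.3f}")
```

    The correlation peaks at the intermediate noise level, which is the signature of stochastic resonance.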

  11. Reduced object related negativity response indicates impaired auditory scene analysis in adults with autistic spectrum disorder

    Directory of Open Access Journals (Sweden)

    Veema Lodhia

    2014-02-01

    Auditory Scene Analysis provides a useful framework for understanding atypical auditory perception in autism. Specifically, a failure to segregate the incoming acoustic energy into distinct auditory objects might explain the aversive reaction autistic individuals have to certain auditory stimuli or environments. Previous research with non-autistic participants has demonstrated the presence of an object-related negativity (ORN) in the auditory event-related potential that indexes pre-attentive processes associated with auditory scene analysis. Also evident is a later P400 component that is attention-dependent and thought to be related to decision-making about auditory objects. We sought to determine whether there are differences between individuals with and without autism in the levels of processing indexed by these components. Electroencephalography (EEG) was used to measure brain responses from a group of 16 autistic adults and 16 age- and verbal-IQ-matched typically developing adults. Auditory responses were elicited using lateralized dichotic pitch stimuli in which inter-aural timing differences create the illusory perception of a pitch that is spatially separated from a carrier noise stimulus. As in previous studies, control participants produced an ORN in response to the pitch stimuli. However, this component was significantly reduced in the participants with autism. In contrast, processing differences were not observed between the groups at the attention-dependent level (P400). These findings suggest that autistic individuals have difficulty segregating auditory stimuli into distinct auditory objects, and that this difficulty arises at an early pre-attentive level of processing.

  12. Altered top-down cognitive control and auditory processing in tinnitus: evidences from auditory and visual spatial stroop.

    Science.gov (United States)

    Araneda, Rodrigo; De Volder, Anne G; Deggouj, Naïma; Philippot, Pierre; Heeren, Alexandre; Lacroix, Emilie; Decat, Monique; Rombaux, Philippe; Renier, Laurent

    2015-01-01

    Tinnitus is the perception of a sound in the absence of an external stimulus. Currently, the pathophysiology of tinnitus is not fully understood, but recent studies indicate that the associated brain alterations involve non-auditory areas, including the prefrontal cortex. Here, we hypothesize that these brain alterations affect top-down cognitive control mechanisms that play a role in the regulation of sensations, emotions, and attention resources. The efficiency of executive control, as well as simple reaction speed and processing speed, was evaluated in tinnitus participants (TP) and matched control subjects (CS) in both the auditory and the visual modality using a spatial Stroop paradigm. TP were slower and less accurate than CS during both the auditory and the visual spatial Stroop tasks, while simple reaction speed and stimulus processing speed were affected in TP in the auditory modality only. Tinnitus is thus associated both with modality-specific deficits along the auditory processing system and with an impairment of cognitive control mechanisms that are involved in both vision and audition (i.e., that are supra-modal). We postulate that this deficit in top-down cognitive control is a key factor in the development and maintenance of tinnitus and may also explain some of the cognitive difficulties reported by tinnitus sufferers.

  13. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  14. Auditory-motor entrainment and phonological skills: precise auditory timing hypothesis (PATH).

    Science.gov (United States)

    Tierney, Adam; Kraus, Nina

    2014-01-01

    Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The PATH predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.

  15. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  16. Mismatch negativity in children with specific language impairment and auditory processing disorder

    Directory of Open Access Journals (Sweden)

    Caroline Nunes Rocha-Muniz

    2015-08-01

    INTRODUCTION: Mismatch negativity, an electrophysiological measure, evaluates the brain's capacity to discriminate sounds, regardless of attentional and behavioral capacity. Thus, this auditory event-related potential is promising in the study of the neurophysiological basis underlying auditory processing. OBJECTIVE: To investigate how complex acoustic signals (speech) are encoded in the auditory nervous system of children with specific language impairment, and to compare them with children with auditory processing disorders and typical development, through the mismatch negativity paradigm. METHODS: This was a prospective study. 75 children (6-12 years) participated: 25 children with specific language impairment, 25 with auditory processing disorders, and 25 with typical development. Mismatch negativity was obtained by subtracting the responses to the frequent stimulus (/ga/) from the responses to the rare stimulus (/da/). Measures of mismatch negativity latency and two amplitude measures were analyzed. RESULTS: Mismatch negativity was absent in 16% of children with specific language impairment and 24% of children with auditory processing disorders. In the comparative analysis, the auditory processing disorder and specific language impairment groups showed higher latency values and lower amplitude values compared to typical development. CONCLUSION: These data demonstrate changes in the automatic discrimination of crucial acoustic components of speech sounds in children with specific language impairment and auditory processing disorders. This could indicate problems in the physiological processes responsible for ensuring the discrimination of acoustic contrasts at pre-attentional and pre-conscious levels, contributing to poor perception.

  17. Mismatch negativity in children with specific language impairment and auditory processing disorder.

    Science.gov (United States)

    Rocha-Muniz, Caroline Nunes; Befi-Lopes, Débora Maria; Schochat, Eliane

    2015-01-01

    Mismatch negativity, an electrophysiological measure, evaluates the brain's capacity to discriminate sounds, regardless of attentional and behavioral capacity. Thus, this auditory event-related potential is promising in the study of the neurophysiological basis underlying auditory processing. To investigate how complex acoustic signals (speech) are encoded in the auditory nervous system of children with specific language impairment, and to compare them with children with auditory processing disorders and typical development, through the mismatch negativity paradigm. It was a prospective study. 75 children (6-12 years) participated in this study: 25 children with specific language impairment, 25 with auditory processing disorders, and 25 with typical development. Mismatch negativity was obtained by subtracting the responses to the frequent stimulus (/ga/) from the responses to the rare stimulus (/da/). Measures of mismatch negativity latency and two amplitude measures were analyzed. Mismatch negativity was absent in 16% of children with specific language impairment and 24% of children with auditory processing disorders. In the comparative analysis, the auditory processing disorder and specific language impairment groups showed higher latency values and lower amplitude values compared to typical development. These data demonstrate changes in the automatic discrimination of crucial acoustic components of speech sounds in children with specific language impairment and auditory processing disorders. This could indicate problems in the physiological processes responsible for ensuring the discrimination of acoustic contrasts at pre-attentional and pre-conscious levels, contributing to poor perception. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
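
    As an illustrative aside (editorial, not part of the record): the mismatch negativity is read out from the deviant-minus-standard difference wave, typically as the most negative deflection roughly 100-250 ms after stimulus onset. A minimal sketch with placeholder waveforms (the function name and all numbers are hypothetical):

```python
import numpy as np

def mismatch_negativity(erp_rare, erp_frequent, times):
    # Difference wave: response to the rare stimulus minus the frequent one;
    # the MMN is the most negative point in the 100-250 ms window.
    diff = erp_rare - erp_frequent
    win = (times >= 0.100) & (times <= 0.250)
    i = np.flatnonzero(win)[np.argmin(diff[win])]
    return diff, times[i], diff[i]

# Placeholder averaged ERPs sampled at 1 kHz, from -0.1 to 0.5 s
times = np.arange(-0.1, 0.5, 0.001)
erp_frequent = np.zeros(times.size)                        # standard (/ga/)
erp_rare = -2e-6 * np.exp(-((times - 0.18) / 0.04) ** 2)   # deviant (/da/)
diff, lat, amp = mismatch_negativity(erp_rare, erp_frequent, times)
print(f"MMN latency = {lat * 1000:.0f} ms, amplitude = {amp * 1e6:.2f} µV")
```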

  18. [Application of Brain-Boy Universal Professional in preliminary assessment of auditory processing disorder].

    Science.gov (United States)

    Rutkowska, Joanna; Łobaczuk-Sitnik, Anna; Kosztyła-Hojna, Bożena

    2017-09-29

    Auditory processing disorders (APD) are an increasingly recognized form of hearing pathology. APD is defined as difficulty in using auditory information to communicate and learn in the presence of normal peripheral hearing. It may present as problems understanding speech in noise and perceiving distorted speech, and it may accompany articulation disorders, language problems, and difficulties in reading and writing. The diagnosis of auditory processing disorders causes many difficulties, primarily due to the lack of common testing procedures and of precise criteria separating normal from pathological performance. The Brain-Boy Universal Professional (BUP) is one such diagnostic tool; it enables assessment of the higher auditory functions. The aim of the study was a preliminary assessment of hearing difficulties that may suggest the occurrence of auditory processing disorders in children. A questionnaire of hearing difficulties and the BUP were used. The study included 20 participants, all 2nd-grade students of an elementary school. The examination of the basic central functions was carried out with the BUP, and the parents and teacher completed the questionnaire to evaluate hearing problems. The studies carried out indicate that 40% of the schoolchildren have hearing difficulties. The high percentage of deficits in auditory functions was confirmed both by the results from the medical device and by the teacher questionnaire. On the basis of the studies conducted, it may be established that the Warnke Method can serve as a preliminary assessment of hearing difficulties that may suggest the occurrence of auditory processing disorders in children.

  19. Auditory Grouping Mechanisms Reflect a Sound’s Relative Position in a Sequence

    Directory of Open Access Journals (Sweden)

    Kevin Thomas Hill

    2012-06-01

    The human brain uses acoustic cues to decompose complex auditory scenes into their components. For instance, to improve communication, a listener can select an individual stream, such as a talker in a crowded room, based on cues such as pitch or location. Despite numerous investigations into auditory streaming, few have demonstrated clear correlates of perception; instead, in many studies perception covaries with changes in physical stimulus properties (e.g., frequency separation). In the current report, we employ a classic ABA streaming paradigm and human electroencephalography (EEG) to disentangle the individual contributions of stimulus properties from changes in auditory perception. We find that changes in perceptual state – that is, the perception of one versus two auditory streams with physically identical stimuli – and changes in physical stimulus properties are reflected independently in the event-related potential (ERP) during overlapping time windows. These findings emphasize the necessity of controlling for stimulus properties when studying perceptual effects of streaming. Furthermore, the independence of the perceptual effect from stimulus properties suggests that the neural correlates of streaming reflect a tone's relative position within a larger sequence (1st, 2nd, 3rd) rather than its acoustics. By clarifying the role of stimulus attributes along with perceptual changes, this study helps explain precisely how the brain is able to distinguish a sound source of interest in an auditory scene.

  20. Auto-perception of auditory and vestibular health in workers exposed to organophosphates

    Directory of Open Access Journals (Sweden)

    Ana Cristina Hiromi Hoshino

    2009-12-01

    PURPOSE: to characterize the auditory and vestibular symptoms of rural workers exposed to organophosphate pesticides. METHODS: a descriptive epidemiological study that evaluated 50 rural workers. Ages ranged from 21 to 59 years, with a mean of 38.3 years; 20 (40%) workers were male and 30 (60%) female. A questionnaire was used with questions related to auditory health, together with data on the duration of pesticide exposure. RESULTS: 38 (76%) workers reported having experienced at least one episode of dizziness in their lives, and 29 (58%) of these still experience dizziness; 27 (54%) reported tinnitus; 23 (46%) reported a sensation of aural fullness; 37 (74%) considered their hearing acuity good, yet 35 (70%) reported difficulty understanding spoken words. These data suggest that pesticides may induce alterations of the auditory and vestibular systems through a slow, silent intoxication. CONCLUSION: dizziness and hearing loss appear as constant, subjective symptoms of occupational exposure and may be an early sign of intoxication, impairing the quality of life of these workers.

  1. Auditory hallucinations treated by radio headphones.

    Science.gov (United States)

    Feder, R

    1982-09-01

    A young man with chronic auditory hallucinations was treated according to the principle that increasing external auditory stimulation decreases the likelihood of auditory hallucinations. Listening to a radio through stereo headphones in conditions of low auditory stimulation eliminated the patient's hallucinations.

  2. Modeling binaural responses in the auditory brainstem to electric stimulation of the auditory nerve.

    Science.gov (United States)

    Chung, Yoojin; Delgutte, Bertrand; Colburn, H Steven

    2015-02-01

    Bilateral cochlear implants (CIs) provide improvements in sound localization and speech perception in noise over unilateral CIs. However, the benefits arise mainly from the perception of interaural level differences, while bilateral CI listeners' sensitivity to interaural time difference (ITD) is poorer than normal. To help understand this limitation, a set of ITD-sensitive neural models was developed to study binaural responses to electric stimulation. Our working hypothesis was that central auditory processing is normal with bilateral CIs so that the abnormality in the response to electric stimulation at the level of the auditory nerve fibers (ANFs) is the source of the limited ITD sensitivity. A descriptive model of ANF response to both acoustic and electric stimulation was implemented and used to drive a simplified biophysical model of neurons in the medial superior olive (MSO). The model's ITD sensitivity was found to depend strongly on the specific configurations of membrane and synaptic parameters for different stimulation rates. Specifically, stronger excitatory synaptic inputs and faster membrane responses were required for the model neurons to be ITD-sensitive at high stimulation rates, whereas weaker excitatory synaptic input and slower membrane responses were necessary at low stimulation rates, for both electric and acoustic stimulation. This finding raises the possibility of frequency-dependent differences in neural mechanisms of binaural processing; limitations in ITD sensitivity with bilateral CIs may be due to a mismatch between stimulation rate and cell parameters in ITD-sensitive neurons.
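
    As an illustrative aside (editorial, not the authors' model): a classic way to read out ITD is Jeffress-style coincidence detection, in which an array of internal delay lines compensates the interaural delay and the best-matching delay channel signals the ITD; a cross-correlation over lags is a compact stand-in for such a detector bank. A minimal sketch with a hypothetical 500 Hz tone and a 300 µs ITD:

```python
import numpy as np

def estimate_itd(left, right, fs, max_itd=1e-3):
    # Jeffress-style read-out: the internal delay at which the two ears'
    # inputs coincide best (peak of the cross-correlation over lags).
    max_lag = int(max_itd * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.dot(left[max_lag + lag: left.size - max_lag + lag],
                   right[max_lag: right.size - max_lag]) for lag in lags]
    return lags[int(np.argmax(corr))] / fs

fs = 100_000                              # 10 µs resolution
t = np.arange(0.0, 0.05, 1.0 / fs)
true_itd = 300e-6                         # left ear lags by 300 µs
right = np.sin(2.0 * np.pi * 500.0 * t)
left = np.sin(2.0 * np.pi * 500.0 * (t - true_itd))
print(f"estimated ITD = {estimate_itd(left, right, fs) * 1e6:.0f} µs")
```

    With a periodic carrier this read-out becomes ambiguous once the true ITD exceeds half the carrier period, one reason ITD coding is most informative at low frequencies.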

  3. Common Misconceptions Regarding Pediatric Auditory Processing Disorder

    Directory of Open Access Journals (Sweden)

    Vasiliki Iliadou

    2018-01-01

    Pediatric hearing evaluation based on pure-tone audiometry does not always reflect how a child hears in everyday life. This practice is inadequate for evaluating the difficulties experienced by children with auditory processing disorder (APD) in school or on the playground. Despite the marked increase in research on pediatric APD, there remains limited access to proper evaluation worldwide. This perspective article presents five common misconceptions about APD that contribute to inappropriate or limited management of children experiencing these deficits. The misconceptions discussed are: (1) the disorder cannot be diagnosed due to the lack of a gold-standard diagnostic test; (2) making generalizations based on profiles of children suspected of APD but not diagnosed with the disorder; (3) it is best to discard an APD diagnosis when another disorder is present; (4) arguing that the known link between auditory perception and higher cognitive function precludes the validity of APD as a clinical entity; and (5) APD is not a clinical entity. These five misconceptions are described and rebutted using published data as well as critical thinking on currently available knowledge of APD.

  4. Modelling the Emergence and Dynamics of Perceptual Organisation in Auditory Streaming

    Science.gov (United States)

    Mill, Robert W.; Bőhm, Tamás M.; Bendixen, Alexandra; Winkler, István; Denham, Susan L.

    2013-01-01

    Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives—a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the dynamics of perceptual organisation in auditory streaming.

  5. Neural dynamics of phonological processing in the dorsal auditory stream.

    Science.gov (United States)

    Liebenthal, Einat; Sabri, Merav; Beardsley, Scott A; Mangalathu-Arumana, Jain; Desai, Anjali

    2013-09-25

    Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of the time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern, is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in posterior superior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80-100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than in pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors.

  6. Auditory Streaming as an Online Classification Process with Evidence Accumulation.

    Science.gov (United States)

    Barniv, Dana; Nelken, Israel

    2015-01-01

    When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report on-going switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases. The process of stream formation is loosely named "auditory streaming". Auditory streaming is believed to be a manifestation of human ability to analyze an auditory scene, i.e. to attribute portions of the incoming sound sequence to distinct sound generating entities. Previous studies suggested that the durations of the successive integration and segregation phases are statistically independent. This independence plays an important role in current models of bistability. Contrary to this, we show here, by analyzing a large set of data, that subsequent phase durations are positively correlated. To account together for bistability and positive correlation between subsequent durations, we suggest that streaming is a consequence of an evidence accumulation process. Evidence for segregation is accumulated during the integration phase and vice versa; a switch to the opposite percept occurs stochastically based on this evidence. During a long phase, a large amount of evidence for the opposite percept is accumulated, resulting in a long subsequent phase. In contrast, a short phase is followed by another short phase. We implement these concepts using a probabilistic model that shows both bistability and correlations similar to those observed experimentally.
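
    As an illustrative aside (editorial, not the authors' implementation): the two key claims above, stochastic switching driven by accumulated evidence and positively correlated successive phase durations, can be reproduced by a toy accumulator in a few lines. A sketch under assumed parameters (all names and numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_phase_durations(n_phases=2000, gain=0.02, leak=0.5,
                             persist=0.5, noise=0.05):
    # While one percept holds, evidence for the alternative accumulates
    # noisily at rate `gain`; the phase ends stochastically when that
    # evidence crosses a bound set by the evidence inherited from the
    # previous phase. Long phases bequeath more evidence, hence a higher
    # bound and a longer *next* phase: positively correlated durations.
    durations = []
    inherited = leak                       # evidence backing the first percept
    for _ in range(n_phases):
        evidence, t = 0.0, 0
        bound = leak + inherited
        while evidence < bound:
            evidence += gain + noise * rng.normal()
            t += 1
        durations.append(t)
        inherited = persist * gain * t     # carried over to the next phase
    return np.asarray(durations)

d = simulate_phase_durations()
print(f"lag-1 correlation of successive phase durations: "
      f"r = {np.corrcoef(d[:-1], d[1:])[0, 1]:.2f}")
```

    Here `persist` controls how much accumulated evidence carries over between phases; setting it to zero recovers the statistically independent phase durations assumed by earlier models.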

  7. Auditory Streaming as an Online Classification Process with Evidence Accumulation.

    Directory of Open Access Journals (Sweden)

    Dana Barniv

    When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report on-going switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases. The process of stream formation is loosely named "auditory streaming". Auditory streaming is believed to be a manifestation of human ability to analyze an auditory scene, i.e. to attribute portions of the incoming sound sequence to distinct sound generating entities. Previous studies suggested that the durations of the successive integration and segregation phases are statistically independent. This independence plays an important role in current models of bistability. Contrary to this, we show here, by analyzing a large set of data, that subsequent phase durations are positively correlated. To account together for bistability and positive correlation between subsequent durations, we suggest that streaming is a consequence of an evidence accumulation process. Evidence for segregation is accumulated during the integration phase and vice versa; a switch to the opposite percept occurs stochastically based on this evidence. During a long phase, a large amount of evidence for the opposite percept is accumulated, resulting in a long subsequent phase. In contrast, a short phase is followed by another short phase. We implement these concepts using a probabilistic model that shows both bistability and correlations similar to those observed experimentally.

  8. Auditory Streaming as an Online Classification Process with Evidence Accumulation

    Science.gov (United States)

    Barniv, Dana; Nelken, Israel

    2015-01-01

    When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report on-going switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases. The process of stream formation is loosely named “auditory streaming”. Auditory streaming is believed to be a manifestation of human ability to analyze an auditory scene, i.e. to attribute portions of the incoming sound sequence to distinct sound generating entities. Previous studies suggested that the durations of the successive integration and segregation phases are statistically independent. This independence plays an important role in current models of bistability. Contrary to this, we show here, by analyzing a large set of data, that subsequent phase durations are positively correlated. To account together for bistability and positive correlation between subsequent durations, we suggest that streaming is a consequence of an evidence accumulation process. Evidence for segregation is accumulated during the integration phase and vice versa; a switch to the opposite percept occurs stochastically based on this evidence. During a long phase, a large amount of evidence for the opposite percept is accumulated, resulting in a long subsequent phase. In contrast, a short phase is followed by another short phase. We implement these concepts using a probabilistic model that shows both bistability and correlations similar to those observed experimentally. PMID:26671774
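
    The model itself is specified fully in the paper. Purely as an illustrative toy (the hazard rate, evidence gain, and entrenchment constant below are invented parameters, not the authors'), the core idea, that evidence for the opposite percept accrues during a phase and entrenches the next percept so that long phases beget long phases, can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_phases(n_phases=5000, gain=1.0, h0=0.05, beta=0.02):
    """Toy evidence-accumulation model of bistable streaming.

    During each phase, evidence for the opposite percept grows at rate
    `gain`; the phase ends stochastically with per-step switch
    probability `hazard`.  Evidence amassed by switch time entrenches
    the new percept (lower hazard), so long phases follow long phases.
    """
    durations, hazard = [], h0
    for _ in range(n_phases):
        t = 1
        while rng.random() >= hazard:   # survive this step, keep accumulating
            t += 1
        durations.append(t)
        evidence = gain * t                     # evidence accrued for the new percept
        hazard = h0 / (1.0 + beta * evidence)   # more evidence -> slower next switch
    return np.asarray(durations, dtype=float)

d = simulate_phases()
r = np.corrcoef(d[:-1], d[1:])[0, 1]
print(f"lag-1 correlation of successive phase durations: r = {r:+.2f}")
```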

  9. Auditory-olfactory synesthesia coexisting with auditory-visual synesthesia.

    Science.gov (United States)

    Jackson, Thomas E; Sandramouli, Soupramanien

    2012-09-01

    Synesthesia is an unusual condition in which stimulation of one sensory modality causes an experience in another sensory modality, or in which a sensation in one sensory modality causes another sensation within the same modality. We describe a previously unreported association of auditory-olfactory synesthesia coexisting with auditory-visual synesthesia. Given that many types of synesthesia involve vision, it is important that the clinician provide these patients with the necessary information and support that is available.

  10. Auditory Processing Training in Learning Disability

    OpenAIRE

    Nívea Franklin Chaves Martins; Hipólito Virgílio Magalhães Jr

    2006-01-01

    The aim of this case report was to promote reflection on the importance of speech therapy in stimulating a person with a learning disability associated with language and auditory processing disorders. Data analysis compared the auditory ability deficits identified in the first auditory processing test, held on April 30, 2002, with a new auditory processing test done on May 13, 2003, after one year of therapy directed at acoustic stimulation of the disordered auditory abilities, in acco...

  11. The U.S. Army Research Laboratory’s Auditory Research for the Dismounted Soldier: Present (2009-2011) and Future

    Science.gov (United States)

    2012-03-01

    ... steady-state acoustic threats. This has led to research into the effects of various types of headgear on directional sound detection, auditory ... research infrastructure available at ARL-HRED includes a unique world-class multispace auditory spatial perception laboratory, the Environment for

  12. Oral Kinesthetic Sensitivity and the Perception of Speech

    Science.gov (United States)

    Larson, Stephen; Hudson, Floyd G.

    1973-01-01

    Studied the relationship between auditory ability and oral form discrimination in children with varying degrees of speech and language development. Results lend support to motor theory of speech perception. (ST)

  13. How We Hear: The Perception and Neural Coding of Sound.

    Science.gov (United States)

    Oxenham, Andrew J

    2018-01-04

    Auditory perception is our main gateway to communication with others via speech and music, and it also plays an important role in alerting and orienting us to new events. This review provides an overview of selected topics pertaining to the perception and neural coding of sound, starting with the first stage of filtering in the cochlea and its profound impact on perception. The next topic, pitch, has been debated for millennia, but recent technical and theoretical developments continue to provide us with new insights. Cochlear filtering and pitch both play key roles in our ability to parse the auditory scene, enabling us to attend to one auditory object or stream while ignoring others. An improved understanding of the basic mechanisms of auditory perception will aid us in the quest to tackle the increasingly important problem of hearing loss in our aging population.

  14. Modulation of auditory attention by training: evidence from dichotic listening.

    Science.gov (United States)

    Soveri, Anna; Tallus, Jussi; Laine, Matti; Nyberg, Lars; Bäckman, Lars; Hugdahl, Kenneth; Tuomainen, Jyrki; Westerhausen, René; Hämäläinen, Heikki

    2013-01-01

    We studied the effects of training on auditory attention in healthy adults with a speech perception task involving dichotically presented syllables. Training involved bottom-up manipulation (facilitating responses from the harder-to-report left ear through a decrease of right-ear stimulus intensity), top-down manipulation (focusing attention on the left-ear stimuli through instruction), or their combination. The results showed significant training-related effects for top-down training. These effects were evident as higher overall accuracy rates in the forced-left dichotic listening (DL) condition that sets demands on attentional control, as well as a response shift toward left-sided reports in the standard DL task. Moreover, a transfer effect was observed in an untrained auditory-spatial attention task involving bilateral stimulation where top-down training led to a relatively stronger focus on left-sided stimuli. Our results indicate that training of attentional control can modulate the allocation of attention in the auditory space in adults. Malleability of auditory attention in healthy adults raises the issue of potential training gains in individuals with attentional deficits.

  15. Synchronization and phonological skills: precise auditory timing hypothesis (PATH)

    Directory of Open Access Journals (Sweden)

    Adam Tierney

    2014-11-01

    Full Text Available Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The precise auditory timing hypothesis predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.

  16. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex.

    Science.gov (United States)

    Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate "auditory objects" with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas.

  17. Aero-tactile integration in speech perception

    OpenAIRE

    Gick, Bryan; Derrick, Donald

    2009-01-01

    Visual information from a speaker's face can enhance [1] or interfere with [2] accurate auditory perception. This integration of information across auditory and visual streams has been observed in functional imaging studies [3,4], and has typically been attributed to the frequency and robustness with which perceivers jointly encounter event-specific information from these two modalities [5]. Adding the tactile modality has long been considered a crucial next step in understanding multisensory integration...

  18. Screening for auditory processing performance in primary school children.

    Science.gov (United States)

    Mourad, Mona; Hassan, Mona; El-Banna, Manal; Asal, Samir; Hamza, Yasmeen

    2015-04-01

    A deficit in the processing of auditory information may underlie problems in understanding speech in the presence of background noise, degraded speech, and in following spoken instructions. Children with auditory processing disorders are challenged in the classroom because of ambient noise levels and may be at risk for learning disabilities. The aims were to: 1) set up and execute a screening protocol for auditory processing performance (APP) in primary school children; 2) construct a database for APP in the classroom; and 3) set critical limits for deviant performance. Our hypothesis is that screening for APP in the classroom identifies pupils at risk for auditory processing disorders. The study consisted of two phases. Phase 1: 2,015 pupils were selected from fourth-, fifth-, and sixth-graders using stratified random sampling with the proportional allocation method. Male and female students were equally represented. Otoscopic examination, screening audiometry, and screening tests for auditory processing (AP) abilities (Pitch Pattern Sequence Test [PPST], speech perception in noise [SPIN] right, SPIN left, and Dichotic Digit Test) were conducted. A questionnaire emphasizing auditory listening behaviors (ALB) was answered by the classroom teacher. Phase 2 included 69 pupils who were randomly selected based on percentile scores of phase 1. Students were examined with the corresponding full-version AP tests in addition to the Auditory Fusion Test-Revised and masking level difference. Intelligence quotient and learning disabilities were evaluated. Phase 1: results are displayed in frequency polygons for the 10th, 25th, 50th, 75th, and 90th percentile scores for each AP test. Fourth-graders scored significantly lower than fifth- and sixth-graders on all tests. Males scored lower than females on PPST. A composite score was calculated to represent a summed score performance for PPST, SPIN right ear, SPIN left ear, and Dichotic Digit Test. Scores ... Auditory Fusion Test-Revised mean thresholds were statistically
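
    As a hypothetical illustration of the percentile-norm approach described above (the scores are simulated and the 10th-percentile referral cutoff is an assumption, not the study's published criterion), screening cutoffs could be derived as follows:

```python
import numpy as np

rng = np.random.default_rng(42)
tests = ["PPST", "SPIN_right", "SPIN_left", "DichoticDigits"]
# simulated percent-correct scores for 2,015 pupils on four screening tests
scores = {t: np.clip(rng.normal(75, 12, size=2015), 0, 100) for t in tests}

for t in tests:
    p = np.percentile(scores[t], [10, 25, 50, 75, 90])
    print(t, "10/25/50/75/90th percentiles:", np.round(p, 1))

# composite score: summed performance across the four screening tests
composite = sum(scores[t] for t in tests)
cutoff = np.percentile(composite, 10)          # deviance criterion (assumed)
at_risk = np.flatnonzero(composite < cutoff)   # pupils referred for full testing
print(f"{at_risk.size} pupils fall below the 10th-percentile composite cutoff")
```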

  19. Spatial auditory attention is modulated by tactile priming.

    Science.gov (United States)

    Menning, Hans; Ackermann, Hermann; Hertrich, Ingo; Mathiak, Klaus

    2005-07-01

    Previous studies have shown that cross-modal processing affects perception at a variety of neuronal levels. In this study, event-related brain responses were recorded via whole-head magnetoencephalography (MEG). Spatial auditory attention was directed via tactile pre-cues (primes) to one of four locations in the peripersonal space (left and right hand versus face). Auditory stimuli were white noise bursts, convoluted with head-related transfer functions, which ensured spatial perception of the four locations. Tactile primes (200-300 ms prior to acoustic onset) were applied randomly to one of these locations. Attentional load was controlled by three different visual distraction tasks. The auditory P50m (about 50 ms after stimulus onset) showed a significant "proximity" effect (larger responses to face stimulation as well as a "contralaterality" effect between side of stimulation and hemisphere). The tactile primes essentially reduced both the P50m and N100m components. However, facial tactile pre-stimulation yielded an enhanced ipsilateral N100m. These results show that earlier responses are mainly governed by exogenous stimulus properties whereas cross-sensory interaction is spatially selective at a later (endogenous) processing stage.

  20. Biomimetic Sonar for Electrical Activation of the Auditory Pathway

    Directory of Open Access Journals (Sweden)

    D. Menniti

    2017-01-01

    Full Text Available Relying on the mechanism of the bat's echolocation system, a bioinspired electronic device has been developed to investigate the cortical activity of mammals in response to auditory sensory stimuli. By means of implanted electrodes, acoustical information about the external environment, generated by a biomimetic system and converted into electrical signals, was delivered to anatomically selected structures of the auditory pathway. Electrocorticographic recordings showed that the cerebral activity response is highly dependent on the information carried by the ultrasounds and is frequency-locked with the signal repetition rate. Frequency analysis reveals that delta and beta rhythm content increases, suggesting that sensory information is successfully transferred and integrated. In addition, principal component analysis highlights how all the stimuli generate patterns of neural activity which can be clearly classified. The results show that the brain response is modulated by echo signal features, suggesting that spatial information sent by the biomimetic sonar is efficiently interpreted and encoded by the auditory system. Consequently, these results give a new perspective on artificial environmental perception, which could be used for developing new techniques useful in treating pathological conditions or influencing our perception of the surroundings.
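
    The two analyses named above, band-power estimation and principal component analysis, can be sketched on simulated recordings (channel counts, band edges, and signals below are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(7)
fs = 500.0                                   # sampling rate in Hz (assumed)
epochs = rng.standard_normal((60, 8, 1000))  # 60 epochs x 8 electrodes x 2 s

def band_power(x, lo, hi):
    """Total Welch PSD power of signal x in the [lo, hi) Hz band."""
    f, pxx = welch(x, fs=fs, nperseg=256)
    return pxx[(f >= lo) & (f < hi)].sum()

# delta (1-4 Hz) and beta (13-30 Hz) power per epoch and electrode
feats = np.array([[[band_power(ep[ch], 1, 4), band_power(ep[ch], 13, 30)]
                   for ch in range(ep.shape[0])] for ep in epochs])
X = feats.reshape(len(epochs), -1)           # one feature vector per epoch

# PCA via SVD of the mean-centered feature matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:2].T                          # projection onto the first two PCs
print("variance explained by PC1+PC2:",
      round(float((S[:2] ** 2).sum() / (S ** 2).sum()), 3))
```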

  1. Experience-dependent learning of auditory temporal resolution: evidence from Carnatic-trained musicians.

    Science.gov (United States)

    Mishra, Srikanta K; Panda, Manasa R

    2014-01-22

    Musical training and experience greatly enhance the cortical and subcortical processing of sounds, which may translate to superior auditory perceptual acuity. Auditory temporal resolution is a fundamental perceptual aspect that is critical for speech understanding in noise in listeners with normal hearing, auditory disorders, cochlear implants, and language disorders, yet very few studies have focused on music-induced learning of temporal resolution. This report demonstrates that Carnatic musical training and experience have a significant impact on temporal resolution assayed by gap detection thresholds. This experience-dependent learning in Carnatic-trained musicians exhibits the universal aspects of human perception and plasticity. The present work adds the perceptual component to a growing body of neurophysiological and imaging studies that suggest plasticity of the peripheral auditory system at the level of the brainstem. The present work may be intriguing to researchers and clinicians alike interested in devising cross-cultural training regimens to alleviate listening-in-noise difficulties.

  2. Comparing the effect of auditory-only and auditory-visual modes in two groups of Persian children using cochlear implants: a randomized clinical trial.

    Science.gov (United States)

    Oryadi Zanjani, Mohammad Majid; Hasanzadeh, Saeid; Rahgozar, Mehdi; Shemshadi, Hashem; Purdy, Suzanne C; Mahmudi Bakhtiari, Behrooz; Vahab, Maryam

    2013-09-01

    Since the introduction of cochlear implantation, researchers have considered children's communication and educational success before and after implantation. Therefore, the present study aimed to compare auditory, speech, and language development scores following one-sided cochlear implantation between two groups of prelingual deaf children educated through either auditory-only (unisensory) or auditory-visual (bisensory) modes. A randomized controlled trial with a single-factor experimental design was used. The study was conducted in the Instruction and Rehabilitation Private Centre of Hearing Impaired Children and their Family, called Soroosh in Shiraz, Iran. We assessed 30 Persian deaf children for eligibility and 22 children qualified to enter the study. They were aged between 27 and 66 months old and had been implanted between the ages of 15 and 63 months. The sample of 22 children was randomly assigned to two groups: auditory-only mode and auditory-visual mode; 11 participants in each group were analyzed. In both groups, the development of auditory perception, receptive language, expressive language, speech, and speech intelligibility was assessed pre- and post-intervention by means of instruments which were validated and standardized in the Persian population. No significant differences were found between the two groups. The children with cochlear implants who had been instructed using either the auditory-only or auditory-visual modes acquired auditory, receptive language, expressive language, and speech skills at the same rate. Overall, spoken language significantly developed in both the unisensory group and the bisensory group. Thus, both the auditory-only mode and the auditory-visual mode were effective. Therefore, it is not essential to limit access to the visual modality and to rely solely on the auditory modality when instructing hearing, language, and speech in children with cochlear implants who are exposed to spoken language both at home and at school.

  3. Pure word deafness with auditory object agnosia after bilateral lesion of the superior temporal sulcus.

    Science.gov (United States)

    Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies

    2015-12-01

    Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Investigating the Role of Auditory Feedback in a Multimodal Biking Experience

    DEFF Research Database (Denmark)

    Bruun-Pedersen, Jon Ram; Grani, Francesco; Serafin, Stefania

    2017-01-01

    In this paper, we investigate the role of auditory feedback in affecting perception of effort while biking in a virtual environment. Subjects were biking on a stationary chair bike, while exposed to 3D renditions of a recumbent bike inside a virtual environment (VE). The VE simulated a park and was created in the Unity5 engine. While biking, subjects were exposed to 9 kinds of auditory feedback (3 amplitude levels with three different filters) which were continuously triggered corresponding to pedal speed, representing the sound of the wheels and bike/chain mechanics. Subjects were asked to rate the perception of exertion using the Borg RPE scale. Results of the experiment showed that most subjects perceived a difference in mechanical resistance from the bike between conditions, but did not consciously notice the variations of the auditory feedback, although these were significantly varied. This points...

  5. Event-Related Potentials Reflect Speech-Relevant Somatosensory-Auditory Interactions

    OpenAIRE

    Takayuki Ito; Gracco, Vincent L.; Ostry, David J

    2011-01-01

    An interaction between orofacial somatosensation and the perception of speech was demonstrated in recent psychophysical studies (Ito et al. 2009; Ito and Ostry 2009). To explore further the neural mechanisms of the speech-related somatosensory-auditory interaction, we assessed to what extent multisensory evoked potentials reflect multisensory interaction during speech perception. We also examined the dynamic modulation of multisensory integration resulting from relative timing differences bet...

  6. Proceedings of the 2009 international conference on auditory display

    DEFF Research Database (Denmark)

    I am pleased to present the 15th International Conference on Auditory Display (ICAD), which takes place in Copenhagen, Denmark, May 18-21, 2009. The ICAD 2009 theme is Timeless Sound, including the universal aspect of sounds as well as the influence of time in the perception of sounds. ICAD 2009 ... with the re-new festival. The conference addresses all aspects related to the design of sounds, either conceptual or technical. Besides traditional topics addressed by ICAD, I would like to take the opportunity of ICAD being organized by re-new to highlight the ICAD 2009 theme Timeless Sound, and the possibilities of a full week of artistic presentations, including installations, concerts and much more. The joint organisation of CMMR with ICAD offers a great opportunity to discuss the links between auditory display, sound modeling and music information retrieval. ...

  7. Auditory and proprioceptive spatial impairments in blind children and adults.

    Science.gov (United States)

    Cappagli, Giulia; Cocchi, Elena; Gori, Monica

    2017-05-01

    It is not clear what role visual information plays in the development of space perception. It has previously been shown that in the absence of vision, both the ability to judge orientation in the haptic modality and to bisect intervals in the auditory modality are severely compromised (Gori, Sandini, Martinoli & Burr, 2010; Gori, Sandini, Martinoli & Burr, 2014). Here we also report, for the first time, a strong deficit in proprioceptive reproduction and audio distance evaluation in early blind children and adults. Interestingly, the deficit is not present in a small group of adults with acquired visual disability. Our results support the idea that in the absence of vision the audio and proprioceptive spatial representations may be delayed or drastically weakened due to the lack of visual calibration over the auditory and haptic modalities during the critical period of development. © 2015 John Wiley & Sons Ltd.

  8. Auditory stream segregation in children with Asperger syndrome

    Science.gov (United States)

    Lepistö, T.; Kuitunen, A.; Sussman, E.; Saalasti, S.; Jansson-Verkasalo, E.; Nieminen-von Wendt, T.; Kujala, T.

    2009-01-01

    Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically developed for studying stream segregation. Differences in the amplitudes of ERP components were found between groups only in the stream segregation conditions and not for simple feature discrimination. The results indicated that children with AS have difficulties in segregating concurrent sound streams, which ultimately may contribute to the difficulties in speech-in-noise perception. PMID:19751798

  9. Music Training and Education Slow the Deterioration of Music Perception Produced by Presbycusis in the Elderly

    OpenAIRE

    Moreno-Gómez, Felipe N.; Véliz, Guillermo; Rojas, Marcos; Martínez, Cristián; Olmedo, Rubén; Panussis, Felipe; Dagnino-Subiabre, Alexies; Delgado, Carolina; Delano, Paul H.

    2017-01-01

    The perception of music depends on the normal function of the peripheral and central auditory system. Aged subjects without hearing loss have altered music perception, including pitch and temporal features. Presbycusis or age-related hearing loss is a frequent condition in elderly people, produced by neurodegenerative processes that affect the cochlear receptor cells and brain circuits involved in auditory perception. Clinically, presbycusis patients have bilateral high-frequency hearing loss...

  10. Examining frontotemporal connectivity and rTMS in healthy controls: implications for auditory hallucinations in schizophrenia.

    NARCIS (Netherlands)

    Gromann, P.M.; Tracy, D.K.; Giampietro, V.; Brammer, M.J.; Krabbendam, A.C.; Shergill, S.S.

    2012-01-01

    Objective: Repetitive transcranial magnetic stimulation (rTMS) has been shown to have clinically beneficial effects in altering the perception of auditory hallucinations (AH) in patients with schizophrenia. However, the mode of action is not clear. Recent neuroimaging findings indicate that rTMS has

  12. An EMG Study of the Lip Muscles during Covert Auditory Verbal Hallucinations in Schizophrenia

    Science.gov (United States)

    Rapin, Lucile; Dohen, Marion; Polosan, Mircea; Perrier, Pascal; Loevenbruck, Hélène

    2013-01-01

    Purpose: "Auditory verbal hallucinations" (AVHs) are speech perceptions in the absence of external stimulation. According to an influential theoretical account of AVHs in schizophrenia, a deficit in inner-speech monitoring may cause the patients' verbal thoughts to be perceived as external voices. The account is based on a…

  13. Characteristics of Auditory Agnosia in a Child with Severe Traumatic Brain Injury: A Case Report

    Science.gov (United States)

    Hattiangadi, Nina; Pillion, Joseph P.; Slomine, Beth; Christensen, James; Trovato, Melissa K.; Speedie, Lynn J.

    2005-01-01

    We present a case that is unusual in many respects from other documented incidences of auditory agnosia, including the mechanism of injury, age of the individual, and location of neurological insult. The clinical presentation is one of disturbance in the perception of spoken language, music, pitch, emotional prosody, and temporal auditory…

  14. Contribution of auditory working memory to speech understanding in mandarin-speaking cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Duoduo Tao

    Full Text Available To investigate how auditory working memory relates to speech perception performance by Mandarin-speaking cochlear implant (CI) users. Auditory working memory and speech perception were measured in Mandarin-speaking CI and normal-hearing (NH) participants. Working memory capacity was measured using forward digit span and backward digit span; working memory efficiency was measured using articulation rate. Speech perception was assessed with: (a) word-in-sentence recognition in quiet, (b) word-in-sentence recognition in speech-shaped steady noise at +5 dB signal-to-noise ratio, (c) Chinese disyllable recognition in quiet, (d) Chinese lexical tone recognition in quiet. Self-reported school rank was also collected regarding performance in schoolwork. There was large inter-subject variability in auditory working memory and speech performance for CI participants. Working memory and speech performance were significantly poorer for CI than for NH participants. All three working memory measures were strongly correlated with each other for both CI and NH participants. Partial correlation analyses were performed on the CI data while controlling for demographic variables. Working memory efficiency was significantly correlated only with sentence recognition in quiet when working memory capacity was partialled out. Working memory capacity was correlated with disyllable recognition and school rank when efficiency was partialled out. There was no correlation between working memory and lexical tone recognition in the present CI participants. Mandarin-speaking CI users experience significant deficits in auditory working memory and speech performance compared with NH listeners. The present data suggest that auditory working memory may contribute to CI users' difficulties in speech understanding. The present pattern of results with Mandarin-speaking CI users is consistent with previous auditory working memory studies with English-speaking CI users, suggesting that the lexical

  15. Contribution of auditory working memory to speech understanding in mandarin-speaking cochlear implant users.

    Science.gov (United States)

    Tao, Duoduo; Deng, Rui; Jiang, Ye; Galvin, John J; Fu, Qian-Jie; Chen, Bing

    2014-01-01

    To investigate how auditory working memory relates to speech perception performance by Mandarin-speaking cochlear implant (CI) users. Auditory working memory and speech perception were measured in Mandarin-speaking CI and normal-hearing (NH) participants. Working memory capacity was measured using forward digit span and backward digit span; working memory efficiency was measured using articulation rate. Speech perception was assessed with: (a) word-in-sentence recognition in quiet, (b) word-in-sentence recognition in speech-shaped steady noise at +5 dB signal-to-noise ratio, (c) Chinese disyllable recognition in quiet, (d) Chinese lexical tone recognition in quiet. Self-reported school rank was also collected regarding performance in schoolwork. There was large inter-subject variability in auditory working memory and speech performance for CI participants. Working memory and speech performance were significantly poorer for CI than for NH participants. All three working memory measures were strongly correlated with each other for both CI and NH participants. Partial correlation analyses were performed on the CI data while controlling for demographic variables. Working memory efficiency was significantly correlated only with sentence recognition in quiet when working memory capacity was partialled out. Working memory capacity was correlated with disyllable recognition and school rank when efficiency was partialled out. There was no correlation between working memory and lexical tone recognition in the present CI participants. Mandarin-speaking CI users experience significant deficits in auditory working memory and speech performance compared with NH listeners. The present data suggest that auditory working memory may contribute to CI users' difficulties in speech understanding. The present pattern of results with Mandarin-speaking CI users is consistent with previous auditory working memory studies with English-speaking CI users, suggesting that the lexical importance
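
    The partial-correlation analysis used in these two records can be illustrated with a minimal sketch (the data are simulated and the variable names are placeholders; a real analysis would also partial out the demographic covariates mentioned above):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
capacity   = rng.normal(0, 1, n)                    # digit-span composite
efficiency = 0.5 * capacity + rng.normal(0, 1, n)   # articulation rate
sentences  = 0.6 * efficiency + 0.3 * capacity + rng.normal(0, 1, n)

def residualize(y, covariate):
    """Residuals of y after least-squares regression on the covariate."""
    Z = np.column_stack([np.ones(len(y)), covariate])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return y - Z @ beta

# partial correlation: correlate the residuals of both variables
rx = residualize(efficiency, capacity)
ry = residualize(sentences, capacity)
partial_r = np.corrcoef(rx, ry)[0, 1]
print(f"partial r(efficiency, sentences | capacity) = {partial_r:.2f}")
```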

  16. [Auditory threshold for white noise].

    Science.gov (United States)

    Carrat, R; Thillier, J L; Durivault, J

    1975-01-01

    The liminal auditory threshold for white noise and for coloured noise was determined from a statistical survey of a group of 21 young people with normal hearing. The normal auditory threshold for white noise with a spectrum covering the whole of the auditory field is −0.57 ± 8.78 dB. The normal auditory threshold for bands of filtered white noise (coloured noise with a central frequency corresponding to the pure frequencies usually employed in tonal audiometry) describes a typical curve which, instead of being homothetic to the usual tonal curves, sinks at low frequencies and then rises. The peak of this curve is replaced by a broad plateau ranging from 750 to 6000 Hz and contained in the concavity of the liminal tonal curves. The ear is therefore less sensitive but, at limited acoustic pressure, white noise first impinges with the same discrimination upon the whole of the conversational zone of the auditory field. Determination of the audiometric threshold for white noise constitutes a synthetic method of measuring acuteness of hearing which considerably reduces the amount of manipulation required.

  17. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters pyramidal cell spike timing and rates in response to sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), and greater suppression at 20 ms pairing-intervals for single unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience.

  18. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    Science.gov (United States)

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    ... with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Age-dependent changes of calcium related activity in the central auditory pathway.

    Science.gov (United States)

    Gröschel, Moritz; Hubert, Nikolai; Müller, Susanne; Ernst, Arne; Basta, Dietmar

    2014-10-01

    Age-related hearing loss (ARHL) represents one of the most common chronic health problems that faces an aging population. In the peripheral auditory system, aging is accompanied by functional loss or degeneration of sensory as well as non-sensory tissue. It has been recently described that besides the degeneration of cochlear structures, the central auditory system is also involved in ARHL. Although mechanisms of central presbycusis are not well understood, previous animal studies have reported some signs of central neurodegeneration in the lower auditory pathway. Moreover, changes in neurophysiology are indicated by alterations in synaptic transmission. In particular, neurotransmission and spontaneous neuronal activity appear to be affected in aging animals. Therefore, it was the aim of the present study to determine the neuronal activity within the central auditory pathway in aging mice over their whole lifespan compared to a control group (young adult animals, ~3 months of age) using the non-invasive manganese-enhanced MRI technique. MRI signal strength showed a comparable pattern in most investigated auditory brain areas. An increase in activity was particularly pronounced in the middle-aged groups (13 or 18 months), with the largest effect in the dorsal and ventral cochlear nucleus. In higher auditory structures, namely the inferior colliculus, medial geniculate body and auditory cortex, the enhancement was much less pronounced, while a decrease was detected in the superior olivary complex. Interestingly, calcium-dependent activity reduced to control levels in the oldest animals (22 months) in the cochlear nucleus and was significantly reduced in higher auditory structures. A similar finding was also found in the hippocampus. The observed changes might be related to central neuroplasticity (including hyperactivity) as well as neurodegenerative mechanisms and represent central nervous correlates of the age-related decline in auditory processing and perception.

  20. Neural basis of the time window for subjective motor-auditory integration

    Directory of Open Access Journals (Sweden)

    Koichi Toida

    2016-01-01

    Full Text Available Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant's action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and an N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence with the reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤ 200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally-deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components.
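
    The 50% detection point reported above is the midpoint of a psychometric function. A minimal sketch of how such a point can be estimated follows (the detection rates are fabricated for illustration only):

```python
import numpy as np
from scipy.optimize import curve_fit

delays = np.array([0, 50, 100, 150, 200, 250, 300, 400])              # ms
p_detect = np.array([0.02, 0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

def logistic(x, x50, slope):
    """Psychometric function: detection probability vs. feedback delay."""
    return 1.0 / (1.0 + np.exp(-(x - x50) / slope))

(x50, slope), _ = curve_fit(logistic, delays, p_detect, p0=[200.0, 50.0])
print(f"estimated 50% detection point: {x50:.0f} ms (slope {slope:.0f} ms)")
```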

  1. Devices and Procedures for Auditory Learning.

    Science.gov (United States)

    Ling, Daniel

    1986-01-01

    The article summarizes information on assistive devices (hearing aids, cochlear implants, tactile aids, visual aids) and rehabilitation procedures (auditory training, speechreading, cued speech, and speech production) to aid the auditory learning of the hearing impaired. (DB)

  2. Auditory presentation of experimental data

    Science.gov (United States)

    Lunney, David; Morrison, Robert C.

    1990-08-01

    Our research group has been working for several years on the development of auditory alternatives to visual graphs, primarily in order to give blind science students and scientists access to instrumental measurements. In the course of this work we have tried several modes for auditory presentation of data: synthetic speech, tones of varying pitch, complex waveforms, electronic music, and various non-musical sounds. Our most successful translation of data into sound has been presentation of infrared spectra as musical patterns. We have found that if the stick spectra of two compounds are visibly different, their musical patterns will be audibly different. Other possibilities for auditory presentation of data are also described, among them listening to Fourier transforms of spectra, and encoding data in complex waveforms (including synthetic speech).
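
    A minimal sketch of the pitch-mapping idea described above: each data point becomes a short sine tone whose frequency rises with the value, so a spectrum (or any series) can be played back. The frequency range and note duration are arbitrary choices, not those of the authors:

```python
import wave
import numpy as np

def sonify(values, fname="data.wav", fs=44100, note_s=0.2,
           f_lo=220.0, f_hi=880.0):
    """Write a WAV file in which each value becomes a short sine tone."""
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    v = (v - v.min()) / (span if span > 0 else 1.0)   # normalize to [0, 1]
    freqs = f_lo * (f_hi / f_lo) ** v                 # log-spaced pitch mapping
    t = np.arange(int(fs * note_s)) / fs
    audio = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
    pcm = (audio * 0.8 * 32767).astype(np.int16)      # 16-bit PCM samples
    with wave.open(fname, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)      # 2 bytes = 16 bits
        w.setframerate(fs)
        w.writeframes(pcm.tobytes())

sonify([0.1, 0.5, 0.9, 0.3, 0.7, 1.0, 0.2])           # toy "spectrum"
```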

  3. Context effects on auditory distraction

    Science.gov (United States)

    Chen, Sufen; Sussman, Elyse S.

    2014-01-01

    The purpose of the study was to test the hypothesis that sound context modulates the magnitude of auditory distraction, indexed by behavioral and electrophysiological measures. Participants were asked to identify tone duration, while irrelevant changes occurred in tone frequency, tone intensity, and harmonic structure. Frequency deviants were randomly intermixed with standards (Uni-Condition), with intensity deviants (Bi-Condition), and with both intensity and complex deviants (Tri-Condition). Only in the Tri-Condition did the auditory distraction effect reflect the magnitude difference among the frequency and intensity deviants. The mixture of the different types of deviants in the Tri-Condition modulated the perceived level of distraction, demonstrating that the sound context can modulate the effect of deviance level on processing irrelevant acoustic changes in the environment. These findings thus indicate that perceptual contrast plays a role in change detection processes that leads to auditory distraction. PMID:23886958

  4. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after cortical stroke. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  5. Octave effect in auditory attention.

    Science.gov (United States)

    Borra, Tobias; Versnel, Huib; Kemner, Chantal; van Opstal, A John; van Ee, Raymond

    2013-09-17

    After hearing a tone, the human auditory system becomes more sensitive to similar tones than to other tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone. Intriguingly, this "octave effect" not only occurs for physically presented tones, but even persists for the missing fundamental in complex tones, and for imagined tones. Our results suggest neural interactions combining octave-related frequencies, likely located in nonprimary cortical regions. We speculate that this connectivity scheme evolved from exposure to natural vibrations containing octave-related spectral peaks, e.g., as produced by vocal cords.

  6. Music, rhythm, rise time perception and developmental dyslexia: Perception of musical meter predicts reading and phonology

    OpenAIRE

    M. Huss; Verney, J.P.; Fosker, Tim; Mead, N.; Goswami, U.

    2011-01-01

    Introduction: Rhythm organises musical events into patterns and forms, and rhythm perception in music is usually studied by using metrical tasks. Metrical structure also plays an organisational function in the phonology of language, via speech prosody, and there is evidence for rhythmic perceptual difficulties in developmental dyslexia. Here we investigate the hypothesis that the accurate perception of musical metrical structure is related to basic auditory perception of rise time, and also t...

  7. Mental imagery changes multisensory perception.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2013-07-22

    Multisensory interactions are the norm in perception, and an abundance of research on the interaction and integration of the senses has demonstrated the importance of combining sensory information from different modalities on our perception of the external world. However, although research on mental imagery has revealed a great deal of functional and neuroanatomical overlap between imagery and perception, this line of research has primarily focused on similarities within a particular modality and has yet to address whether imagery is capable of leading to multisensory integration. Here, we devised novel versions of classic multisensory paradigms to systematically examine whether imagery is capable of integrating with perceptual stimuli to induce multisensory illusions. We found that imagining an auditory stimulus at the moment two moving objects met promoted an illusory bounce percept, as in the classic cross-bounce illusion; an imagined visual stimulus led to the translocation of sound toward the imagined stimulus, as in the classic ventriloquist illusion; and auditory imagery of speech stimuli led to a promotion of an illusory speech percept in a modified version of the McGurk illusion. Our findings provide support for perceptually based theories of imagery and suggest that neuronal signals produced by imagined stimuli can integrate with signals generated by real stimuli of a different sensory modality to create robust multisensory percepts. These findings advance our understanding of the relationship between imagery and perception and provide new opportunities for investigating how the brain distinguishes between endogenous and exogenous sensory events. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Distortions of subjective time perception within and across senses.

    Directory of Open Access Journals (Sweden)

    Virginie van Wassenhove

    Full Text Available BACKGROUND: The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. METHODOLOGY/FINDINGS: We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information and was never perceived shorter than their actual durations. CONCLUSIONS/SIGNIFICANCE: These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration can neither be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions.

  9. Distortions of subjective time perception within and across senses.

    Science.gov (United States)

    van Wassenhove, Virginie; Buonomano, Dean V; Shimojo, Shinsuke; Shams, Ladan

    2008-01-16

    The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information and was never perceived shorter than their actual durations. These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration can neither be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions.

  10. Predictive uncertainty in auditory sequence processing

    Directory of Open Access Journals (Sweden)

    Niels Chr. Hansen

    2014-09-01

    Full Text Available Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty - a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.

  11. Predictive uncertainty in auditory sequence processing.

    Science.gov (United States)

    Hansen, Niels Chr.; Pearce, Marcus T.

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music. PMID:25295018
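
    To make the entropy measure concrete, the following minimal sketch computes Shannon entropy over a model's next-note distribution. The two example distributions are hypothetical toy values, not estimates from the study's Markov model:

```python
import numpy as np

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a predictive distribution over next events."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                     # zero-probability events contribute nothing
    return float(-np.sum(p * np.log2(p)))

# Hypothetical next-note distributions, as a variable-order Markov model might produce:
low_entropy_context = [0.85, 0.05, 0.05, 0.05]    # one strongly expected continuation
high_entropy_context = [0.25, 0.25, 0.25, 0.25]   # all continuations equally likely

print(shannon_entropy(low_entropy_context))    # ~0.85 bits: low predictive uncertainty
print(shannon_entropy(high_entropy_context))   # 2.0 bits: high predictive uncertainty
```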

  12. Non-auditory factors affecting urban soundscape evaluation.

    Science.gov (United States)

    Jeon, Jin Yong; Lee, Pyoung Jik; Hong, Joo Young; Cabrera, Densil

    2011-12-01

    The aim of this study is to characterize urban spaces, which combine landscape, acoustics, and lighting, and to investigate people's perceptions of urban soundscapes through quantitative and qualitative analyses. A general questionnaire survey and soundwalk were performed to investigate soundscape perception in urban spaces. Non-auditory factors (visual image, day lighting, and olfactory perceptions), as well as acoustic comfort, were selected as the main contexts that affect soundscape perception, and context preferences and overall impressions were evaluated using an 11-point numerical scale. For qualitative analysis, a semantic differential test was performed in the form of a social survey, and subjects were also asked to describe their impressions during a soundwalk. The results showed that urban soundscapes can be characterized by soundmarks, and soundscape perceptions are dominated by acoustic comfort, visual images, and day lighting, whereas reverberance in urban spaces does not yield consistent preference judgments. It is posited that the subjective evaluation of reverberance can be replaced by physical measurements. The categories extracted from the qualitative analysis revealed that spatial impressions such as openness and density emerged as some of the contexts of soundscape perception. © 2011 Acoustical Society of America.

  13. Early auditory enrichment with music enhances auditory discrimination learning and alters NR2B protein expression in rat auditory cortex.

    Science.gov (United States)

    Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde

    2009-01-03

    Previous studies have shown that the functional development of the auditory system is substantially influenced by the structure of environmental acoustic inputs in early life. In the present study, we investigated the effects of early auditory enrichment with music on rat auditory discrimination learning. We found that early auditory enrichment with music from postnatal day (PND) 14 enhanced learning ability in an auditory signal-detection task and in a sound duration-discrimination task. In parallel, a significant increase was noted in NMDA receptor subunit NR2B protein expression in the auditory cortex. Furthermore, we found that auditory enrichment with music starting from PND 28 or 56 did not influence NR2B expression in the auditory cortex. No difference was found in NR2B expression in the inferior colliculus (IC) between music-exposed and normal rats, regardless of when the auditory enrichment with music was initiated. Our findings suggest that early auditory enrichment with music influences NMDA-mediated neural plasticity, which results in enhanced auditory discrimination learning.

  14. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    Directory of Open Access Journals (Sweden)

    Scott A Stone

    Full Text Available Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well accurately encoding direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller in the future, making this system much more feasible.

  15. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    Science.gov (United States)

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including lexical, grammatical, auditory, and verbal memory measures. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.
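
    The regression step can be illustrated with ordinary least squares on synthetic data. The predictor names mirror those reported above, but every value and weight here is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 39   # sample size of the study

# Columns: age at testing, phoneme perception, auditory word closure (synthetic scores).
X = rng.normal(size=(n, 3))
true_weights = np.array([0.5, 1.2, 0.8])
lexical_outcome = X @ true_weights + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept column.
X1 = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(X1, lexical_outcome, rcond=None)
print("intercept and weights:", np.round(coef, 2))
```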

  16. Auditory Hallucinations Nomenclature and Classification

    NARCIS (Netherlands)

    Blom, Jan Dirk; Sommer, Iris E. C.

    Introduction: The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an

  17. Auditory Risk of Air Rifles

    Science.gov (United States)

    Lankford, James E.; Meinke, Deanna K.; Flamme, Gregory A.; Finan, Donald S.; Stewart, Michael; Tasko, Stephen; Murphy, William J.

    2016-01-01

    Objective: To characterize the impulse noise exposure and auditory risk for air rifle users, for both youth and adults. Design: Acoustic characteristics were examined and the auditory risk estimates were evaluated using contemporary damage-risk criteria for unprotected adult listeners and the 120-dB peak limit and LAeq75 exposure limit suggested by the World Health Organization (1999) for children. Study sample: Impulses were generated by 9 pellet air rifles and 1 BB air rifle. Results: None of the air rifles generated peak levels that exceeded the 140 dB peak limit for adults, and 8 (80%) exceeded the 120 dB peak SPL limit for youth. In general, for both adults and youth there is minimal auditory risk when shooting fewer than 100 unprotected shots with pellet air rifles. Air rifles with suppressors were less hazardous than those without suppressors, and the pellet air rifles with higher velocities were generally more hazardous than those with lower velocities. Conclusion: To minimize auditory risk, youth should utilize air rifles with an integrated suppressor and lower velocity ratings. Air rifle shooters are advised to wear hearing protection whenever engaging in shooting activities in order to gain self-efficacy and model appropriate hearing health behaviors necessary for recreational firearm use. PMID:26840923
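
    The two exposure metrics referred to above, the peak level and the equivalent continuous level (LAeq), can be sketched as follows on a synthetic impulse. A-weighting is omitted for brevity, so the Leq computed here is unweighted; the impulse itself is fabricated, not a measured air-rifle waveform:

```python
import numpy as np

P_REF = 20e-6   # reference pressure in air: 20 micropascals

def peak_spl_db(pressure):
    """Peak sound pressure level (dB re 20 uPa) of a pressure waveform in Pa."""
    return 20.0 * np.log10(np.max(np.abs(pressure)) / P_REF)

def leq_db(pressure, fs, duration_s):
    """Equivalent continuous level over a stated duration (A-weighting omitted)."""
    energy = np.sum(pressure ** 2) / fs            # integral of p^2 dt, in Pa^2 * s
    return 10.0 * np.log10(energy / (duration_s * P_REF ** 2))

# A hypothetical 2 ms impulse with exponential decay:
fs = 96000
t = np.arange(int(0.002 * fs)) / fs
impulse = 200.0 * np.exp(-t / 0.0004) * np.sin(2 * np.pi * 3000 * t)

print(peak_spl_db(impulse))       # peak level of the impulse
print(leq_db(impulse, fs, 1.0))   # impulse energy averaged over 1 second
```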

  18. Molecular approach of auditory neuropathy.

    Science.gov (United States)

    Silva, Magali Aparecida Orate Menezes da; Piatto, Vânia Belintani; Maniglia, Jose Victor

    2015-01-01

    Mutations in the otoferlin gene are responsible for auditory neuropathy. To investigate the prevalence of mutations in the otoferlin gene in patients with and without auditory neuropathy. This original cross-sectional case study evaluated 16 index cases with auditory neuropathy, 13 patients with sensorineural hearing loss, and 20 normal-hearing subjects. DNA was extracted from peripheral blood leukocytes, and the otoferlin gene sites were amplified by polymerase chain reaction/restriction fragment length polymorphism. The 16 index cases included nine (56%) females and seven (44%) males. The 13 deaf patients comprised seven (54%) males and six (46%) females. Among the 20 normal-hearing subjects, 13 (65%) were males and seven (35%) were females. Thirteen (81%) index cases had the wild-type genotype (AA) and three (19%) had the heterozygous AG genotype for the IVS8-2A-G (intron 8) mutation. The 5473C-G (exon 44) mutation was found in a heterozygous state (CG) in seven (44%) index cases and nine (56%) had the wild-type allele (CC). Of these mutants, two (25%) were compound heterozygotes for the mutations found in intron 8 and exon 44. None of the patients with sensorineural hearing loss or the normal-hearing individuals had mutations (100%). There are differences at the molecular level in patients with and without auditory neuropathy. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  19. Nigel: A Severe Auditory Dyslexic

    Science.gov (United States)

    Cotterell, Gill

    1976-01-01

    Reported is the case study of a boy with severe auditory dyslexia who received remedial treatment from the age of four and progressed through courses at a technical college and a 3-year apprenticeship course in mechanics by the age of eighteen. (IM)

  1. Auditory Processing Disorder in Children

    Science.gov (United States)

  2. Auditory post-processing in a passive listening task is deficient in Alzheimer's disease.

    Science.gov (United States)

    Bender, Stephan; Bluschke, Annet; Dippel, Gabriel; Rupp, André; Weisbrod, Matthias; Thomas, Christine

    2014-01-01

    To investigate whether automatic auditory post-processing is deficient in patients with Alzheimer's disease and is related to sensory gating. Event-related potentials were recorded during a passive listening task to examine the automatic transient storage of auditory information (short click pairs). Patients with Alzheimer's disease were compared to a healthy age-matched control group. A young healthy control group was included to assess effects of physiological aging. A bilateral frontal negativity in combination with deep temporal positivity occurring 500 ms after stimulus offset was reduced in patients with Alzheimer's disease, but was unaffected by physiological aging. Its amplitude correlated with short-term memory capacity, but was independent of sensory gating in healthy elderly controls. Source analysis revealed a dipole pair in the anterior temporal lobes. Results suggest that auditory post-processing is deficient in Alzheimer's disease, but is not typically related to sensory gating. The deficit could neither be explained by physiological aging nor by problems in earlier stages of auditory perception. Correlations with short-term memory capacity and executive control tasks suggested an association with memory encoding and/or overall cognitive control deficits. An auditory late negative wave could represent a marker of auditory working memory encoding deficits in Alzheimer's disease. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  3. Turning down the noise: the benefit of musical training on the aging auditory brain.

    Science.gov (United States)

    Alain, Claude; Zendel, Benjamin Rich; Hutka, Stefanie; Bidelman, Gavin M

    2014-02-01

    Age-related decline in hearing abilities is a ubiquitous part of aging, and commonly impacts speech understanding, especially when there are competing sound sources. While such age effects are partially due to changes within the cochlea, difficulties typically exist beyond measurable hearing loss, suggesting that central brain processes, as opposed to simple peripheral mechanisms (e.g., hearing sensitivity), play a critical role in governing hearing abilities late into life. Current training regimens aimed at improving central auditory processing abilities have had limited success in promoting listening benefits. Interestingly, recent studies suggest that in young adults, musical training positively modifies neural mechanisms, providing robust, long-lasting improvements to hearing abilities as well as to non-auditory tasks that engage cognitive control. These results offer the encouraging possibility that musical training might be used to counteract age-related changes in auditory cognition commonly observed in older adults. Here, we reviewed studies that have examined the effects of age and musical experience on auditory cognition, with an emphasis on auditory scene analysis. We infer that musical training may offer potential benefits to complex listening and might be utilized as a means to delay or even attenuate declines in auditory perception and cognition that often emerge later in life. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Using auditory steady state responses to outline the functional connectivity in the tinnitus brain.

    Directory of Open Access Journals (Sweden)

    Winfried Schlee

    Full Text Available BACKGROUND: Tinnitus is an auditory phantom perception that is most likely generated in the central nervous system. Most tinnitus research has concentrated on the auditory system. However, it was recently suggested that non-auditory structures are also involved in a global network that encodes subjective tinnitus. We tested this assumption using auditory steady state responses to entrain the tinnitus network and investigated long-range functional connectivity across various non-auditory brain regions. METHODS AND FINDINGS: Using whole-head magnetoencephalography we investigated cortical connectivity by means of phase synchronization in tinnitus subjects and healthy controls. We found evidence for a deviating pattern of long-range functional connectivity in tinnitus that was strongly correlated with individual ratings of the tinnitus percept. Phase couplings between the anterior cingulum and the right frontal lobe and phase couplings between the anterior cingulum and the right parietal lobe showed significant condition x group interactions and were correlated with the individual tinnitus distress ratings only in the tinnitus condition and not in the control conditions. CONCLUSIONS: To the best of our knowledge this is the first study to demonstrate the existence of a global tinnitus network of long-range cortical connections outside the central auditory system. This result extends the current knowledge of how tinnitus is generated in the brain. We propose that this global extent of the tinnitus network is crucial for the continuous perception of the tinnitus tone, and that a therapeutic intervention able to change this network should result in relief of tinnitus.
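
    Phase synchronization of the kind analyzed here is often quantified with a phase-locking value (PLV). The sketch below, on synthetic narrowband signals, is illustrative only and does not reproduce the study's MEG pipeline:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV between two signals (0 = no phase relation, 1 = perfectly locked).

    Both inputs are assumed to be band-limited to the frequency band of
    interest (e.g., the auditory steady-state response frequency).
    """
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * phase_diff))))

# Two synthetic 40 Hz signals with a fixed phase lag plus noise:
fs, f = 600, 40.0
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * f * t) + 0.5 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * f * t - 0.8) + 0.5 * rng.normal(size=t.size)
print(phase_locking_value(x, y))   # high, since the lag is consistent over time
```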

  5. Frontal top-down signals increase coupling of auditory low-frequency oscillations to continuous speech in human listeners.

    Science.gov (United States)

    Park, Hyojin; Ince, Robin A A; Schyns, Philippe G; Thut, Gregor; Gross, Joachim

    2015-06-15

    Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Unconscious auditory information can prime visual word processing: a process-dissociation procedure study.

    Science.gov (United States)

    Lamy, Dominique; Mudrik, Liad; Deouell, Leon Y

    2008-09-01

    Whether information perceived without awareness can affect overt performance, and whether such effects can cross sensory modalities, remains a matter of debate. Whereas influence of unconscious visual information on auditory perception has been documented, the reverse influence has not been reported. In addition, previous reports of unconscious cross-modal priming relied on procedures in which contamination of conscious processes could not be ruled out. We present the first report of unconscious cross-modal priming when the unaware prime is auditory and the test stimulus is visual. We used the process-dissociation procedure [Debner, J. A., & Jacoby, L. L. (1994). Unconscious perception: Attention, awareness and control. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 304-317] which allowed us to assess the separate contributions of conscious and unconscious perception of a degraded prime (either seen or heard) to performance on a visual fragment-completion task. Unconscious cross-modal priming (auditory prime, visual fragment) was significant and of a magnitude similar to that of unconscious within-modality priming (visual prime, visual fragment). We conclude that cross-modal integration, at least between visual and auditory information, is more symmetrical than previously shown, and does not require conscious mediation.

  7. The encoding of auditory objects in auditory cortex: insights from magnetoencephalography.

    Science.gov (United States)

    Simon, Jonathan Z

    2015-02-01

    Auditory objects, like their visual counterparts, are perceptually defined constructs, but nevertheless must arise from underlying neural circuitry. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects listening to complex auditory scenes, we review studies that demonstrate that auditory objects are indeed neurally represented in auditory cortex. The studies use neural responses obtained from different experiments in which subjects selectively listen to one of two competing auditory streams embedded in a variety of auditory scenes. The auditory streams overlap spatially and often spectrally. In particular, the studies demonstrate that selective attentional gain does not act globally on the entire auditory scene, but rather acts differentially on the separate auditory streams. This stream-based attentional gain is then used as a tool to individually analyze the different neural representations of the competing auditory streams. The neural representation of the attended stream, located in posterior auditory cortex, dominates the neural responses. Critically, when the intensities of the attended and background streams are separately varied over a wide intensity range, the neural representation of the attended speech adapts only to the intensity of that speaker, irrespective of the intensity of the background speaker. This demonstrates object-level intensity gain control in addition to the above object-level selective attentional gain. Overall, these results indicate that concurrently streaming auditory objects, even if spectrally overlapping and not resolvable at the auditory periphery, are individually neurally encoded in auditory cortex, as separate objects. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Auditory Neural Prostheses – A Window to the Future

    Directory of Open Access Journals (Sweden)

    Mohan Kameshwaran

    2015-06-01

    Full Text Available Hearing loss is one of the commonest congenital anomalies to affect children world-over. The incidence of congenital hearing loss is more pronounced in developing countries like the Indian sub-continent, especially with the problems of consanguinity. Hearing loss is a double tragedy, as it leads not only to deafness but also to language deprivation. However, hearing loss is the only truly remediable handicap, due to remarkable advances in biomedical engineering and surgical techniques. Auditory neural prostheses help to augment or restore hearing by integration of an external circuitry with the peripheral hearing apparatus and the central circuitry of the brain. A cochlear implant (CI) is a surgically implantable device that helps restore hearing in patients with severe-profound hearing loss, unresponsive to amplification by conventional hearing aids. CIs are electronic devices designed to detect mechanical sound energy and convert it into electrical signals that can be delivered to the cochlear nerve, bypassing the damaged hair cells of the cochlea. The only true prerequisite is an intact auditory nerve. The emphasis is on implantation as early as possible to maximize speech understanding and perception. Bilateral CI has significant benefits which include improved speech perception in noisy environments and improved sound localization. Presently, the indications for CI have widened, and these expanded indications for implantation are related to age, additional handicaps, residual hearing, and special etiologies of deafness. Combined electric and acoustic stimulation (EAS / hybrid) device is designed for individuals with binaural low-frequency hearing and severe-to-profound high-frequency hearing loss. Auditory brainstem implantation (ABI) is a safe and effective means of hearing rehabilitation in patients with retrocochlear disorders, such as neurofibromatosis type 2 (NF2) or congenital cochlear nerve aplasia, wherein the cochlear nerve is damaged

  9. Temporal Integration of Auditory Stimulation and Binocular Disparity Signals

    Directory of Open Access Journals (Sweden)

    Marina Zannoli

    2011-10-01

    Full Text Available Several studies using visual objects defined by luminance have reported that the auditory event must be presented 30 to 40 ms after the visual stimulus to perceive audiovisual synchrony. In the present study, we used visual objects defined only by their binocular disparity. We measured the optimal latency between visual and auditory stimuli for the perception of synchrony using a method introduced by Moutoussis & Zeki (1997). Visual stimuli were defined either by luminance and disparity or by disparity only. They moved either back and forth between 6 and 12 arcmin or from left to right at a constant disparity of 9 arcmin. This visual modulation was presented together with an amplitude-modulated 500 Hz tone. Both modulations were sinusoidal (frequency: 0.7 Hz). We found no difference between 2D and 3D motion for luminance stimuli: a 40 ms auditory lag was necessary for perceived synchrony. Surprisingly, even though stereopsis is often thought to be slow, we found a similar optimal latency in the disparity 3D motion condition (55 ms). However, when participants had to judge simultaneity for disparity 2D motion stimuli, it led to larger latencies (170 ms), suggesting that stereo motion detectors are poorly suited to track 2D motion.

  10. Decoding sound level in the marmoset primary auditory cortex.

    Science.gov (United States)

    Sun, Wensheng; Marongelli, Ellisha N; Watkins, Paul V; Barbour, Dennis L

    2017-10-01

    Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than alternative to, monotonic neurons. NEW & NOTEWORTHY: Neurons with nonmonotonic rate-level functions are unique to the central auditory system. These level-tuned neurons have been proposed to account for invariant sound perception across sound levels. Through systematic simulations based on real neuron responses, this study shows that neuron populations perform sound encoding optimally when containing both monotonic and nonmonotonic neurons. The results indicate that instead of working independently, nonmonotonic neurons complement the function of monotonic neurons in different sound-encoding contexts. Copyright © 2017 the American Physiological Society.
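
    A minimal simulation of the mixed-population idea follows. The sigmoidal and Gaussian rate-level functions, their parameters, and the nearest-template decoder are simplified assumptions for illustration, not the study's actual decoding procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
levels = np.arange(0, 81, 10)           # candidate sound levels in dB SPL

def monotonic_rate(level, threshold):
    """Sigmoidal rate-level function: firing grows with level."""
    return 50.0 / (1.0 + np.exp(-(level - threshold) / 8.0))

def nonmonotonic_rate(level, best_level):
    """Gaussian rate-level function: firing peaks at a preferred level."""
    return 50.0 * np.exp(-0.5 * ((level - best_level) / 12.0) ** 2)

thresholds = rng.uniform(10, 60, 10)    # 10 monotonic neurons
best_levels = rng.uniform(10, 70, 10)   # 10 nonmonotonic neurons

def population_response(level):
    rates = np.concatenate([monotonic_rate(level, thresholds),
                            nonmonotonic_rate(level, best_levels)])
    return rng.poisson(rates)           # spike-count noise

# Mean-rate templates per level, then decode trials by nearest template.
templates = np.array([np.concatenate([monotonic_rate(L, thresholds),
                                      nonmonotonic_rate(L, best_levels)])
                      for L in levels])
correct, n_trials = 0, 500
for _ in range(n_trials):
    true_level = rng.choice(levels)
    r = population_response(true_level)
    decoded = levels[np.argmin(np.sum((templates - r) ** 2, axis=1))]
    correct += int(decoded == true_level)
print("decoding accuracy:", correct / n_trials)
```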

  11. The auditory corticocollicular system: molecular and circuit-level considerations.

    Science.gov (United States)

    Stebbings, Kevin A; Lesicko, Alexandria M H; Llano, Daniel A

    2014-08-01

    We live in a world imbued with a rich mixture of complex sounds. Successful acoustic communication requires the ability to extract meaning from those sounds, even when degraded. One strategy used by the auditory system is to harness high-level contextual cues to modulate the perception of incoming sounds. An ideal substrate for this process is the massive set of top-down projections emanating from virtually every level of the auditory system. In this review, we provide a molecular and circuit-level description of one of the largest of these pathways: the auditory corticocollicular pathway. While its functional role remains to be fully elucidated, activation of this projection system can rapidly and profoundly change the tuning of neurons in the inferior colliculus. Several specific issues are reviewed. First, we describe the complex heterogeneous anatomical organization of the corticocollicular pathway, with particular emphasis on the topography of the pathway. We also review the laminar origin of the corticocollicular projection and discuss known physiological and morphological differences between subsets of corticocollicular cells. Finally, we discuss recent findings about the molecular micro-organization of the inferior colliculus and how it interfaces with corticocollicular termination patterns. Given the assortment of molecular tools now available to the investigator, it is hoped that this review will help guide future research on the role of this pathway in normal hearing. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Relating auditory attributes of multichannel sound to preference and to physical parameters

    DEFF Research Database (Denmark)

    Choisel, Sylvain; Wickelmaier, Florian Maria

    2006-01-01

    Sound reproduced by multichannel systems is affected by many factors giving rise to various sensations, or auditory attributes. Relating specific attributes to overall preference and to physical measures of the sound field provides valuable information for a better understanding of the parameters...... within and between musical program materials, allowing for a careful generalization regarding the perception of spatial audio reproduction. Finally, a set of objective measures is derived from analysis of the sound field at the listening position in an attempt to predict the auditory attributes....

  13. Hemispheric asymmetry in the auditory facilitation effect in dual-stream rapid serial visual presentation tasks.

    Directory of Open Access Journals (Sweden)

    Yasuhiro Takeshima

    Full Text Available Even though auditory stimuli do not directly convey information related to visual stimuli, they often improve visual detection and identification performance. Auditory stimuli often alter visual perception depending on the reliability of the sensory input, with visual and auditory information reciprocally compensating for ambiguity in the other sensory domain. Perceptual processing is characterized by hemispheric asymmetry. While the left hemisphere is more involved in linguistic processing, the right hemisphere dominates spatial processing. In this context, we hypothesized that an auditory facilitation effect would be observed in the right visual field for the target identification task, and in the left visual field for the target localization task. In the present study, we conducted target identification and localization tasks using a dual-stream rapid serial visual presentation. When two targets are embedded in a rapid serial visual presentation stream, the target detection or discrimination performance for the second target is generally lower than for the first target; this deficit is well known as the attentional blink. Our results indicate that auditory stimuli improved target identification performance for the second target within the stream when visual stimuli were presented in the right, but not the left, visual field. In contrast, auditory stimuli improved second-target localization performance when visual stimuli were presented in the left visual field. An auditory facilitation effect was thus observed in perceptual processing, depending on hemispheric specialization. Our results demonstrate a dissociation between the lateral visual hemifield in which a stimulus is projected and the kind of visual judgment that may benefit from the presentation of an auditory cue.

  14. Auditory N1 reveals planning and monitoring processes during music performance.

    Science.gov (United States)

    Mathias, Brian; Gehring, William J; Palmer, Caroline

    2017-02-01

    The current study investigated the relationship between planning processes and feedback monitoring during music performance, a complex task in which performers prepare upcoming events while monitoring their sensory outcomes. Theories of action planning in auditory-motor production tasks propose that the planning of future events co-occurs with the perception of auditory feedback. This study investigated the neural correlates of planning and feedback monitoring by manipulating the contents of auditory feedback during music performance. Pianists memorized and performed melodies at a cued tempo in a synchronization-continuation task while the EEG was recorded. During performance, auditory feedback associated with single melody tones was occasionally substituted with tones corresponding to future (next), present (current), or past (previous) melody tones. Only future-oriented altered feedback disrupted behavior: Future-oriented feedback caused pianists to slow down on the subsequent tone more than past-oriented feedback, and amplitudes of the auditory N1 potential elicited by the tone immediately following the altered feedback were larger for future-oriented than for past-oriented or noncontextual (unrelated) altered feedback; larger N1 amplitudes were associated with greater slowing following altered feedback in the future condition only. Feedback-related negativities were elicited in all altered feedback conditions. In sum, behavioral and neural evidence suggests that future-oriented feedback disrupts performance more than past-oriented feedback, consistent with planning theories that posit similarity-based interference between feedback and planning contents. Neural sensory processing of auditory feedback, reflected in the N1 ERP, may serve as a marker for temporal disruption caused by altered auditory feedback in auditory-motor production tasks. © 2016 Society for Psychophysiological Research.

  15. Audiovisual Perception of Congruent and Incongruent Dutch Front Vowels

    NARCIS (Netherlands)

    Valkenier, Bea; Duyne, Jurriaan Y.; Andringa, Tjeerd C.; Başkent, Deniz

    2012-01-01

    Purpose: Auditory perception of vowels in background noise is enhanced when combined with visually perceived speech features. The objective of this study was to investigate whether the influence of visual cues on vowel perception extends to incongruent vowels, in a manner similar to the McGurk

  16. Audiovisual Perception of Congruent and Incongruent Dutch Front Vowels

    Science.gov (United States)

    Valkenier, Bea; Duyne, Jurriaan Y.; Andringa, Tjeerd C.; Baskent, Deniz

    2012-01-01

    Purpose: Auditory perception of vowels in background noise is enhanced when combined with visually perceived speech features. The objective of this study was to investigate whether the influence of visual cues on vowel perception extends to incongruent vowels, in a manner similar to the McGurk effect observed with consonants. Method:…

  17. Auditory performance and speech intelligibility of Mandarin-speaking children implanted before age 5.

    Science.gov (United States)

    Fang, Hsuan-Yeh; Ko, Hui-Chen; Wang, Nan-Mai; Fang, Tuan-Jen; Chao, Wei-Chieh; Tsou, Yung-Ting; Wu, Che-Ming

    2014-05-01

    (1) To report the auditory performance and speech intelligibility of 84 Mandarin-speaking prelingually deaf children after using cochlear implants (CIs) for one, two, three, four, and five years to understand how many years of implant use were needed for them to reach a plateau-level performance; (2) to investigate the relation between subjective rating scales and objective measurements (i.e., speech perception tests); (3) to understand the effect of age at implantation on auditory and speech development. Eighty-four children with CIs participated in this study. Their auditory performance and speech intelligibility were rated using the Categorical Auditory Performance (CAP) and the Speech Intelligibility Rating (SIR) scales, respectively. The evaluations were made before implantation and six months, one, two, three, four, and five years after implantation. At the fifth year after implantation, monosyllabic-word, easy-sentence, and difficult-sentence perception tests were administered. The median CAP score reached a plateau at category 6 after three years of implant use. The median SIR arrived at the highest level after five years of use. With five years of CI experiences, 86% of the subjects understood conversation without lip-reading, and 58% were fully intelligible to all listeners. The three speech perception tests had a moderate-to-strong correlation with the CAP and SIR scores. The children implanted before the age of three years had significantly better CAP and monosyllabic word perception test scores. Five years of follow-up are needed for assessing the post-implantation development of communication ability of prelingually deafened children. It is recommended that hearing-impaired children receive cochlear implantation at a younger age to acquire better auditory ability for developing language skills. Constant postoperative aural-verbal rehabilitation and speech and language therapy are most likely required for the patients to reach the highest level on the

  18. The auditory brainstem is a barometer of rapid auditory learning.

    Science.gov (United States)

    Skoe, E; Krizman, J; Spitzer, E; Kraus, N

    2013-07-23

    To capture patterns in the environment, neurons in the auditory brainstem rapidly alter their firing based on the statistical properties of the soundscape. How this neural sensitivity relates to behavior is unclear. We tackled this question by combining neural and behavioral measures of statistical learning, a general-purpose learning mechanism governing many complex behaviors including language acquisition. We recorded complex auditory brainstem responses (cABRs) while human adults implicitly learned to segment patterns embedded in an uninterrupted sound sequence based on their statistical characteristics. The brainstem's sensitivity to statistical structure was measured as the change in the cABR between a patterned and a pseudo-randomized sequence composed from the same set of sounds but differing in their sound-to-sound probabilities. Using this methodology, we provide the first demonstration that behavioral indices of rapid learning relate to individual differences in brainstem physiology. We found that neural sensitivity to statistical structure manifested along a continuum, from adaptation to enhancement, where cABR enhancement (patterned > pseudo-random) tracked with greater rapid statistical learning than adaptation did. Short- and long-term auditory experiences (days to years) are known to promote brainstem plasticity, and here we provide a conceptual advance by showing that the brainstem is also integral to rapid learning occurring over minutes. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
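
    The statistical structure manipulated in such paradigms is typically defined by transitional probabilities between adjacent sounds. A minimal sketch with a hypothetical triplet-structured stream (the "words" and stream length are illustrative, not the study's stimuli):

```python
import numpy as np
from collections import Counter

def transitional_probabilities(sequence):
    """Estimate P(next | current) for each adjacent pair in a sequence."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# A patterned stream built from triplet "words": transition probabilities are
# high within a word and low at word boundaries, the cue used for segmentation.
words = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I")]
rng = np.random.default_rng(3)
stream = [sound for _ in range(200) for sound in words[rng.integers(len(words))]]

tp = transitional_probabilities(stream)
print(tp[("A", "B")])   # within-word transition: 1.0
print(tp[("C", "D")])   # word-boundary transition: ~0.33
```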

  19. Conceptual priming for realistic auditory scenes and for auditory words.

    Science.gov (United States)

    Frey, Aline; Aramaki, Mitsuko; Besson, Mireille

    2014-02-01

    Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times and/or error rates were longer/higher and the N400 component was larger to ambiguous targets than to conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects, possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for building virtual environments that need to convey meaning without words. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Full Text Available Auditory dysfunction is a common clinical symptom that can induce profound effects on the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  1. Functional Changes in the Human Auditory Cortex in Ageing

    Science.gov (United States)

    Profant, Oliver; Tintěra, Jaroslav; Balogová, Zuzana; Ibrahim, Ibrahim; Jilek, Milan; Syka, Josef

    2015-01-01

    Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim to explore the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects. The elderly group with expressed presbycusis (EP) differed from the elderly group with mild presbycusis (MP) in hearing thresholds measured by pure tone audiometry, presence and amplitudes of transient otoacoustic emissions (TEOAE) and distortion-product otoacoustic emissions (DPOAE), as well as in speech understanding under noisy conditions. Acoustically evoked activity (pink noise centered around 350 Hz, 700 Hz, 1.5 kHz, 3 kHz, 8 kHz), recorded by BOLD fMRI from an area centered on Heschl's gyrus, was used to determine age-related changes at the level of the auditory cortex. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more expressed than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing. PMID:25734519

  2. Functional changes in the human auditory cortex in ageing.

    Directory of Open Access Journals (Sweden)

    Oliver Profant

    Full Text Available Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim to explore the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects. The elderly group with expressed presbycusis (EP) differed from the elderly group with mild presbycusis (MP) in hearing thresholds measured by pure tone audiometry, presence and amplitudes of transient otoacoustic emissions (TEOAE) and distortion-product otoacoustic emissions (DPOAE), as well as in speech understanding under noisy conditions. Acoustically evoked activity, recorded by BOLD fMRI from an area centered on Heschl's gyrus, was used to determine age-related changes at the level of the auditory cortex. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more expressed than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing.

  3. Dynamic Reweighting of Auditory Modulation Filters.

    Directory of Open Access Journals (Sweden)

    Eva R M Joosten

    2016-07-01

    Full Text Available Sound waveforms convey information largely via amplitude modulations (AM). A large body of experimental evidence has provided support for a modulation (bandpass) filterbank. Details of this model have varied over time, partly reflecting different experimental conditions and diverse datasets from distinct task strategies, contributing uncertainty to the bandwidth measurements and leaving important issues unresolved. We adopt here a solely data-driven measurement approach in which we first demonstrate how different models can be subsumed within a common 'cascade' framework, and then proceed to characterize the cascade via system identification analysis using a single stimulus/task specification and hence stable task rules largely unconstrained by any model or parameters. Observers were required to detect a brief change in level superimposed onto random level changes that served as AM noise; the relationship between trial-by-trial noisy fluctuations and corresponding human responses enables targeted identification of distinct cascade elements. The resulting measurements exhibit a dynamic complex picture in which human perception of auditory modulations appears adaptive in nature, evolving from an initial lowpass mode to bandpass modes (with broad tuning, Q∼1) following repeated stimulus exposure.
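
    The trial-by-trial identification logic can be illustrated with a psychophysical reverse-correlation sketch on a simulated observer. All parameters are hypothetical, and this recovers only a single weighting kernel rather than the authors' full cascade analysis:

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_bins = 5000, 20

# Each trial: random level perturbations (the AM noise) in every time bin,
# plus a brief level increment in bin 10 on half of the trials.
noise = rng.normal(size=(n_trials, n_bins))
signal_present = rng.random(n_trials) < 0.5
stimulus = noise + signal_present[:, None] * (np.arange(n_bins) == 10)

# The simulated observer weights bins near the target and adds internal noise.
weights = np.exp(-0.5 * ((np.arange(n_bins) - 10) / 2.0) ** 2)
decision = stimulus @ weights + rng.normal(scale=2.0, size=n_trials) > 2.5

# Reverse correlation: mean noise on "yes" trials minus mean noise on "no"
# trials recovers the shape of the observer's weighting profile (the kernel).
kernel = noise[decision].mean(axis=0) - noise[~decision].mean(axis=0)
print(np.round(kernel, 2))
```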

  4. Rise Time Perception and Detection of Syllable Stress in Adults with Developmental Dyslexia

    Science.gov (United States)

    Leong, Victoria; Hamalainen, Jarmo; Soltesz, Fruzsina; Goswami, Usha

    2011-01-01

    Introduction: The perception of syllable stress has not been widely studied in developmental dyslexia, despite strong evidence for auditory rhythmic perceptual difficulties. Here we investigate the hypothesis that perception of sound rise time is related to the perception of syllable stress in adults with developmental dyslexia. Methods: A…

  5. Reality of auditory verbal hallucinations.

    Science.gov (United States)

    Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-11-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real the hallucination felt to the subjects depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.

  6. A Longitudinal Evaluation of the Speech Perception Capabilities of Children Using Multichannel Tactile Vocoders.

    Science.gov (United States)

    Eilers, Rebecca E.; And Others

    1996-01-01

    Thirty children with profound hearing impairments were followed over a three-year period with a semiannual battery of speech perception tests. Testing utilized multichannel tactile vocoders in variations of tactile and/or auditory/visual conditions. Performance in the tactile plus auditory condition generally exceeded that in other conditions,…

  7. Speech Perception Results: Audition and Lipreading Enhancement.

    Science.gov (United States)

    Geers, Ann; Brenner, Chris

    1994-01-01

    This paper describes changes in speech perception performance of deaf children using cochlear implants, tactile aids, or conventional hearing aids over a three-year period. Eleven of the 13 children with cochlear implants were able to identify words on the basis of auditory consonant cues. Significant lipreading enhancement was also achieved with…

  8. Aero-tactile integration in speech perception.

    Science.gov (United States)

    Gick, Bryan; Derrick, Donald

    2009-11-26

    Visual information from a speaker's face can enhance or interfere with accurate auditory perception. This integration of information across auditory and visual streams has been observed in functional imaging studies, and has typically been attributed to the frequency and robustness with which perceivers jointly encounter event-specific information from these two modalities. Adding the tactile modality has long been considered a crucial next step in understanding multisensory integration. However, previous studies have found an influence of tactile input on speech perception only under limited circumstances, either where perceivers were aware of the task or where they had received training to establish a cross-modal mapping. Here we show that perceivers integrate naturalistic tactile information during auditory speech perception without previous training. Drawing on the observation that some speech sounds produce tiny bursts of aspiration (such as English 'p'), we applied slight, inaudible air puffs on participants' skin at one of two locations: the right hand or the neck. Syllables heard simultaneously with cutaneous air puffs were more likely to be heard as aspirated (for example, causing participants to mishear 'b' as 'p'). These results demonstrate that perceivers integrate event-relevant tactile information in auditory perception in much the same way as they do visual information.

  9. Neural and behavioral investigations into timbre perception

    Directory of Open Access Journals (Sweden)

    Stephen Michael Town

    2013-11-01

    Full Text Available Timbre is the attribute that distinguishes sounds of equal pitch, loudness and duration. It contributes to our perception and discrimination of different vowels and consonants in speech, instruments in music and environmental sounds. Here we begin by reviewing human timbre perception and the spectral and temporal acoustic features that give rise to timbre in speech, musical and environmental sounds. We also consider the perception of timbre by animals, both in the case of human vowels and non-human vocalizations. We then explore the neural representation of timbre, first within the peripheral auditory system and later at the level of the auditory cortex. We examine the neural networks that are implicated in timbre perception and the computations that may be performed in auditory cortex to enable listeners to extract information about timbre. We consider whether single neurons in auditory cortex are capable of representing spectral timbre independently of changes in other perceptual attributes and the mechanisms that may shape neural sensitivity to timbre. Finally, we conclude by outlining some of the questions that remain about the role of neural mechanisms in behavior and consider some potentially fruitful avenues for future research.

  10. The Neural Substrates of Infant Speech Perception

    Science.gov (United States)

    Homae, Fumitaka; Watanabe, Hama; Taga, Gentaro

    2014-01-01

    Infants often pay special attention to speech sounds, and they appear to detect key features of these sounds. To investigate the neural foundation of speech perception in infants, we measured cortical activation using near-infrared spectroscopy. We presented the following three types of auditory stimuli while 3-month-old infants watched a silent…

  11. Selective integration of auditory-visual looming cues by humans.

    Science.gov (United States)

    Cappe, Céline; Thut, Gregor; Romei, Vincenzo; Murray, Micah M

    2009-03-01

    An object's motion relative to an observer can confer ethologically meaningful information. Approaching or looming stimuli can signal threats/collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and more recently multisensory looming stimuli, none has investigated whether there is integration of looming signals between modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings. Maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. A new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perceptions and action is therefore supported.

  12. Pitch representations in the auditory nerve: two concurrent complex tones.

    Science.gov (United States)

    Larsen, Erik; Cedolin, Leonardo; Delgutte, Bertrand

    2008-09-01

    Pitch differences between concurrent sounds are important cues used in auditory scene analysis and also play a major role in music perception. To investigate the neural codes underlying these perceptual abilities, we recorded from single fibers in the cat auditory nerve in response to two concurrent harmonic complex tones with missing fundamentals and equal-amplitude harmonics. We investigated the efficacy of rate-place and interspike-interval codes to represent both pitches of the two tones, which had fundamental frequency (F0) ratios of 15/14 or 11/9. We relied on the principle of scaling invariance in cochlear mechanics to infer the spatiotemporal response patterns to a given stimulus from a series of measurements made in a single fiber as a function of F0. Templates created by a peripheral auditory model were used to estimate the F0s of double complex tones from the inferred distribution of firing rate along the tonotopic axis. This rate-place representation was accurate for F0s ≳ 900 Hz. Surprisingly, rate-based F0 estimates were accurate even when the two-tone mixture contained no resolved harmonics, so long as some harmonics were resolved prior to mixing. We also extended methods used previously for single complex tones to estimate the F0s of concurrent complex tones from interspike-interval distributions pooled over the tonotopic axis. The interval-based representation was accurate for F0s ≲ 900 Hz, where the two-tone mixture contained no resolved harmonics. Together, the rate-place and interval-based representations allow accurate pitch perception for concurrent sounds over the entire range of human voice and cat vocalizations.
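
    To make the interval-based representation concrete, the sketch below estimates F0 from interspike intervals pooled across fibers. It is a minimal illustration, assuming spike times in seconds; pooling only first-order intervals and simple histogram peak picking are simplifications of the study's template-based estimation.

    ```python
    import numpy as np

    def f0_from_pooled_intervals(spike_times_per_fiber, f0_min=80.0, f0_max=1000.0):
        # Pool first-order interspike intervals across fibers and take the
        # most common interval in the candidate pitch-period range as 1/F0.
        pooled = np.concatenate([np.diff(np.sort(st)) for st in spike_times_per_fiber])
        lo, hi = 1.0 / f0_max, 1.0 / f0_min
        in_range = pooled[(pooled >= lo) & (pooled <= hi)]
        counts, edges = np.histogram(in_range, bins=100, range=(lo, hi))
        peak = np.argmax(counts)
        return 1.0 / (0.5 * (edges[peak] + edges[peak + 1]))

    # Toy check: 30 fibers firing near every cycle of a 200 Hz fundamental.
    rng = np.random.default_rng(1)
    fibers = [np.arange(200) * 0.005 + rng.normal(0.0, 5e-5, 200) for _ in range(30)]
    print(f"estimated F0 = {f0_from_pooled_intervals(fibers):.0f} Hz")  # ~200 Hz
    ```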

  13. Auditory based neuropsychology in neurosurgery.

    Science.gov (United States)

    Wester, Knut

    2008-04-01

    In this article, an account is given of the author's experience with auditory-based neuropsychology in a clinical, neurosurgical setting. The patients that were included in the studies are patients with traumatic or vascular brain lesions, patients undergoing brain surgery to alleviate symptoms of Parkinson's disease, or patients harbouring an intracranial arachnoid cyst affecting the temporal or the frontal lobe. The aims of these investigations were to collect information about the location of cognitive processes in the human brain, or to disclose dyscognition in patients with an arachnoid cyst. All the patients were tested with the dichotic listening (DL) technique. In addition, the cyst patients were subjected to a number of non-auditory, standard neuropsychological tests, such as the Benton Visual Retention Test, Street Gestalt Test, Stroop Test and Trails Test A and B. The neuropsychological tests revealed that arachnoid cysts in general cause dyscognition that also includes auditory processes and, more importantly, that these cognition deficits normalise after surgical removal of the cyst. These observations constitute strong evidence in favour of surgical decompression.

  14. Differential responses of primary auditory cortex in autistic spectrum disorder with auditory hypersensitivity.

    Science.gov (United States)

    Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Goto, Tetsu; Sanefuji, Wakako; Yamamoto, Tomoka; Sakai, Saeko; Uchida, Hiroyuki; Hirata, Masayuki; Mohri, Ikuko; Yorifuji, Shiro; Taniike, Masako

    2012-01-25

    The aim of this study was to investigate the differential responses of the primary auditory cortex to auditory stimuli in autistic spectrum disorder with or without auditory hypersensitivity. Auditory-evoked field values were obtained from 18 boys (nine with and nine without auditory hypersensitivity) with autistic spectrum disorder and 12 age-matched controls. The group with hypersensitivity showed significantly more delayed M50/M100 peak latencies than the group without hypersensitivity or the controls. M50 dipole moments in the hypersensitivity group were larger than those in the other two groups [corrected]. M50/M100 peak latencies were correlated with the severity of auditory hypersensitivity; furthermore, severe hypersensitivity induced more behavioral problems. This study indicates that auditory hypersensitivity in autistic spectrum disorder is a characteristic response of the primary auditory cortex, possibly resulting from neurological immaturity or functional abnormalities in this region. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.

  15. Using neuroimaging to understand the cortical mechanisms of auditory selective attention.

    Science.gov (United States)

    Lee, Adrian K C; Larson, Eric; Maddox, Ross K; Shinn-Cunningham, Barbara G

    2014-01-01

    Over the last four decades, a range of different neuroimaging tools have been used to study human auditory attention, spanning from classic event-related potential studies using electroencephalography to modern multimodal imaging approaches (e.g., combining anatomical information based on magnetic resonance imaging with magneto- and electroencephalography). This review begins by exploring the different strengths and limitations inherent to different neuroimaging methods, and then outlines some common behavioral paradigms that have been adopted to study auditory attention. We argue that in order to design a neuroimaging experiment that produces interpretable, unambiguous results, the experimenter must not only have a deep appreciation of the imaging technique employed, but also a sophisticated understanding of perception and behavior. Only with the proper caveats in mind can one begin to infer how the cortex supports a human in solving the "cocktail party" problem. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Classification of Auditory Evoked Potentials based on the wavelet decomposition and SVM network

    Directory of Open Access Journals (Sweden)

    Michał Suchocki

    2015-12-01

    Short-latency auditory brainstem evoked potentials are the most commonly used tool for electrophysiological hearing assessment and for the diagnosis of brainstem lesions. They are characterized by successively arranged maxima as a function of time, called waves. The morphology of the waveform, in particular the latency and amplitude of each wave, allows a neurologist to make a diagnosis, which is not an easy task: it demands experience, concentration, and very good perception. To support the diagnostic process, the authors have developed an algorithm for the automated classification of auditory evoked potentials into pathological and physiological cases, with a sensitivity and specificity on an independent test group (50 cases) of 84% and 88%, respectively. Keywords: biomedical engineering, brainstem auditory evoked potentials, wavelet decomposition, support vector machine
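
    As a rough illustration of the pipeline described above, the sketch below extracts wavelet sub-band statistics and trains an SVM classifier. It assumes PyWavelets and scikit-learn; the wavelet choice ("db4"), decomposition level, feature statistics, and placeholder data are illustrative assumptions, not the authors' published settings.

    ```python
    import numpy as np
    import pywt
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import recall_score

    def wavelet_features(signal, wavelet="db4", level=4):
        # Multilevel discrete wavelet decomposition; summarize each sub-band
        # with simple statistics to obtain a fixed-length feature vector.
        feats = []
        for c in pywt.wavedec(signal, wavelet, level=level):
            feats += [np.mean(np.abs(c)), np.std(c), np.max(np.abs(c))]
        return np.array(feats)

    # Placeholder data: each row is one evoked-potential recording; 1 = pathological.
    rng = np.random.default_rng(0)
    X_raw = rng.standard_normal((100, 512))
    y = rng.integers(0, 2, 100)

    X = np.array([wavelet_features(s) for s in X_raw])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    y_hat = SVC(kernel="rbf").fit(X_tr, y_tr).predict(X_te)
    print("sensitivity:", recall_score(y_te, y_hat, pos_label=1))
    print("specificity:", recall_score(y_te, y_hat, pos_label=0))
    ```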

  17. Effect of Cognitive and Central Auditory Impairments on Satisfaction of Amplification in Hearing Impaired Older Adults

    Directory of Open Access Journals (Sweden)

    Younes Lotfi

    2012-07-01

    Objectives: Older adults show many difficulties with speech perception in noisy situations due to peripheral and central auditory impairments and cognitive dysfunction. One of the most common rehabilitative procedures for older adults with hearing loss is amplification; however, there is some evidence of dissatisfaction with amplification in older adults. Methods & Materials: We assessed the cognitive status, central auditory function, and satisfaction of 19 hearing aid users using the mini-mental state examination (MMSE), the dichotic digits test (DDT), and the Satisfaction with Amplification in Daily Life scale, respectively. Our cases had moderate sensory hearing loss in both ears. Results: Kruskal-Wallis statistics showed a significant association between cognitive impairment (MMSE scores) and satisfaction with amplification (P<0.05). Conclusion: We found central auditory processing impairments in hearing-impaired older adults with cognitive dysfunction, indicating that older adults with hearing loss may have cognitive impairments that induce dissatisfaction with amplification.
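
    For reference, the Kruskal-Wallis test compares a measure across independent groups without assuming normality. Below is a minimal sketch assuming SciPy; the satisfaction scores and MMSE-based groupings are invented for illustration, not the study's data.

    ```python
    from scipy.stats import kruskal

    # Invented satisfaction-with-amplification scores per cognitive-status group.
    normal_cognition = [5.8, 6.1, 5.5, 6.4, 5.9, 6.2]
    mild_impairment = [5.2, 4.9, 5.6, 5.0, 5.3]
    impaired = [4.1, 4.5, 3.9, 4.3, 4.6, 4.0]

    h, p = kruskal(normal_cognition, mild_impairment, impaired)
    print(f"H = {h:.2f}, p = {p:.4f}")  # p < 0.05: satisfaction differs by group
    ```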

  18. Noise differentially impacts phoneme representations in the auditory and speech motor systems.

    Science.gov (United States)

    Du, Yi; Buchsbaum, Bradley R; Grady, Cheryl L; Alain, Claude

    2014-05-13

    Although it is well accepted that the speech motor system (SMS) is activated during speech perception, the functional role of this activation remains unclear. Here we test the hypothesis that the redundant motor activation contributes to categorical speech perception under adverse listening conditions. In this functional magnetic resonance imaging study, participants identified one of four phoneme tokens (/ba/, /ma/, /da/, or /ta/) under one of six signal-to-noise ratio (SNR) levels (-12, -9, -6, -2, 8 dB, and no noise). Univariate and multivariate pattern analyses were used to determine the role of the SMS during perception of noise-impoverished phonemes. Results revealed a negative correlation between neural activity and perceptual accuracy in the left ventral premotor cortex and Broca's area. More importantly, multivoxel patterns of activity in the left ventral premotor cortex and Broca's area exhibited effective phoneme categorization when SNR ≥ -6 dB. This is in sharp contrast with phoneme discriminability in bilateral auditory cortices and sensorimotor interface areas (e.g., left posterior superior temporal gyrus), which was reliable only when the noise was extremely weak (SNR ≥ 8 dB). Our findings provide strong neuroimaging evidence for a greater robustness of the SMS than auditory regions for categorical speech perception in noise. Under adverse listening conditions, better discriminative activity in the SMS may compensate for loss of specificity in the auditory system via sensorimotor integration.
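
    The multivoxel pattern analysis logic amounts to cross-validated decoding of phoneme labels from voxel activity patterns within a region of interest. The sketch below, assuming scikit-learn, uses simulated patterns and a linear classifier; the data shapes, signal strength, and classifier choice are illustrative, not the study's actual pipeline.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    # Simulated patterns: trials x voxels for one ROI, one phoneme label per
    # trial (0-3 for /ba/, /ma/, /da/, /ta/); a weak class-specific signal
    # is added on top of noise.
    rng = np.random.default_rng(0)
    n_trials, n_voxels = 160, 200
    labels = rng.integers(0, 4, n_trials)
    patterns = rng.standard_normal((n_trials, n_voxels))
    patterns += 0.5 * np.eye(4)[labels] @ rng.standard_normal((4, n_voxels))

    # Above-chance cross-validated accuracy (chance = 0.25) is the usual
    # criterion for "effective phoneme categorization" in an ROI.
    acc = cross_val_score(LinearSVC(max_iter=5000), patterns, labels, cv=5).mean()
    print(f"decoding accuracy = {acc:.2f} (chance = 0.25)")
    ```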

  19. Music, rhythm, rise time perception and developmental dyslexia: perception of musical meter predicts reading and phonology.

    Science.gov (United States)

    Huss, Martina; Verney, John P; Fosker, Tim; Mead, Natasha; Goswami, Usha

    2011-06-01

    Rhythm organises musical events into patterns and forms, and rhythm perception in music is usually studied by using metrical tasks. Metrical structure also plays an organisational function in the phonology of language, via speech prosody, and there is evidence for rhythmic perceptual difficulties in developmental dyslexia. Here we investigate the hypothesis that the accurate perception of musical metrical structure is related to basic auditory perception of rise time, and also to phonological and literacy development in children. A battery of behavioural tasks was devised to explore relations between musical metrical perception, auditory perception of amplitude envelope structure, phonological awareness (PA) and reading in a sample of 64 typically-developing children and children with developmental dyslexia. We show that individual differences in the perception of amplitude envelope rise time are linked to musical metrical sensitivity, and that musical metrical sensitivity predicts PA and reading development, accounting for over 60% of variance in reading along with age and I.Q. Even the simplest metrical task, based on a duple metrical structure, was performed significantly more poorly by the children with dyslexia. The accurate perception of metrical structure may be critical for phonological development and consequently for the development of literacy. Difficulties in metrical processing are associated with basic auditory rise time processing difficulties, suggesting a primary sensory impairment in developmental dyslexia in tracking the lower-frequency modulations in the speech envelope. Copyright © 2010 Elsevier Srl. All rights reserved.
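
    Variance accounting of this kind is typically assessed with hierarchical regression: compare R² for a baseline model (age, IQ) against a model that adds the predictor of interest. A minimal sketch with simulated, illustrative data (not the study's), assuming scikit-learn:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Simulated data: reading depends on age, IQ, and metrical sensitivity.
    rng = np.random.default_rng(0)
    n = 64
    age = rng.normal(9.0, 1.0, n)
    iq = rng.normal(100.0, 15.0, n)
    metrical = rng.normal(0.0, 1.0, n)
    reading = 0.30 * age + 0.02 * iq + 0.80 * metrical + rng.normal(0.0, 0.5, n)

    X_base, X_full = np.c_[age, iq], np.c_[age, iq, metrical]
    r2_base = LinearRegression().fit(X_base, reading).score(X_base, reading)
    r2_full = LinearRegression().fit(X_full, reading).score(X_full, reading)
    print(f"R2 with age+IQ = {r2_base:.2f}; adding metrical sensitivity = {r2_full:.2f}")
    ```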

  20. Auditory neuropathy/Auditory dyssynchrony - An underdiagnosed condition: A case report with review of literature

    OpenAIRE

    Vinish Agarwal; Saurabh Varshney; Sampan Singh Bist; Sanjiv Bhagat; Sarita Mishra; Vivek Jha

    2012-01-01

    Auditory neuropathy (AN)/auditory dyssynchrony (AD) is a very often missed diagnosis, hence an underdiagnosed condition in clinical practice. Auditory neuropathy is a condition in which patients, on audiologic evaluation, are found to have normal outer hair cell function and abnormal neural function at the level of the eighth nerve. These patients, on clinical testing, are found to have normal otoacoustic emissions, whereas auditory brainstem response audiometry reveals the absence of neural ...

  1. Increased discriminability of authenticity from multimodal laughter is driven by auditory information.

    Science.gov (United States)

    Lavan, Nadine; McGettigan, Carolyn

    2017-10-01

    We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, visual-only) and multimodal contexts (audiovisual). In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signalling through voices and faces, in the context of spontaneous and volitional behaviour, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.

  2. Comparison of perceptual properties of auditory streaming between spectral and amplitude modulation domains.

    Science.gov (United States)

    Yamagishi, Shimpei; Otsuka, Sho; Furukawa, Shigeto; Kashino, Makio

    2017-07-01

    The two-tone sequence (ABA_), which comprises two different sounds (A and B) and a silent gap, has been used to investigate how the auditory system organizes sequential sounds depending on various stimulus conditions or brain states. Auditory streaming can be evoked by differences not only in the tone frequency ("spectral cue": ΔF_TONE; TONE condition) but also in the amplitude modulation rate ("AM cue": ΔF_AM; AM condition). The aim of the present study was to explore the relationship between the perceptual properties of auditory streaming for the TONE and AM conditions. A sequence with a long duration (400 repetitions of ABA_) was used to examine the property of the bistability of streaming. The ratio of feature differences that evoked an equivalent probability of the segregated percept was close to the ratio of the Q-values of the auditory and modulation filters, consistent with a "channeling theory" of auditory streaming. On the other hand, for values of ΔF_AM and ΔF_TONE evoking equal probabilities of the segregated percept, the number of perceptual switches was larger for the TONE condition than for the AM condition, indicating that the mechanism(s) that determine the bistability of auditory streaming are different between or sensitive to the two domains. Nevertheless, the number of switches for individual listeners was positively correlated between the spectral and AM domains. The results suggest a possibility that the neural substrates for spectral and AM processes share a common switching mechanism but differ in location and/or in the properties of neural activity or the strength of internal noise at each level. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  3. The effect of auditory memory load on intensity resolution in individuals with Parkinson's disease

    Science.gov (United States)

    Richardson, Kelly C.

    Purpose: The purpose of the current study was to investigate the effect of auditory memory load on intensity resolution in individuals with Parkinson's disease (PD) as compared to two groups of listeners without PD. Methods: Nineteen individuals with Parkinson's disease, ten healthy age- and hearing-matched adults, and ten healthy young adults were studied. All listeners participated in two intensity discrimination tasks differing in auditory memory load: a lower-memory-load 4IAX task and a higher-memory-load ABX task. Intensity discrimination performance was assessed using a bias-free measurement of signal detectability known as d' (d-prime). Listeners further participated in a continuous loudness scaling task where they were instructed to rate the loudness level of each signal intensity using a computerized 150-mm visual analogue scale. Results: Group discrimination functions indicated significantly lower intensity discrimination sensitivity (d') across tasks for the individuals with PD, as compared to the older and younger controls. No significant effect of aging on intensity discrimination was observed for either task. All three listener groups demonstrated significantly lower intensity discrimination sensitivity for the higher-memory-load ABX task than for the lower-memory-load 4IAX task. Furthermore, a significant effect of aging was identified for the loudness scaling condition. The younger controls were found to rate most stimuli along the continuum as significantly louder than the older controls and the individuals with PD. Conclusions: The persons with PD showed evidence of impaired auditory perception of intensity information, as compared to the older and younger controls. The significant effect of aging on loudness perception may indicate peripheral and/or central auditory involvement.
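
    For reference, d' in a simple yes/no framework is the z-transformed hit rate minus the z-transformed false-alarm rate. The sketch below assumes SciPy and applies a log-linear correction to avoid infinite z-scores; note that 4IAX and ABX designs use task-specific corrections that this sketch does not implement.

    ```python
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # Log-linear correction keeps both rates strictly inside (0, 1).
        h = (hits + 0.5) / (hits + misses + 1.0)
        f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(h) - norm.ppf(f)

    # Example: 42/50 hits and 10/50 false alarms give d' of about 1.8.
    print(f"d' = {d_prime(42, 8, 10, 40):.2f}")
    ```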

  4. Human Factors Military Lexicon: Auditory Displays

    National Research Council Canada - National Science Library

    Letowski, Tomasz

    2001-01-01

    .... In addition to definitions specific to auditory displays, speech communication, and audio technology, the lexicon includes several terms unique to military operational environments and human factors...

  5. Developing Auditory Measures of General Speediness

    Directory of Open Access Journals (Sweden)

    Ian T. Zajac

    2011-10-01

    This study examined whether the broad ability general speediness (Gs) could be measured via the auditory modality. Existing and purpose-developed auditory tasks that maintained the cognitive requirements of established visually presented Gs markers were completed by 96 university undergraduates. Exploratory and confirmatory factor analyses showed that the auditory tasks combined with established visual measures to define latent Gs and reaction time factors. These findings provide preliminary evidence that suggests that if auditory tasks are developed that maintain the same cognitive requirements as existing visual measures, then they are likely to index similar cognitive processes.

  6. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
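
    Bland-Altman agreement between two conditions reduces to the mean difference (bias) and the 95% limits of agreement. A minimal sketch with invented scores (not the study's data), assuming NumPy:

    ```python
    import numpy as np

    def bland_altman(scores_a, scores_b):
        # Bias is the mean difference; the 95% limits of agreement are
        # bias ± 1.96 standard deviations of the differences.
        diff = np.asarray(scores_a, float) - np.asarray(scores_b, float)
        bias, sd = diff.mean(), diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Invented memory scores for the same children in two modality conditions.
    auditory = [12, 14, 11, 15, 13, 12, 14, 13]
    visual = [10, 12, 10, 13, 12, 11, 12, 11]
    bias, (lo, hi) = bland_altman(auditory, visual)
    print(f"bias = {bias:.2f}, 95% limits of agreement = ({lo:.2f}, {hi:.2f})")
    ```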

  7. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

  8. Syllabic (~2-5 Hz) and fluctuation (~1-10 Hz) ranges in speech and auditory processing

    Science.gov (United States)

    Edwards, Erik; Chang, Edward F.

    2013-01-01

    Given recent interest in syllabic rates (~2-5 Hz) for speech processing, we review the perception of “fluctuation” range (~1-10 Hz) modulations during listening to speech and technical auditory stimuli (AM and FM tones and noises, and ripple sounds). We find evidence that the temporal modulation transfer function (TMTF) of human auditory perception is not simply low-pass in nature, but rather exhibits a peak in sensitivity in the syllabic range (~2-5 Hz). We also address human and animal neurophysiological evidence, and argue that this bandpass tuning arises at the thalamocortical level and is more associated with non-primary regions than primary regions of cortex. The bandpass rather than low-pass TMTF has implications for modeling auditory central physiology and speech processing: this implicates temporal contrast rather than simple temporal integration, with contrast enhancement for dynamic stimuli in the fluctuation range. PMID:24035819

  9. [Speech audiometry, speech perception and cognitive functions. German version].

    Science.gov (United States)

    Meister, H

    2017-03-01

    Examination of cognitive functions in the framework of speech perception has recently gained increasing scientific and clinical interest. Especially against the background of age-related hearing impairment and cognitive decline, potential new perspectives might arise in terms of better individualisation of auditory diagnosis and rehabilitation. This review addresses the relationships between speech audiometry, speech perception and cognitive functions. It presents models of speech perception, discusses associations between neuropsychological and audiometric outcomes, and describes recent efforts to take cognitive functions into account in speech audiometry.

  10. Auditory Processing Disorders (APD): a distinct clinical disorder or not?

    NARCIS (Netherlands)

    Ellen de Wit

    2015-01-01

    Presentation at the CPLOL congress, Florence. In this systematic review, six electronic databases were searched for peer-reviewed studies using the key words auditory processing, auditory diseases, central [Mesh], and auditory perceptual. Two reviewers independently assessed relevant studies by inclusion…

  11. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre…

  12. Toward a Neural Basis of Music Perception – A Review and Updated Model

    OpenAIRE

    Koelsch, Stefan

    2011-01-01

    Music perception involves acoustic analysis, auditory memory, auditory scene analysis, processing of interval relations, of musical syntax and semantics, and activation of (pre)motor representations of actions. Moreover, music perception potentially elicits emotions, thus giving rise to the modulation of emotional effector systems such as the subjective feeling system, the autonomic nervous system, the hormonal, and the immune system. Building on a previous article (Koelsch and Siebel, 2005),...

  13. Revisiting the "enigma" of musicians with dyslexia: Auditory sequencing and speech abilities.

    Science.gov (United States)

    Zuk, Jennifer; Bishop-Liebler, Paula; Ozernov-Palchik, Ola; Moore, Emma; Overy, Katie; Welch, Graham; Gaab, Nadine

    2017-04-01

    Previous research has suggested a link between musical training and auditory processing skills. Musicians have shown enhanced perception of auditory features critical to both music and speech, suggesting that this link extends beyond basic auditory processing. It remains unclear to what extent musicians who also have dyslexia show these specialized abilities, considering often-observed persistent deficits that coincide with reading impairments. The present study evaluated auditory sequencing and speech discrimination in 52 adults: musicians with dyslexia, nonmusicians with dyslexia, and typical musicians. An auditory sequencing task measuring perceptual acuity for tone sequences of increasing length was administered. Furthermore, subjects were asked to discriminate synthesized syllable continua varying in acoustic components of speech necessary for intraphonemic discrimination, which included spectral (formant frequency) and temporal (voice onset time [VOT] and amplitude envelope) features. Results indicate that musicians with dyslexia did not significantly differ from typical musicians and performed better than nonmusicians with dyslexia for auditory sequencing as well as discrimination of spectral and VOT cues within syllable continua. However, typical musicians demonstrated superior performance relative to both groups with dyslexia for discrimination of syllables varying in amplitude information. These findings suggest a distinct profile of speech processing abilities in musicians with dyslexia, with specific weaknesses in discerning amplitude cues within speech. Because these difficulties seem to remain persistent in adults with dyslexia despite musical training, this study only partly supports the potential for musical training to enhance the auditory processing skills known to be crucial for literacy in individuals with dyslexia. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Auditory-motor learning during speech production in 9-11-year-old children.

    Directory of Open Access Journals (Sweden)

    Douglas M Shiller

    BACKGROUND: Hearing ability is essential for normal speech development; however, the precise mechanisms linking auditory input and the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children. METHODOLOGY/PRINCIPAL FINDINGS: In the present study, we manipulated auditory feedback during speech production in a group of 9-11-year-old children, as well as in adults. Following a period of speech practice under conditions of altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output that was similar in magnitude to that of the adults; however, the children showed no reliable compensatory effect on their perceptual representations. CONCLUSIONS: The results indicate that 9-11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.

  15. The representation of level and loudness in the central auditory system for unilateral stimulation.

    Science.gov (United States)

    Behler, Oliver; Uppenkamp, Stefan

    2016-10-01

    Loudness is the perceptual correlate of the physical intensity of a sound. However, loudness judgments depend on a variety of other variables and can vary considerably between individual listeners. While functional magnetic resonance imaging (fMRI) has been extensively used to characterize the neural representation of physical sound intensity in the human auditory system, only few studies have also investigated brain activity in relation to individual loudness. The physiological correlate of loudness perception is not yet fully understood. The present study systematically explored the interrelation of sound pressure level, ear of entry, individual loudness judgments, and fMRI activation along different stages of the central auditory system and across hemispheres for a group of normal hearing listeners. 4-kHz-bandpass filtered noise stimuli were presented monaurally to each ear at levels from 37 to 97 dB SPL. One diotic condition and a silence condition were included as control conditions. The participants completed a categorical loudness scaling procedure with similar stimuli before auditory fMRI was performed. The relationship between brain activity, as inferred from blood oxygenation level dependent (BOLD) contrasts, and both sound level and loudness estimates were analyzed by means of functional activation maps and linear mixed effects models for various anatomically defined regions of interest in the ascending auditory pathway and in the cortex. Our findings are overall in line with the notion that fMRI activation in several regions within auditory cortex as well as in certain stages of the ascending auditory pathway might be more a direct linear reflection of perceived loudness rather than of sound pressure level. The results indicate distinct functional differences between midbrain and cortical areas as well as between specific regions within auditory cortex, suggesting a systematic hierarchy in terms of lateralization and the representation of level and…
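
    Analyses of this kind can be sketched as a linear mixed-effects model with a random intercept per listener, asking whether BOLD estimates track loudness ratings better than sound level. The data below are simulated and illustrative, assuming pandas and statsmodels:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated long-format data: 12 subjects x 7 levels (37-97 dB SPL).
    rng = np.random.default_rng(0)
    level = np.tile(np.arange(37.0, 98.0, 10.0), 12)
    subject = np.repeat(np.arange(12), 7)
    loudness = 0.4 * (level - 37.0) + rng.normal(0.0, 3.0, level.size)
    bold = 0.02 * loudness + rng.normal(0.0, 0.2, level.size)
    df = pd.DataFrame({"subject": subject, "level": level,
                       "loudness": loudness, "bold": bold})

    # Random intercept per subject; fit BOLD against each predictor in turn.
    for predictor in ("loudness", "level"):
        fit = smf.mixedlm(f"bold ~ {predictor}", df, groups=df["subject"]).fit()
        print(f"{predictor}: slope = {fit.params[predictor]:.4f}, "
              f"p = {fit.pvalues[predictor]:.3g}")
    ```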

  16. The role of event-related brain potentials in assessing central auditory processing.

    Science.gov (United States)

    Alain, Claude; Tremblay, Kelly

    2007-01-01

    The perception of complex acoustic signals such as speech and music depends on the interaction between peripheral and central auditory processing. As information travels from the cochlea to primary and associative auditory cortices, the incoming sound is subjected to increasingly more detailed and refined analysis. These various levels of analyses are thought to include low-level automatic processes that detect, discriminate and group sounds that are similar in physical attributes such as frequency, intensity, and location as well as higher-level schema-driven processes that reflect listeners' experience and knowledge of the auditory environment. In this review, we describe studies that have used event-related brain potentials in investigating the processing of complex acoustic signals (e.g., speech, music). In particular, we examine the role of hearing loss on the neural representation of sound and how cognitive factors and learning can help compensate for perceptual difficulties. The notion of auditory scene analysis is used as a conceptual framework for interpreting and studying the perception of sound.

  17. Auditory Outcomes with Hearing Rehabilitation in Children with Unilateral Hearing Loss: A Systematic Review.

    Science.gov (United States)

    Appachi, Swathi; Specht, Jessica L; Raol, Nikhila; Lieu, Judith E C; Cohen, Michael S; Dedhia, Kavita; Anne, Samantha

    2017-10-01

    Objective: Options for management of unilateral hearing loss (UHL) in children include conventional hearing aids, bone-conduction hearing devices, contralateral routing of signal (CROS) aids, and frequency-modulating (FM) systems. The objective of this study was to systematically review the current literature to characterize auditory outcomes of hearing rehabilitation options in UHL. Data Sources: PubMed, EMBASE, Medline, CINAHL, and Cochrane Library were searched from inception to January 2016. Manual searches of bibliographies were also performed. Review Methods: Studies analyzing auditory outcomes of hearing amplification in children with UHL were included. Outcome measures included functional and objective auditory results. Two independent reviewers evaluated each abstract and article. Results: Of the 249 articles identified, 12 met inclusion criteria. Seven articles solely focused on outcomes with bone-conduction hearing devices. Outcomes favored improved pure-tone averages, speech recognition thresholds, and sound localization in implanted patients. Five studies focused on FM systems, conventional hearing aids, or CROS hearing aids. Limited data are available but suggest a trend toward improvement in speech perception with hearing aids. FM systems were shown to have the most benefit for speech recognition in noise. Studies evaluating CROS hearing aids demonstrated variable outcomes. Conclusions: Data evaluating functional and objective auditory measures following hearing amplification in children with UHL are limited. Most studies do suggest improvement in speech perception, speech recognition in noise, and sound localization with a hearing rehabilitation device.

  18. Broadened population-level frequency tuning in the auditory cortex of tinnitus patients.

    Science.gov (United States)

    Sekiya, Kenichi; Takahashi, Mariko; Murakami, Shingo; Kakigi, Ryusuke; Okamoto, Hidehiko

    2017-03-01

    Tinnitus is a phantom auditory perception without an external sound source and is one of the most common public health concerns that impair the quality of life of many individuals. However, its neural mechanisms remain unclear. We herein examined population-level frequency tuning in the auditory cortex of unilateral tinnitus patients with similar hearing levels in both ears using magnetoencephalography. We compared auditory-evoked neural activities elicited by a stimulation to the tinnitus and nontinnitus ears. Objective magnetoencephalographic data suggested that population-level frequency tuning corresponding to the tinnitus ear was significantly broader than that corresponding to the nontinnitus ear in the human auditory cortex. The results obtained support the hypothesis that pathological alterations in inhibitory neural networks play an important role in the perception of subjective tinnitus. NEW & NOTEWORTHY: Although subjective tinnitus is one of the most common public health concerns that impair the quality of life of many individuals, no standard treatment or objective diagnostic method currently exists. We herein revealed that population-level frequency tuning was significantly broader in the tinnitus ear than in the nontinnitus ear. The results of the present study provide an insight into the development of an objective diagnostic method for subjective tinnitus. Copyright © 2017 the American Physiological Society.

  19. Impairment in predictive processes during auditory mismatch negativity in ScZ: Evidence from event-related fields

    NARCIS (Netherlands)

    Sauer, Andreas; Zeev-Wolf, Maor; Grent-'t-Jong, Tineke; Recasens, Marc; Wacongne, C.; Wibral, Michael; Helbling, Saskia; Peled, Abraham; Grinshpoon, Alexander; Singer, Wolf; Goldstein, Abraham; Uhlhaas, Peter J

    2017-01-01

    Patients with schizophrenia (ScZ) show pronounced dysfunctions in auditory perception but the underlying mechanisms as well as the localization of the deficit remain unclear. To examine these questions, the current study examined whether alterations in the neuromagnetic mismatch negativity (MMNm) in…

  20. A European Perspective on Auditory Processing Disorder-Current Knowledge and Future Research Focus

    Science.gov (United States)

    Iliadou, Vasiliki (Vivian); Ptok, Martin; Grech, Helen; Pedersen, Ellen Raben; Brechmann, André; Deggouj, Naïma; Kiese-Himmel, Christiane; Śliwińska-Kowalska, Mariola; Nickisch, Andreas; Demanez, Laurent; Veuillet, Evelyne; Thai-Van, Hung; Sirimanna, Tony; Callimachou, Marina; Santarelli, Rosamaria; Kuske, Sandra; Barajas, Jose; Hedjever, Mladen; Konukseven, Ozlem; Veraguth, Dorothy; Stokkereit Mattsson, Tone; Martins, Jorge Humberto; Bamiou, Doris-Eva

    2017-01-01

    Current notions of “hearing impairment,” as reflected in clinical audiological practice, do not acknowledge the needs of individuals who have normal hearing pure tone sensitivity but who experience auditory processing difficulties in everyday life that are indexed by reduced performance in other more sophisticated audiometric tests such as speech audiometry in noise or complex non-speech sound perception. This disorder, defined as “Auditory Processing Disorder” (APD) or “Central Auditory Processing Disorder” is classified in the current tenth version of the International Classification of diseases as H93.25 and in the forthcoming beta eleventh version. APDs may have detrimental effects on the affected individual, with low esteem, anxiety, and depression, and symptoms may remain into adulthood. These disorders may interfere with learning per se and with communication, social, emotional, and academic-work aspects of life. The objective of the present paper is to define a baseline European APD consensus formulated by experienced clinicians and researchers in this specific field of human auditory science. A secondary aim is to identify issues that future research needs to address in order to further clarify the nature of APD and thus assist in optimum diagnosis and evidence-based management. This European consensus presents the main symptoms, conditions, and specific medical history elements that should lead to auditory processing evaluation. Consensus on definition of the disorder, optimum diagnostic pathway, and appropriate management are highlighted alongside a perspective on future research focus.

  1. Hearing an illusory vowel in noise: suppression of auditory cortical activity.

    Science.gov (United States)

    Riecke, Lars; Vanbussel, Mieke; Hausfeld, Lars; Başkent, Deniz; Formisano, Elia; Esposito, Fabrizio

    2012-06-06

    Human hearing is constructive. For example, when a voice is partially replaced by an extraneous sound (e.g., on the telephone due to a transmission problem), the auditory system may restore the missing portion so that the voice can be perceived as continuous (Miller and Licklider, 1950; for review, see Bregman, 1990; Warren, 1999). The neural mechanisms underlying this continuity illusion have been studied mostly with schematic stimuli (e.g., simple tones) and are still a matter of debate (for review, see Petkov and Sutter, 2011). The goal of the present study was to elucidate how these mechanisms operate under more natural conditions. Using psychophysics and electroencephalography (EEG), we assessed simultaneously the perceived continuity of a human vowel sound through interrupting noise and the concurrent neural activity. We found that vowel continuity illusions were accompanied by a suppression of the 4 Hz EEG power in auditory cortex (AC) that was evoked by the vowel interruption. This suppression was stronger than the suppression accompanying continuity illusions of a simple tone. Finally, continuity perception and 4 Hz power depended on the intactness of the sound that preceded the vowel (i.e., the auditory context). These findings show that a natural sound may be restored during noise due to the suppression of 4 Hz AC activity evoked early during the noise. This mechanism may attenuate sudden pitch changes, adapt the resistance of the auditory system to extraneous sounds across auditory scenes, and provide a useful model for assisted hearing devices.
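
    The 4 Hz power measure can be illustrated with a standard Welch spectral estimate. A minimal sketch assuming SciPy, with a synthetic trace standing in for the recorded EEG:

    ```python
    import numpy as np
    from scipy.signal import welch

    def band_power(x, fs, f_lo=3.0, f_hi=5.0):
        # Integrate Welch's PSD estimate over the band around 4 Hz.
        freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))
        band = (freqs >= f_lo) & (freqs <= f_hi)
        return psd[band].sum() * (freqs[1] - freqs[0])

    # Synthetic 10 s trace: a 4 Hz component buried in broadband noise.
    fs = 250.0
    t = np.arange(0.0, 10.0, 1.0 / fs)
    rng = np.random.default_rng(0)
    eeg = 2.0 * np.sin(2.0 * np.pi * 4.0 * t) + rng.standard_normal(t.size)
    print(f"3-5 Hz power = {band_power(eeg, fs):.2f}")
    ```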

  2. Synchronization to auditory and visual rhythms in hearing and deaf individuals

    Science.gov (United States)

    Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen

    2014-01-01

    A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395

  3. A European Perspective on Auditory Processing Disorder-Current Knowledge and Future Research Focus

    Directory of Open Access Journals (Sweden)

    Vasiliki (Vivian Iliadou

    2017-11-01

    Current notions of “hearing impairment,” as reflected in clinical audiological practice, do not acknowledge the needs of individuals who have normal hearing pure tone sensitivity but who experience auditory processing difficulties in everyday life that are indexed by reduced performance in other more sophisticated audiometric tests such as speech audiometry in noise or complex non-speech sound perception. This disorder, defined as “Auditory Processing Disorder” (APD) or “Central Auditory Processing Disorder” is classified in the current tenth version of the International Classification of diseases as H93.25 and in the forthcoming beta eleventh version. APDs may have detrimental effects on the affected individual, with low esteem, anxiety, and depression, and symptoms may remain into adulthood. These disorders may interfere with learning per se and with communication, social, emotional, and academic-work aspects of life. The objective of the present paper is to define a baseline European APD consensus formulated by experienced clinicians and researchers in this specific field of human auditory science. A secondary aim is to identify issues that future research needs to address in order to further clarify the nature of APD and thus assist in optimum diagnosis and evidence-based management. This European consensus presents the main symptoms, conditions, and specific medical history elements that should lead to auditory processing evaluation. Consensus on definition of the disorder, optimum diagnostic pathway, and appropriate management are highlighted alongside a perspective on future research focus.

  4. The effects of voluntary movements on auditory-haptic and haptic-haptic temporal order judgments.

    Science.gov (United States)

    Frissen, Ilja; Ziat, Mounia; Campion, Gianni; Hayward, Vincent; Guastavino, Catherine

    2012-10-01

    In two experiments we investigated the effects of voluntary movements on temporal haptic perception. Measures of sensitivity (JND) and temporal alignment (PSS) were obtained from temporal order judgments made on intermodal auditory-haptic (Experiment 1) or intramodal haptic (Experiment 2) stimulus pairs under three movement conditions. In the baseline, static condition, the arm of the participants remained stationary. In the passive condition, the arm was displaced by a servo-controlled motorized device. In the active condition, the participants moved voluntarily. The auditory stimulus was a short, 500 Hz tone presented over headphones and the haptic stimulus was a brief suprathreshold force pulse applied to the tip of the index finger orthogonally to the finger movement. Active movement did not significantly affect discrimination sensitivity on the auditory-haptic stimulus pairs, whereas it significantly improved sensitivity in the case of the haptic stimulus pair, demonstrating a key role for motor command information in temporal sensitivity in the haptic system. Points of subjective simultaneity were by-and-large coincident with physical simultaneity, with one striking exception in the passive condition with the auditory-haptic stimulus pair. In the latter case, the haptic stimulus had to be presented 45 ms before the auditory stimulus in order to obtain subjective simultaneity. A model is proposed to explain the discrimination performance. Copyright © 2012 Elsevier B.V. All rights reserved.
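
    JND and PSS are typically extracted by fitting a cumulative Gaussian to the proportion of "auditory first" responses across stimulus onset asynchronies (SOAs): the PSS is the 50% point and the fitted spread indexes the JND. A minimal sketch with invented response proportions, assuming SciPy:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def psychometric(soa, mu, sigma):
        # Cumulative Gaussian: mu is the PSS, sigma indexes the JND.
        return norm.cdf(soa, loc=mu, scale=sigma)

    # Invented data: SOA in ms and the proportion of "auditory first"
    # responses at each SOA.
    soa = np.array([-120.0, -80.0, -40.0, 0.0, 40.0, 80.0, 120.0])
    p_auditory_first = np.array([0.05, 0.15, 0.30, 0.55, 0.80, 0.92, 0.97])

    (mu, sigma), _ = curve_fit(psychometric, soa, p_auditory_first, p0=(0.0, 50.0))
    print(f"PSS = {mu:.1f} ms, JND (sigma) = {sigma:.1f} ms")
    ```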

  5. Age-related dissociation of sensory and decision-based auditory motion processing

    Directory of Open Access Journals (Sweden)

    Alexandra Annemarie Ludwig

    2012-03-01

    Studies on the maturation of auditory motion processing in children have yielded inconsistent reports. The present study combines subjective and objective measurements to investigate how the auditory perceptual abilities of children change during development and whether these changes are paralleled by changes in the event-related brain potential (ERP). We employed the mismatch negativity (MMN) to determine maturational changes in the discrimination of interaural time differences (ITDs) that generate lateralized moving auditory percepts. MMNs were elicited in children, teenagers, and adults, using a small and a large ITD at stimulus offset with respect to each subject's discrimination threshold. In adults and teenagers, large deviants elicited prominent MMNs, whereas small deviants at the behavioral threshold elicited only a marginal or no MMN. In contrast, pronounced MMNs for both deviant sizes were found in children. Behaviourally, however, most of the children showed higher discrimination thresholds than teens and adults. Although automatic ITD detection is functional, active discrimination is still limited in children. The lack of MMN deviance dependency in children suggests that, unlike in teenagers and adults, neural signatures of automatic auditory motion processing do not mirror discrimination abilities. These findings advance our understanding of children's central auditory development.

  6. Auditory Cortical Maturation in a Child with Cochlear Implant: Analysis of Electrophysiological and Behavioral Measures

    Science.gov (United States)

    Silva, Liliane Aparecida Fagundes; Couto, Maria Inês Vieira; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; de Carvalho, Ana Claudia Martinho; Matas, Carla Gentile

    2015-01-01

    The purpose of this study was to longitudinally assess the behavioral and electrophysiological hearing changes of a girl enrolled in a cochlear implant (CI) program, who had bilateral profound sensorineural hearing loss and underwent cochlear implantation with electrode activation at 21 months of age. She was evaluated using the P1 component of the Long Latency Auditory Evoked Potential (LLAEP); speech perception tests of the Glendonald Auditory Screening Procedure (GASP); the Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS); and the Meaningful Use of Speech Scales (MUSS). The study was conducted prior to activation and after three, nine, and 18 months of cochlear implant activation. The results of the LLAEP were compared with data from a hearing child matched by gender and chronological age. The results of the LLAEP of the child with cochlear implant showed gradual decrease in latency of the P1 component after auditory stimulation (172 ms–134 ms). In the GASP, IT-MAIS, and MUSS, gradual development of listening skills and oral language was observed. The values of the LLAEP of the hearing child were as expected for chronological age (132 ms–128 ms). The use of different clinical instruments allows a better understanding of the auditory habilitation and rehabilitation process via CI. PMID:26881163

  7. Using binocular rivalry to tag foreground sounds: Towards an objective visual measure for auditory multistability.

    Science.gov (United States)

    Einhäuser, Wolfgang; Thomassen, Sabine; Bendixen, Alexandra

    2017-01-01

    In binocular rivalry, paradigms have been proposed for unobtrusive moment-by-moment readout of observers' perceptual experience ("no-report paradigms"). Here, we take a first step to extend this concept to auditory multistability. Observers continuously reported which of two concurrent tone sequences they perceived in the foreground: high-pitch (1008 Hz) or low-pitch (400 Hz) tones. Interstimulus intervals were either fixed per sequence (Experiments 1 and 2) or random with tones alternating (Experiment 3). A horizontally drifting grating was presented to each eye; to induce binocular rivalry, gratings had distinct colors and motion directions. To associate each grating with one tone sequence, a pattern on the grating jumped vertically whenever the respective tone occurred. We found that the direction of the optokinetic nystagmus (OKN), induced by the visually dominant grating, could be used to decode the tone (high/low) that was perceived in the foreground well above chance. This OKN-based readout improved after observers had gained experience with the auditory task (Experiments 1 and 2) and for simpler auditory tasks (Experiment 3). We found no evidence that the visual stimulus affected auditory multistability. Although decoding performance is still far from perfect, our paradigm may eventually provide a continuous estimate of the currently dominant percept in auditory multistability.

  8. Auditory Cortical Maturation in a Child with Cochlear Implant: Analysis of Electrophysiological and Behavioral Measures

    Directory of Open Access Journals (Sweden)

    Liliane Aparecida Fagundes Silva

    2015-01-01

    The purpose of this study was to longitudinally assess the behavioral and electrophysiological hearing changes of a girl enrolled in a cochlear implant (CI) program, who had bilateral profound sensorineural hearing loss and underwent cochlear implantation with electrode activation at 21 months of age. She was evaluated using the P1 component of the Long Latency Auditory Evoked Potential (LLAEP); speech perception tests of the Glendonald Auditory Screening Procedure (GASP); the Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS); and the Meaningful Use of Speech Scales (MUSS). The study was conducted prior to activation and after three, nine, and 18 months of cochlear implant activation. The results of the LLAEP were compared with data from a hearing child matched by gender and chronological age. The results of the LLAEP of the child with cochlear implant showed gradual decrease in latency of the P1 component after auditory stimulation (172 ms–134 ms). In the GASP, IT-MAIS, and MUSS, gradual development of listening skills and oral language was observed. The values of the LLAEP of the hearing child were as expected for chronological age (132 ms–128 ms). The use of different clinical instruments allows a better understanding of the auditory habilitation and rehabilitation process via CI.

  9. Psychophysical and neural correlates of noise-induced tinnitus in animals: Intra- and inter-auditory and non-auditory brain structure studies.

    Science.gov (United States)

    Zhang, Jinsheng; Luo, Hao; Pace, Edward; Li, Liang; Liu, Bin

    2016-04-01

    Tinnitus, a ringing in the ear or head without an external sound source, is a prevalent health problem. It is often associated with a number of limbic-associated disorders such as anxiety, sleep disturbance, and emotional distress. Thus, to investigate tinnitus, it is important to consider both auditory and non-auditory brain structures. This paper summarizes the psychophysical, immunocytochemical and electrophysiological evidence found in rats or hamsters with behavioral evidence of tinnitus. Behaviorally, we tested for tinnitus using a conditioned suppression/avoidance paradigm, gap detection acoustic reflex behavioral paradigm, and our newly developed conditioned licking suppression paradigm. Our new tinnitus behavioral paradigm requires relatively short baseline training, examines frequency specification of tinnitus perception, and achieves sensitive tinnitus testing at an individual level. To test for tinnitus-related anxiety and cognitive impairment, we used the elevated plus maze and Morris water maze. Our results showed that not all animals with tinnitus demonstrate anxiety and cognitive impairment. Immunocytochemically, we found that animals with tinnitus manifested increased Fos-like immunoreactivity (FLI) in both auditory and non-auditory structures. The manner in which FLI appeared suggests that lower brainstem structures may be involved in acute tinnitus whereas the midbrain and cortex are involved in more chronic tinnitus. Meanwhile, animals with tinnitus also manifested increased FLI in non-auditory brain structures that are involved in autonomic reactions, stress, arousal and attention. Electrophysiologically, we found that rats with tinnitus developed increased spontaneous firing in the auditory cortex (AC) and amygdala (AMG), as well as intra- and inter-AC and AMG neurosynchrony, which demonstrate that tinnitus may be actively produced and maintained by the interactions between the AC and AMG. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-