WorldWideScience

Sample records for auditory spatial cues

  1. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    Science.gov (United States)

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-11-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli.
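
    The headphone reproduction technique described above amounts, in signal terms, to convolving a source signal with a pair of binaural room impulse responses (BRIRs) measured at the listener's ear canals. A minimal Python sketch of this idea, with hypothetical impulse-response arrays standing in for measured BRIRs:

        import numpy as np
        from scipy.signal import fftconvolve

        def render_binaural(source, brir_left, brir_right):
            # Convolving a mono source with left/right binaural room impulse
            # responses approximates the ear signals the real source would have
            # produced, which is what supports externalized headphone playback.
            return np.stack([fftconvolve(source, brir_left),
                             fftconvolve(source, brir_right)])

        # Hypothetical example: 1 s of noise rendered with a near-anechoic stub;
        # a reverberation mismatch between recording and playback rooms is the
        # manipulation studied in this record.
        fs = 48000
        source = np.random.default_rng(0).standard_normal(fs)
        brir_dry = np.zeros(2400)
        brir_dry[0] = 1.0
        ear_signals = render_binaural(source, brir_dry, brir_dry)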

  2. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    DEFF Research Database (Denmark)

    Gil Carvajal, Juan Camilo; Cubick, Jens; Santurette, Sébastien

    2016-01-01

    features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested...... whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings...

  3. Spatial Cues Provided by Sound Improve Postural Stabilization: Evidence of a Spatial Auditory Map?

    Science.gov (United States)

    Gandemer, Lennie; Parseihian, Gaetan; Kronland-Martinet, Richard; Bourdin, Christophe

    2017-01-01

    It has long been suggested that sound plays a role in the postural control process. Few studies, however, have explored sound and posture interactions. The present paper focuses on the specific impact of audition on posture, seeking to determine the attributes of sound that may be useful for postural purposes. We investigated the postural sway of young, healthy blindfolded subjects in two experiments involving different static auditory environments. In the first experiment, we compared the effect on sway of a simple environment built from three static sound sources in two different rooms: a normal vs. an anechoic room. In the second experiment, the same auditory environment was enriched in various ways, including the ambisonics synthesis of an immersive environment, and subjects stood on two different surfaces: a foam vs. a normal surface. The results of both experiments suggest that the spatial cues provided by sound can be used to improve postural stability. The richer the auditory environment, the better this stabilization. We interpret these results by invoking the “spatial hearing map” theory: listeners build their own mental representation of their surrounding environment, which provides them with spatial landmarks that help them to better stabilize. PMID:28694770

  4. Spatial Cues Provided by Sound Improve Postural Stabilization: Evidence of a Spatial Auditory Map?

    Directory of Open Access Journals (Sweden)

    Lennie Gandemer

    2017-06-01

    Full Text Available It has long been suggested that sound plays a role in the postural control process. Few studies, however, have explored sound and posture interactions. The present paper focuses on the specific impact of audition on posture, seeking to determine the attributes of sound that may be useful for postural purposes. We investigated the postural sway of young, healthy blindfolded subjects in two experiments involving different static auditory environments. In the first experiment, we compared the effect on sway of a simple environment built from three static sound sources in two different rooms: a normal vs. an anechoic room. In the second experiment, the same auditory environment was enriched in various ways, including the ambisonics synthesis of an immersive environment, and subjects stood on two different surfaces: a foam vs. a normal surface. The results of both experiments suggest that the spatial cues provided by sound can be used to improve postural stability. The richer the auditory environment, the better this stabilization. We interpret these results by invoking the “spatial hearing map” theory: listeners build their own mental representation of their surrounding environment, which provides them with spatial landmarks that help them to better stabilize.

  5. Atypical brain responses to auditory spatial cues in adults with autism spectrum disorder.

    Science.gov (United States)

    Lodhia, Veema; Hautus, Michael J; Johnson, Blake W; Brock, Jon

    2017-09-09

    The auditory processing atypicalities experienced by many individuals with autism spectrum disorder might be understood in terms of difficulties parsing the sound energy arriving at the ears into discrete auditory 'objects'. Here, we asked whether autistic adults are able to make use of two important spatial cues to auditory object formation - the relative timing and amplitude of sound energy at the left and right ears. Using electroencephalography, we measured the brain responses of 15 autistic adults and 15 age- and verbal-IQ-matched control participants as they listened to dichotic pitch stimuli - white noise stimuli in which interaural timing or amplitude differences applied to a narrow frequency band of noise typically lead to the perception of a pitch sound that is spatially segregated from the noise. Responses were contrasted with those to stimuli in which timing and amplitude cues were removed. Consistent with our previous studies, autistic adults failed to show a significant object-related negativity (ORN) for timing-based pitch, although their ORN was not significantly smaller than that of the control group. Autistic participants did show an ORN to amplitude cues, indicating that they do not experience a general impairment in auditory object formation. However, their P400 response - thought to indicate the later attention-dependent aspects of auditory object formation - was missing. These findings provide further evidence of atypical auditory object processing in autism with potential implications for understanding the perceptual and communication difficulties associated with the condition. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
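
    The dichotic pitch stimuli described here can be generated by applying an interaural delay (or level difference) to only a narrow frequency band of otherwise identical noise at the two ears. A sketch of the timing-based variant, with assumed band edges and ITD value:

        import numpy as np

        fs, dur = 44100, 1.0
        n = int(fs * dur)
        noise = np.random.default_rng(0).standard_normal(n)

        # Apply an interaural time difference (ITD) to the 580-620 Hz band only,
        # via a phase shift in the frequency domain (band and ITD are assumed).
        spec = np.fft.rfft(noise)
        freqs = np.fft.rfftfreq(n, 1 / fs)
        band = (freqs >= 580) & (freqs <= 620)
        itd = 500e-6  # 500 microseconds
        spec_right = spec.copy()
        spec_right[band] *= np.exp(-2j * np.pi * freqs[band] * itd)
        left, right = noise, np.fft.irfft(spec_right, n)
        # Listeners typically hear a faint pitch near 600 Hz that is spatially
        # segregated from the broadband noise background.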

  6. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    Directory of Open Access Journals (Sweden)

    Alexandre Lehmann

    Full Text Available Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, focusing on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore, they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.

  7. Potential for using visual, auditory, and olfactory cues to manage foraging behaviour and spatial distribution of rangeland livestock

    Science.gov (United States)

    This paper reviews the literature and reports on the current state of knowledge regarding the potential for managers to use visual (VC), auditory (AC), and olfactory (OC) cues to manage foraging behavior and spatial distribution of rangeland livestock. We present evidence that free-ranging livestock...

  8. ERP Indications for Sustained and Transient Auditory Spatial Attention with Different Lateralization Cues

    Science.gov (United States)

    Widmann, Andreas; Schröger, Erich

    The presented study was designed to investigate ERP effects of auditory spatial attention in a sustained attention condition (where the to-be-attended location is defined in a blockwise manner) and in a transient attention condition (where the to-be-attended location is defined in a trial-by-trial manner). Lateralization in the azimuth plane was manipulated (a) via monaural presentation of left- and right-ear sounds, (b) via interaural intensity differences, (c) via interaural time differences, (d) via an artificial-head recording, and (e) via free-field stimulation. Ten participants were presented with frequent Nogo- and infrequent Go-stimuli. In one half of the experiment, participants were instructed to press a button if they detected a Go-stimulus at a predefined side (sustained attention); in the other half, they were required to detect Go-stimuli following an arrow-cue at the cued side (transient attention). Results revealed negative differences (Nd) between ERPs elicited by to-be-attended and to-be-ignored sounds in all conditions. These Nd-effects were larger for the sustained than for the transient attention condition, indicating that attentional selection according to spatial criteria is improved when subjects can focus on one and the same location for a series of stimuli.

  9. Accurate sound localization in reverberant environments is mediated by robust encoding of spatial cues in the auditory midbrain.

    Science.gov (United States)

    Devore, Sasha; Ihlefeld, Antje; Hancock, Kenneth; Shinn-Cunningham, Barbara; Delgutte, Bertrand

    2009-04-16

    In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener's ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments.

  10. Competition between auditory and visual spatial cues during visual task performance

    NARCIS (Netherlands)

    Koelewijn, T.; Bronkhorst, A.; Theeuwes, J.

    2009-01-01

    There is debate in the crossmodal cueing literature as to whether capture of visual attention by means of sound is a fully automatic process. Recent studies show that when visual attention is endogenously focused, sound still captures attention. The current study investigated whether there is

  11. Accurate Sound Localization in Reverberant Environments Is Mediated by Robust Encoding of Spatial Cues in the Auditory Midbrain

    OpenAIRE

    Devore, Sasha; Ihlefeld, Antje; Hancock, Kenneth; Shinn-Cunningham, Barbara; Delgutte, Bertrand

    2009-01-01

    In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener’s ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sens...

  12. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision, and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 ‘training’ steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration corresponding to training) with all feedback removed. Mean percentage error was 11.5% (SD ± 7.0%) for visual cues and 12.9% (SD ± 11.8%) for auditory cues. Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, mean accuracy approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.
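
    The auditory training target described above (a 950 Hz tone with a bell-shaped gain envelope) is simple to synthesize; in the sketch below the Gaussian envelope width is an assumed parameter:

        import numpy as np

        def auditory_step_cue(duration_s, fs=44100, freq=950.0):
            # 950 Hz tone whose bell-shaped (Gaussian) amplitude envelope marks
            # the target step duration; the envelope width is an assumption.
            t = np.arange(int(duration_s * fs)) / fs
            envelope = np.exp(-0.5 * ((t - duration_s / 2) / (duration_s / 6)) ** 2)
            return envelope * np.sin(2 * np.pi * freq * t)

        cue = auditory_step_cue(0.8)  # e.g., a baseline step duration of 0.8 s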

  13. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  14. Improving visual spatial working memory in younger and older adults: effects of cross-modal cues.

    Science.gov (United States)

    Curtis, Ashley F; Turner, Gary R; Park, Norman W; Murtha, Susan J E

    2017-11-06

    Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefitted equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age.  Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.

  15. Visual Distance Cues Amplify Neuromagnetic Auditory N1m Responses

    Directory of Open Access Journals (Sweden)

    Christian F Altmann

    2011-10-01

    Full Text Available Ranging of auditory objects relies on several acoustic cues and is possibly modulated by additional visual information. Sound pressure level can serve as a cue for distance perception because it decreases with increasing distance. In this magnetoencephalography (MEG) experiment, we tested whether psychophysical loudness judgment and N1m MEG responses are modulated by visual distance cues. To this end, we paired noise bursts at different sound pressure levels with synchronous visual cues at different distances. We hypothesized that noise bursts paired with far visual cues would be perceived as louder and result in increased N1m amplitudes compared to a pairing with close visual cues. The rationale behind this was that listeners might compensate for the visually induced object distance when processing loudness. Psychophysically, we observed no significant modulation of loudness judgments by visual cues. However, N1m MEG responses at about 100 ms after stimulus onset were significantly stronger for far versus close visual cues in the left auditory cortex. N1m responses in the right auditory cortex increased with increasing sound pressure level, but were not modulated by visual distance cues. Thus, our results suggest an audio-visual interaction in the left auditory cortex that is possibly related to cue integration for auditory distance processing.

  16. Cross-modal cueing in audiovisual spatial attention

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias

    2015-01-01

    Visual processing is most effective at the location of our attentional focus. It has long been known that various spatial cues can direct visuospatial attention and influence the detection of auditory targets. Cross-modal cueing, however, seems to depend on the type of the visual cue: facilitation...... that the perception of multisensory signals is modulated by a single, supramodal system operating in a top-down manner (Experiment 1). In contrast, bottom-up control of attention, as observed in the exogenous cueing task of Experiment 2, mainly exerts its influence through modality-specific subsystems. Experiment 3...

  17. Designing auditory cues for Parkinson's disease gait rehabilitation.

    Science.gov (United States)

    Cancela, Jorge; Moreno, Eugenio M; Arredondo, Maria T; Bonato, Paolo

    2014-01-01

    Recent works have shown that Parkinson's disease (PD) patients can benefit greatly from performing rehabilitation exercises based on audio cueing and music therapy. Specifically, gait can benefit from repetitive sessions of exercises using auditory cues. Nevertheless, all the experiments to date have been based on the use of a metronome as the auditory stimulus. Within this work, Human-Computer Interaction methodologies have been used to design new cues that could benefit the long-term engagement of PD patients in these repetitive routines. The study has also been extended to commercial music and musical pieces by analyzing features and characteristics that could benefit the engagement of PD patients in rehabilitation tasks.

  18. Visual form Cues, Biological Motions, Auditory Cues, and Even Olfactory Cues Interact to Affect Visual Sex Discriminations

    Directory of Open Access Journals (Sweden)

    Rick Van Der Zwan

    2011-05-01

    Full Text Available Johnson and Tassinary (2005) proposed that visually perceived sex is signalled by structural or form cues. They suggested also that biological motion cues signal sex, but do so indirectly. We previously have shown that auditory cues can mediate visual sex perceptions (van der Zwan et al., 2009). Here we demonstrate that structural cues to body shape are alone sufficient for visual sex discriminations but that biological motion cues alone are not. Interestingly, biological motions can resolve ambiguous structural cues to sex, but so can olfactory cues, even when those cues are not salient. To accommodate these findings we propose an alternative model of the processes mediating visual sex discriminations: Form cues can be used directly if they are available and unambiguous. If there is any ambiguity, other sensory cues are used to resolve it, suggesting there may exist sex-detectors that are stimulus independent.

  19. The plastic ear and perceptual relearning in auditory spatial perception.

    Directory of Open Access Journals (Sweden)

    Simon Carlile

    2014-08-01

    Full Text Available The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear moulds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localisation, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear moulds or through virtual auditory space stimulation using non-individualised spectral cues. The work with ear moulds demonstrates that a relatively short period of training involving sensory-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide a spatial code but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.

  1. Auditory feedback blocks memory benefits of cueing during sleep.

    Science.gov (United States)

    Schreiner, Thomas; Lehmann, Mick; Rasch, Björn

    2015-10-28

    It is now widely accepted that re-exposure to memory cues during sleep reactivates memories and can improve later recall. However, the underlying mechanisms are still unknown. As reactivation during wakefulness renders memories sensitive to updating, it remains an intriguing question whether reactivated memories during sleep also become susceptible to incorporating further information after the cue. Here we show that the memory benefits of cueing Dutch vocabulary during sleep are in fact completely blocked when memory cues are directly followed by either correct or conflicting auditory feedback, or a pure tone. In addition, immediate (but not delayed) auditory stimulation abolishes the characteristic increases in oscillatory theta and spindle activity typically associated with successful reactivation during sleep as revealed by high-density electroencephalography. We conclude that plastic processes associated with theta and spindle oscillations occurring during a sensitive period immediately after the cue are necessary for stabilizing reactivated memory traces during sleep.

  2. Auditory spatial localization: Developmental delay in children with visual impairments.

    Science.gov (United States)

    Cappagli, Giulia; Gori, Monica

    2016-01-01

    For individuals with visual impairments, auditory spatial localization is one of the most important features to navigate in the environment. Many works suggest that blind adults show similar or even enhanced performance for localization of auditory cues compared to sighted adults (Collignon, Voss, Lassonde, & Lepore, 2009). To date, the investigation of auditory spatial localization in children with visual impairments has provided contrasting results. Here we report, for the first time, that contrary to visually impaired adults, children with low vision or total blindness show a significant impairment in the localization of static sounds. These results suggest that simple auditory spatial tasks are compromised in children, and that this capacity recovers over time. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. An auditory cue-depreciation effect.

    Science.gov (United States)

    Gibson, J M; Watkins, M J

    1991-01-01

    An experiment is reported in which subjects first heard a list of words and then tried to identify these same words from degraded utterances. Paralleling previous findings in the visual modality, the probability of identifying a given utterance was reduced when the utterance was immediately preceded by other, more degraded, utterances of the same word. A second experiment replicated this "cue-depreciation effect" and in addition found the effect to be weakened, if not eliminated, when the target word was not included in the initial list or when the test was delayed by two days.

  4. Tactile feedback improves auditory spatial localization.

    Science.gov (United States)

    Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.

  5. Great cormorants (Phalacrocorax carbo) can detect auditory cues while diving

    Science.gov (United States)

    Hansen, Kirstin Anderson; Maxwell, Alyssa; Siebert, Ursula; Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-06-01

    In-air hearing in birds has been thoroughly investigated. Sound provides birds with auditory information for species and individual recognition from their complex vocalizations, as well as cues while foraging and for avoiding predators. Some 10% of existing species of birds obtain their food under the water surface. Whether some of these birds make use of acoustic cues while underwater is unknown. An interesting species in this respect is the great cormorant (Phalacrocorax carbo), being one of the most effective marine predators and relying on the aquatic environment for food year round. Here, its underwater hearing abilities were investigated using psychophysics, where the bird learned to detect the presence or absence of a tone while submerged. The greatest sensitivity was found at 2 kHz, with an underwater hearing threshold of 71 dB re 1 μPa rms. The great cormorant is better at hearing underwater than expected, and the hearing thresholds are comparable to seals and toothed whales in the frequency band 1-4 kHz. This opens up the possibility of cormorants and other aquatic birds having special adaptations for underwater hearing and making use of underwater acoustic cues from, e.g., conspecifics, their surroundings, as well as prey and predators.
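
    The abstract specifies only that the bird learned to report tone presence or absence; a generic adaptive 2-down/1-up staircase, one common way such detection thresholds are estimated, might look like the sketch below (all parameters are assumptions, not the study's actual procedure):

        import random

        def staircase(respond, start_db=100.0, step_db=4.0, n_reversals=8):
            # 2-down/1-up rule converging near the 70.7%-correct point;
            # `respond(level)` returns True when the subject detects the tone.
            level, streak, reversals, last_direction = start_db, 0, [], 0
            while len(reversals) < n_reversals:
                if respond(level):
                    streak += 1
                    if streak < 2:
                        continue  # need two consecutive hits to step down
                    streak, direction = 0, -1
                else:
                    streak, direction = 0, +1
                if last_direction and direction != last_direction:
                    reversals.append(level)  # direction change = reversal
                last_direction = direction
                level += direction * step_db
            return sum(reversals) / len(reversals)

        # Simulated observer with a true threshold near 71 dB re 1 uPa:
        est = staircase(lambda L: random.random() < 1 / (1 + 10 ** ((71 - L) / 4)))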

  6. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically...... investigated whether incongruent auditory and visual room-related cues affected sound externalization in terms of perceived distance, azimuthal localization, and compactness....

  7. A review on auditory space adaptations to altered head-related cues

    Directory of Open Access Journals (Sweden)

    Catarina Mendonça

    2014-07-01

    Full Text Available In this article we present a review of current literature on adaptations to altered head-related auditory localization cues. Localization cues can be altered through ear blocks, ear molds, electronic hearing devices and altered head-related transfer functions. Three main methods have been used to induce auditory space adaptation: sound exposure, training with feedback, and explicit training. Adaptations induced by training, rather than exposure, are consistently faster. Studies on localization with altered head-related cues have reported poor initial localization, but improved accuracy and discriminability with training. Also, studies that displaced the auditory space by altering cue values reported adaptations in perceived source position to compensate for such displacements. Auditory space adaptations can last for a few months even without further contact with the learned cues. In most studies, localization with the subject’s own unaltered cues remained intact despite the adaptation to a second set of cues. Generalization is observed from trained to untrained sound source positions, but there is mixed evidence regarding cross-frequency generalization. Multiple brain areas might be involved in auditory space adaptation processes, but the auditory cortex may play a critical role. Auditory space plasticity may involve context-dependent cue reweighting.

  8. Listener orientation and spatial judgments of elevated auditory percepts

    Science.gov (United States)

    Parks, Anthony J.

    How do listener head rotations affect auditory perception of elevation? This investigation addresses this question in the hope that perceptual judgments of elevated auditory percepts may be more thoroughly understood in terms of the dynamic listening cues engendered by listener head rotations, and that this phenomenon can be psychophysically and computationally modeled. Two listening tests were conducted and a psychophysical model was constructed to this end. The first listening test prompted listeners to detect an elevated auditory event produced by a virtual noise source orbiting the median plane via 24-channel ambisonic spatialization. Head rotations were tracked using computer vision algorithms facilitated by camera tracking. The data were used to construct a dichotomous criteria model using a factorial binary logistic regression model. The second auditory test investigated the validity of the historically supported frequency dependence of auditory elevation perception using narrow-band noise for continuous and brief stimuli with fixed and free-head-rotation conditions. The data were used to construct a multinomial logistic regression model to predict categorical judgments of above, below, and behind. Finally, in light of the psychophysical data from the above studies, a functional model of elevation perception for point sources along the cone of confusion was constructed using physiologically-inspired signal processing methods along with top-down processing utilizing principles of memory and orientation. The model is evaluated using white noise bursts for 42 subjects' head-related transfer functions. The investigation concludes with study limitations, possible implications, and speculation on future research trajectories.
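
    A minimal sketch of the kind of multinomial logistic regression described for the categorical judgments, using entirely hypothetical predictors and data:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical predictors: head-rotation extent (degrees) and noise-band
        # centre frequency (kHz); labels: the reported percept category.
        X = np.array([[0, 0.5], [30, 0.5], [0, 8.0],
                      [45, 8.0], [10, 4.0], [40, 2.0]])
        y = np.array(["behind", "below", "above", "above", "behind", "below"])

        model = LogisticRegression(max_iter=1000).fit(X, y)
        print(model.predict([[20, 6.0]]))  # predicted category for a new trial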

  9. When and where of auditory spatial processing in cortex: a novel approach using electrotomography.

    Directory of Open Access Journals (Sweden)

    Jörg Lewald

    Full Text Available The modulation of brain activity as a function of auditory location was investigated using electro-encephalography in combination with standardized low-resolution brain electromagnetic tomography. Auditory stimuli were presented at various positions under anechoic conditions in free-field space, thus providing the complete set of natural spatial cues. Variation of electrical activity in cortical areas depending on sound location was analyzed by contrasts between sound locations at the time of the N1 and P2 responses of the auditory evoked potential. A clear-cut double dissociation with respect to the cortical locations and the points in time was found, indicating spatial processing (1) in the primary auditory cortex and posterodorsal auditory cortical pathway at the time of the N1, and (2) in the anteroventral pathway regions about 100 ms later at the time of the P2. Thus, it seems as if both auditory pathways are involved in spatial analysis but at different points in time. It is possible that the late processing in the anteroventral auditory network reflected the sharing of this region by analysis of object-feature information and spectral localization cues or even the integration of spatial and non-spatial sound features.

  10. Evidence for enhanced discrimination of virtual auditory distance among blind listeners using level and direct-to-reverberant cues.

    Science.gov (United States)

    Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina

    2013-02-01

    Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.
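
    Of the two cues named above, level falls off with distance, while the direct-to-reverberant ratio can be computed from a (simulated) room impulse response by comparing the energy near the direct-path peak with all later energy; the 2.5 ms window below is a common but assumed choice:

        import numpy as np

        def direct_to_reverberant_db(rir, fs, direct_ms=2.5):
            # Energy of the direct path (short window around the peak) versus
            # the remaining reverberant tail, expressed in dB.
            peak = np.argmax(np.abs(rir))
            split = peak + int(direct_ms * 1e-3 * fs)
            direct = np.sum(rir[:split] ** 2)
            reverberant = np.sum(rir[split:] ** 2)
            return 10 * np.log10(direct / reverberant)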

  11. Probing the time course of head-motion cues integration during auditory scene analysis.

    Science.gov (United States)

    Kondo, Hirohito M; Toshima, Iwaki; Pressnitzer, Daniel; Kashino, Makio

    2014-01-01

    The perceptual organization of auditory scenes is a hard but important problem to solve for human listeners. It is thus likely that cues from several modalities are pooled for auditory scene analysis, including sensory-motor cues related to the active exploration of the scene. We previously reported a strong effect of head motion on auditory streaming. Streaming refers to an experimental paradigm where listeners hear sequences of pure tones, and rate their perception of one or more subjective sources called streams. To disentangle the effects of head motion (changes in acoustic cues at the ear, subjective location cues, and motor cues), we used a robotic telepresence system, Telehead. We found that head motion induced perceptual reorganization even when the acoustic scene had not changed. Here we reanalyzed the same data to probe the time course of sensory-motor integration. We show that motor cues had a different time course compared to acoustic or subjective location cues: motor cues impacted perceptual organization earlier and for a shorter time than other cues, with successive positive and negative contributions to streaming. An additional experiment controlled for the effects of volitional anticipatory components, and found that arm or leg movements did not have any impact on scene analysis. These data provide a first investigation of the time course of the complex integration of sensory-motor cues in an auditory scene analysis task, and they suggest a loose temporal coupling between the different mechanisms involved.

  12. Probing the time course of head-motion cues integration during auditory scene analysis

    Directory of Open Access Journals (Sweden)

    Hirohito M. Kondo

    2014-06-01

    Full Text Available The perceptual organization of auditory scenes is a hard but important problem to solve for human listeners. It is thus likely that cues from several modalities are pooled for auditory scene analysis, including sensory-motor cues related to the active exploration of the scene. We previously reported a strong effect of head motion on auditory streaming. Streaming refers to an experimental paradigm where listeners hear sequences of pure tones, and report their perception of one or more subjective sources called streams. To disentangle the effects of head motion (changes in acoustic cues at the ear, subjective location cues, and motor cues), we used a robotic telepresence system, Telehead. We found that head motion induced perceptual reorganization even when the acoustic scene had not changed. Here we reanalyzed the same data to probe the time course of sensory-motor integration. We show that motor cues had a different time course compared to acoustic or subjective location cues: motor cues impacted perceptual organization earlier and for a shorter time than other cues, with successive positive and negative contributions to streaming. An additional experiment controlled for the effects of volitional anticipatory components, and found that arm or leg movements did not have any impact on scene analysis. These data provide a first investigation of the time course of the complex integration of sensory-motor cues in an auditory scene analysis task, and they suggest a loose temporal coupling between the different mechanisms involved.

  13. Spatial auditory attention is modulated by tactile priming.

    Science.gov (United States)

    Menning, Hans; Ackermann, Hermann; Hertrich, Ingo; Mathiak, Klaus

    2005-07-01

    Previous studies have shown that cross-modal processing affects perception at a variety of neuronal levels. In this study, event-related brain responses were recorded via whole-head magnetoencephalography (MEG). Spatial auditory attention was directed via tactile pre-cues (primes) to one of four locations in the peripersonal space (left and right hand versus face). Auditory stimuli were white noise bursts, convoluted with head-related transfer functions, which ensured spatial perception of the four locations. Tactile primes (200-300 ms prior to acoustic onset) were applied randomly to one of these locations. Attentional load was controlled by three different visual distraction tasks. The auditory P50m (about 50 ms after stimulus onset) showed a significant "proximity" effect (larger responses to face stimulation) as well as a "contralaterality" effect between side of stimulation and hemisphere. The tactile primes essentially reduced both the P50m and N100m components. However, facial tactile pre-stimulation yielded an enhanced ipsilateral N100m. These results show that earlier responses are mainly governed by exogenous stimulus properties whereas cross-sensory interaction is spatially selective at a later (endogenous) processing stage.

  14. Tuning to Binaural Cues in Human Auditory Cortex.

    Science.gov (United States)

    McLaughlin, Susan A; Higgins, Nathan C; Stecker, G Christopher

    2016-02-01

    Interaural level and time differences (ILD and ITD), the primary binaural cues for sound localization in azimuth, are known to modulate the tuned responses of neurons in mammalian auditory cortex (AC). The majority of these neurons respond best to cue values that favor the contralateral ear, such that contralateral bias is evident in the overall population response and thereby expected in population-level functional imaging data. Human neuroimaging studies, however, have not consistently found contralaterally biased binaural response patterns. Here, we used functional magnetic resonance imaging (fMRI) to parametrically measure ILD and ITD tuning in human AC. For ILD, contralateral tuning was observed, using both univariate and multivoxel analyses, in posterior superior temporal gyrus (pSTG) in both hemispheres. Response-ILD functions were U-shaped, revealing responsiveness to both contralateral and—to a lesser degree—ipsilateral ILD values, consistent with rate coding by unequal populations of contralaterally and ipsilaterally tuned neurons. In contrast, for ITD, univariate analyses showed modest contralateral tuning only in left pSTG, characterized by a monotonic response-ITD function. A multivoxel classifier, however, revealed ITD coding in both hemispheres. Although sensitivity to ILD and ITD was distributed in similar AC regions, the differently shaped response functions and different response patterns across hemispheres suggest that basic ILD and ITD processes are not fully integrated in human AC. The results support opponent-channel theories of ILD but not necessarily ITD coding, the latter of which may involve multiple types of representation that differ across hemispheres.
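
    For reference, the two binaural cues measured here can be estimated directly from a binaural signal, ITD from a cross-correlation peak and ILD from an energy ratio; a simple sketch:

        import numpy as np

        def itd_ild(left, right, fs):
            # ITD: lag of the interaural cross-correlation peak (seconds,
            # positive when the left ear leads). ILD: level ratio in dB.
            xcorr = np.correlate(left, right, mode="full")
            lag = np.argmax(xcorr) - (len(right) - 1)
            itd = lag / fs
            ild = 10 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))
            return itd, ild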

  15. Disentangling attention from action in the emotional spatial cueing task.

    Science.gov (United States)

    Mulckhuyse, Manon; Crombez, Geert

    2014-01-01

    In the emotional spatial cueing task, a peripheral cue--either emotional or non-emotional--is presented before target onset. A stronger cue validity effect with an emotional relative to a non-emotional cue (i.e., more efficient responding to validly cued targets relative to invalidly cued targets) is taken as an indication of emotional modulation of attentional processes. However, results from previous emotional spatial cueing studies are not consistent. Some studies find an effect at the validly cued location (shorter reaction times compared to a non-emotional cue), whereas other studies find an effect at the invalidly cued location (longer reaction times compared to a non-emotional cue). In the current paper, we explore which parameters affect emotional modulation of the cue validity effect in the spatial cueing task. Results from five experiments in healthy volunteers led to the conclusion that a threatening spatial cue does not affect attentional processes; rather, they indicate that motor processes are affected. A possible mechanism might be that a strong aversive cue stimulus decreases reaction times by means of stronger action preparation. Consequently, in case of a spatially congruent response with the peripheral cue, a stronger cue validity effect could be obtained due to stronger response priming. The implications for future research are discussed.

  16. From ear to body: the auditory-motor loop in spatial cognition

    Directory of Open Access Journals (Sweden)

    Isabelle Viaud-Delmon

    2014-09-01

    Full Text Available Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e. a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relation to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which participants had to memorise the location of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of search paths allowed us to observe how auditory information was coded to memorise the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, spatial representation can be based on sensorimotor and auditory cues only, providing another argument in favour of the hypothesis that the brain has access to a modality-invariant representation of external space.
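
    The hidden-target logic described above (a sound triggered only when the participant walks onto a precise location) reduces to a proximity test in the tracking system's coordinate frame; a sketch with an assumed trigger radius:

        import numpy as np

        def target_triggered(head_xy, target_xy, radius_m=0.5):
            # Play the hidden auditory target only while the tracked head
            # position lies within radius_m of the target (radius assumed).
            offset = np.asarray(head_xy) - np.asarray(target_xy)
            return float(np.hypot(*offset)) < radius_m

        print(target_triggered((1.2, 0.8), (1.0, 1.0)))  # True: within 0.5 m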

  17. Visual sensitivity is a stronger determinant of illusory processes than auditory cue parameters in the sound-induced flash illusion.

    Science.gov (United States)

    Kumpik, Daniel P; Roberts, Helen E; King, Andrew J; Bizley, Jennifer K

    2014-06-24

    The sound-induced flash illusion (SIFI) is a multisensory perceptual phenomenon in which the number of brief visual stimuli perceived by an observer is influenced by the number of concurrently presented sounds. While the strength of this illusion has been shown to be modulated by the temporal congruence of the stimuli from each modality, there is conflicting evidence regarding its dependence upon their spatial congruence. We addressed this question by examining SIFIs under conditions in which the spatial reliability of the visual stimuli was degraded and different sound localization cues were presented using either free-field or closed-field stimulation. The likelihood of reporting a SIFI varied with the spatial cue composition of the auditory stimulus and was highest when binaural cues were presented over headphones. SIFIs were more common for small flashes than for large flashes, and for small flashes at peripheral locations, subjects experienced a greater number of illusory fusion events than fission events. However, the SIFI was not dependent on the spatial proximity of the audiovisual stimuli, but was instead determined primarily by differences in subjects' underlying sensitivity across the visual field to the number of flashes presented. Our findings indicate that the influence of auditory stimulation on visual numerosity judgments can occur independently of the spatial relationship between the stimuli. © 2014 ARVO.

  18. Cross-modal preference acquisition: Evaluative conditioning of pictures by affective olfactory and auditory cues.

    NARCIS (Netherlands)

    van Reekum, C.M.; van den Berg, H.; Frijda, N.H.

    1999-01-01

    A cross-modal paradigm was chosen to test the hypothesis that affective olfactory and auditory cues paired with neutral visual stimuli bearing no resemblance or logical connection to the affective cues can evoke preference shifts in those stimuli. Neutral visual stimuli of abstract paintings were

  19. Biologically-variable rhythmic auditory cues are superior to isochronous cues in fostering natural gait variability in Parkinson's disease.

    Science.gov (United States)

    Dotov, D G; Bayard, S; Cochen de Cock, V; Geny, C; Driss, V; Garrigue, G; Bardy, B; Dalla Bella, S

    2017-01-01

    Rhythmic auditory cueing improves certain gait symptoms of Parkinson's disease (PD). Cues are typically stimuli or beats with a fixed inter-beat interval. We show that isochronous cueing has an unwanted side-effect in that it exacerbates one of the motor symptoms characteristic of advanced PD. Whereas the parameters of the stride cycle of healthy walkers and early patients possess a persistent correlation in time, or long-range correlation (LRC), isochronous cueing renders stride-to-stride variability random. Random stride cycle variability is also associated with reduced gait stability and lack of flexibility. To investigate how to prevent patients from acquiring a random stride cycle pattern, we tested rhythmic cueing which mimics the properties of variability found in healthy gait (biological variability). PD patients (n=19) and age-matched healthy participants (n=19) walked with three rhythmic cueing stimuli: isochronous, with random variability, and with biological variability (LRC). Synchronization was not instructed. The persistent correlation in gait was preserved only with stimuli with biological variability, equally for patients and controls (p's < .05); isochronous and random cues rendered the stride cycle variability random. Notably, the individual's tendency to synchronize steps with beats determined the amount of negative effects of isochronous and random cues (p's < .05), underscoring the importance of healthy gait dynamics during cueing. The beneficial effects of biological variability provide useful guidelines for improving existing cueing treatments. Copyright © 2016 Elsevier B.V. All rights reserved.
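
    Inter-beat intervals with the biological (long-range-correlated) variability tested above can be synthesized as 1/f^beta noise; in the sketch below, beta = 1 approximates healthy gait, beta = 0 gives the random condition, and sd_s = 0 gives isochronous cues (the interval statistics are assumed values):

        import numpy as np

        def interbeat_intervals(n, mean_s=1.0, sd_s=0.02, beta=1.0, seed=None):
            # Spectral synthesis of 1/f^beta noise: power-law amplitudes with
            # random phases, normalized and scaled to the target interval stats.
            rng = np.random.default_rng(seed)
            freqs = np.fft.rfftfreq(n)
            amp = np.zeros_like(freqs)
            amp[1:] = freqs[1:] ** (-beta / 2)
            phases = rng.uniform(0, 2 * np.pi, freqs.size)
            series = np.fft.irfft(amp * np.exp(1j * phases), n)
            series = (series - series.mean()) / series.std()
            return mean_s + sd_s * series

        onsets = np.cumsum(interbeat_intervals(512, seed=1))  # cue onset times (s)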

  20. Objective fidelity evaluation in multisensory virtual environments: auditory cue fidelity in flight simulation.

    Directory of Open Access Journals (Sweden)

    Georg F Meyer

    Full Text Available We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models for quantifying human performance in simulated environments, which have emerged from research in multisensory perception, provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1, detection and categorisation of auditory and kinematic motion cues; Experiment 2, performance evaluation in a target-tracking task; and Experiment 3, transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in Experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (Experiment 2). In Experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues.

  1. Spatial orienting of attention to social cues is modulated by cue type and gender of viewer.

    Science.gov (United States)

    Cooney, Sarah Maeve; Brady, Nuala; Ryan, Katie

    2017-05-01

    Across three experiments, we examined the efficacy of three cues from the human body (body orientation, head turning, and eye-gaze direction) to shift an observer's attention in space. Using a modified Posner cueing paradigm, we replicate previous findings of gender differences in the gaze-cueing effect, whereby female but not male participants responded significantly faster to validly cued than to invalidly cued targets. In contrast to previous studies, we report a robust cueing effect for both male and female participants when head-turning direction was used as the central cue, whereas oriented bodies proved ineffectual as cues to attention for both males and females. These results are discussed with reference to the time course of central cueing effects, gender differences in spatial attention, and current models of how cues from the human body are combined to judge another person's direction of attention.

  2. Simultaneous EEG-fMRI brain signatures of auditory cue utilization

    Directory of Open Access Journals (Sweden)

    Mathias eScharinger

    2014-06-01

    Optimal utilization of acoustic cues during auditory categorization is a vital skill, particularly when informative cues become occluded or degraded. Consequently, the acoustic environment requires listeners to flexibly choose and switch amongst available cues. The present study targets the brain functions underlying such changes in cue utilization. Participants performed a categorization task with immediate feedback on acoustic stimuli from two categories that varied in duration and spectral properties, while we simultaneously recorded Blood Oxygenation Level Dependent (BOLD) responses in fMRI and electroencephalograms (EEGs). In the first half of the experiment, categories could be best discriminated by spectral properties. Halfway through the experiment, spectral degradation rendered the stimulus duration the more informative cue. Behaviorally, degradation decreased the likelihood of utilizing spectral cues. Spectrally degrading the acoustic signal led to increased alpha power compared to nondegraded stimuli. The EEG-informed fMRI analyses revealed that alpha power correlated with BOLD changes in inferior parietal cortex and right posterior superior temporal gyrus (including planum temporale). In both areas, spectral degradation led to a weaker coupling of the BOLD response to behavioral utilization of the spectral cue. These data provide converging evidence from behavioral modeling, electrophysiology, and hemodynamics that (a) increased alpha power mediates the inhibition of uninformative (here, spectral) stimulus features, and that (b) the parietal attention network supports optimal cue utilization in auditory categorization. The results highlight the complex cortical processing of auditory categorization under realistic listening challenges.
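
    A minimal sketch of how per-trial alpha power (8-12 Hz) might be extracted from EEG epochs before being related to BOLD; the sampling rate, epoch length, and data are placeholders, not the study's actual pipeline:

      import numpy as np
      from scipy.signal import welch

      fs = 250.0                                        # assumed sampling rate (Hz)
      rng = np.random.default_rng(1)
      epochs = rng.standard_normal((40, int(2 * fs)))   # 40 simulated 2-s epochs

      def alpha_power(epoch, fs, band=(8.0, 12.0)):
          f, pxx = welch(epoch, fs=fs, nperseg=256)     # Welch PSD estimate
          sel = (f >= band[0]) & (f <= band[1])
          return pxx[sel].sum() * (f[1] - f[0])         # band-limited power

      alpha = np.array([alpha_power(ep, fs) for ep in epochs])
      # In an EEG-informed fMRI analysis, this per-trial vector would typically be
      # convolved with an HRF and entered as a parametric regressor for BOLD.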

  3. Effects of auditory cues on gait initiation and turning in patients with Parkinson's disease.

    Science.gov (United States)

    Gómez-González, J; Martín-Casas, P; Cano-de-la-Cuerda, R

    2016-12-08

    To review the available scientific evidence about the effectiveness of auditory cues during gait initiation and turning in patients with Parkinson's disease. We conducted a literature search in the following databases: Brain, PubMed, Medline, CINAHL, Scopus, Science Direct, Web of Science, Cochrane Database of Systematic Reviews, Cochrane Library Plus, CENTRAL, Trip Database, PEDro, DARE, OTseeker, and Google Scholar. We included all studies published between 2007 and 2016 that evaluated the influence of auditory cues on independent gait initiation and turning in patients with Parkinson's disease. The methodological quality of the studies was assessed with the Jadad scale. We included 13 studies, all of which had a low methodological quality (Jadad scale score ≤ 2). In these studies, high-intensity, high-frequency auditory cues had a positive impact on gait initiation and turning. More specifically, they 1) improved spatiotemporal and kinematic parameters; 2) decreased freezing, turning duration, and falls; and 3) increased gait initiation speed, muscle activation, and gait speed and cadence in patients with Parkinson's disease. Studies of better methodological quality are needed to establish the Parkinson's disease stage in which auditory cues are most beneficial, as well as to determine the most effective type and frequency of the auditory cue during gait initiation and turning. Copyright © 2016 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.

  4. Sonic morphology: Aesthetic dimensional auditory spatial awareness

    Science.gov (United States)

    Whitehouse, Martha M.

    The sound and ceramic sculpture installation, "Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, or musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrète, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.

  5. Task-dependent calibration of auditory spatial perception through environmental visual observation.

    Directory of Open Access Journals (Sweden)

    Alessia eTonelli

    2015-06-01

    Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve precision of the final multisensory estimate. However, the amount or the temporal extent of visual information that is sufficient to influence auditory perception is still unknown. It is therefore interesting to know if vision can improve auditory precision through a short-term environmental observation preceding the audio task, and whether this influence is task-specific or environment-specific or both. To test these issues we investigate possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks (minimum audible angle and space bisection) and two acoustically different environments (normal room and anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task but not in the minimum audible angle task after the observation of a normal room. No improvement was found when performing the same task in an anechoic chamber. In addition, no difference was found between a condition of short environment observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration. Echoes may mediate the transfer of information from the visual to the auditory system.

  6. Task-dependent calibration of auditory spatial perception through environmental visual observation.

    Science.gov (United States)

    Tonelli, Alessia; Brayda, Luca; Gori, Monica

    2015-01-01

    Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve precision of the final multisensory estimate. However, the amount or the temporal extent of visual information that is sufficient to influence auditory perception is still unknown. It is therefore interesting to know if vision can improve auditory precision through a short-term environmental observation preceding the audio task, and whether this influence is task-specific or environment-specific or both. To test these issues we investigate possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (normal room and anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task but not in the MAA task after the observation of a normal room. No improvement was found when performing the same task in an anechoic chamber. In addition, no difference was found between a condition of short environment observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration. Echoes may mediate the transfer of information from the visual to the auditory system.

  7. Visual and auditory cue effects on risk assessment in a highway training simulation

    NARCIS (Netherlands)

    Toet, A.; Houtkamp, J.M.; Meulen, R. van der

    2013-01-01

    We investigated whether manipulation of visual and auditory depth and speed cues can affect a user’s sense of risk for a low-cost nonimmersive virtual environment (VE) representing a highway environment with traffic incidents. The VE is currently used in an examination program to assess procedural

  8. Visual and Auditory Cue Effects on Risk Assessment in a Highway Training Simulation

    NARCIS (Netherlands)

    Toet, A.; Houtkamp, J.M.; Meulen, van der R.

    2013-01-01

    We investigated whether manipulation of visual and auditory depth and speed cues can affect a user’s sense of risk for a low-cost nonimmersive virtual environment (VE) representing a highway environment with traffic incidents. The VE is currently used in an examination program to assess procedural

  9. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    Science.gov (United States)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. Sound has large potential both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience's judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
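
    A minimal sketch of the two cues under study, overall volume attenuation and high-frequency loss, applied to a test tone. The 1/r gain law and the distance-to-cutoff mapping are illustrative assumptions, not the article's actual sound-design parameters:

      import numpy as np
      from scipy.signal import lfilter

      def render_distance(x, distance_m, fs=48000, ref_m=1.0):
          g = ref_m / max(distance_m, ref_m)             # 1/r law: -6 dB per doubling
          fc = 20000.0 / max(distance_m / ref_m, 1.0)    # ad hoc cutoff drop with distance
          a = np.exp(-2.0 * np.pi * fc / fs)             # one-pole lowpass coefficient
          return g * lfilter([1.0 - a], [1.0, -a], x)    # attenuated and "duller"

      fs = 48000
      t = np.arange(fs) / fs
      tone = np.sin(2 * np.pi * 440 * t)
      near, far = render_distance(tone, 1.0), render_distance(tone, 8.0)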

  10. Nonword repetition in adults who stutter: The effects of stimuli stress and auditory-orthographic cues.

    Directory of Open Access Journals (Sweden)

    Geoffrey A Coalson

    Adults who stutter (AWS) are less accurate in their immediate repetition of novel phonological sequences compared to adults who do not stutter (AWNS). The present study examined whether manipulation of the following two aspects of traditional nonword repetition tasks unmasks distinct weaknesses in phonological working memory in AWS: (1) presentation of stimuli with less-frequent stress patterns, and (2) removal of auditory-orthographic cues immediately prior to response. Fifty-two participants (26 AWS, 26 AWNS) produced 12 bisyllabic nonwords in the presence of corresponding auditory-orthographic cues (i.e., immediate repetition task) and in the absence of auditory-orthographic cues (i.e., short-term recall task). Half of each cohort (13 AWS, 13 AWNS) were exposed to the stimuli with high-frequency trochaic stress, and half (13 AWS, 13 AWNS) were exposed to identical stimuli with lower-frequency iambic stress. No differences in immediate repetition accuracy for trochaic or iambic nonwords were observed for either group. However, AWS were less accurate when recalling iambic nonwords than trochaic nonwords in the absence of auditory-orthographic cues. Manipulation of two factors which may minimize phonological demand during standard nonword repetition tasks increased the number of errors in AWS compared to AWNS. These findings suggest greater vulnerability in phonological working memory in AWS, even when producing nonwords as short as two syllables.

  11. Effect of rhythmic auditory cueing on parkinsonian gait: A systematic review and meta-analysis.

    Science.gov (United States)

    Ghai, Shashank; Ghai, Ishan; Schmitz, Gerd; Effenberg, Alfred O

    2018-01-11

    The use of rhythmic auditory cueing to enhance gait performance in parkinsonian patients is an emerging area of interest. Different theories and underlying neurophysiological mechanisms have been suggested for ascertaining the enhancement in motor performance. However, a consensus as to its effects based on characteristics of effective stimuli and training dosage has still not been reached. A systematic review and meta-analysis was carried out to analyze the effects of different auditory feedbacks on gait and postural performance in patients affected by Parkinson's disease. Systematic identification of published literature was performed adhering to PRISMA guidelines, from inception until May 2017, on online databases: Web of Science, PEDro, EBSCO, MEDLINE, Cochrane, EMBASE and PROQUEST. Of 4204 records, 50 studies involving 1892 participants met our inclusion criteria. The analysis revealed an overall positive effect on gait velocity and stride length, and a negative effect on cadence, with application of auditory cueing. Neurophysiological mechanisms, training dosage, effects of higher information processing constraints, and use of cueing as an adjunct with medications are thoroughly discussed. The present review bridges the gaps in the literature by suggesting application of rhythmic auditory cueing in conventional rehabilitation approaches to enhance motor performance and quality of life in the parkinsonian community.
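
    Pooled effects in such meta-analyses are typically bias-corrected standardized mean differences. A sketch of the usual Hedge's g computation, with invented group statistics rather than numbers from the review:

      import numpy as np

      def hedges_g(m1, sd1, n1, m2, sd2, n2):
          """Cohen's d with pooled SD, times the small-sample correction J."""
          sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
          j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)
          return j * (m1 - m2) / sp

      # e.g., cued vs uncued gait velocity (m/s), hypothetical groups of 25:
      print(round(hedges_g(1.05, 0.20, 25, 0.90, 0.22, 25), 2))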

  12. A psychophysical imaging method evidencing auditory cue extraction during speech perception: a group analysis of auditory classification images.

    Science.gov (United States)

    Varnet, Léo; Knoblauch, Kenneth; Serniclaes, Willy; Meunier, Fanny; Hoen, Michel

    2015-01-01

    Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique that allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses on the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization with two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.

  13. Auditory spectral versus spatial temporal order judgment: Threshold distribution analysis.

    Science.gov (United States)

    Fostick, Leah; Babkoff, Harvey

    2017-05-01

    Some researchers have suggested that one central mechanism is responsible for temporal order judgments (TOJ), within and across sensory channels. This suggestion is supported by findings of similar TOJ thresholds in same-modality and cross-modality TOJ tasks. In the present study, we challenge this idea by analyzing and comparing the threshold distributions of the spectral and spatial TOJ tasks. In spectral TOJ, the tones differ in their frequency ("high" and "low") and are delivered either binaurally or monaurally. In spatial (or dichotic) TOJ, the two tones are identical but are presented asynchronously to the two ears and thus differ with respect to which ear received the first tone and which ear received the second tone ("left-right"/"right-left"). Although both tasks are regarded as measures of auditory temporal processing, a review of data published in the literature suggests that they trigger different patterns of response. The aim of the current study was to systematically examine spectral and spatial TOJ threshold distributions across a large number of studies. Data are based on 388 participants in 13 spectral TOJ experiments, and 222 participants in 9 spatial TOJ experiments. None of the spatial TOJ distributions deviated significantly from the Gaussian, while all of the spectral TOJ threshold distributions were skewed to the right, with more than half of the participants accurately judging temporal order at very short interstimulus intervals (ISI). The data do not support the hypothesis that one central mechanism is responsible for all temporal order judgments. We suggest that different perceptual strategies are employed when performing spectral TOJ than when performing spatial TOJ. We posit that the spectral TOJ paradigm may provide the opportunity for two-tone masking or temporal integration, which is sensitive to the order of the tones and thus provides perceptual cues that may be used to judge temporal order. This possibility should be considered when interpreting spectral TOJ thresholds.

  14. Auditory and visual spatial impression: Recent studies of three auditoria

    Science.gov (United States)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy-head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
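
    The auditory stimuli described here come from binaural convolution: an anechoic recording is convolved with the impulse response measured at each ear of the dummy head. A minimal sketch with random placeholder signals standing in for the recording and the measured responses:

      import numpy as np
      from scipy.signal import fftconvolve

      def binaural_render(anechoic, ir_left, ir_right):
          left = fftconvolve(anechoic, ir_left)
          right = fftconvolve(anechoic, ir_right)
          out = np.stack([left, right], axis=-1)
          return out / np.max(np.abs(out))      # normalize to avoid clipping

      rng = np.random.default_rng(2)
      music = rng.standard_normal(44100)        # placeholder "recording"
      stimulus = binaural_render(music,
                                 0.01 * rng.standard_normal(8000),
                                 0.01 * rng.standard_normal(8000))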

  15. The Generalization of Auditory Accommodation to Altered Spectral Cues.

    Science.gov (United States)

    Watson, Christopher J G; Carlile, Simon; Kelly, Heather; Balachandar, Kapilesh

    2017-09-14

    The capacity of healthy adult listeners to accommodate to altered spectral cues to the source locations of broadband sounds has now been well documented. In recent years we have demonstrated that the degree and speed of accommodation are improved by using an integrated sensory-motor training protocol under anechoic conditions. Here we demonstrate that the learning which underpins the localization performance gains during the accommodation process using anechoic broadband training stimuli generalizes to environmentally relevant scenarios. As in previous work, alterations to monaural spectral cues were produced by fitting participants with custom-made outer ear molds, worn during waking hours. Following acute degradations in localization performance, participants underwent daily sensory-motor training to improve localization accuracy using broadband noise stimuli over ten days. Participants not only demonstrated post-training improvements in localization accuracy for broadband noises presented in the same set of positions used during training, but also for stimuli presented in untrained locations, for monosyllabic speech sounds, and for stimuli presented in reverberant conditions. These findings shed further light on the neuroplastic capacity of healthy listeners and represent the next step in the development of training programs for users of assistive listening devices which degrade localization acuity by distorting or bypassing monaural cues.

  16. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information.

    Science.gov (United States)

    Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise

    2017-01-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  17. Tactile Cueing as a Gravitational Substitute for Spatial Navigation During Parabolic Flight

    Science.gov (United States)

    Montgomery, K. L.; Beaton, K. H.; Barba, J. M.; Cackler, J. M.; Son, J. H.; Horsfield, S. P.; Wood, S. J.

    2010-01-01

    INTRODUCTION: Spatial navigation requires an accurate awareness of orientation in your environment. The purpose of this experiment was to examine how spatial awareness was impaired with changing gravitational cues during parabolic flight, and the extent to which vibrotactile feedback of orientation could be used to help improve performance. METHODS: Six subjects were restrained in a chair tilted relative to the plane floor, and placed at random positions during the start of the microgravity phase. Subjects reported their orientation using verbal reports, and used a hand-held controller to point to a desired target location presented using a virtual reality video mask. This task was repeated with and without constant tactile cueing of "down" direction using a belt of 8 tactors placed around the mid-torso. Control measures were obtained during ground testing using both upright and tilted conditions. RESULTS: Perceptual estimates of orientation and pointing accuracy were impaired during microgravity or during rotation about an upright axis in 1g. The amount of error was proportional to the amount of chair displacement. Perceptual errors were reduced during movement about a tilted axis on earth. CONCLUSIONS: Reduced perceptual errors during tilts in 1g indicate the importance of otolith and somatosensory cues for maintaining spatial awareness. Tactile cueing may improve navigation in operational environments or clinical populations, providing a non-visual non-auditory feedback of orientation or desired direction heading.

  18. Selective integration of auditory-visual looming cues by humans.

    Science.gov (United States)

    Cappe, Céline; Thut, Gregor; Romei, Vincenzo; Murray, Micah M

    2009-03-01

    An object's motion relative to an observer can confer ethologically meaningful information. Approaching or looming stimuli can signal threats/collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and more recently multisensory looming stimuli, none has investigated whether there is integration of looming signals between modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings. Maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. A new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perceptions and action is therefore supported.
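
    One conventional way to quantify the facilitation reported here is a percent enhancement of the multisensory response over the best unisensory response, computed on detection speeds so that larger means better. The RTs below are invented:

      def enhancement_index(multi, best_uni):
          return 100.0 * (multi - best_uni) / best_uni

      # hypothetical mean RTs: AV looming 410 ms, best unisensory looming 455 ms
      speed_av, speed_uni = 1 / 0.410, 1 / 0.455
      print(f"{enhancement_index(speed_av, speed_uni):.1f}% enhancement")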

  19. Implicit learning of spatiotemporal contingencies in spatial cueing.

    Science.gov (United States)

    Rieth, Cory A; Huber, David E

    2013-08-01

    We investigated the role of implicit spatiotemporal learning in the Posner spatial cueing of attention task. During initial training, the proportion of different trial types was altered to produce a complex pattern of spatiotemporal contingencies between cues and targets. For example, in the short invalid and long valid condition, targets reliably appeared either at an uncued location after a short stimulus onset asynchrony (SOA; 100 ms) or at a cued location after a long SOA (350 ms). As revealed by postexperiment questioning, most participants were unaware of these manipulations. Whereas prior studies have examined reaction times during training, the current study examined the long-term effect of training on subsequent testing that removed these contingencies. An initial experiment found training effects only for the long SOAs that typically produce inhibition of return (IOR) effects. For instance, after short invalid and long valid training, there was a benefit at long SOAs rather than an IOR effect. A second experiment ruled out target-cue overlap as an explanation of the difference between learning for long versus short SOAs. Rather than a mix of perfectly predictable spatiotemporal contingencies, Experiment 3 used only short SOA trials during training with a probabilistic spatial contingency. There was a smaller but reliable training effect in subsequent testing. These results demonstrate that implicit learning for specific combinations of location and SOA can affect behavior in spatial cueing paradigms, which is a necessary result if, more generally, spatial cueing reflects learned spatiotemporal regularities. (c) 2013 APA, all rights reserved.

  20. Eye movement preparation causes spatially-specific modulation of auditory processing: new evidence from event-related brain potentials.

    Science.gov (United States)

    Gherri, Elena; Driver, Jon; Eimer, Martin

    2008-08-11

    To investigate whether saccade preparation can modulate processing of auditory stimuli in a spatially-specific fashion, ERPs were recorded for a Saccade task, in which the direction of a prepared saccade was cued, prior to an imperative auditory stimulus indicating whether to execute or withhold that saccade. For comparison, we also ran a conventional Covert Attention task, where the same cue now indicated the direction for a covert endogenous attentional shift prior to an auditory target-nontarget discrimination. Lateralised components previously observed during cued shifts of attention (ADAN, LDAP) did not differ significantly across tasks, indicating commonalities between auditory spatial attention and oculomotor control. Moreover, in both tasks, spatially-specific modulation of auditory processing was subsequently found, with enhanced negativity for lateral auditory nontarget stimuli at cued versus uncued locations. This modulation started earlier and was more pronounced for the Covert Attention task, but was also reliably present in the Saccade task, demonstrating that the effects of covert saccade preparation on auditory processing can be similar to effects of endogenous covert attentional orienting, albeit smaller. These findings provide new evidence for similarities but also some differences between oculomotor preparation and shifts of endogenous spatial attention. They also show that saccade preparation can affect not just vision, but also sensory processing of auditory events.

  1. Trait anxiety reduces implicit expectancy during target spatial probability cueing.

    Science.gov (United States)

    Berggren, Nick; Derakshan, Nazanin

    2013-04-01

    Trait anxiety is associated with selective attentional biases to threat but also with more general impairments in attentional control, primarily supported in tasks involving distractor inhibition. Here, we investigated the novel prediction that anxiety should modulate expectation formation in response to task contingencies. Participants completed a visual search task, where briefly presented color cues predicted subsequent target spatial location on the majority of trials. Responses made in the absence of conscious awareness of cue-target contingency resulted in significantly faster RTs for cue-valid versus invalid trials, but only for low anxious participants; high anxiety eliminated evidence of cueing. This finding suggests that impairments to attentional control in anxiety also affect subtle rule-based learning and predictive coding of expectation. We discuss whether a lack of prediction in anxious behavior may reflect known deficits in attentional control, or may form part of a strategy to promote effective threat detection. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  2. Asymmetries in behavioral and neural responses to spectral cues demonstrate the generality of auditory looming bias

    Science.gov (United States)

    Reed, Darrin K.; Tóth, Brigitta; Best, Virginia; Majdak, Piotr; Colburn, H. Steven; Shinn-Cunningham, Barbara

    2017-01-01

    Studies of auditory looming bias have shown that sources increasing in intensity are more salient than sources decreasing in intensity. Researchers have argued that listeners are more sensitive to approaching sounds compared with receding sounds, reflecting an evolutionary pressure. However, these studies only manipulated overall sound intensity; therefore, it is unclear whether looming bias is truly a perceptual bias for changes in source distance, or only in sound intensity. Here we demonstrate both behavioral and neural correlates of looming bias without manipulating overall sound intensity. In natural environments, the pinnae induce spectral cues that give rise to a sense of externalization; when spectral cues are unnatural, sounds are perceived as closer to the listener. We manipulated the contrast of individually tailored spectral cues to create sounds of similar intensity but different naturalness. We confirmed that sounds were perceived as approaching when spectral contrast decreased, and perceived as receding when spectral contrast increased. We measured behavior and electroencephalography while listeners judged motion direction. Behavioral responses showed a looming bias in that responses were more consistent for sounds perceived as approaching than for sounds perceived as receding. In a control experiment, looming bias disappeared when spectral contrast changes were discontinuous, suggesting that perceived motion in distance and not distance itself was driving the bias. Neurally, looming bias was reflected in an asymmetry of late event-related potentials associated with motion evaluation. Hence, both our behavioral and neural findings support a generalization of the auditory looming bias, representing a perceptual preference for approaching auditory objects. PMID:28827336

  3. Asymmetries in behavioral and neural responses to spectral cues demonstrate the generality of auditory looming bias.

    Science.gov (United States)

    Baumgartner, Robert; Reed, Darrin K; Tóth, Brigitta; Best, Virginia; Majdak, Piotr; Colburn, H Steven; Shinn-Cunningham, Barbara

    2017-09-05

    Studies of auditory looming bias have shown that sources increasing in intensity are more salient than sources decreasing in intensity. Researchers have argued that listeners are more sensitive to approaching sounds compared with receding sounds, reflecting an evolutionary pressure. However, these studies only manipulated overall sound intensity; therefore, it is unclear whether looming bias is truly a perceptual bias for changes in source distance, or only in sound intensity. Here we demonstrate both behavioral and neural correlates of looming bias without manipulating overall sound intensity. In natural environments, the pinnae induce spectral cues that give rise to a sense of externalization; when spectral cues are unnatural, sounds are perceived as closer to the listener. We manipulated the contrast of individually tailored spectral cues to create sounds of similar intensity but different naturalness. We confirmed that sounds were perceived as approaching when spectral contrast decreased, and perceived as receding when spectral contrast increased. We measured behavior and electroencephalography while listeners judged motion direction. Behavioral responses showed a looming bias in that responses were more consistent for sounds perceived as approaching than for sounds perceived as receding. In a control experiment, looming bias disappeared when spectral contrast changes were discontinuous, suggesting that perceived motion in distance and not distance itself was driving the bias. Neurally, looming bias was reflected in an asymmetry of late event-related potentials associated with motion evaluation. Hence, both our behavioral and neural findings support a generalization of the auditory looming bias, representing a perceptual preference for approaching auditory objects.
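
    A crude sketch of a spectral-contrast manipulation in the spirit of the one described: compress (c < 1) or expand (c > 1) the dB magnitude spectrum around its mean while roughly preserving overall level. The study's actual stimuli were built from individually tailored HRTF-based spectral cues, which this toy version does not attempt:

      import numpy as np

      def scale_spectral_contrast(x, c):
          spec = np.fft.rfft(x)
          mag_db = 20 * np.log10(np.abs(spec) + 1e-12)
          mag_db = mag_db.mean() + c * (mag_db - mag_db.mean())
          out = np.fft.irfft(10 ** (mag_db / 20) * np.exp(1j * np.angle(spec)), len(x))
          return out * np.sqrt(np.mean(x**2) / np.mean(out**2))   # match RMS

      rng = np.random.default_rng(3)
      noise = rng.standard_normal(4096)
      flattened = scale_spectral_contrast(noise, 0.5)   # reduced contrast ("approaching")
      sharpened = scale_spectral_contrast(noise, 2.0)   # increased contrast ("receding")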

  4. The Effects of Spatial Endogenous Pre-cueing across Eccentricities

    Directory of Open Access Journals (Sweden)

    Jing Feng

    2017-06-01

    Frequently, we use expectations about likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, pre-cueing effects on attention, particularly endogenous pre-cueing effects, have been little explored outside an eccentricity of 20°. Given that the visual field has functional subdivisions, and that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, it remains unclear how endogenous pre-cues that carry spatial information about targets influence our allocation of attention across a large visual field (especially in the more peripheral areas). We present two experiments examining how the expectation of the location of the target shapes the distribution of attention across eccentricities in the visual field. We measured participants' ability to pick out a target among distractors in the visual field after the presentation of a highly valid cue indicating the size of the area in which the target was likely to occur, or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing effects when trials of the same size cue were blocked compared to when trials were mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred at the cued direction. This pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. Our findings on the endogenous pre-cueing effects across a large visual area were summarized using a simple model to assist in conceptualizing the modifications of the distribution of attention over the visual field. We discuss our findings in light of cognitive penetration of perception, and highlight the importance of examining attention across a large visual field.

  5. Auditory and Visual Cues for Spatiotemporal Rhythm Reproduction

    DEFF Research Database (Denmark)

    Maculewicz, Justyna; Serafin, Stefania; Kofoed, Lise B.

    2013-01-01

    into account both temporal and spatial characteristics of the presented rhythmic sequences. We were particularly interested in investigating temporal accuracy of the rhythm reproduction, correctness of the chosen signal location, and strength of pressure. We assumed to confirm earlier findings stating...

  6. Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments

    Science.gov (United States)

    McMullen, Kyla A.

    Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, the concept of virtually walking through an auditory environment did not exist. Such an interface has numerous potential uses: spatial audio can serve purposes ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impacts of practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources.

  7. Symmetry matched auditory cues improve gait steadiness in most people with Parkinson's disease but not in healthy older people.

    Science.gov (United States)

    Brodie, Matthew A D; Dean, Roger T; Beijer, Tim R; Canning, Colleen G; Smith, Stuart T; Menant, Jasmine C; Lord, Stephen R

    2015-01-01

    Unsteady gait and falls are major problems for people with Parkinson's disease (PD). Symmetric auditory cues at altered cadences have been used to improve walking speed or step length. However, few people are exactly symmetric in terms of morphology or movement patterns, and the effects of symmetric cueing on gait steadiness are inconclusive. We investigated whether matching the symmetry or asymmetry of auditory cues to an individual's intrinsic gait symmetry affects gait steadiness, gait symmetry, and comfort with cues in people with PD, healthy age-matched controls (HAM), and young adults. Thirty participants (10 with PD, 11 HAM (66 years), and 9 young (30 years)) completed five baseline walks (no cues) and twenty-five cued walks at habitual cadence but different asymmetries. Outcomes included gait steadiness (step time variability and smoothness by harmonic ratios), walking speed, symmetry, comfort, and cue lag times. Without cues, PD participants had slower and less steady gait than HAM or young participants. Gait symmetry was distinct from gait steadiness, unaffected by cue symmetry or a diagnosis of PD, but associated with aging. All participants maintained preferred gait symmetry and lag times independent of cue symmetry. When cues were matched to the individual's habitual gait symmetry and cadence, gait steadiness improved in the PD group, deteriorated in the HAM controls, and was unchanged in the young. Gait outcomes worsened for the two PD participants who reported discomfort with cued walking and had high New Freezing of Gait scores. It cannot be assumed all individuals benefit equally from auditory cues. Symmetry-matched auditory cues compensated for unsteady gait in most people with PD, but interfered with gait steadiness in older people without basal ganglia deficits.
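
    Gait smoothness here is indexed by harmonic ratios: over one stride, the FFT amplitudes at even harmonics of the stride frequency (in phase with the two steps) are summed and divided by the sum of odd-harmonic amplitudes. A minimal sketch on a synthetic stride:

      import numpy as np

      def harmonic_ratio(accel_stride, n_harmonics=20):
          """Even/odd harmonic amplitude ratio; higher = smoother, more symmetric."""
          amps = np.abs(np.fft.rfft(accel_stride - np.mean(accel_stride)))
          h = amps[1:n_harmonics + 1]               # harmonics of stride frequency
          return h[1::2].sum() / h[0::2].sum()      # h[0] is the 1st (odd) harmonic

      t = np.linspace(0, 1, 200, endpoint=False)    # one stride, arbitrary units
      accel = np.sin(2 * np.pi * 2 * t) + 0.2 * np.sin(2 * np.pi * 1 * t)
      print(round(harmonic_ratio(accel), 1))        # dominant 2nd harmonic -> high HR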

  8. The effect of visual cues on auditory stream segregation in musicians and non-musicians.

    Directory of Open Access Journals (Sweden)

    Jeremy Marozeau

    BACKGROUND: The ability to separate two interleaved melodies is an important factor in music appreciation. This ability is greatly reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues, musical training or musical context could have an effect on this ability, and potentially improve music appreciation for the hearing impaired. METHODS: Musicians (N = 18) and non-musicians (N = 19) were asked to rate the difficulty of segregating a four-note repeating melody from interleaved random distracter notes. Visual cues were provided on half the blocks, and two musical contexts were tested, with the overlap between melody and distracter notes either gradually increasing or decreasing. CONCLUSIONS: Visual cues, musical training, and musical context all affected the difficulty of extracting the melody from a background of interleaved random distracter notes. Visual cues were effective in reducing the difficulty of segregating the melody from distracter notes, even in individuals with no musical training. These results are consistent with theories that indicate an important role for central (top-down) processes in auditory streaming mechanisms, and suggest that visual cues may help the hearing-impaired enjoy music.

  9. Auditory training alters the physiological detection of stimulus-specific cues in humans

    Science.gov (United States)

    Tremblay, Kelly L.; Shahin, Antoine J.; Picton, Terence; Ross, Bernhard

    2009-01-01

    Objective: Auditory training alters neural activity in humans, but it is unknown if these alterations are specific to the trained cue. The objective of this study was to determine if enhanced cortical activity was specific to the trained voice-onset-time (VOT) stimuli 'mba' and 'ba', or whether it generalized to the control stimulus 'a' that did not contain the trained cue. Methods: Thirteen adults were trained to identify a 10 ms VOT cue that differentiated the two experimental stimuli. We recorded event-related potentials (ERPs) evoked by three different speech sounds 'ba', 'mba' and 'a' before and after six days of VOT training. Results: The P2 wave increased in amplitude after training for both control and experimental stimuli, but the effects differed between stimulus conditions. Whereas the effects of training on P2 amplitude were greatest in the left hemisphere for the trained stimuli, enhanced P2 activity was seen in both hemispheres for the control stimulus. In addition, subjects with enhanced pre-training N1 amplitudes were more responsive to training and showed the most perceptual improvement. Conclusion: Both stimulus-specific and general effects of training can be measured in humans. An individual's pre-training N1 response might predict their capacity for improvement. Significance: N1 and P2 responses can be used to examine physiological correlates of human auditory perceptual learning. PMID:19028139

  10. Clark's nutcracker spatial memory: the importance of large, structural cues.

    Science.gov (United States)

    Bednekoff, Peter A; Balda, Russell P

    2014-02-01

    Clark's nutcrackers, Nucifraga columbiana, cache and recover stored seeds in high alpine areas, including areas where snowfall, wind, and rockslides may frequently obscure or alter cues near the cache site. Previous work in the laboratory has established that Clark's nutcrackers use spatial memory to relocate cached food. Following from aspects of this work, we performed experiments to test the importance of large, structural cues for Clark's nutcracker spatial memory. Birds were no more accurate in recovering caches when more objects were placed on the floor of a large experimental room, nor when this room was subdivided with a set of panels. However, nutcrackers were consistently less accurate in this large room than in a small experimental room. Clark's nutcrackers probably use structural features of experimental rooms as important landmarks during recovery of cached food. This use of large, extremely stable cues may reflect the imperfect reliability of smaller, closer cues in the natural habitat of Clark's nutcrackers. This article is part of a Special Issue entitled: CO3 2013. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Great cormorants (Phalacrocorax carbo) can detect auditory cues while diving

    DEFF Research Database (Denmark)

    Hansen, Kirstin Anderson; Maxwell, Alyssa; Siebert, Ursula

    2017-01-01

    In-air hearing in birds has been thoroughly investigated. Sound provides birds with auditory information for species and individual recognition from their complex vocalizations, as well as cues while foraging and for avoiding predators. Some 10% of existing species of birds obtain their food under the water surface. Whether some of these birds make use of acoustic cues while underwater is unknown. An interesting species in this respect is the great cormorant (Phalacrocorax carbo), being one of the most effective marine predators and relying on the aquatic environment for food year round. Here, its underwater hearing abilities were investigated using psychophysics, where the bird learned to detect the presence or absence of a tone while submerged. The greatest sensitivity was found at 2 kHz, with an underwater hearing threshold of 71 dB re 1 μPa rms. The great cormorant is better at hearing underwater...
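
    For readers more used to in-air levels, the reported threshold converts to pressure as below; note that underwater levels are referenced to 1 μPa, whereas in-air levels use 20 μPa, a 26 dB offset:

      def db_re_1upa_to_pa(level_db):
          return 1e-6 * 10 ** (level_db / 20)       # reference pressure 1 uPa

      p = db_re_1upa_to_pa(71.0)                    # threshold at 2 kHz
      print(f"{p * 1e3:.2f} mPa rms")               # ~3.55 mPa
      # Re 20 uPa (the in-air convention) the same pressure would read 71 - 26 = 45 dB,
      # since 20*log10(20) is about 26 dB.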

  12. Auditory and Cross-Modal Spatial Attention

    Science.gov (United States)

    2007-01-01

    interaural level and interaural envelope timing (weak cues for left-right direction). This work was published in Acta Acustica united with Acustica (2005; 91: 967-9; Durlach NI, Mason CR, Gallun FJ, Shinn-Cunningham BG, Colburn HS, and Kidd G Jr., "Informational masking for...").

  13. Time course and cost of misdirecting auditory spatial attention in younger and older adults.

    Science.gov (United States)

    Singh, Gurjit; Pichora-Fuller, M Kathleen; Schneider, Bruce A

    2013-01-01

    The effects of directing, switching, and misdirecting auditory spatial attention in a complex listening situation were investigated in 8 younger and 8 older listeners with normal-hearing sensitivity below 4 kHz. In two companion experiments, a target sentence was presented from one spatial location and two competing sentences were presented simultaneously, one from each of two different locations. Pretrial, listeners were informed of the call-sign cue that identified which of the three sentences was the target and of the probability of the target sentence being presented from each of the three possible locations. Four different probability conditions varied in the likelihood of the target being presented at the left, center, and right locations. In Experiment 1, four timing conditions were tested: the original (unedited) sentences (which contained about 300 msec of filler speech between the call-sign cue and the onset of the target words), or modified (edited) sentences with silent pauses of 0, 150, or 300 msec replacing the filler speech. In Experiment 2, when the cued sentence was presented from an unlikely (side) listening location, for half of the trials the listener's task was to report target words from the cued sentence (cue condition); for the remaining trials, the listener's task was to report target words from the sentence presented from the opposite, unlikely (side) listening location (anticue condition). In Experiment 1, for targets presented from the likely (center) location, word identification was better for the unedited than for modified sentences. For targets presented from unlikely (side) locations, word identification was better when there was more time between the call-sign cue and target words. All listeners benefited similarly from the availability of more compared with less time and the presentation of continuous compared with interrupted speech. In Experiment 2, the key finding was that age-related performance deficits were observed in

  14. Effect of rhythmic auditory cueing on gait in cerebral palsy: a systematic review and meta-analysis

    Directory of Open Access Journals (Sweden)

    Ghai S

    2017-12-01

    Auditory entrainment can influence gait performance in movement disorders. The entrainment can incite neurophysiological and musculoskeletal changes to enhance motor execution. However, a consensus as to its effects on gait in people with cerebral palsy is still warranted. A systematic review and meta-analysis were carried out to analyze the effects of rhythmic auditory cueing on spatiotemporal and kinematic parameters of gait in people with cerebral palsy. Systematic identification of published literature was performed adhering to Preferred Reporting Items for Systematic Reviews and Meta-Analyses and American Academy for Cerebral Palsy and Developmental Medicine guidelines, from inception until July 2017, on online databases: Web of Science, PEDro, EBSCO, Medline, Cochrane, Embase and ProQuest. Kinematic and spatiotemporal gait parameters were evaluated in a meta-analysis across studies. Of 547 records, nine studies involving 227 participants (108 children/119 adults) met our inclusion criteria. The qualitative review suggested beneficial effects of rhythmic auditory cueing on gait performance among all included studies. The meta-analysis revealed beneficial effects of rhythmic auditory cueing on gait dynamic index (Hedge's g = 0.9), gait velocity (1.1), cadence (0.3), and stride length (0.5). This review for the first time suggests converging evidence toward application of rhythmic auditory cueing to enhance gait performance and stability in people with cerebral palsy. This article details underlying neurophysiological mechanisms and the use of cueing as an efficient home-based intervention. It bridges gaps in the literature, and suggests translational approaches for how rhythmic auditory cueing can be incorporated in rehabilitation approaches.

  15. Early life exposure to noise alters the representation of auditory localization cues in the auditory space map of the barn owl.

    Science.gov (United States)

    Efrati, Adi; Gutfreund, Yoram

    2011-05-01

    The auditory space map in the optic tectum (OT) (also known as superior colliculus in mammals) relies on the tuning of neurons to auditory localization cues that correspond to specific sound source locations. This study investigates the effects of early auditory experiences on the neural representation of binaural auditory localization cues. Young barn owls were raised in continuous omnidirectional broadband noise from before hearing onset to the age of ∼ 65 days. Data from these birds were compared with data from age-matched control owls and from normal adult owls (>200 days). In noise-reared owls, the tuning of tectal neurons for interaural level differences and interaural time differences was broader than in control owls. Moreover, in neurons from noise-reared owls, the interaural level differences tuning was biased towards sounds louder in the contralateral ear. A similar bias appeared, but to a much lesser extent, in age-matched control owls and was absent in adult owls. To follow the recovery process from noise exposure, we continued to survey the neural representations in the OT for an extended period of up to several months after removal of the noise. We report that all the noise-rearing effects tended to recover gradually following exposure to a normal acoustic environment. The results suggest that deprivation from experiencing normal acoustic localization cues disrupts the maturation of the auditory space map in the OT.
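
    The interaural time differences (ITDs) such neurons are tuned to can be approximated, for a spherical head and a far-field source, by the classical Woodworth formula ITD = (a/c)(θ + sin θ). The sketch below uses a human-sized head radius for illustration; barn owl heads are much smaller, so their ITD range is correspondingly narrower:

      import numpy as np

      def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
          th = np.radians(azimuth_deg)
          return head_radius_m / c * (th + np.sin(th))

      for az in (0, 30, 60, 90):
          print(az, f"{woodworth_itd(az) * 1e6:.0f} us")   # 0 up to ~650 us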

  16. Using auditory classification images for the identification of fine acoustic cues used in speech perception.

    Directory of Open Access Journals (Sweden)

    Léo eVarnet

    2013-12-01

    An essential step in understanding the processes underlying the general mechanism of perceptual categorization is to identify which portions of a physical stimulation modulate the behavior of our perceptual system. More specifically, in the context of speech comprehension, it is still a major open challenge to understand which information is used to categorize a speech stimulus as one phoneme or another, the auditory primitives relevant for the categorical perception of speech being still unknown. Here we propose to adapt a technique relying on a Generalized Linear Model with smoothness priors, already used in the visual domain for the estimation of so-called classification images, to auditory experiments. This statistical model offers a rigorous framework for dealing with non-Gaussian noise, as is often the case in the auditory modality, and limits the amount of noise in the estimated template by enforcing a smoother solution. By applying this technique to a specific two-alternative forced choice experiment between stimuli 'aba' and 'ada' in noise with an adaptive SNR, we confirm that the second formant transition is a key cue for classifying phonemes into /b/ or /d/ in noise, and that its estimation by the auditory system is a relative measurement across spectral bands and in relation to the perceived height of the second formant in the preceding syllable. Through this example, we show how the GLM with smoothness priors approach can be applied to the identification of fine functional acoustic cues in speech perception. Finally, we discuss some assumptions of the model in the specific case of speech perception.
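
    A toy version of the approach described, with scikit-learn's ridge-penalized logistic regression standing in for the GLM with smoothness priors (an L2 penalty shrinks the template but does not smooth it); the data are simulated, not the study's:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(4)
      n_trials, shape = 2000, (16, 32)                  # time x frequency bins
      noise = rng.standard_normal((n_trials, shape[0] * shape[1]))
      template = np.zeros(shape)
      template[8:10, 12:16] = 1.0                       # fake "critical region"
      p = 1 / (1 + np.exp(-(noise @ template.ravel())))
      responses = rng.random(n_trials) < p              # simulated listener choices

      model = LogisticRegression(C=0.05, max_iter=1000).fit(noise, responses)
      classification_image = model.coef_.reshape(shape) # weights peak near template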

  17. Synchrony of maternal auditory and visual cues about unknown words to children with and without cochlear implants.

    Science.gov (United States)

    Lund, Emily; Schuele, C Melanie

    2015-01-01

    The purpose of this study was to compare types of maternal auditory-visual input about word referents available to children with cochlear implants, children with normal hearing matched for age, and children with normal hearing matched for vocabulary size. Although other works have considered the acoustic qualities of maternal input provided to children with cochlear implants, this study is the first to consider auditory-visual maternal input provided to children with cochlear implants. Participants included 30 mother-child dyads from three groups: children who wore cochlear implants (n = 10 dyads), children matched for chronological age (n = 10 dyads), and children matched for expressive vocabulary size (n = 10 dyads). All participants came from English-speaking families, with the families of children with hearing loss committed to developing listening and spoken language skills (not sign language). All mothers had normal hearing. Mother-child interactions were video recorded during mealtimes in the home. Each dyad participated in two mealtime observations. Maternal utterances were transcribed and coded for (a) nouns produced, (b) child-directed utterances, (c) nouns unknown to children per maternal report, and (d) auditory and visual cues provided about referents for unknown nouns. Auditory and visual cues were coded as either converging, diverging, or auditory-only. Mothers of children with cochlear implants provided percentages of converging and diverging cues that were similar to the percentages of mothers of children matched for chronological age. Mothers of children matched for vocabulary size, on the other hand, provided a higher percentage of converging auditory-visual cues and lower percentage of diverging cues than did mothers of children with cochlear implants. Groups did not differ in provision of auditory-only cues. The present study represents the first step toward identification of environmental input characteristics that may affect lexical learning

  18. Dopamine and noradrenaline efflux in the rat prefrontal cortex after classical aversive conditioning to an auditory cue

    NARCIS (Netherlands)

    Feenstra, M. G.; Vogel, M.; Botterblom, M. H.; Joosten, R. N.; de Bruin, J. P.

    2001-01-01

    We used bilateral microdialysis in the medial prefrontal cortex (PFC) of awake, freely moving rats to study aversive conditioning to an auditory cue in the controlled environment of the Skinner box. The presentation of the explicit conditioned stimuli (CS), previously associated with foot shocks,

  19. Independent effects of bottom-up temporal expectancy and top-down spatial attention. An audiovisual study using rhythmic cueing.

    Directory of Open Access Journals (Sweden)

    Alexander Jones

    2015-01-01

    Selective attention to a spatial location has been shown to enhance perception and facilitate behaviour for events at attended locations. However, selection relies not only on where but also on when an event occurs. Recently, interest has turned to how intrinsic neural oscillations in the brain entrain to rhythms in our environment, and stimuli appearing in or out of sync with a rhythm have been shown to modulate perception and performance. Temporal expectations created by rhythms and spatial attention are two processes that have independently been shown to affect stimulus processing, but it remains largely unknown how, and if, they interact. In four separate tasks, this study investigated the effects of voluntary spatial attention and bottom-up temporal expectations created by rhythms in both unimodal and crossmodal conditions. In each task the participant used an informative cue, either colour or pitch, to direct covert spatial attention to the left or right, and responded as quickly as possible to a target. The lateralized target (visual or auditory) was then presented at the attended or unattended side. Importantly, although not task relevant, the cue was a rhythm of either flashes or beeps. The target was presented in or out of sync (early or late) with the rhythmic cue. The results showed that participants were faster responding to spatially attended than to unattended targets in all tasks. Moreover, there was an effect of rhythmic cueing upon response times in both unimodal and crossmodal conditions. Responses were faster to targets presented in sync with the rhythm than to those appearing too early in both crossmodal tasks. That is, rhythmic stimuli in one modality influenced temporal expectancy in the other modality, suggesting that temporal expectancies created by rhythms are crossmodal. Interestingly, there was no interaction between top-down spatial attention and rhythmic cueing in any task, suggesting that these two processes largely influenced performance independently.

  20. Nonlinear dynamics of human locomotion: effects of rhythmic auditory cueing on local dynamic stability

    Directory of Open Access Journals (Sweden)

    Philippe Terrier

    2013-09-01

    It has been observed that time series of gait parameters (stride length (SL), stride time (ST), and stride speed (SS)) exhibit long-term persistence and fractal-like properties. Synchronizing steps with rhythmic auditory stimuli modifies the persistent fluctuation pattern to anti-persistence. Another nonlinear method estimates the degree of resilience of gait control to small perturbations, i.e. the local dynamic stability (LDS). The method makes use of the maximal Lyapunov exponent, which estimates how fast a nonlinear system embedded in a reconstructed state space (attractor) diverges after an infinitesimal perturbation. We propose to use an instrumented treadmill to simultaneously measure basic gait parameters (time series of SL, ST and SS, from which the statistical persistence among consecutive strides can be assessed) and the trajectory of the center of pressure (from which the LDS can be estimated). In 20 healthy participants, the response to rhythmic auditory cueing (RAC) of LDS and of statistical persistence (assessed with detrended fluctuation analysis (DFA)) was compared. By analyzing the divergence curves, we observed that long-term LDS (computed as the inverse of the average logarithmic rate of divergence between the 4th and the 10th strides downstream from nearest neighbors in the reconstructed attractor) was strongly enhanced (relative change +47%). That is likely the indication of more damped dynamics. The change in short-term LDS (divergence over one step) was smaller (+3%). DFA results (scaling exponents) confirmed an anti-persistent pattern in ST, SL and SS. Long-term LDS (but not short-term LDS) and scaling exponents exhibited a significant correlation between them (r=0.7). Both phenomena probably result from the more conscious/voluntary gait control that is required by RAC. We suggest that LDS and statistical persistence should be used to evaluate the efficiency of cueing therapy in patients with neurological gait disorders.
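
    Detrended fluctuation analysis, used above to quantify statistical persistence, is straightforward to state in code. This Python sketch computes the DFA scaling exponent of a stride-interval series; the window sizes and first-order (linear) detrending are conventional choices, not parameters taken from the study.

    ```python
    import numpy as np

    def dfa_exponent(x, scales=(4, 8, 16, 32, 64)):
        """DFA scaling exponent alpha of a series x: ~0.5 uncorrelated,
        >0.5 persistent, <0.5 anti-persistent (as reported under RAC)."""
        y = np.cumsum(x - np.mean(x))              # integrated profile
        flucts = []
        for s in scales:
            f2 = []
            for i in range(len(y) // s):           # non-overlapping windows
                seg = y[i * s:(i + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend
                f2.append(np.mean((seg - trend) ** 2))
            flucts.append(np.sqrt(np.mean(f2)))
        alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
        return alpha

    rng = np.random.default_rng(1)
    print(dfa_exponent(rng.normal(size=512)))      # white noise -> alpha ~ 0.5
    ```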

  1. Verbal Auditory Cueing of Improvisational Dance: A Proposed Method for Training Agency in Parkinson's Disease.

    Science.gov (United States)

    Batson, Glenna; Hugenschmidt, Christina E; Soriano, Christina T

    2016-01-01

    Dance is a non-pharmacological intervention that helps maintain functional independence and quality of life in people with Parkinson's disease (PPD). Results from controlled studies on group-delivered dance for people with mild-to-moderate stage Parkinson's have shown statistically and clinically significant improvements in gait, balance, and psychosocial factors. Tested interventions include non-partnered dance forms (ballet and modern dance) and partnered forms (tango). In all of these dance forms, specific movement patterns are initially learned through repetition and performed in time to music. Once the basic steps are mastered, students may be encouraged to improvise on the learned steps as they perform them in rhythm with the music. Here, we summarize a method of teaching improvisational dance that advances previously reported benefits of dance for people with Parkinson's disease (PD). The method relies primarily on improvisational verbal auditory cueing with less emphasis on directed movement instruction. This method builds on the idea that daily living requires flexible, adaptive responses to real-life challenges. In PD, movement disorders not only limit mobility but also impair spontaneity of thought and action. Dance improvisation demands open and immediate interpretation of verbally delivered movement cues, potentially fostering the formation of spontaneous movement strategies. Here, we present an introduction to the proposed method, detailing its methodological specifics and pointing to future directions. The viewpoint advances an embodied cognitive approach that has ecological validity in helping PPD meet the changing demands of daily living.

  2. The effect of pre-cueing on spatial attention across perception and action.

    Science.gov (United States)

    Israel, Moran M; Jolicoeur, Pierre; Cohen, Asher

    2017-11-06

    It is well established that processes of perception and action interact. A key question concerns the role of attention in the interaction between perception-action processes. We tested the hypothesis that spatial attention is shared by perception and action. We created a dual-task paradigm: In one task, spatial information is relevant for perception (spatial-input task) but not for action, and in a second task, spatial information is relevant for action (spatial-output task) but not for perception. We used endogenous pre-cueing, with two between-subjects conditions: In one condition the cue was predictive only for the target location in the spatial-input task; in a second condition the cue was predictive only for the location of the response in the spatial-output task. In both conditions, the cueing equally affected both tasks, regardless of the information conveyed by the cue. This finding directly supports the shared input-output attention hypothesis.

  3. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds.

    Science.gov (United States)

    Dietz, Mathias; Marquardt, Torsten; Salminen, Nelli H; McAlpine, David

    2013-09-10

    The ability to locate the direction of a target sound in a background of competing sources is critical to the survival of many species and important for human communication. Nevertheless, the brain mechanisms that provide for such accurate localization abilities remain poorly understood. In particular, it remains unclear how the auditory brain is able to extract reliable spatial information directly from the source when competing sounds and reflections dominate all but the earliest moments of the sound wave reaching each ear. We developed a stimulus mimicking the mutual relationship of sound amplitude and binaural cues that is characteristic of reverberant speech. This stimulus, named the amplitude-modulated binaural beat, allows for a parametric and isolated change of modulation frequency and phase relations. Employing magnetoencephalography and psychoacoustics, it is demonstrated that the auditory brain uses binaural information in the stimulus fine structure only during the rising portion of each modulation cycle, rendering spatial information recoverable in an otherwise unlocalizable sound. The data suggest that amplitude modulation provides a means of "glimpsing" low-frequency spatial cues in a manner that benefits listening in noisy or reverberant environments.
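
    For illustration, a stimulus of the kind described, a binaural beat that sweeps the interaural phase difference through all values while amplitude modulation is superimposed, can be synthesized in a few lines. In this Python sketch the 500-Hz carrier, 4-Hz beat rate, and raised-cosine modulator are illustrative values, not the parameters used in the study.

    ```python
    import numpy as np

    fs = 44100
    dur, fc, beat, fm = 1.0, 500.0, 4.0, 4.0      # carrier, beat rate, AM rate (Hz)
    t = np.arange(int(fs * dur)) / fs

    # Binaural beat: slightly different carrier frequencies at the two ears make
    # the interaural phase difference cycle continuously through all values.
    left = np.sin(2 * np.pi * (fc - beat / 2) * t)
    right = np.sin(2 * np.pi * (fc + beat / 2) * t)

    # Raised-cosine amplitude modulation at the beat rate, so each interaural
    # phase value coincides with a fixed point of the modulation cycle.
    env = 0.5 * (1 - np.cos(2 * np.pi * fm * t))
    stimulus = np.column_stack([left * env, right * env])
    print(stimulus.shape)                          # (samples, 2) stereo signal
    ```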

  4. Hand proximity facilitates spatial discrimination of auditory tones

    Directory of Open Access Journals (Sweden)

    Philip Tseng

    2014-06-01

    The effect of hand proximity on vision and visual attention has been well documented. In this study we tested whether such effect(s) would also be present in the auditory modality. With hands placed either near or away from the audio sources, participants performed an auditory-spatial discrimination task (Exp 1: left or right side), a pitch discrimination task (Exp 2: high, med, or low tone), and a spatial-plus-pitch discrimination task (Exp 3: left or right; high, med, or low). In Exp 1, when hands were away from the audio source, participants consistently responded faster with their right hand regardless of stimulus location. This right hand advantage, however, disappeared in the hands-near condition because of a significant improvement in the left hand’s reaction time. No effect of hand proximity was found in Exp 2 or 3, where a choice reaction time task requiring pitch discrimination was used. Together, these results suggest that the effect of hand proximity is not exclusive to vision alone, but is also present in audition, though in a much weaker form. Most important, these findings provide evidence from auditory attention that supports the multimodal account originally raised by Reed et al. in 2006.

  5. Quadri-stability of a spatially ambiguous auditory illusion

    Directory of Open Access Journals (Sweden)

    Constance May Bainbridge

    2015-01-01

    In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound’s motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual

  6. Spatial organization of tettigoniid auditory receptors: insights from neuronal tracing.

    Science.gov (United States)

    Strauß, Johannes; Lehmann, Gerlind U C; Lehmann, Arne W; Lakes-Harlan, Reinhard

    2012-11-01

    The auditory sense organ of Tettigoniidae (Insecta, Orthoptera) is located in the foreleg tibia and consists of scolopidial sensilla which form a row termed the crista acustica. The crista acustica is associated with the tympana and the auditory trachea. This ear is a highly ordered, tonotopic sensory system. Although the neuroanatomy of the crista acustica has been documented for several species, the most distal somata and dendrites of receptor neurons have occasionally been described as forming an alternating or double row. We investigate the spatial arrangement of receptor cell bodies and dendrites by retrograde tracing with cobalt chloride solution. In the six tettigoniid species studied, distal receptor neurons are consistently arranged in double rows of somata rather than a linear sequence. This arrangement of neurons is shown to affect 30-50% of the overall auditory receptors. No strict correlation of somata positions between the antero-posterior and dorso-ventral axes was evident within the distal crista acustica. Dendrites of distal receptors occasionally also occur in a double row or are even massed without clear order. Thus, a substantial part of the auditory receptors can deviate from a strictly straight organization into a more complex morphology. The linear organization of dendrites is not a morphological criterion that allows hearing organs to be distinguished from nonhearing sense organs serially homologous to ears in all species. Both the crowded arrangement of receptor somata and dendrites may result from functional constraints relating to frequency discrimination, or from developmental constraints of auditory morphogenesis in postembryonic development. Copyright © 2012 Wiley Periodicals, Inc.

  7. Auditory and proprioceptive spatial impairments in blind children and adults.

    Science.gov (United States)

    Cappagli, Giulia; Cocchi, Elena; Gori, Monica

    2017-05-01

    It is not clear what role visual information plays in the development of space perception. It has previously been shown that, in the absence of vision, both the ability to judge orientation in the haptic modality and to bisect intervals in the auditory modality are severely compromised (Gori, Sandini, Martinoli & Burr, 2010; Gori, Sandini, Martinoli & Burr, 2014). Here we report, for the first time, a strong deficit in proprioceptive reproduction and auditory distance evaluation in early-blind children and adults. Interestingly, the deficit is not present in a small group of adults with acquired visual disability. Our results support the idea that, in the absence of vision, the auditory and proprioceptive spatial representations may be delayed or drastically weakened due to the lack of visual calibration over the auditory and haptic modalities during the critical period of development. © 2015 John Wiley & Sons Ltd.

  8. What happens in between? Human oscillatory brain activity related to crossmodal spatial cueing.

    Directory of Open Access Journals (Sweden)

    Maja U Trenner

    Previous studies investigated the effects of crossmodal spatial attention by comparing the responses to validly versus invalidly cued target stimuli. Dynamics of cortical rhythms in the time interval between cue and target might contribute to cue effects on performance. Here, we studied the influence of spatial attention on ongoing oscillatory brain activity in the interval between cue and target onset. In a first experiment, subjects underwent periods of tactile stimulation (cue) followed by visual stimulation (target) in a spatial cueing task, as well as tactile stimulation as a control. In a second experiment, cue validity was modified to be 50%, 75%, or 25%, to separate the effects of exogenous shifts of attention caused by tactile stimuli from those of endogenous shifts. Tactile stimuli produced: (1) a stronger lateralization of the sensorimotor beta-rhythm rebound (15-22 Hz) after tactile stimuli serving as cues versus not serving as cues; (2) a suppression of the occipital alpha-rhythm (7-13 Hz) appearing only in the cueing task (this suppression was stronger contralateral to the endogenously attended side and was predictive of behavioral success); (3) an increase of prefrontal gamma-activity (25-35 Hz) specifically in the cueing task. We measured cue-related modulations of cortical rhythms which may accompany crossmodal spatial attention, expectation or decision, and therefore contribute to cue validity effects. The clearly lateralized alpha suppression after tactile cues in our data indicates its dependence on endogenous rather than exogenous shifts of visuo-spatial attention following a cue, independent of its modality.

  9. Multiplicative auditory spatial receptive fields created by a hierarchy of population codes.

    Directory of Open Access Journals (Sweden)

    Brian J Fischer

    2009-11-01

    A multiplicative combination of tuning to interaural time difference (ITD) and interaural level difference (ILD) contributes to the generation of spatially selective auditory neurons in the owl's midbrain. Previous analyses of multiplicative responses in the owl have not taken into consideration the frequency-dependence of ITD and ILD cues that occur under natural listening conditions. Here, we present a model for the responses of ITD- and ILD-sensitive neurons in the barn owl's inferior colliculus which satisfies constraints raised by experimental data on frequency convergence, multiplicative interaction of ITD and ILD, and response properties of afferent neurons. We propose that multiplication between ITD- and ILD-dependent signals occurs only within frequency channels and that frequency integration occurs using a linear-threshold mechanism. The model reproduces the experimentally observed nonlinear responses to ITD and ILD in the inferior colliculus, with greater accuracy than previous models. We show that linear-threshold frequency integration allows the system to represent multiple sound sources with natural sound localization cues, whereas multiplicative frequency integration does not. Nonlinear responses in the owl's inferior colliculus can thus be generated using a combination of cellular and network mechanisms, showing that multiple elements of previous theories can be combined in a single system.
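
    The combination rule proposed above, multiplication of ITD and ILD tuning within each frequency channel followed by linear-threshold integration across channels, can be sketched directly. All tuning parameters in this Python sketch are made-up illustrative values, not fitted parameters from the paper.

    ```python
    import numpy as np

    def tuning(x, best, width):
        """Gaussian tuning curve, used here for both ITD and ILD selectivity."""
        return np.exp(-0.5 * ((x - best) / width) ** 2)

    def space_specific_response(itd_us, ild_db, freqs_khz, theta=0.5):
        """Multiply ITD and ILD tuning within each frequency channel, then
        integrate across channels through a linear threshold (rectification)."""
        per_channel = []
        for f in freqs_khz:
            # The best ILD grows with frequency here purely as an illustrative
            # stand-in for the frequency dependence of natural cues.
            r_itd = tuning(itd_us, best=50.0, width=50.0)
            r_ild = tuning(ild_db, best=1.0 * f, width=3.0)
            per_channel.append(r_itd * r_ild)      # within-channel multiplication
        drive = np.sum(per_channel)                # equal channel weights
        return max(0.0, drive - theta)             # linear-threshold integration

    print(space_specific_response(50.0, 5.0, freqs_khz=[3.0, 5.0, 7.0]))
    ```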

  10. A single-band envelope cue as a supplement to speechreading of segmentals: a comparison of auditory versus tactual presentation.

    Science.gov (United States)

    Bratakos, M S; Reed, C M; Delhorne, L A; Denesvich, G

    2001-06-01

    The objective of this study was to compare the effects of a single-band envelope cue as a supplement to speechreading of segmentals and sentences when presented through either the auditory or tactual modality. The supplementary signal, which consisted of a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz, was presented through a high-performance single-channel vibrator for tactual stimulation or through headphones for auditory stimulation. Normal-hearing subjects were trained and tested on the identification of a set of 16 medial vowels in /b/-V-/d/ context and a set of 24 initial consonants in C-/a/-C context under five conditions: speechreading alone (S), auditory supplement alone (A), tactual supplement alone (T), speechreading combined with the auditory supplement (S+A), and speechreading combined with the tactual supplement (S+T). Performance on various speech features was examined to determine the contribution of different features toward improvements under the aided conditions for each modality. Performance on the combined conditions (S+A and S+T) was compared with predictions generated from a quantitative model of multi-modal performance. To explore the relationship between benefits for segmentals and for connected speech within the same subjects, sentence reception was also examined for the three conditions of S, S+A, and S+T. For segmentals, performance generally followed the pattern of T < A < S < S+T < S+A. Significant improvements to speechreading were observed with both the tactual and auditory supplements for consonants (10 and 23 percentage-point improvements, respectively), but only with the auditory supplement for vowels (a 10 percentage-point improvement). The results of the feature analyses indicated that improvements to speechreading arose primarily from improved performance on the features low and tense for vowels and on the features voicing, nasality, and plosion for consonants. These
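
    The supplementary signal itself is easy to prototype. Here is a minimal Python sketch assuming SciPy; the paper specifies only the octave band centered at 500 Hz and the 200-Hz carrier, so the filter order and Hilbert-envelope extraction are assumptions.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt, hilbert

    def envelope_supplement(speech, fs=16000):
        """Envelope of an octave band of speech centered at 500 Hz, imposed
        on a 200-Hz carrier (the components named in the study above)."""
        lo, hi = 500 / np.sqrt(2), 500 * np.sqrt(2)            # ~354-707 Hz
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, speech)
        env = np.abs(hilbert(band))                 # Hilbert envelope
        t = np.arange(len(speech)) / fs
        return env * np.sin(2 * np.pi * 200.0 * t)  # amplitude-modulated carrier

    # Dummy usage with noise standing in for a speech waveform
    print(envelope_supplement(np.random.randn(16000)).shape)
    ```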

  11. Phase of Spontaneous Slow Oscillations during Sleep Influences Memory-Related Processing of Auditory Cues.

    Science.gov (United States)

    Batterink, Laura J; Creery, Jessica D; Paller, Ken A

    2016-01-27

    Slow oscillations during slow-wave sleep (SWS) may facilitate memory consolidation by regulating interactions between hippocampal and cortical networks. Slow oscillations appear as high-amplitude, synchronized EEG activity, corresponding to upstates of neuronal depolarization and downstates of hyperpolarization. Memory reactivations occur spontaneously during SWS, and can also be induced by presenting learning-related cues associated with a prior learning episode during sleep. This technique, targeted memory reactivation (TMR), selectively enhances memory consolidation. Given that memory reactivation is thought to occur preferentially during the slow-oscillation upstate, we hypothesized that TMR stimulation effects would depend on the phase of the slow oscillation. Participants learned arbitrary spatial locations for objects that were each paired with a characteristic sound (e.g., cat-meow). Then, during SWS periods of an afternoon nap, one-half of the sounds were presented at low intensity. When object location memory was subsequently tested, recall accuracy was significantly better for those objects cued during sleep. We report here for the first time that this memory benefit was predicted by slow-wave phase at the time of stimulation. For cued objects, location memories were categorized according to the amount of forgetting from pre- to post-nap. Conditions of high versus low forgetting corresponded to stimulation timing at different slow-oscillation phases, suggesting that learning-related stimuli were more likely to be processed and trigger memory reactivation when they occurred at the optimal phase of a slow oscillation. These findings provide insight into mechanisms of memory reactivation during sleep, supporting the idea that reactivation is most likely during cortical upstates. Slow-wave sleep (SWS) is characterized by synchronized neural activity alternating between active upstates and quiet downstates. The slow-oscillation upstates are thought to provide a
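
    Relating stimulation to slow-oscillation phase, as done above, requires an estimate of instantaneous phase at each cue onset. A common recipe is narrow-band filtering followed by the Hilbert transform; in this Python sketch the 0.5-1.25 Hz band is a conventional slow-oscillation definition, not necessarily the one used in the study.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def so_phase_at_onsets(eeg, fs, onsets_s):
        """Instantaneous slow-oscillation phase (radians) at cue onsets."""
        sos = butter(2, [0.5, 1.25], btype="bandpass", fs=fs, output="sos")
        so = sosfiltfilt(sos, eeg)                 # zero-phase band filtering
        phase = np.angle(hilbert(so))              # analytic-signal phase
        return phase[(np.asarray(onsets_s) * fs).astype(int)]

    # Dummy usage: 60 s of fake EEG at 100 Hz, cues presented at 10 s and 20 s
    fs = 100
    eeg = np.random.randn(60 * fs)
    print(so_phase_at_onsets(eeg, fs, onsets_s=[10.0, 20.0]))
    ```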

  12. Emotional cues enhance the attentional effects on spatial and temporal resolution

    NARCIS (Netherlands)

    B.R. Bocanegra (Bruno); R. Zeelenberg (René)

    2011-01-01

    In the present study, we demonstrated that the emotional significance of a spatial cue enhances the effect of covert attention on spatial and temporal resolution (i.e., our ability to discriminate small spatial details and fast temporal flicker). Our results indicated that fearful face

  13. Switching of auditory attention in "cocktail-party" listening: ERP evidence of cueing effects in younger and older adults.

    Science.gov (United States)

    Getzmann, Stephan; Jasny, Julian; Falkenstein, Michael

    2017-02-01

    Verbal communication in a "cocktail-party situation" is a major challenge for the auditory system. In particular, changes in target speaker usually result in declined speech perception. Here, we investigated whether speech cues indicating a subsequent change in target speaker reduce the costs of switching in younger and older adults. We employed event-related potential (ERP) measures and a speech perception task, in which sequences of short words were simultaneously presented by four speakers. Changes in target speaker were either unpredictable or semantically cued by a word within the target stream. Cued changes resulted in a smaller performance decline than uncued changes in both age groups. The ERP analysis revealed shorter latencies in the change-related N400 and late positive complex (LPC) after cued changes, suggesting an acceleration in context updating and attention switching. Thus, both younger and older listeners used semantic cues to prepare for changes in the speaker setting. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Processing of spatial sounds in the impaired auditory system

    DEFF Research Database (Denmark)

    Arweiler, Iris

    Understanding speech in complex acoustic environments presents a challenge for most hearing-impaired listeners. In conditions where normal-hearing listeners effortlessly utilize spatial cues to improve speech intelligibility, hearing-impaired listeners often struggle. In this thesis, the influence...... with an intelligibility-weighted “efficiency factor” which revealed that the spectral characteristics of the ERs caused the reduced benefit. Hearing-impaired listeners were able to utilize the ER energy as effectively as normal-hearing listeners, most likely because binaural processing was not required...... that are binaurally linked can utilize the signals at both ears and preserve the ILDs through co-ordinated compression. Hearing-impaired listeners received a small, but not significant, advantage from linked compared to independent compression. It was concluded that, for speech intelligibility, the exact ILD

  15. Selective importance of the rat anterior thalamic nuclei for configural learning involving distal spatial cues.

    Science.gov (United States)

    Dumont, Julie R; Amin, Eman; Aggleton, John P

    2014-01-01

    To test potential parallels between hippocampal and anterior thalamic function, rats with anterior thalamic lesions were trained on a series of biconditional learning tasks. The anterior thalamic lesions did not disrupt learning two biconditional associations in operant chambers where a specific auditory stimulus (tone or click) had a differential outcome depending on whether it was paired with a particular visual context (spot or checkered wallpaper) or a particular thermal context (warm or cool). Likewise, rats with anterior thalamic lesions successfully learnt a biconditional task when they were reinforced for digging in one of two distinct cups (containing either beads or shredded paper), depending on the particular appearance of the local context on which the cup was placed (one of two textured floors). In contrast, the same rats were severely impaired at learning the biconditional rule to select a specific cup when in a particular location within the test room. Place learning was then tested with a series of go/no-go discriminations. Rats with anterior thalamic nuclei lesions could learn to discriminate between two locations when they were approached from a constant direction. They could not, however, use this acquired location information to solve a subsequent spatial biconditional task where those same places dictated the correct choice of digging cup. Anterior thalamic lesions produced a selective, but severe, biconditional learning deficit when the task incorporated distal spatial cues. This deficit mirrors that seen in rats with hippocampal lesions, so extending potential interdependencies between the two sites. © 2013 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  16. Visuo-spatial cueing in children with differential reading and spelling profiles.

    Science.gov (United States)

    Banfi, Chiara; Kemény, Ferenc; Gangl, Melanie; Schulte-Körne, Gerd; Moll, Kristina; Landerl, Karin

    2017-01-01

    Dyslexia has been claimed to be causally related to deficits in visuo-spatial attention. In particular, inefficient shifting of visual attention during spatial cueing paradigms is assumed to be associated with problems in graphemic parsing during sublexical reading. The current study investigated visuo-spatial attention performance in an exogenous cueing paradigm in a large sample (N = 191) of third and fourth graders with different reading and spelling profiles (controls, isolated reading deficit, isolated spelling deficit, combined deficit in reading and spelling). Once individual variability in reaction times was taken into account by means of z-transformation, a cueing deficit (i.e. no significant difference between valid and invalid trials) was found for children with combined deficits in reading and spelling. However, poor readers without spelling problems showed a cueing effect comparable to controls, but exhibited a particularly strong right-over-left advantage (position effect). Isolated poor spellers showed a significant cueing effect, but no position effect. While we replicated earlier findings of a reduced cueing effect among poor nonword readers (indicating deficits in sublexical processing), we also found a reduced cueing effect among children with particularly poor orthographic spelling (indicating deficits in lexical processing). Thus, earlier claims of a specific association with nonword reading could not be confirmed. Controlling for ADHD-symptoms reported in a parental questionnaire did not impact on the statistical analysis, indicating that cueing deficits are not caused by more general attentional limitations. Between 31 and 48% of participants in the three reading and/or spelling deficit groups as well as 32% of the control group showed reduced spatial cueing. These findings indicate a significant, but moderate association between certain aspects of visuo-spatial attention and subcomponents of written language processing, the causal status of

  17. Visuo-spatial cueing in children with differential reading and spelling profiles.

    Directory of Open Access Journals (Sweden)

    Chiara Banfi

    Dyslexia has been claimed to be causally related to deficits in visuo-spatial attention. In particular, inefficient shifting of visual attention during spatial cueing paradigms is assumed to be associated with problems in graphemic parsing during sublexical reading. The current study investigated visuo-spatial attention performance in an exogenous cueing paradigm in a large sample (N = 191) of third and fourth graders with different reading and spelling profiles (controls, isolated reading deficit, isolated spelling deficit, combined deficit in reading and spelling). Once individual variability in reaction times was taken into account by means of z-transformation, a cueing deficit (i.e. no significant difference between valid and invalid trials) was found for children with combined deficits in reading and spelling. However, poor readers without spelling problems showed a cueing effect comparable to controls, but exhibited a particularly strong right-over-left advantage (position effect). Isolated poor spellers showed a significant cueing effect, but no position effect. While we replicated earlier findings of a reduced cueing effect among poor nonword readers (indicating deficits in sublexical processing), we also found a reduced cueing effect among children with particularly poor orthographic spelling (indicating deficits in lexical processing). Thus, earlier claims of a specific association with nonword reading could not be confirmed. Controlling for ADHD-symptoms reported in a parental questionnaire did not impact on the statistical analysis, indicating that cueing deficits are not caused by more general attentional limitations. Between 31 and 48% of participants in the three reading and/or spelling deficit groups as well as 32% of the control group showed reduced spatial cueing. These findings indicate a significant, but moderate association between certain aspects of visuo-spatial attention and subcomponents of written language processing, the

  18. Reduction of the spatial stroop effect by peripheral cueing as a function of the presence/absence of placeholders.

    Directory of Open Access Journals (Sweden)

    Chunming Luo

    In a paradigm combining spatial Stroop with spatial cueing, the current study investigated the role of the presence vs. absence of placeholders on the reduction of the spatial Stroop effect by peripheral cueing. At a short cue-target interval, the modulation of peripheral cueing over the spatial Stroop effect was observed independently of the presence/absence of placeholders. At the long cue-target interval, however, this modulation over the spatial Stroop effect only occurred in the placeholders-present condition. These findings show that placeholders are modulators but not mediators of the reduction of the spatial Stroop effect by peripheral cueing, further favoring the cue-target integration account.

  19. Developmental plasticity of spatial hearing following asymmetric hearing loss: context-dependent cue integration and its clinical implications

    Directory of Open Access Journals (Sweden)

    Peter Keating

    2013-12-01

    Under normal hearing conditions, comparisons of the sounds reaching each ear are critical for accurate sound localization. Asymmetric hearing loss should therefore degrade spatial hearing and has become an important experimental tool for probing the plasticity of the auditory system, both during development and adulthood. In clinical populations, hearing loss affecting one ear more than the other is commonly associated with otitis media with effusion, a disorder experienced by approximately 80% of children before the age of two. Asymmetric hearing may also arise in other clinical situations, such as after unilateral cochlear implantation. Here, we consider the role played by spatial cue integration in sound localization under normal acoustical conditions. We then review evidence for adaptive changes in spatial hearing following a developmental hearing loss in one ear, and argue that adaptation may be achieved either by learning a new relationship between the altered cues and directions in space or by changing the way different cues are integrated in the brain. We next consider developmental plasticity as a source of vulnerability, describing maladaptive effects of asymmetric hearing loss that persist even when normal hearing is provided. We also examine the extent to which the consequences of asymmetric hearing loss depend upon its timing and duration. Although much of the experimental literature has focused on the effects of a stable unilateral hearing loss, some of the most common hearing impairments experienced by children tend to fluctuate over time. We therefore argue that there is a need to bridge this gap by investigating the effects of recurring hearing loss during development, and outline recent steps in this direction. We conclude by arguing that this work points toward a more nuanced view of developmental plasticity, in which plasticity may be selectively expressed in response to specific sensory contexts, and consider the clinical

  20. Spatial selective auditory attention in the presence of reverberant energy: individual differences in normal-hearing listeners.

    Science.gov (United States)

    Ruggles, Dorea; Shinn-Cunningham, Barbara

    2011-06-01

    Listeners can selectively attend to a desired target by directing attention to known target source features, such as location or pitch. Reverberation, however, reduces the reliability of the cues that allow a target source to be segregated and selected from a sound mixture. Given this, it is likely that reverberant energy interferes with selective auditory attention. Anecdotal reports suggest that the ability to focus spatial auditory attention degrades even with early aging, yet there is little evidence that middle-aged listeners have behavioral deficits on tasks requiring selective auditory attention. The current study was designed to look for individual differences in selective attention ability and to see if any such differences correlate with age. Normal-hearing adults, ranging in age from 18 to 55 years, were asked to report a stream of digits located directly ahead in a simulated rectangular room. Simultaneous, competing masker digit streams were simulated at locations 15° left and right of center. The level of reverberation was varied to alter task difficulty by interfering with localization cues (increasing localization blur). Overall, performance was best in the anechoic condition and worst in the high-reverberation condition. Listeners nearly always reported a digit from one of the three competing streams, showing that reverberation did not render the digits unintelligible. Importantly, inter-subject differences were extremely large. These differences, however, were not significantly correlated with age, memory span, or hearing status. These results show that listeners with audiometrically normal pure tone thresholds differ in their ability to selectively attend to a desired source, a task important in everyday communication. Further work is necessary to determine if these differences arise from differences in peripheral auditory function or in more central function.

  1. Learning to Match Auditory and Visual Speech Cues: Social Influences on Acquisition of Phonological Categories

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…

  2. Behavioural sensitivity to binaural spatial cues in ferrets: evidence for plasticity in the duplex theory of sound localization.

    Science.gov (United States)

    Keating, Peter; Nodal, Fernando R; King, Andrew J

    2014-01-01

    For over a century, the duplex theory has guided our understanding of human sound localization in the horizontal plane. According to this theory, the auditory system uses interaural time differences (ITDs) and interaural level differences (ILDs) to localize low-frequency and high-frequency sounds, respectively. Whilst this theory successfully accounts for the localization of tones by humans, some species show very different behaviour. Ferrets are widely used for studying both clinical and fundamental aspects of spatial hearing, but it is not known whether the duplex theory applies to this species or, if so, to what extent the frequency range over which each binaural cue is used depends on acoustical or neurophysiological factors. To address these issues, we trained ferrets to lateralize tones presented over earphones and found that the frequency dependence of ITD and ILD sensitivity broadly paralleled that observed in humans. Compared with humans, however, the transition between ITD and ILD sensitivity was shifted toward higher frequencies. We found that the frequency dependence of ITD sensitivity in ferrets can partially be accounted for by acoustical factors, although neurophysiological mechanisms are also likely to be involved. Moreover, we show that binaural cue sensitivity can be shaped by experience, as training ferrets on a 1-kHz ILD task resulted in significant improvements in thresholds that were specific to the trained cue and frequency. Our results provide new insights into the factors limiting the use of different sound localization cues and highlight the importance of sensory experience in shaping the underlying neural mechanisms. © 2013 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  3. Role of gravitational versus egocentric cues for human spatial orientation.

    Science.gov (United States)

    Bury, Nils; Bock, Otmar

    2016-04-01

    Our perception of the vertical depends on allocentric information about the visual surrounds, egocentric information about the own body axis and gravicentric information about the pull of gravity. Previous work has documented that some individuals rely strongly on allocentric information, while others do not, and the present work scrutinizes the existence of yet another dichotomy: We hypothesize that in the absence of allocentric cues, some individuals rely strongly on gravicentric information, while others do not. Twenty-four participants were tested at three angles of body pitch (0° = upright, -90° = supine, -110° = head down) after eliminating visual orientation cues. When asked to adjust a rotating tree '…such that the tree looks right,' nine persons set the tree consistently parallel to gravity, eight consistently parallel to their longitudinal axis and seven switched between these two references; responses mid-between gravity and body axis were rare. The outcome was similar when tactile cues were masked by body vibration, as well as when participants were asked to adjust the tree '… such that leaves are at the top and roots are at the bottom'; the incidence of gravicentric responses increased with the instruction to set the tree '… such that leaves are at the top and roots are at the bottom in space, irrespective of your own position.' We conclude that the perceived vertical can be anchored in gravicentric or in egocentric space, depending on instructions and individual preference.

  4. Impact of regularization of near field coding filters for 2D and 3D higher-order Ambisonics on auditory distance cues

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2010-01-01

    A known challenge in sound field reproduction techniques such as high-order Ambisonics (HOA) is the reproduction of nearby sound sources. In order to reproduce such nearby sound sources, the near-field compensated (NFC) method with angular weighting windows (AWWs) has been previously proposed...... for HOA [1]. Considering auditory distance perception, (low-frequency) interaural level differences represent the main auditory cue for nearby real sound sources outside the median plane. Simulations showed that these ILD cues can be reproduced with existing weighted NFC-HOA methods for frequencies above...

  5. Auditory distance coding in rabbit midbrain neurons and human perception: monaural amplitude modulation depth as a cue.

    Science.gov (United States)

    Kim, Duck O; Zahorik, Pavel; Carney, Laurel H; Bishop, Brian B; Kuwada, Shigeyuki

    2015-04-01

    Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1-octave 4-kHz noise source sounds presented at distances of 35-200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses AM as a distance cue remains to be determined. Copyright © 2015 the authors.
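
    The hypothesized monaural cue, loss of envelope modulation depth with distance in reverberation, can be demonstrated with a toy simulation. In this Python sketch the exponential impulse-response tail and the direct-path gains are illustrative assumptions; only the qualitative trend (weaker direct path, shallower modulation) mirrors the mechanism described above.

    ```python
    import numpy as np

    fs = 16000
    t = np.arange(fs) / fs
    fm = 32.0                                      # AM rate (illustrative)
    rng = np.random.default_rng(2)
    src = (1 + np.cos(2 * np.pi * fm * t)) * rng.normal(size=fs)  # 100% AM noise

    def modulation_depth(x, fs, fm):
        """Relative size of the fm-Hz component of the smoothed envelope."""
        env = np.convolve(np.abs(x), np.ones(64) / 64, mode="same")
        spec = np.fft.rfft(env - env.mean())
        k = int(round(fm * len(env) / fs))         # FFT bin at the AM rate
        return 2 * np.abs(spec[k]) / len(env) / env.mean()

    # Reverberation as an exponentially decaying noise tail; a more distant
    # source has a weaker direct path relative to the same reverberant tail.
    tail = rng.normal(size=fs // 2) * np.exp(-np.arange(fs // 2) / (0.05 * fs))
    wet = np.convolve(src, tail)[:fs]
    for direct_gain in (4.0, 1.0, 0.25):           # near -> far (illustrative)
        mix = direct_gain * src + wet
        print(direct_gain, round(modulation_depth(mix, fs, fm), 3))
    ```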

  6. The encoding of vowels and temporal speech cues in the auditory cortex of professional musicians: an EEG study.

    Science.gov (United States)

    Kühnis, Jürg; Elmer, Stefan; Meyer, Martin; Jäncke, Lutz

    2013-07-01

    Here, we applied a multi-feature mismatch negativity (MMN) paradigm in order to systematically investigate the neuronal representation of vowels and temporally manipulated CV syllables in a homogeneous sample of string players and non-musicians. Based on previous work indicating an increased sensitivity of the musicians' auditory system, we expected to find that musically trained subjects will elicit increased MMN amplitudes in response to temporal variations in CV syllables, namely voice-onset time (VOT) and duration. In addition, since different vowels are principally distinguished by means of frequency information and musicians are superior in extracting tonal (and thus frequency) information from an acoustic stream, we also expected to provide evidence for an increased auditory representation of vowels in the experts. In line with our hypothesis, we could show that musicians are not only advantaged in the pre-attentive encoding of temporal speech cues, but most notably also in processing vowels. Additional "just noticeable difference" measurements suggested that the musicians' perceptual advantage in encoding speech sounds was more likely driven by the generic constitutional properties of a highly trained auditory system, rather than by its specialisation for speech representations per se. These results shed light on the origin of the often reported advantage of musicians in processing a variety of speech sounds. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Altered top-down cognitive control and auditory processing in tinnitus: evidences from auditory and visual spatial stroop.

    Science.gov (United States)

    Araneda, Rodrigo; De Volder, Anne G; Deggouj, Naïma; Philippot, Pierre; Heeren, Alexandre; Lacroix, Emilie; Decat, Monique; Rombaux, Philippe; Renier, Laurent

    2015-01-01

    Tinnitus is the perception of a sound in the absence of external stimulus. Currently, the pathophysiology of tinnitus is not fully understood, but recent studies indicate that alterations in the brain involve non-auditory areas, including the prefrontal cortex. Here, we hypothesize that these brain alterations affect top-down cognitive control mechanisms that play a role in the regulation of sensations, emotions and attention resources. The efficiency of the executive control as well as simple reaction speed and processing speed were evaluated in tinnitus participants (TP) and matched control subjects (CS) in both the auditory and the visual modalities using a spatial Stroop paradigm. TP were slower and less accurate than CS during both the auditory and the visual spatial Stroop tasks, while simple reaction speed and stimulus processing speed were affected in TP in the auditory modality only. Tinnitus is associated both with modality-specific deficits along the auditory processing system and an impairment of cognitive control mechanisms that are involved both in vision and audition (i.e. that are supra-modal). We postulate that this deficit in the top-down cognitive control is a key-factor in the development and maintenance of tinnitus and may also explain some of the cognitive difficulties reported by tinnitus sufferers.

  8. Discrimination of virtual auditory distance using level and direct-to-reverberant ratio cues.

    Science.gov (United States)

    Kolarik, Andrew; Cirstea, Silvia; Pardhan, Shahina

    2013-11-01

    The study investigated how listeners used level and direct-to-reverberant ratio (D/R) cues to discriminate distances to virtual sound sources. Sentence pairs were presented at virtual distances in simulated rooms that were either reverberant or anechoic. Performance on the basis of level was generally better than performance based on D/R. Increasing room reverberation time improved performance based on the D/R cue such that the two cues provided equally effective information at further virtual source distances in highly reverberant environments. Orientation of the listener within the virtual room did not affect performance.
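
    The D/R cue is conventionally defined from a room impulse response as the ratio of direct to reverberant energy. Below is a minimal Python sketch, assuming the common convention of counting energy within a few milliseconds of the direct-sound peak as "direct"; the window length is not taken from the study.

    ```python
    import numpy as np

    def direct_to_reverberant_db(ir, fs, direct_ms=2.5):
        """D/R in dB from a room impulse response: energy within direct_ms
        of the direct-sound peak counts as direct, the rest as reverberant."""
        split = np.argmax(np.abs(ir)) + int(direct_ms * 1e-3 * fs)
        e_direct = np.sum(ir[:split] ** 2)
        e_reverb = np.sum(ir[split:] ** 2)
        return 10 * np.log10(e_direct / e_reverb)

    # Toy impulse response: a direct spike plus an exponentially decaying tail
    fs = 16000
    ir = np.zeros(fs)
    ir[0] = 1.0
    n_tail = fs - 100
    ir[100:] = 0.05 * np.random.randn(n_tail) * np.exp(-np.arange(n_tail) / (0.1 * fs))
    print(direct_to_reverberant_db(ir, fs))        # D/R of this toy room in dB
    ```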

  9. The Effect of Tactile Cues on Auditory Stream Segregation Ability of Musicians and Nonmusicians

    DEFF Research Database (Denmark)

    Slater, Kyle D.; Marozeau, Jeremy

    2016-01-01

    , we test whether tactile cues can be used to segregate 2 interleaved melodies. Twelve musicians and 12 nonmusicians were asked to detect changes in a 4-note repeated melody interleaved with a random melody. In order to perform this task, the listener must be able to segregate the target melody from...... the random melody. Tactile cues were applied to the listener’s fingers on half of the blocks. Results showed that tactile cues can significantly improve the melodic segregation ability in both musician and nonmusician groups in challenging listening conditions. Overall, the musician group performance...

  10. Task-dependent activations of human auditory cortex during spatial discrimination and spatial memory tasks.

    Science.gov (United States)

    Rinne, Teemu; Koistinen, Sonja; Talja, Suvi; Wikman, Patrik; Salonen, Oili

    2012-02-15

    In the present study, we applied high-resolution functional magnetic resonance imaging (fMRI) of the human auditory cortex (AC) and adjacent areas to compare activations during spatial discrimination and spatial n-back memory tasks that were varied parametrically in difficulty. We found that activations in the anterior superior temporal gyrus (STG) were stronger during spatial discrimination than during spatial memory, while spatial memory was associated with stronger activations in the inferior parietal lobule (IPL). We also found that wide AC areas were strongly deactivated during the spatial memory tasks. The present AC activation patterns associated with spatial discrimination and spatial memory tasks were highly similar to those obtained in our previous study comparing AC activations during pitch discrimination and pitch memory (Rinne et al., 2009). Together our previous and present results indicate that discrimination and memory tasks activate anterior and posterior AC areas differently and that this anterior-posterior division is present both when these tasks are performed on spatially invariant (pitch discrimination vs. memory) or spatially varying (spatial discrimination vs. memory) sounds. These results also further strengthen the view that activations of human AC cannot be explained only by stimulus-level parameters (e.g., spatial vs. nonspatial stimuli) but that the activations observed with fMRI are strongly dependent on the characteristics of the behavioral task. Thus, our results suggest that in order to understand the functional structure of AC a more systematic investigation of task-related factors affecting AC activations is needed. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Auditory externalization in hearing-impaired listeners: The effect of pinna cues and number of talkers

    OpenAIRE

    Boyd, Alan W.; Whitmer, William M.; Soraghan, John J.; Akeroyd, Michael A.

    2012-01-01

    Hearing-aid wearers have reported sound source locations as being perceptually internalized (i.e., inside their head). The contribution of hearing-aid design to internalization has, however, received little attention. This experiment compared the sensitivity of hearing-impaired (HI) and normal-hearing (NH) listeners to externalization cues when listening with their own ears and simulated BTE hearing-aids in increasingly complex listening situations and reduced pinna cues. Participants rated t...

  12. Auditory spatial discrimination by barn owls in simulated echoic conditions

    Science.gov (United States)

    Spitzer, Matthew W.; Bala, Avinash D. S.; Takahashi, Terry T.

    2003-03-01

    In humans, directional hearing in reverberant conditions is characterized by a "precedence effect," whereby directional information conveyed by leading sounds dominates perceived location, and listeners are relatively insensitive to directional information conveyed by lagging sounds. Behavioral studies provide evidence of precedence phenomena in a wide range of species. The present study employs a discrimination paradigm, based on habituation and recovery of the pupillary dilation response, to provide quantitative measures of precedence phenomena in the barn owl. As in humans, the owl's ability to discriminate changes in the location of lagging sources is impaired relative to that for single sources. Spatial discrimination of lead sources is also impaired, but to a lesser extent than discrimination of lagging sources. Results of a control experiment indicate that sensitivity to monaural cues cannot account for discrimination of lag source location. Thus, impairment of discrimination ability in the two-source conditions most likely reflects a reduction in sensitivity to binaural directional information. These results demonstrate a similarity of precedence effect phenomena in barn owls and humans, and provide a basis for quantitative comparison with neuronal data from the same species.

  13. Multimodal information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays

    Science.gov (United States)

    Begault, Durand R.; Bittner, Rachel M.; Anderson, Mark R.

    2012-01-01

    Auditory communication displays within the NextGen data link system may use multiple synthetic speech messages replacing traditional ATC and company communications. The design of an interface for selecting amongst multiple incoming messages can impact both performance (time to select, audit and release a message) and preference. Two design factors were evaluated: physical pressure-sensitive switches versus flat panel "virtual switches", and the presence or absence of auditory feedback from switch contact. Performance with stimuli using physical switches was 1.2 s faster than virtual switches (2.0 s vs. 3.2 s); auditory feedback provided a 0.54 s performance advantage (2.33 s vs. 2.87 s). There was no interaction between these variables. Preference data were highly correlated with performance.

  14. The Effect of Attentional Cueing and Spatial Uncertainty in Visual Field Testing.

    Directory of Open Access Journals (Sweden)

    Jack Phu

    To determine the effect of reducing spatial uncertainty by attentional cueing on contrast sensitivity at a range of spatial locations and with different stimulus sizes, six observers underwent perimetric testing with the Humphrey Visual Field Analyzer (HFA) full threshold paradigm, and the output thresholds were compared to conditions where the stimulus location was verbally cued to the observer. We varied the number of points cued, the eccentric and spatial location, and the stimulus size (Goldmann sizes I, III, and V). Subsequently, four observers underwent laboratory-based psychophysical testing on a custom computer program using the Method of Constant Stimuli to determine frequency-of-seeing (FOS) curves with similar variables. We found that attentional cueing increased contrast sensitivity when measured using the HFA. We report a difference of approximately 2 dB with size I at peripheral and mid-peripheral testing locations. For size III, cueing had a greater effect for points presented in the periphery than in the mid-periphery. There was an exponential decay of the effect of cueing with increasing number of elements cued. Cueing a size V stimulus led to no change. FOS curves generated from laboratory-based psychophysical testing confirmed an increase in contrast detection sensitivity under the same conditions. We found that the FOS curve steepened when spatial uncertainty was reduced. We show that attentional cueing increases contrast sensitivity when using a size I or size III test stimulus on the HFA when up to 8 points are cued, but not when a size V stimulus is cued. We show that this cueing also alters the slope of the FOS curve. This suggests that at least 8 points should be used to minimise potential attentional factors that may affect the measurement of contrast sensitivity in the visual field.
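
    For readers unfamiliar with frequency-of-seeing (FOS) analysis, the sketch below shows how such a curve can be fitted to Method of Constant Stimuli data in Python. The logistic form, the absence of a lapse-rate parameter, and the example numbers are illustrative assumptions, not the analysis used in the study above.

```python
# Minimal sketch: fitting a frequency-of-seeing (FOS) curve to
# Method of Constant Stimuli data. Logistic form and data are assumed.
import numpy as np
from scipy.optimize import curve_fit

def fos_logistic(contrast_db, threshold, slope):
    """Probability of seeing as a logistic function of contrast (dB)."""
    return 1.0 / (1.0 + np.exp(-slope * (contrast_db - threshold)))

# Hypothetical proportions of "seen" responses at six contrast levels.
contrasts = np.array([20.0, 22.0, 24.0, 26.0, 28.0, 30.0])
p_seen = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 1.00])

(threshold, slope), _ = curve_fit(fos_logistic, contrasts, p_seen, p0=[25.0, 1.0])
print(f"threshold ~ {threshold:.1f} dB, slope ~ {slope:.2f}")
# Reduced spatial uncertainty (cueing) would appear as a lower threshold
# (higher sensitivity) and a larger slope, i.e., a steeper FOS curve.
```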

  15. Effects of natural and artificial spatialization cues on segregation

    Science.gov (United States)

    de Cheveigné, Alain; Gretzky, Reinhard; Baskind, Alexis; Warusfel, Olivier

    2002-05-01

    A series of experiments was performed to better understand the factors that determine the clarity or "transparency" of sound scenes, particularly those created by artificial means. Using a paradigm proposed by C. J. Darwin and R. W. Hukin [J. Exp. Psychol. 25, 617-629], subjects were presented with a target sentence containing one word (chosen among two) that they had to report. Simultaneously they were presented with a distractor sentence containing a second word, temporally aligned with the first. In the absence of segregation cues, subjects scored 50% correct on average. A higher score indicated that simultaneous segregation and sequential grouping mechanisms were both effective. Stimuli were presented by headphones using individual or dummy-head head-related transfer functions (HRTFs), and various combinations of source positions, room effects, and restitution techniques. [Work supported by the Cognitique Programme of the French Ministry of Research and Technology.]

  16. Recognizing Visual and Auditory Cues in the Detection of Foreign-Language Anxiety

    Science.gov (United States)

    Gregersen, Tammy

    2009-01-01

    This study examines whether nonverbal visual and/or auditory channels are more effective in detecting foreign-language anxiety. Recent research suggests that language teachers are often able to successfully decode the nonverbal behaviors indicative of foreign-language anxiety; however, relatively little is known about whether visual and/or…

  17. Lateralized ERP components related to spatial orienting: discriminating the direction of attention from processing sensory aspects of the cue.

    Science.gov (United States)

    Jongen, Ellen M M; Smulders, Fren T Y; Van der Heiden, Joep S H

    2007-11-01

    Two spatial cueing experiments were conducted to examine the functional significance of lateralized ERP components after cue-onset and to discriminate components related to sensory cue aspects and components related to the direction of attention. In Experiment 1, a simple detection task was presented. In Experiment 2, attentional selection was augmented. Two unimodal visual cueing tasks were presented using nonlateralized line cues and lateralized arrow cues. Lateralized cue effects and modulation after stimulus onset were stronger in Experiment 2. An early posterior component was related to the physical shape of arrows. A posterior negativity (EDAN) may be related to the encoding of direction from arrow cues. An anterior negativity (ADAN) and a posterior positivity (LDAP) were related to the direction of attention. The ADAN was delayed when it was more difficult to derive cue meaning. Finally, the data suggested an overlap of the LDAP and the EDAN.

  18. Flexible spatial perspective-taking: conversational partners weigh multiple cues in collaborative tasks

    Science.gov (United States)

    Galati, Alexia; Avraamides, Marios N.

    2013-01-01

    Research on spatial perspective-taking often focuses on the cognitive processes of isolated individuals as they adopt or maintain imagined perspectives. Collaborative studies of spatial perspective-taking typically examine speakers' linguistic choices, while overlooking their underlying processes and representations. We review evidence from two collaborative experiments that examine the contribution of social and representational cues to spatial perspective choices in both language and the organization of spatial memory. Across experiments, speakers organized their memory representations according to the convergence of various cues. When layouts were randomly configured and did not afford intrinsic cues, speakers encoded their partner's viewpoint in memory, if available, but did not use it as an organizing direction. On the other hand, when the layout afforded an intrinsic structure, speakers organized their spatial memories according to the person-centered perspective reinforced by the layout's structure. Similarly, in descriptions, speakers considered multiple cues, whether available a priori or arising during the interaction. They used partner-centered expressions more frequently (e.g., “to your right”) when the partner's viewpoint was misaligned by a small offset or coincided with the layout's structure. Conversely, they used egocentric expressions more frequently when their own viewpoint coincided with the intrinsic structure or when the partner was misaligned by a computationally difficult, oblique offset. Based on these findings we advocate for a framework for flexible perspective-taking: people weigh multiple cues (including social ones) to make attributions about the relative difficulty of perspective-taking for each partner, and adapt behavior to minimize their collective effort. This framework is not specialized for spatial reasoning but instead emerges from the same principles and memory-dependent processes that govern perspective-taking in non-spatial tasks.

  19. Flexible spatial perspective-taking: Conversational partners weigh multiple cues in collaborative tasks

    Directory of Open Access Journals (Sweden)

    Alexia Galati

    2013-09-01

    Research on spatial perspective-taking often focuses on the cognitive processes of isolated individuals as they adopt or maintain imagined perspectives. Collaborative studies of spatial perspective-taking typically examine speakers’ linguistic choices, while overlooking their underlying processes and representations. We review evidence from two collaborative experiments that examine the contribution of social and representational cues to spatial perspective choices in both language and the organization of spatial memory. Across experiments, speakers organized their memory representations according to the convergence of various cues. When layouts were randomly configured and did not afford intrinsic cues, speakers encoded their partner’s viewpoint in memory, if available, but did not use it as an organizing direction. On the other hand, when the layout afforded an intrinsic structure, speakers organized their spatial memories according to the person-centered perspective reinforced by the layout’s structure. Similarly, in descriptions, speakers considered multiple cues, whether available a priori or arising during the interaction. They used partner-centered expressions more frequently (e.g., “to your right”) when the partner’s viewpoint was misaligned by a small offset or coincided with the layout’s structure. Conversely, they used egocentric expressions more frequently when their own viewpoint coincided with the intrinsic structure or when the partner was misaligned by a computationally difficult, oblique offset. Based on these findings we advocate for a framework for flexible perspective-taking: people weigh multiple cues (including social ones) to make attributions about the relative difficulty of perspective-taking for each partner, and adapt behavior to minimize their collective effort. This framework is not specialized for spatial reasoning but instead emerges from the same principles and memory-dependent processes that govern perspective-taking in non-spatial tasks.

  20. MEG evidence that the central auditory system simultaneously encodes multiple temporal cues

    NARCIS (Netherlands)

    Simpson, M.I.G.; Barnes, G.R.; Johnson, S.R.; Hillebrand, A.; Singh, K.D.; Green, G.G.R.

    2009-01-01

    Speech contains complex amplitude modulations that have envelopes with multiple temporal cues. The processing of these complex envelopes is not well explained by the classical models of amplitude modulation processing. This may be because the evidence for the models typically comes from the use of…

  1. Two Persons with Multiple Disabilities Use Orientation Technology with Auditory Cues to Manage Simple Indoor Traveling

    Science.gov (United States)

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Campodonico, Francesca; Oliva, Doretta

    2010-01-01

    This study was an effort to extend the evaluation of orientation technology for promoting independent indoor traveling in persons with multiple disabilities. Two participants (adults) were included, who were to travel to activity destinations within occupational settings. The orientation system involved (a) cueing sources only at the destinations…

  2. Attention Cueing and Activity Equally Reduce False Alarm Rate in Visual-Auditory Associative Learning through Improving Memory.

    Directory of Open Access Journals (Sweden)

    Mohammad-Ali Nikouei Mahani

    In our daily life, we continually exploit already-learned multisensory associations and form new ones when facing novel situations. Improving our associative learning results in higher cognitive capabilities. We experimentally and computationally studied the learning performance of healthy subjects in a visual-auditory sensory associative learning task across active learning, attention-cueing learning, and passive learning modes. According to our results, the learning mode had no significant effect on learning the association of congruent pairs. In addition, subjects' performance in learning congruent samples was not correlated with their vigilance score. Nevertheless, vigilance score was significantly correlated with the learning performance for the non-congruent pairs. Moreover, in the last block of the passive learning mode, subjects made significantly more mistakes in taking non-congruent pairs as associated and consciously reported lower confidence. These results indicate that attention and activity equally enhanced visual-auditory associative learning for non-congruent pairs, while the false alarm rate in the passive learning mode did not decrease after the second block. We investigated the cause of the higher false alarm rate in the passive learning mode by using a computational model composed of a reinforcement learning module and a memory-decay module. The results suggest that a higher rate of memory decay is the source of making more mistakes and reporting lower confidence for non-congruent pairs in the passive learning mode.
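
    The abstract describes its computational model only at a high level. The following is a minimal sketch of one plausible reading, pairing a Rescorla-Wagner-style reinforcement update with trial-by-trial memory decay; the update rule and every parameter value are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a reinforcement-learning module coupled with a memory-decay
# module, one plausible reading of the model described above. The
# Rescorla-Wagner-style update and all parameter values are assumptions.
def update_association(strength, reward, learning_rate=0.3, decay=0.1):
    """One trial: decay the stored association, then learn from feedback."""
    strength *= 1.0 - decay                           # between-trial decay
    strength += learning_rate * (reward - strength)   # prediction-error step
    return strength

# A higher decay rate (hypothesized for passive learning) keeps the
# association weaker at steady state, yielding more false alarms.
s_active, s_passive = 0.0, 0.0
for _ in range(40):
    s_active = update_association(s_active, reward=1.0, decay=0.05)
    s_passive = update_association(s_passive, reward=1.0, decay=0.30)
print(f"active: {s_active:.2f}, passive: {s_passive:.2f}")
```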

  3. Verbal Auditory Cueing of Improvisational Dance: A Proposed Method for Training Agency in Parkinson’s Disease

    Science.gov (United States)

    Batson, Glenna; Hugenschmidt, Christina E.; Soriano, Christina T.

    2016-01-01

    Dance is a non-pharmacological intervention that helps maintain functional independence and quality of life in people with Parkinson’s disease (PPD). Results from controlled studies on group-delivered dance for people with mild-to-moderate stage Parkinson’s have shown statistically and clinically significant improvements in gait, balance, and psychosocial factors. Tested interventions include non-partnered dance forms (ballet and modern dance) and partnered (tango). In all of these dance forms, specific movement patterns initially are learned through repetition and performed in time to music. Once the basic steps are mastered, students may be encouraged to improvise on the learned steps as they perform them in rhythm with the music. Here, we summarize a method of teaching improvisational dance that advances previously reported benefits of dance for people with Parkinson’s disease (PD). The method relies primarily on improvisational verbal auditory cueing with less emphasis on directed movement instruction. This method builds on the idea that daily living requires flexible, adaptive responses to real-life challenges. In PD, movement disorders not only limit mobility but also impair spontaneity of thought and action. Dance improvisation demands open and immediate interpretation of verbally delivered movement cues, potentially fostering the formation of spontaneous movement strategies. Here, we present an introduction to a proposed method, detailing its methodological specifics, and pointing to future directions. The viewpoint advances an embodied cognitive approach that has eco-validity in helping PPD meet the changing demands of daily living. PMID:26925029

  4. Verbal auditory cueing of improvisational dance: A proposed method for training agency in Parkinson’s disease

    Directory of Open Access Journals (Sweden)

    Glenna Batson

    2016-02-01

    Dance is a non-pharmacological intervention that helps maintain functional independence and quality of life in people with Parkinson’s disease (PPD). Results from controlled studies on group-delivered dance for people with mild-to-moderate stage Parkinson’s have shown statistically and clinically significant improvements in gait, balance, and psychosocial factors. Tested interventions include non-partnered dance forms (ballet and modern dance) and partnered (tango). In all of these dance forms, specific movement patterns initially are learned through repetition and performed in time to music. Once the basic steps are mastered, students may be encouraged to improvise on the learned steps as they perform them in rhythm with the music. Here, we summarize a method of teaching improvisational dance that advances previously reported benefits of dance for people with PD. The method relies primarily on improvisational verbal auditory cueing (VAC), with less emphasis on directed movement instruction. This method builds on the idea that daily living requires flexible, adaptive responses to real-life challenges. In PD, movement disorders not only limit mobility, but also impair spontaneity of thought and action. Dance improvisation trains spontaneity of thought, fostering open and immediate interpretation of verbally delivered movement cues. Here we present an introduction to a proposed method, detailing its methodological specifics, and pointing to future directions. The viewpoint advances an embodied cognitive approach that has eco-validity in helping PPD meet the changing demands of daily living.

  5. The role of auditory temporal cues in the fluency of stuttering adults

    Directory of Open Access Journals (Sweden)

    Juliana Furini

    Purpose: to compare the frequency of disfluencies and speech rate in spontaneous speech and reading in adults with and without stuttering under non-altered and delayed auditory feedback (NAF, DAF). Methods: participants were 30 adults: 15 with stuttering (Research Group, RG) and 15 without stuttering (Control Group, CG). The procedures were: audiological assessment and speech fluency evaluation in two listening conditions, normal and delayed auditory feedback (100-millisecond delay generated by the Fono Tools software). Results: the DAF caused a significant improvement in the fluency of spontaneous speech in the RG when compared to speech under NAF. The effect of DAF was different in the CG, because it increased the common disfluencies and the total of disfluencies in spontaneous speech and reading, besides showing an increase in the frequency of stuttering-like disfluencies in reading. The intergroup analysis showed significant differences in the two speech tasks for the two listening conditions in the frequency of stuttering-like disfluencies and in the total of disfluencies, and in the syllable-per-minute and word-per-minute rates under NAF. Conclusion: the results demonstrated that delayed auditory feedback promoted fluency in the spontaneous speech of adults who stutter, without interfering with their speech rate. In non-stuttering adults it increased the number of common disfluencies and the total of disfluencies, and reduced speech rate in spontaneous speech and reading.
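
    For readers curious how a 100 ms delayed-auditory-feedback condition can be produced in software, here is a minimal real-time sketch built on the third-party sounddevice library. The ring-buffer scheme and all settings are illustrative assumptions and have no connection to the Fono Tools implementation.

```python
# Minimal delayed-auditory-feedback (DAF) sketch: play the microphone
# signal back with a constant 100 ms delay. Illustrative only.
import numpy as np
import sounddevice as sd

SR = 44100
buf = np.zeros((int(SR * 0.100), 1), dtype="float32")  # 100 ms delay line

def callback(indata, outdata, frames, time, status):
    global buf
    buf = np.vstack([buf, indata])  # queue new input behind the delay line
    outdata[:] = buf[:frames]       # emit the oldest (100 ms old) samples
    buf = buf[frames:]

with sd.Stream(samplerate=SR, channels=1, callback=callback):
    sd.sleep(10_000)  # run the feedback loop for 10 seconds
```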

  6. Auditory Modulation of Somatosensory Spatial Judgments in Various Body Regions and Locations

    Directory of Open Access Journals (Sweden)

    Yukiomi Nozoe

    2011-10-01

    The spatial modulation effect has been reported in somatosensory spatial judgments when task-irrelevant auditory stimuli are presented from the opposite direction. Two experiments examined how the spatial modulation effect on somatosensory spatial judgments is altered across body regions and their spatial locations. In Experiment 1, air-puffs were presented randomly to either the left or right cheeks, hands (palm versus back), and knees, while auditory stimuli were presented from just behind the ear on either the same or the opposite side. In Experiment 2, air-puffs were presented to the hands, which were either beside the cheeks or placed on the knees. The participants were instructed to make speeded discrimination responses regarding the side (left versus right) of the somatosensory targets by using two foot pedals. In all conditions, reaction times increased significantly when the irrelevant stimuli were presented from the opposite side rather than from the same side. We found that the backs of the hands were more influenced by incongruent auditory stimuli than the cheeks, knees, and palms, and that the hands were more influenced by incongruent auditory stimuli when placed beside the cheeks than when on the knees. These results indicate that the auditory-somatosensory interaction differs across body regions and their spatial locations.

  7. Auditory externalization in hearing-impaired listeners: the effect of pinna cues and number of talkers.

    Science.gov (United States)

    Boyd, Alan W; Whitmer, William M; Soraghan, John J; Akeroyd, Michael A

    2012-03-01

    Hearing-aid wearers have reported sound source locations as being perceptually internalized (i.e., inside their head). The contribution of hearing-aid design to internalization has, however, received little attention. This experiment compared the sensitivity of hearing-impaired (HI) and normal-hearing listeners to externalization cues when listening with their own ears and simulated behind-the-ear hearing-aids in increasingly complex listening situations and reduced pinna cues. Participants rated the degree of externalization using a multiple-stimulus listening test for mixes of internalized and externalized speech stimuli presented over headphones. The results showed that HI listeners had a contracted perception of externalization correlated with high-frequency hearing loss. © 2012 Acoustical Society of America

  8. Use of local visual cues for spatial orientation in terrestrial toads (Rhinella arenarum): The role of distance to a goal.

    Science.gov (United States)

    Daneri, M Florencia; Casanave, Emma B; Muzio, Rubén N

    2015-08-01

    The use of environmental visual cues for navigation is an ability present in many groups of animals. The effect of spatial proximity between a visual cue and a goal on reorientation in an environment has been studied in several vertebrate groups, but never previously in amphibians. In this study, we tested the use of local visual cues (beacons) to orient in an open field in the terrestrial toad (Rhinella arenarum). Experiment 1 showed that toads could orient in space using 2 cues located near the rewarded container. Experiment 2 used only 1 cue placed at different distances from the goal and revealed that learning speed was affected by proximity to the goal (the closer the cue was to the goal, the faster toads learned its location). Experiment 3 showed that the position of a cue determines its predictive value: toads preferred cues located closer to the goal over those located farther away as a reference for orientation. The present results revealed, for the first time, that (a) toads can learn to orient in an open space using visual cues, and that (b) the effect of spatial proximity between a cue and a goal, a learning phenomenon previously observed in other groups of animals such as mammals, birds, fish, and invertebrates, also affects orientation in amphibians. Thus, our results suggest that toads are able to employ spatial strategies that closely parallel those described in other vertebrate groups, supporting an early evolutionary origin for these spatial orientation skills. © 2015 APA, all rights reserved.

  9. Pip and Pop: Nonspatial Auditory Signals Improve Spatial Visual Search

    Science.gov (United States)

    Van der Burg, Erik; Olivers, Christian N. L.; Bronkhorst, Adelbert W.; Theeuwes, Jan

    2008-01-01

    Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the pip contains no information on the location…

  10. Spatial Attention, Motor Intention, and Bayesian Cue Predictability in the Human Brain.

    Science.gov (United States)

    Kuhns, Anna B; Dombert, Pascasie L; Mengotti, Paola; Fink, Gereon R; Vossel, Simone

    2017-05-24

    Predictions about upcoming events influence how we perceive and respond to our environment. There is increasing evidence that predictions may be generated based upon previous observations following Bayesian principles, but little is known about the underlying cortical mechanisms and their specificity for different cognitive subsystems. The present study aimed at identifying common and distinct neural signatures of predictive processing in the spatial attentional and motor intentional systems. Twenty-three female and male healthy human volunteers performed two probabilistic cueing tasks with either spatial or motor cues while lying in the fMRI scanner. In these tasks, the percentage of cue validity changed unpredictably over time. Trialwise estimates of cue predictability were derived from a Bayesian observer model of behavioral responses. These estimates were included as parametric regressors for analyzing the BOLD time series. Parametric effects of cue predictability in valid and invalid trials were considered to reflect belief updating by precision-weighted prediction errors. The brain areas exhibiting predictability-dependent effects dissociated between the spatial attention and motor intention tasks, with the right temporoparietal cortex being involved during spatial attention and the left angular gyrus and anterior cingulate cortex during motor intention. Connectivity analyses revealed that all three areas showed predictability-dependent coupling with the right hippocampus. These results suggest that precision-weighted prediction errors of stimulus locations and motor responses are encoded in distinct brain regions, but that crosstalk with the hippocampus may be necessary to integrate new trialwise outcomes in both cognitive systems. SIGNIFICANCE STATEMENT: The brain is able to infer the environment's statistical structure and responds strongly to expectancy violations. In the spatial attentional domain, it has been shown that parts of the attentional networks are…
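
    The Bayesian observer model itself is not specified in the abstract. As a rough stand-in, the sketch below derives a trialwise cue-predictability estimate from a beta-Bernoulli learner whose exponential forgetting lets it track unpredictable changes in cue validity; the rule and parameters are assumptions, not the authors' model.

```python
# Sketch of a trialwise cue-predictability estimator: a beta-Bernoulli
# learner with exponential forgetting, a simple stand-in for the Bayesian
# observer model described above (not the authors' exact model).
def update_belief(a, b, cue_valid, forgetting=0.9):
    """Decay old evidence, then count the new valid/invalid outcome."""
    a = forgetting * a + (1.0 if cue_valid else 0.0)
    b = forgetting * b + (0.0 if cue_valid else 1.0)
    return a, b

a, b = 1.0, 1.0  # uniform prior over cue validity
for outcome in [True, True, True, False, True, False, False, False]:
    a, b = update_belief(a, b, outcome)
    print(f"P(cue valid) ~ {a / (a + b):.2f}")
# The trialwise estimate a/(a+b) could serve as the parametric regressor;
# forgetting lets it track unpredictable changes in percentage cue validity.
```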

  11. The interaction of acoustic and linguistic grouping cues in auditory object formation

    Science.gov (United States)

    Shapley, Kathy; Carrell, Thomas

    2005-09-01

    One of the earliest explanations for good speech intelligibility in poor listening situations was context [Miller et al., J. Exp. Psychol. 41 (1951)]. Context presumably allows listeners to group and predict speech appropriately and is known as a top-down listening strategy. Amplitude comodulation is another mechanism that has been shown to improve sentence intelligibility. Amplitude comodulation provides acoustic grouping information without changing the linguistic content of the desired signal [Carrell and Opie, Percept. Psychophys. 52 (1992); Hu and Wang, Proceedings of ICASSP-02 (2002)] and is considered a bottom-up process. The present experiment investigated how amplitude comodulation and semantic information combined to improve speech intelligibility. Sentences with high- and low-predictability word sequences [Boothroyd and Nittrouer, J. Acoust. Soc. Am. 84 (1988)] were constructed in two different formats: time-varying sinusoidal sentences (TVS) and reduced-channel sentences (RC). The stimuli were chosen because they minimally represent the traditionally defined speech cues and therefore emphasized the importance of the high-level context effects and low-level acoustic grouping cues. Results indicated that semantic information did not influence intelligibility levels of TVS and RC sentences. In addition amplitude modulation aided listeners' intelligibility scores in the TVS condition but hindered listeners' intelligibility scores in the RC condition.

  12. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    Science.gov (United States)

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we used a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
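
    A worked example of the τ variable referred to above: for a looming object, the ratio of instantaneous optical size to its rate of change approximates time to contact. The scene geometry below is illustrative.

```python
# Worked example of the tau variable: the ratio of an object's
# instantaneous optical size to its rate of change approximates time to
# contact (TTC). The vehicle geometry below is illustrative.
def visual_tau(optical_size, size_rate):
    """TTC estimate from optical angle (rad) and its derivative (rad/s)."""
    return optical_size / size_rate

# A 2 m wide vehicle, 30 m away, approaching at 10 m/s (small angles):
# theta ~ width / distance, dtheta/dt ~ width * speed / distance**2.
width, distance, speed = 2.0, 30.0, 10.0
theta = width / distance
theta_dot = width * speed / distance ** 2
print(f"tau = {visual_tau(theta, theta_dot):.1f} s")  # 3.0 s = distance/speed
```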

  13. A Bayesian computational basis for auditory selective attention using head rotation and the interaural time-difference cue.

    Directory of Open Access Journals (Sweden)

    Dillon A Hambrook

    The process of resolving mixtures of several sounds into their separate individual streams is known as auditory scene analysis, and it remains a challenging task for computational systems. It is well known that animals use binaural differences in arrival time and intensity at the two ears to find the arrival angle of sounds in the azimuthal plane, and this localization function has sometimes been considered sufficient to enable the un-mixing of complex scenes. However, the ability of such systems to resolve distinct sound sources in both space and frequency remains limited. The neural computations for detecting interaural time difference (ITD) have been well studied and have served as the inspiration for computational auditory scene analysis systems; however, a crucial limitation of ITD models is that they produce ambiguous or "phantom" images in the scene. This has been thought to limit their usefulness at frequencies above about 1 kHz in humans. We present a simple Bayesian model and an implementation on a robot that uses ITD information recursively. The model makes use of head rotations to show that ITD information is sufficient to unambiguously resolve sound sources in both space and frequency. Contrary to commonly held assumptions about sound localization, we show that the ITD cue used with high-frequency sound can provide accurate and unambiguous localization and resolution of competing sounds. Our findings suggest that an "active hearing" approach could be useful in robotic systems that operate in natural, noisy settings. We also suggest that neurophysiological models of sound localization in animals could benefit from revision to include the influence of top-down memory and sensorimotor integration across head rotations.
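
    To make the recursive use of ITD concrete, here is a minimal sketch in the spirit of the model described above: each ITD sample is front-back ambiguous, but multiplying likelihoods across head orientations leaves a single peak. The free-field geometry, noise level, and azimuth grid are simplifying assumptions, not the authors' robot implementation.

```python
import numpy as np

# Sketch of recursive Bayesian localization from ITD across head
# rotations. Spherical-head, free-field geometry is assumed.
EAR_SEP, C = 0.18, 343.0                       # ear separation (m), speed of sound (m/s)
az_grid = np.deg2rad(np.arange(-180, 180))     # candidate world azimuths (1 deg steps)

def itd(az):                                   # ambiguous: sin(az) == sin(pi - az)
    return (EAR_SEP / C) * np.sin(az)

def likelihood(measured_itd, head_yaw, sigma=2e-5):
    """Likelihood of each world azimuth given one ITD sample."""
    predicted = itd(az_grid - head_yaw)        # azimuth relative to the head
    return np.exp(-0.5 * ((measured_itd - predicted) / sigma) ** 2)

true_az = np.deg2rad(135.0)                    # a rear source
posterior = np.ones_like(az_grid)              # flat prior
for head_yaw in np.deg2rad([0.0, 20.0, 40.0]): # rotate the head between samples
    posterior *= likelihood(itd(true_az - head_yaw), head_yaw)
    posterior /= posterior.sum()
print(f"MAP azimuth: {np.rad2deg(az_grid[np.argmax(posterior)]):.0f} deg")
```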

  14. Self-Generated Auditory Feedback as a Cue to Support Rhythmic Motor Stability

    Directory of Open Access Journals (Sweden)

    Gopher Daniel

    2011-12-01

    A goal of the SKILLS project is to develop Virtual Reality (VR)-based training simulators for different application domains, one of which is juggling. Within this context the value of multimodal VR environments for skill acquisition is investigated. In this study, we investigated whether it was necessary to render the sounds of virtual balls hitting virtual hands within the juggling training simulator. First, we recorded sounds at the jugglers’ ears and found the sound of balls hitting hands to be audible. Second, we asked 24 jugglers to juggle under normal conditions (Audible) or while listening to pink noise intended to mask the juggling sounds (Inaudible). We found that although the jugglers themselves reported no difference in their juggling across these two conditions, external juggling experts rated rhythmic stability worse in the Inaudible condition than in the Audible condition. This result suggests that auditory information should be rendered in the VR juggling training simulator.

  15. Gay- and Lesbian-Sounding Auditory Cues Elicit Stereotyping and Discrimination.

    Science.gov (United States)

    Fasoli, Fabio; Maass, Anne; Paladino, Maria Paola; Sulpizio, Simone

    2017-07-01

    The growing body of literature on the recognition of sexual orientation from voice ("auditory gaydar") is silent on the cognitive and social consequences of having a gay-/lesbian- versus heterosexual-sounding voice. We investigated this issue in four studies (overall N = 276), conducted in Italian, in which heterosexual listeners were exposed to single-sentence voice samples of gay/lesbian and heterosexual speakers. In all four studies, listeners were found to make gender-typical inferences about traits and preferences of heterosexual speakers, but gender-atypical inferences about those of gay or lesbian speakers. Behavioral intention measures showed that listeners considered lesbian and gay speakers less suitable for a leadership position, and male (but not female) listeners distanced themselves from gay speakers. Together, this research demonstrates that having a gay-/lesbian- rather than heterosexual-sounding voice has tangible consequences for stereotyping and discrimination.

  16. Binaural processing in the synthesis of auditory spatial receptive fields.

    Science.gov (United States)

    Peña, José Luis

    2003-11-01

    The owl's auditory system computes interaural time (ITD) and interaural level (ILD) differences to create a two-dimensional map of auditory space. Space-specific neurons are selective for combinations of ITD and ILD, which define, respectively, the horizontal and vertical dimensions of their receptive fields. ITD curves for postsynaptic potentials indicate that neurons of the external nucleus of the inferior colliculus (ICx) integrate the results of binaural cross-correlation in different frequency bands. However, the difference between the main and side peaks is slight. ICx neurons further enhance this difference in the process of converting membrane potentials to impulse rates. Comparison of subthreshold postsynaptic potentials (PSPs) and spike output for the same neurons showed that receptive fields measured in PSPs were much larger than those measured in spikes in both the ITD and ILD dimensions. A multiplication of separate postsynaptic potentials tuned to ITD and ILD can account for the combination sensitivity of these neurons to ITD-ILD pairs.
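
    A minimal sketch of the multiplicative combination described above: the space-specific response is modeled as the product of separate ITD- and ILD-tuned inputs, followed by a threshold nonlinearity that shrinks the receptive field, as reported for the spike output. All tuning shapes and parameters are assumptions.

```python
import numpy as np

# Sketch of multiplicative ITD x ILD combination: separate subthreshold
# tunings are multiplied, then passed through a spike-threshold
# nonlinearity. Tuning shapes and parameters are assumptions.
def itd_tuning(itd_us, best_itd=50.0, period=200.0, width=120.0):
    """Cross-correlation-like curve: periodic peaks in a Gaussian envelope."""
    return np.cos(2 * np.pi * (itd_us - best_itd) / period) * \
        np.exp(-(((itd_us - best_itd) / width) ** 2))

def ild_tuning(ild_db, best_ild=5.0, width=6.0):
    """Gaussian ILD tuning."""
    return np.exp(-(((ild_db - best_ild) / width) ** 2))

itd = np.linspace(-200.0, 200.0, 81)              # microseconds
ild = np.linspace(-20.0, 20.0, 41)                # decibels
psp = np.outer(itd_tuning(itd), ild_tuning(ild))  # membrane-potential field
spikes = np.maximum(psp - 0.5, 0.0) ** 2          # threshold + expansion

# The nonlinearity shrinks the receptive field, as reported for ICx spikes.
rf_psp = np.mean(psp > 0.5 * psp.max())
rf_spk = np.mean(spikes > 0.5 * spikes.max())
print(f"half-max receptive field: PSP {rf_psp:.3f} vs spikes {rf_spk:.3f}")
```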

  17. Auditory nerve fiber representation of cues to voicing in syllable-final stop consonants

    Energy Technology Data Exchange (ETDEWEB)

    Sinex, D.G. (Boys Town National Research Hospital, Omaha, NE (United States))

    1993-09-01

    Acoustic cues to the identity of consonants such as d and t vary according to contextual factors such as the position of the consonant within a syllable. However, investigations of the neural coding of consonants have almost always used stimuli in which the consonant occurs in the syllable-initial position. The present experiments examined the peripheral neural representation of spectral and temporal cues that can distinguish between the stop consonants d and t in syllable-final position. Stimulus sets consisting of the syllables hid, hit, hud, and hut were recorded by three different talkers. During the consonant closure interval, the spectrum of d was characterized by the presence of a low-frequency voice bar. Most neurons' responses were characterized by discharge rate decreases at the beginning of the closure interval and by rate increases that marked the release of the consonant closure. Exceptions were seen in the responses of neurons with characteristic frequencies (CFs) below approximately 0.7 kHz to syllables ending in d. These neurons responded to the voice bar with discharge rates that could approach the rates elicited by the vowel. The latencies of prominent discharge rate changes were measured for all neurons and used to compute the length of the encoded closure interval. The encoded interval was clearly longer for syllables ending in t than in d. The encoded interval increased with CF for both consonants, but more rapidly for t. Differences in the encoded closure interval were small for syllables with different vowels or syllables produced by different talkers. 29 refs., 10 figs.

  18. Training-induced plasticity of auditory localization in adult mammals.

    Directory of Open Access Journals (Sweden)

    Oliver Kacelnik

    2006-04-01

    Accurate auditory localization relies on neural computations based on spatial cues present in the sound waves at each ear. The values of these cues depend on the size, shape, and separation of the two ears and can therefore vary from one individual to another. As with other perceptual skills, the neural circuits involved in spatial hearing are shaped by experience during development and retain some capacity for plasticity in later life. However, the factors that enable and promote plasticity of auditory localization in the adult brain are unknown. Here we show that mature ferrets can rapidly relearn to localize sounds after having their spatial cues altered by reversibly occluding one ear, but only if they are trained to use these cues in a behaviorally relevant task, with greater and more rapid improvement occurring with more frequent training. We also found that auditory adaptation is possible in the absence of vision or error feedback. Finally, we show that this process involves a shift in sensitivity away from the abnormal auditory spatial cues to other cues that are less affected by the earplug. The mature auditory system is therefore capable of adapting to abnormal spatial information by reweighting different localization cues. These results suggest that training should facilitate acclimatization to hearing aids in the hearing impaired.

  19. A randomised controlled trial evaluating the effect of an individual auditory cueing device on freezing and gait speed in people with Parkinson's disease

    Directory of Open Access Journals (Sweden)

    Lynch Deirdre

    2008-12-01

    Background: Parkinson's disease is a progressive neurological disorder resulting from a degeneration of dopamine-producing cells in the substantia nigra. Clinical symptoms typically affect gait pattern and motor performance. Evidence suggests that individual auditory cueing devices may be used effectively for the management of gait and freezing in people with Parkinson's disease. The primary aim of the randomised controlled trial is to evaluate the effect of an individual auditory cueing device on freezing and gait speed in people with Parkinson's disease. Methods: A prospective multi-centre randomised cross-over design trial will be conducted. Forty-seven subjects will be randomised into either Group A or Group B, each with a control and intervention phase. Baseline measurements will be recorded using the Freezing of Gait Questionnaire as the primary outcome measure and 3 secondary outcome measures: the 10 m Walk Test, the Timed "Up & Go" Test, and the Modified Falls Efficacy Scale. Assessments are taken 3 times over a 3-week period. A follow-up assessment will be completed after three months. A secondary aim of the study is to evaluate the impact of such a device on the quality of life of people with Parkinson's disease using a qualitative methodology. Conclusion: The Apple iPod-Shuffle™ and similar devices provide a cost-effective and innovative platform for integration of individual auditory cueing devices into clinical, social and home environments and are shown to have an immediate effect on gait, with improvements in walking speed, stride length and freezing. It is evident that individual auditory cueing devices are of benefit to people with Parkinson's disease, and the aim of this randomised controlled trial is to maximise the benefits by allowing the individual to use devices in both a clinical and social setting, with minimal disruption to their daily routine. Trial registration: The protocol for this study is registered

  20. Geometric Cues, Reference Frames, and the Equivalence of Experienced-Aligned and Novel-Aligned Views in Human Spatial Memory

    Science.gov (United States)

    Kelly, Jonathan W.; Sjolund, Lori A.; Sturz, Bradley R.

    2013-01-01

    Spatial memories are often organized around reference frames, and environmental shape provides a salient cue to reference frame selection. To date, however, the environmental cues responsible for influencing reference frame selection remain relatively unknown. To connect research on reference frame selection with that on orientation via…

  1. Binding of Verbal and Spatial Features in Auditory Working Memory

    Science.gov (United States)

    Maybery, Murray T.; Clissa, Peter J.; Parmentier, Fabrice B. R.; Leung, Doris; Harsa, Grefin; Fox, Allison M.; Jones, Dylan M.

    2009-01-01

    The present study investigated the binding of verbal identity and spatial location in the retention of sequences of spatially distributed acoustic stimuli. Study stimuli varying in verbal content and spatial location (e.g. V₁S₁, V₂S₂, V₃S₃, V₄S₄) were…

  2. Hippocampus is necessary for spatial discrimination using distal cue-configuration.

    Science.gov (United States)

    Kim, Jangjin; Lee, Inah

    2011-06-01

    The role of the hippocampus in processing contextual cues has been well recognized. Contextual manipulation often involves transferring animals between different rooms. Because of the vague definition of context in such a paradigm, however, it has been difficult to study the role of the hippocampus parametrically in contextual information processing. We designed a novel task in which a different context can be parametrically defined by the spatial configuration of distal cues. In this task, rats were trained to associate two different configurations of distal cue-sets (standard contexts) with different food-well locations at the end of a radial arm. Experiment 1 tested the role of the dorsal hippocampus in retrieving well-learned associations between standard contexts and rewarding food-well locations by comparing rats with neurotoxic lesions in the dorsal hippocampus with controls. We found that the hippocampal-lesioned rats were unable to retrieve the context-place paired associations learned before surgery. To further test the role of the hippocampus in generalizing altered contexts, in Experiment 2, rats were trained in a task in which modified versions of the standard contexts (ambiguous contexts) were presented, intermixed with the standard contexts. Rats were able to process the ambiguous contexts immediately by using their similarities to the standard contexts, whereas muscimol inactivation of the dorsal hippocampus in the same animals reversibly abolished this capability. The results suggest that rats can effectively associate discrete spatial locations with the spatial configuration of distal cues. More importantly, rats can generalize or orthogonalize modified contextual environments using a learned contextual representation of the environment. Copyright © 2010 Wiley-Liss, Inc.

  3. Syntactic and auditory spatial processing in the human temporal cortex: an MEG study.

    Science.gov (United States)

    Herrmann, Björn; Maess, Burkhard; Hahne, Anja; Schröger, Erich; Friederici, Angela D

    2011-07-15

    Processing syntax is believed to be a higher cognitive function involving cortical regions outside sensory cortices. In particular, previous studies revealed that early syntactic processes at around 100-200 ms affect brain activations in anterior regions of the superior temporal gyrus (STG), while independent studies showed that pure auditory perceptual processing is related to sensory cortex activations. However, syntax-related modulations of sensory cortices were reported recently, thereby adding diverging findings to the previous studies. The goal of the present magnetoencephalography study was to localize the cortical regions underlying early syntactic processes and those underlying perceptual processes using a within-subject design. Sentences varying the factors syntax (correct vs. incorrect) and auditory space (standard vs. change of interaural time difference (ITD)) were auditorily presented. Both syntactic and auditory spatial anomalies led to very early activations (40-90 ms) in the STG. Around 135 ms after violation onset, differential effects were observed for syntax and auditory space, with syntactically incorrect sentences leading to activations in the anterior STG, whereas ITD changes elicited activations more posterior in the STG. Furthermore, our observations strongly indicate that the anterior and the posterior STG are activated simultaneously when a double violation is encountered. Thus, the present findings provide evidence of a dissociation of speech-related processes in the anterior STG and the processing of auditory spatial information in the posterior STG, compatible with the view of different processing streams in the temporal cortex. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. Effect of Exogenous Cues on Covert Spatial Orienting in Deaf and Normal Hearing Individuals.

    Directory of Open Access Journals (Sweden)

    Seema Gorur Prasad

    Deaf individuals have been known to process visual stimuli better at the periphery compared to the normal hearing population. However, very few studies have examined attention orienting in the oculomotor domain in the deaf, particularly when targets appear at variable eccentricity. In this study, we examined whether the visual perceptual processing advantage reported in deaf people also modulates spatial attentional orienting with eye movement responses. We used a spatial cueing task with cued and uncued targets that appeared at two different eccentricities and explored attentional facilitation and inhibition. We elicited both a saccadic and a manual response. The deaf showed a higher cueing effect for the ocular responses than the normal hearing participants. However, there was no group difference for the manual responses. There was also higher facilitation at the periphery for both saccadic and manual responses, irrespective of group. These results suggest that, owing to their superior visual processing ability, the deaf may orient attention faster to targets. We discuss the results in terms of previous studies on cueing and attentional orienting in the deaf.

  5. Effects of attentional filtering demands on preparatory ERPs elicited in a spatial cueing task.

    Science.gov (United States)

    Seiss, Ellen; Driver, Jon; Eimer, Martin

    2009-06-01

    We used ERP measures to investigate how attentional filtering requirements affect preparatory attentional control and spatially selective visual processing. In a spatial cueing experiment, attentional filtering demands were manipulated by presenting task-relevant visual stimuli either in isolation (target-only task) or together with irrelevant adjacent distractors (target-plus-distractors task). ERPs were recorded in response to informative spatial precues, and in response to subsequent visual stimuli at attended and unattended locations. The preparatory ADAN component elicited during the cue-target interval was larger and more sustained in the target-plus-distractors task, reflecting the demand of stronger attentional filtering. By contrast, two other preparatory lateralised components (EDAN and LDAP) were unaffected by the attentional filtering demand. Similar enhancements of P1 and N1 components in response to the lateral imperative visual stimuli were observed at cued versus uncued locations, regardless of filtering demand, whereas later attention-related negativities beyond 200 ms post-stimulus were larger in the target-plus-distractors task. Our results indicate that the ADAN component is linked to preparatory top-down control processes involved in the attentional filtering of irrelevant distractors; such filtering also affects later attention-related negativities recorded after the onset of the imperative stimulus. ERPs can reveal effects of expected attentional filtering of irrelevant distractors on preparatory attentional control processes and spatially selective visual processing.

  6. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    Science.gov (United States)

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to attentional effects common to all three tasks within each modality, or to interactions between the processing of task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by the auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of the processing of irrelevant speech presumably distracting from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Hamsters' (Mesocricetus auratus) memory in a radial maze analog: the role of spatial versus olfactory cues.

    Science.gov (United States)

    Tonneau, François; Cabrera, Felipe; Corujo, Alejandro

    2012-02-01

    The golden hamster's (Mesocricetus auratus) performance on radial maze tasks has received little study. Here we report the results of a spatial memory task that involved eight food stations equidistant from the center of a circular platform. Each of six male hamsters depleted the food stations across successive choices. After each choice and a 5-s retention delay, the hamster was brought back to the center of the platform for the next choice opportunity. When only one baited station was left, the platform was rotated to evaluate whether olfactory traces guided hamsters' choices. Results showed that despite the retention delay hamsters performed above chance in searching for food. The choice distributions observed during the rotation probes were consistent with spatial memory and could be explained without assuming guidance by olfactory cues. The radial maze analog we devised could be useful in furthering the study of spatial memory in hamsters.

  8. Reference frames in spatial updating when body-based cues are absent.

    Science.gov (United States)

    He, Qiliang; McNamara, Timothy P; Kelly, Jonathan W

    2017-07-28

    The current study investigated the reference frame used in spatial updating when idiothetic cues to self-motion were minimized (desktop virtual reality). In Experiment 1, participants learned a layout of eight objects from a single perspective (learning heading) in a virtual environment. After learning, they were placed in the same virtual environment and used a keyboard to navigate to two of the learned objects (visible) before pointing to a third object (invisible). We manipulated participants' starting orientation (initial heading) and final orientation (final heading) before pointing, to examine the reference frame used in this task. We found that participants used the initial heading and the learning heading to establish reference directions. In Experiment 2, the procedure was almost the same as in Experiment 1 except that participants pointed to objects relative to an imagined heading that differed from their final heading in the virtual environment. In this case, pointing performance was only affected by alignment with the learning heading. We concluded that the initial heading played an important role in spatial updating without idiothetic cues, but the representation established at this heading was transient and affected by the interruption of spatial updating; the learning heading, on the other hand, corresponded to an enduring representation which was used consistently.

  9. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    Directory of Open Access Journals (Sweden)

    Scott A Stone

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of a stimulus. Having access to related, coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and augmenting them into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.
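
    To make the visual-to-auditory mapping concrete, the sketch below renders an event cluster from a 240-pixel-wide sensor as a stereo tone panned by horizontal position. The equal-power panning law and every parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Sketch of a visual-to-auditory mapping: an event cluster from a
# 240x180 event camera becomes a short stereo tone panned by its
# horizontal position. Illustrative assumptions throughout.
SR, WIDTH = 44100, 240

def event_to_stereo_tone(x_pixel, freq=880.0, dur=0.05):
    """Pan a short tone left/right according to the event's x position."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    tone = 0.5 * np.sin(2 * np.pi * freq * t)
    pan = x_pixel / (WIDTH - 1)                    # 0 = far left, 1 = far right
    left, right = np.sqrt(1 - pan), np.sqrt(pan)   # equal-power panning
    return np.column_stack([left * tone, right * tone])

burst = event_to_stereo_tone(x_pixel=200)  # an event on the right side
print(burst.shape)                         # (2205, 2) stereo samples
```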

  10. Long-Term Memory Biases Auditory Spatial Attention

    Science.gov (United States)

    Zimmermann, Jacqueline F.; Moscovitch, Morris; Alain, Claude

    2017-01-01

    Long-term memory (LTM) has been shown to bias attention to a previously learned visual target location. Here, we examined whether memory-predicted spatial location can facilitate the detection of a faint pure tone target embedded in real world audio clips (e.g., soundtrack of a restaurant). During an initial familiarization task, participants…

  11. Effects of posture-related auditory cueing (PAC) program on muscles activities and kinematics of the neck and trunk during computer work.

    Science.gov (United States)

    Yoo, Won-Gyu; Park, Se-Yeon

    2015-01-01

    The etiology of neck and back discomfort is highly associated with abnormal static postures such as forward head posture and flexed relaxed posture; such postures are regarded as risk factors for work-related musculoskeletal disorders. Although various ergonomic chairs and devices have been developed for computer workers, there are few reports of software that can alert users to their posture or work hours. The purpose of the present study was to investigate differences in the kinematics of the neck and trunk segments, as well as in muscle activity, between conditions with and without posture-related auditory cueing. Twelve male computer workers were recruited for this study. The posture-related auditory cueing (PAC) program used a media file that generated a postural correction cue at intervals of 300 seconds. Surface electromyography was used to measure the activity of the erector spinae and upper trapezius. Kinematic data were obtained using an ultrasonic three-dimensional movement analysis system. The results showed that the mean trunk flexion and forward head angles were significantly reduced with PAC. The muscular activity of the erector spinae and upper trapezius was significantly higher with PAC than without it. Our findings suggest that software providing PAC is an ergonomic aid with positive effects for preventing habitual poor posture and has potential for widespread practical use.
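
    A cueing loop of the kind described above is simple to sketch; this toy version emits a posture-correction cue every 300 seconds for the duration of a work session. The terminal bell is a stand-in for the media file used in the study.

```python
import time

# Toy posture-related auditory cueing (PAC) loop: one cue per 300 s.
# The terminal bell stands in for the study's media-file cue.
CUE_INTERVAL_S = 300

def run_pac(session_s=3600):
    """Emit a posture-correction cue every CUE_INTERVAL_S seconds."""
    elapsed = 0
    while elapsed < session_s:
        time.sleep(CUE_INTERVAL_S)
        elapsed += CUE_INTERVAL_S
        print("\aCheck your posture: lift your head, straighten your trunk.")

if __name__ == "__main__":
    run_pac()
```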

  12. Hippocampal long-term depression is facilitated by the acquisition and updating of memory of spatial auditory content and requires mGlu5 activation.

    Science.gov (United States)

    Dietz, Birte; Manahan-Vaughan, Denise

    2017-03-15

    Long-term potentiation (LTP) and long-term depression (LTD) are key cellular processes that support memory formation. Whereas increases of synaptic strength by means of LTP may support the creation of a spatial memory 'engram', LTD appears to play an important role in refining and optimising experience-dependent encoding. A differentiation in the role of hippocampal subfields is apparent. For example, LTD in the dentate gyrus (DG) is enabled by novel learning about large visuospatial features, whereas in area CA1 it is enabled by learning about discrete aspects of spatial content, whereby both discrete visuospatial and olfactospatial cues trigger LTD in CA1. Here, we explored to what extent local audiospatial cues facilitate information encoding in the form of LTD in these subfields. Coupling of low-frequency afferent stimulation (LFS) with discretely localised, novel auditory tones in the sonic or ultrasonic range facilitated short-term depression (STD) into LTD (>24 h) in CA1, but not DG. Re-exposure to the now familiar audiospatial configuration ca. 1 week later failed to enhance STD. Reconfiguration of the same audiospatial cues resulted anew in LTD when ultrasonic, but not sonic, cues were used. LTD facilitation triggered by novel exposure to spatially arranged tones, or by spatial reconfiguration of the same tones, was prevented by antagonism of the metabotropic glutamate receptor mGlu5. These data indicate that, if behaviourally salient enough, the hippocampus can use audiospatial cues to facilitate LTD that contributes to the encoding and updating of spatial representations. Effects are subfield-specific and require mGlu5 activation, as is the case for visuospatial information processing. These data reinforce the likelihood that LTD supports the encoding of spatial features, and that this occurs in a qualitative and subfield-specific manner. They also support that mGlu5 is essential for synaptic encoding of spatial information.

  13. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness.

    Science.gov (United States)

    Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui

    2015-09-01

    Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and that activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years) performed better on a visuo-spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects, and that these areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may inhibit cross-modal reorganization in early deaf subjects.

  14. Effect of the cognitive-motor dual-task using auditory cue on balance of survivors with chronic stroke: a pilot study.

    Science.gov (United States)

    Choi, Wonjae; Lee, GyuChang; Lee, Seungwon

    2015-08-01

    To investigate the effect of a cognitive-motor dual-task using auditory cues on the balance of patients with chronic stroke. Randomized controlled trial. Inpatient rehabilitation center. Thirty-seven individuals with chronic stroke. The participants were randomly allocated to the dual-task group (n=19) and the single-task group (n=18). The dual-task group performed a cognitive-motor dual-task in which they carried a circular ring from side to side according to a random auditory cue during treadmill walking. The single-task group walked on a treadmill only. All subjects completed 15 min per session, three times per week, for four weeks, with conventional rehabilitation five times per week over the four weeks. Before and after the intervention, both static and dynamic balance were measured with a force platform and the Timed Up and Go (TUG) test. The dual-task group showed significant improvement in all variables compared to the single-task group, except for anteroposterior (AP) sway velocity with eyes open and TUG at follow-up: mediolateral (ML) sway velocity with eyes open (dual-task group vs. single-task group: 2.11 mm/s vs. 0.38 mm/s), ML sway velocity with eyes closed (2.91 mm/s vs. 1.35 mm/s), and AP sway velocity with eyes closed (4.84 mm/s vs. 3.12 mm/s). After the intervention, all variables showed significant improvement in the dual-task group compared to baseline. These results suggest that the performance of a cognitive-motor dual-task using auditory cues may contribute to balance improvements in chronic stroke patients. © The Author(s) 2014.

  15. Cues, context, and long-term memory: the role of the retrosplenial cortex in spatial cognition

    Directory of Open Access Journals (Sweden)

    Adam M P Miller

    2014-08-01

    Full Text Available Spatial navigation requires representations of landmarks and other navigation cues. The retrosplenial cortex (RSC is anatomically positioned between limbic areas important for memory formation, such as the hippocampus and the anterior thalamus, and cortical regions along the dorsal stream known to contribute importantly to long-term spatial representation, such as the posterior parietal cortex. Damage to the RSC severely impairs allocentric representations of the environment, including the ability to derive navigational information from landmarks. The specific deficits seen in tests of human and rodent navigation suggest that the RSC supports allocentric representation by processing the stable features of the environment and the spatial relationships among them. In addition to spatial cognition, the RSC plays a key role in contextual and episodic memory. The RSC also contributes importantly to the acquisition and consolidation of long-term spatial and contextual memory through its interactions with the hippocampus. Within this framework, the RSC plays a dual role as part of the feedforward network providing sensory and mnemonic input to the hippocampus and as a target of the hippocampal-dependent systems consolidation of long-term memory.

  16. Trading of dynamic interaural time and level difference cues and its effect on the auditory motion-onset response measured with electroencephalography.

    Science.gov (United States)

    Altmann, Christian F; Ueda, Ryuhei; Bucher, Benoit; Furukawa, Shigeto; Ono, Kentaro; Kashino, Makio; Mima, Tatsuya; Fukuyama, Hidenao

    2017-10-01

    Interaural time (ITD) and level differences (ILD) constitute the two main cues for sound localization in the horizontal plane. Despite extensive research in animal models and humans, the mechanism of how these two cues are integrated into a unified percept is still far from clear. In this study, our aim was to test with human electroencephalography (EEG) whether integration of dynamic ITD and ILD cues is reflected in the so-called motion-onset response (MOR), an evoked potential elicited by moving sound sources. To this end, ITD and ILD trajectories were determined individually by cue trading psychophysics. We then measured EEG while subjects were presented with either static click-trains or click-trains that contained a dynamic portion at the end. The dynamic part was created by combining ITD with ILD either congruently, to elicit the percept of a right/leftward moving sound, or incongruently, to elicit the percept of a static sound. In two experiments that differed in the method to derive individual dynamic cue trading stimuli, we observed an MOR with at least a change-N1 (cN1) component for both the congruent and incongruent conditions at about 160-190 ms after motion onset. A significant change-P2 (cP2) component for both the congruent and incongruent ITD/ILD combination was found only in the second experiment, peaking at about 250 ms after motion onset. In sum, this study shows that a sound which, by a combination of counter-balanced ITD and ILD cues, induces a static percept can still elicit a motion-onset response, indicative of independent ITD and ILD processing at the level of the MOR, a component that has been proposed to be, at least partly, generated in non-primary auditory cortex. Copyright © 2017 Elsevier Inc. All rights reserved.
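
    A minimal sketch of how the congruent and incongruent cue combinations described above can be constructed, assuming a per-listener trading ratio k (microseconds of ITD per decibel of ILD) has been estimated psychophysically; all values are illustrative, not the study's stimulus parameters:

        import numpy as np

        k = 40.0                                     # assumed trading ratio: 40 us ITD per 1 dB ILD
        n_clicks = 20
        itd_us = np.linspace(0.0, 200.0, n_clicks)   # ITD ramp favoring the right ear

        # Congruent: the ILD ramps the same way, so both cues signal motion.
        ild_db_congruent = itd_us / k

        # Incongruent: the ILD ramps the opposite way, cancelling the ITD at
        # the trading ratio, so the combined percept stays (roughly) static.
        ild_db_incongruent = -itd_us / k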

  17. Spatial and Temporal High Processing of Visual and Auditory Stimuli in Cervical Dystonia.

    Science.gov (United States)

    Chillemi, Gaetana; Calamuneri, Alessandro; Morgante, Francesca; Terranova, Carmen; Rizzo, Vincenzo; Girlanda, Paolo; Ghilardi, Maria Felice; Quartarone, Angelo

    2017-01-01

    We investigated spatial and temporal cognitive processing in idiopathic cervical dystonia (CD) by means of specific tasks based on the perception of visual and auditory stimuli in the time and space domains. Previous psychophysiological studies have investigated temporal and spatial characteristics of the neural processing of sensory stimuli (mainly somatosensory and visual), whereas such processing at a higher cognitive level has not been sufficiently addressed. The impairment of time and space processing is likely driven by basal ganglia dysfunction; however, other cortical and subcortical areas, including the cerebellum, may also be involved. We tested 21 subjects with CD and 22 age-matched healthy controls with 4 recognition tasks exploring visuo-spatial, audio-spatial, visuo-temporal, and audio-temporal processing. Dystonic subjects were subdivided into three groups according to head movement pattern type (lateral: Laterocollis; rotation: Torticollis) as well as the presence of tremor (Tremor). We found significant alteration of spatial processing in the Laterocollis subgroup compared to controls, whereas impairment of temporal processing was observed in the Torticollis subgroup compared to controls. Our results suggest that dystonia is associated with a dysfunction of temporal and spatial processing for visual and auditory stimuli that could underlie the well-known abnormalities in sequence learning. Moreover, we suggest that different movement pattern types might lead to different dysfunctions at the cognitive level within the dystonic population.

  18. Specialization of Binaural Responses in Ventral Auditory Cortices

    Science.gov (United States)

    Higgins, Nathan C.; Storace, Douglas A.; Escabí, Monty A.

    2010-01-01

    Accurate orientation to sound under challenging conditions requires auditory cortex, but it is unclear how spatial attributes of the auditory scene are represented at this level. Current organization schemes follow a functional division whereby dorsal and ventral auditory cortices specialize to encode spatial and object features of a sound source, respectively. However, few studies have examined spatial cue sensitivities in ventral cortices to support or reject such schemes. Here, Fourier optical imaging was used to quantify best frequency responses and the corresponding gradient organization in primary (A1), anterior, posterior, ventral (VAF), and suprarhinal (SRAF) auditory fields of the rat. Spike rate sensitivities to binaural interaural level difference (ILD) and average binaural level cues were probed in A1 and two ventral cortices, VAF and SRAF. Continuous distributions of best ILDs and ILD tuning metrics were observed in all cortices, suggesting this horizontal position cue is well covered. VAF and caudal SRAF in the right cerebral hemisphere responded maximally to midline horizontal position cues, whereas A1 and rostral SRAF responded maximally to ILD cues favoring more eccentric positions in the contralateral sound hemifield. SRAF had the highest incidence of binaural facilitation for ILD cues corresponding to midline positions, supporting current theories that auditory cortices have specialized and hierarchical functional organization. PMID:20980610

  19. Modulation of human auditory spatial scene analysis by transcranial direct current stimulation.

    Science.gov (United States)

    Lewald, Jörg

    2016-04-01

    Localizing and selectively attending to the source of a sound of interest in a complex auditory environment is an important capacity of the human auditory system. The underlying neural mechanisms have, however, still not been clarified in detail. This issue was addressed by using bilateral bipolar-balanced transcranial direct current stimulation (tDCS) in combination with a task demanding free-field sound localization in the presence of multiple sound sources, thus providing a realistic simulation of the so-called "cocktail-party" situation. With left-anode/right-cathode, but not with right-anode/left-cathode, montage of bilateral electrodes, tDCS over superior temporal gyrus, including planum temporale and auditory cortices, was found to improve the accuracy of target localization in left hemispace. No effects were found for tDCS over inferior parietal lobule or with off-target active stimulation over somatosensory-motor cortex that was used to control for non-specific effects. Also, the absolute error in localization remained unaffected by tDCS, thus suggesting that general response precision was not modulated by brain polarization. This finding can be explained in the framework of a model assuming that brain polarization modulated the suppression of irrelevant sound sources, thus resulting in more effective spatial separation of the target from the interfering sound in the complex auditory scene. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. So Close to a Deal: Spatial-Distance Cues Influence Economic Decision-Making in a Social Context.

    Directory of Open Access Journals (Sweden)

    Ramzi Fatfouta

    Full Text Available Social distance (i.e., the degree of closeness to another person) affects the way humans perceive and respond to fairness during financial negotiations. Feeling close to someone enhances the acceptance of monetary offers. Here, we explored whether this effect also extends to the spatial domain. Specifically, using an iterated version of the Ultimatum Game in a within-subject design, we investigated whether different visual spatial-distance cues result in different rates of acceptance of otherwise identical monetary offers. Study 1 found that participants accepted significantly more offers when they were cued with spatial closeness than when they were cued with spatial distance. Study 2 replicated this effect using identical procedures but different spatial-distance cues in an independent sample. Importantly, our results could not be explained by feelings of social closeness. Our results demonstrate that mere perceptions of spatial closeness produce analogous, but independent, effects to those of social closeness.

  1. Detection of auditory signals in quiet and noisy backgrounds while performing a visuo-spatial task

    Directory of Open Access Journals (Sweden)

    Vishakha W Rawool

    2016-01-01

    Full Text Available Context: The ability to detect important auditory signals while performing visual tasks may be further compromised by background chatter. Thus, it is important to know how task performance may interact with background chatter to hinder signal detection. Aim: To examine any interactive effects of speech-spectrum noise and task performance on the ability to detect signals. Settings and Design: The setting was a sound-treated booth. A repeated measures design was used. Materials and Methods: Auditory thresholds of 20 normal adults were determined at 0.5, 1, 2 and 4 kHz in the following conditions, presented in a random order: (1) quiet with attention; (2) quiet with a visuo-spatial task or puzzle (distraction); (3) noise with attention; and (4) noise with task. Statistical Analysis: Multivariate analysis of variance (MANOVA) with three repeated factors (quiet versus noise, visuo-spatial task versus no task, and signal frequency). Results: MANOVA revealed significant main effects for noise and signal frequency and significant noise–frequency and task–frequency interactions. Distraction caused by performing the task worsened the thresholds for tones presented at the beginning of the experiment and had no effect on tones presented in the middle. At the end of the experiment, thresholds (4 kHz) were better while performing the task than those obtained without performing the task. These effects were similar across the quiet and noise conditions. Conclusion: Detection of auditory signals is difficult at the beginning of a distracting visuo-spatial task, but over time, task learning and auditory training effects can nullify the effect of distraction and may improve detection of high-frequency sounds.

  2. A reinforcement learning approach to model interactions between landmarks and geometric cues during spatial learning.

    Science.gov (United States)

    Sheynikhovich, Denis; Arleo, Angelo

    2010-12-13

    In contrast to predictions derived from the associative learning theory, a number of behavioral studies suggested the absence of competition between geometric cues and landmarks in some experimental paradigms. In parallel to these studies, neurobiological experiments suggested the existence of separate independent memory systems which may not always interact according to classic associative principles. In this paper we attempt to combine these two lines of research by proposing a model of spatial learning that is based on the theory of multiple memory systems. In our model, a place-based locale strategy uses activities of modeled hippocampal place cells to drive navigation to a hidden goal, while a stimulus-response taxon strategy, presumably mediated by the dorso-lateral striatum, learns landmark-approaching behavior. A strategy selection network, proposed to reside in the prefrontal cortex, implements a simple reinforcement learning rule to switch behavioral strategies. The model is used to reproduce the results of a behavioral experiment in which an interaction between a landmark and geometric cues was studied. We show that this model, built on the basis of neurobiological data, can explain the lack of competition between the landmark and geometry, potentiation of geometry learning by the landmark, and blocking. Namely, we propose that the geometry potentiation is a consequence of cooperation between memory systems during learning, while blocking is due to competition between the memory systems during action selection. Copyright © 2010 Elsevier B.V. All rights reserved.
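
    A minimal sketch of the strategy-selection idea described above, assuming two navigation "experts" (a place-based locale strategy and a landmark-approaching taxon strategy) whose worth a selector learns from reward feedback; the update rule shown is a generic running value estimate, not necessarily the exact rule used in the paper:

        import numpy as np

        rng = np.random.default_rng(0)

        class StrategySelector:
            """Learns which of several navigation strategies to deploy."""

            def __init__(self, n_strategies=2, lr=0.1, temperature=0.2):
                self.values = np.zeros(n_strategies)  # learned value per strategy
                self.lr = lr
                self.temperature = temperature

            def choose(self):
                # Softmax over values -> probability of deploying each expert.
                p = np.exp(self.values / self.temperature)
                p /= p.sum()
                return rng.choice(len(self.values), p=p)

            def update(self, strategy, reward):
                # Move the chosen strategy's value toward the obtained reward.
                self.values[strategy] += self.lr * (reward - self.values[strategy])

    In this arrangement, both experts can continue to learn in parallel (cooperation during learning), while competition arises only at action selection, mirroring the paper's explanation of potentiation versus blocking.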

  3. Interference between postural control and spatial vs. non-spatial auditory reaction time tasks in older adults.

    Science.gov (United States)

    Fuhrman, Susan I; Redfern, Mark S; Jennings, J Richard; Furman, Joseph M

    2015-01-01

    This study investigated whether spatial aspects of an information processing task influence dual-task interference. Two groups (older and young) of healthy adults participated in dual-task experiments. The two auditory information processing tasks were a frequency discrimination choice reaction time task (non-spatial task) and a lateralization choice reaction time task (spatial task). Postural tasks included combinations of standing with eyes open or eyes closed on either a fixed floor or a sway-referenced floor. Reaction times and postural sway via center of pressure were recorded. Baseline measures of reaction time and sway were subtracted from the corresponding dual-task results to calculate reaction time task costs and postural task costs. Reaction time task cost increased with eye closure (p = 0.01), sway-referenced flooring (p < 0.0001), and the spatial task (p = 0.04). Additionally, a significant (p = 0.05) task × vision × age interaction indicated that older subjects had a significant vision × task interaction whereas young subjects did not. However, when analyzed by age group, the young group showed minimal differences in interference between the spatial and non-spatial tasks with eyes open, but showed increased interference on the spatial relative to the non-spatial task with eyes closed. In contrast, older subjects demonstrated increased interference on the spatial relative to the non-spatial task with eyes open, but not with eyes closed. These findings suggest that visual-spatial interference may occur in older subjects when vision is used to maintain posture.

  4. The perception of prosody and associated auditory cues in early-implanted children: the role of auditory working memory and musical activities.

    Science.gov (United States)

    Torppa, Ritva; Faulkner, Andrew; Huotilainen, Minna; Järvikivi, Juhani; Lipsanen, Jari; Laasonen, Marja; Vainio, Martti

    2014-03-01

    To study prosodic perception in early-implanted children in relation to auditory discrimination, auditory working memory, and exposure to music. Word and sentence stress perception, discrimination of fundamental frequency (F0), intensity and duration, and forward digit span were measured twice over approximately 16 months. Musical activities were assessed by questionnaire. Participants were twenty-one early-implanted children and age-matched normal-hearing (NH) children (4-13 years). Children with cochlear implants (CIs) exposed to music performed better than others in stress perception and F0 discrimination. Only this subgroup of implanted children improved with age in word stress perception and intensity discrimination, and improved over time in digit span. Prosodic perception, F0 discrimination, and forward digit span in implanted children exposed to music were equivalent to the NH group, but other implanted children performed more poorly. For children with CIs, word stress perception was linked to digit span and intensity discrimination; sentence stress perception was additionally linked to F0 discrimination. Prosodic perception in children with CIs is thus linked to auditory working memory and aspects of auditory discrimination. Engagement in music was linked to better performance across a range of measures, suggesting that music is a valuable tool in the rehabilitation of implanted children.

  5. KEYING AND ROLE PLAY IN BUSINESS ENCOUNTERS. SPATIAL, TEMPORAL, BEHAVIOR AND LANGUAGE CUES

    Directory of Open Access Journals (Sweden)

    GABRIELA DUMBRAVĂ

    2012-12-01

    Full Text Available This study proposes an approach to business communication based on Erving Goffman's theory of the relational dimension of meaning, according to which meaning is not attached to the communication process but generated within the context (frame) of each specific interaction. This automatically involves a complex process of keying, which basically refers to a series of paradigm shifts that individualize each instance of communication. Therefore, the present study aims at tracing the way in which the process of keying operates in business communication, where the overlapping frames of everyday informal interactions and of formal, standardized communication generate, under the pressure of culturally inherited patterns, specific sets of spatial, temporal, behavior and language cues that assign well-defined roles to the participants.

  6. Spatial selective attention in a complex auditory environment such as polyphonic music.

    Science.gov (United States)

    Saupe, Katja; Koelsch, Stefan; Rübsamen, Rudolf

    2010-01-01

    To investigate the influence of spatial information in auditory scene analysis, polyphonic music (three parts in different timbres) was composed and presented in free field. Each part contained large falling interval jumps in the melody and the task of subjects was to detect these events in one part ("target part") while ignoring the other parts. All parts were either presented from the same location (0 degrees; overlap condition) or from different locations (-28 degrees, 0 degrees, and 28 degrees or -56 degrees, 0 degrees, and 56 degrees in the azimuthal plane), with the target part being presented either at 0 degrees or at one of the right-sided locations. Results showed that spatial separation of 28 degrees was sufficient for a significant improvement in target detection (i.e., in the detection of large interval jumps) compared to the overlap condition, irrespective of the position (frontal or right) of the target part. A larger spatial separation of the parts resulted in further improvements only if the target part was lateralized. These data support the notion of improvement in the suppression of interfering signals with spatial sound source separation. Additionally, the data show that the position of the relevant sound source influences auditory performance.

  7. Auditory spatial acuity approximates the resolving power of space-specific neurons.

    Directory of Open Access Journals (Sweden)

    Avinash D S Bala

    Full Text Available The relationship between neuronal acuity and behavioral performance was assessed in the barn owl (Tyto alba), a nocturnal raptor renowned for its ability to localize sounds and for the topographic representation of auditory space found in the midbrain. We measured discrimination of sound-source separation using a newly developed procedure involving the habituation and recovery of the pupillary dilation response (PDR). The smallest discriminable change of source location was found to be about two times finer in azimuth than in elevation. Recordings from neurons in its midbrain space map revealed that their spatial tuning, like the spatial discrimination behavior, was also better in azimuth than in elevation by a factor of about two. Because the PDR behavioral assay is mediated by the same circuitry whether discrimination is assessed in azimuth or in elevation, this difference in vertical and horizontal acuity is likely to reflect a true difference in sensory resolution, without additional confounding effects of differences in motor performance in the two dimensions. Our results, therefore, are consistent with the hypothesis that the acuity of the midbrain space map determines auditory spatial discrimination.

  8. Developmental Changes in the Effect of Verbal, Non-verbal, and Spatial-Positional Cues for Memory

    Science.gov (United States)

    Derevensky, Jeffrey

    1976-01-01

    Sixty kindergarten, sixty second grade, and sixty fourth grade students performed several memory tasks under one of six conditions. The conditions differed as to the method of presentation of information. The study focused on developmental changes in children's use of verbal, nonverbal, and spatial-positional cues for memory. (Editor)

  9. Developmental Changes in the Effect of Verbal, Non-Verbal and Spatial-Positional Cues on Retention.

    Science.gov (United States)

    Derevensky, Jeffrey

    Sixty kindergarten, 60 second-grade, and 60 fourth-grade students performed several memory tasks under one of six conditions. The conditions differed as to the method of presentation of information. The study was focused on developmental changes in children's use of verbal, nonverbal, and spatial-positional cues for memory. The results, in…

  10. The relationship between visual-spatial and auditory-verbal working memory span in Senegalese and Ugandan children.

    Directory of Open Access Journals (Sweden)

    Michael J Boivin

    Full Text Available BACKGROUND: Using the Kaufman Assessment Battery for Children (K-ABC), Conant et al. (1999) observed that visual and auditory working memory (WM) span were independent in both younger and older children from DR Congo, but related in older American children and in Lao children. The present study evaluated whether visual and auditory WM span were independent in Ugandan and Senegalese children. METHOD: In a linear regression analysis we used visual (Spatial Memory, Hand Movements) and auditory (Number Recall) WM along with education and physical development (weight/height) as predictors. The predicted variable in this analysis was Word Order, a verbal memory task that has both visual and auditory memory components. RESULTS: Both the younger (<8.5 yrs) and older (>8.5 yrs) Ugandan children had auditory memory span (Number Recall) that was strongly predictive of Word Order performance. For both the younger and older groups of Senegalese children, only visual WM span (Spatial Memory) was strongly predictive of Word Order. Number Recall was not significantly predictive of Word Order in either age group. CONCLUSIONS: It is possible that greater literacy from more schooling for the Ugandan age groups mediated their greater degree of interdependence between auditory and verbal WM. Our findings support those of Conant et al., who observed in their cross-cultural comparisons that stronger education seemed to enhance the dominance of the phonological-auditory processing loop for WM.

  11. Greater anterior cingulate activation and connectivity in response to visual and auditory high-calorie food cues in binge eating: Preliminary findings.

    Science.gov (United States)

    Geliebter, Allan; Benson, Leora; Pantazatos, Spiro P; Hirsch, Joy; Carnell, Susan

    2016-01-01

    Obese individuals show altered neural responses to high-calorie food cues. Individuals with binge eating [BE], who exhibit heightened impulsivity and emotionality, may show a related but distinct pattern of irregular neural responses. However, few neuroimaging studies have compared BE and non-BE groups. To examine neural responses to food cues in BE, 10 women with BE and 10 women without BE (non-BE) who were matched for obesity (5 obese and 5 lean in each group) underwent fMRI scanning during presentation of visual (picture) and auditory (spoken word) cues representing high energy density (ED) foods, low-ED foods, and non-foods. We then compared regional brain activation in BE vs. non-BE groups for high-ED vs. low-ED foods. To explore differences in functional connectivity, we also compared psychophysiologic interactions [PPI] with dorsal anterior cingulate cortex [dACC] for BE vs. non-BE groups. Region of interest (ROI) analyses revealed that the BE group showed more activation than the non-BE group in the dACC, with no activation differences in the striatum or orbitofrontal cortex [OFC]. Exploratory PPI analyses revealed a trend towards greater functional connectivity with dACC in the insula, cerebellum, and supramarginal gyrus in the BE vs. non-BE group. Our results suggest that women with BE show hyper-responsivity in the dACC as well as increased coupling with other brain regions when presented with high-ED cues. These differences are independent of body weight, and appear to be associated with the BE phenotype. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Influence of age, spatial memory, and ocular fixation on localization of auditory, visual, and bimodal targets by human subjects.

    Science.gov (United States)

    Dobreva, Marina S; O'Neill, William E; Paige, Gary D

    2012-12-01

    A common complaint of the elderly is difficulty identifying and localizing auditory and visual sources, particularly in competing background noise. Spatial errors in the elderly may pose challenges and even threats to self and others during everyday activities, such as localizing sounds in a crowded room or driving in traffic. In this study, we investigated the influence of aging, spatial memory, and ocular fixation on the localization of auditory, visual, and combined auditory-visual (bimodal) targets. Head-restrained young and elderly subjects localized targets in a dark, echo-attenuated room using a manual laser pointer. Localization accuracy and precision (repeatability) were quantified for both ongoing and transient (remembered) targets at response delays up to 10 s. Because eye movements bias auditory spatial perception, localization was assessed under target fixation (eyes free, pointer guided by foveal vision) and central fixation (eyes fixed straight ahead, pointer guided by peripheral vision) conditions. Spatial localization across the frontal field in young adults demonstrated (1) horizontal overshoot and vertical undershoot for ongoing auditory targets under target fixation conditions, but near-ideal horizontal localization with central fixation; (2) accurate and precise localization of ongoing visual targets guided by foveal vision under target fixation that degraded when guided by peripheral vision during central fixation; (3) overestimation in horizontal central space (±10°) of remembered auditory, visual, and bimodal targets with increasing response delay. In comparison with young adults, elderly subjects showed (1) worse precision in most paradigms, especially when localizing with peripheral vision under central fixation; (2) greatly impaired vertical localization of auditory and bimodal targets; (3) increased horizontal overshoot in the central field for remembered visual and bimodal targets across response delays; (4) greater vulnerability to

  13. Auditory and Visual Cues for Topic Maintenance with Persons Who Exhibit Dementia of Alzheimer’s Type

    Directory of Open Access Journals (Sweden)

    Amy F. Teten

    2015-01-01

    Full Text Available This study compared the effectiveness of auditory and visual redirections in facilitating topic coherence for persons with Dementia of Alzheimer's Type (DAT). Five persons with moderate-stage DAT engaged in conversation with the first author. Three topics related to activities of daily living (recreational activities, food, and grooming) were broached. Each topic was presented three times to each participant: once as a baseline condition, once with auditory redirection to topic, and once with visual redirection to topic. Transcripts of the interactions were scored for overall coherence. Condition was a significant factor, in that the DAT participants exhibited better topic maintenance under the visual and auditory conditions than at baseline. In general, the performance of the participants was not affected by the topic, except for significantly higher overall coherence ratings for the visually redirected interactions dealing with the topic of food.

  14. Using spatial manipulation to examine interactions between visual and auditory encoding of pitch and time

    Directory of Open Access Journals (Sweden)

    Neil M McLachlan

    2010-12-01

    Full Text Available Music notations use both symbolic and spatial representation systems. Novice musicians do not have the training to associate symbolic information with musical identities, such as chords or rhythmic and melodic patterns. They provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies of feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more for graphic notation. This points to greater compatibility of auditory and visual neural codes for novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians.

  15. Mental representation of spatial cues in microgravity: Writing and drawing tests

    Science.gov (United States)

    Clément, Gilles; Lathan, Corinna; Lockerd, Anna; Bukley, Angie

    2009-04-01

    Humans have mental representation of their environment based on sensory information and experience. A series of experiments has been designed to allow the identification of disturbances in the mental representation of three-dimensional space during space flight as a consequence of the absence of the gravitational frame of reference. This NASA/ESA-funded research effort includes motor tests complemented by psychophysics measurements, designed to distinguish the effects of cognitive versus perceptual-motor changes due to microgravity exposure. Preliminary results have been obtained during the microgravity phase of parabolic flight. These results indicate that the vertical height of handwritten characters and drawn objects is reduced in microgravity compared to normal gravity, suggesting that the mental representation of the height of objects and the environment change during short-term microgravity. Identifying lasting abnormalities in the mental representation of spatial cues will establish the scientific and technical foundation for development of preflight and in-flight training and rehabilitative schemes, enhancing astronaut performance of perceptual-motor tasks, for example, interaction with robotic systems during exploration-class missions.

  16. Using Pitch, Amplitude Modulation, and Spatial Cues for Separation of Harmonic Instruments from Stereo Music Recordings

    Directory of Open Access Journals (Sweden)

    Bryan Pardo

    2007-01-01

    Full Text Available Recent work in blind source separation applied to anechoic mixtures of speech allows for improved reconstruction of sources that rarely overlap in a time-frequency representation. While the assumption that speech mixtures do not overlap significantly in time-frequency is reasonable, music mixtures rarely meet this constraint, requiring new approaches. We introduce a method that uses spatial cues from anechoic, stereo music recordings and assumptions regarding the structure of musical source signals to effectively separate mixtures of tonal music. We discuss existing techniques to create partial source signal estimates from regions of the mixture where source signals do not overlap significantly. We use these partial signals within a new demixing framework, in which we estimate harmonic masks for each source, allowing the determination of the number of active sources in important time-frequency frames of the mixture. We then propose a method for distributing energy from time-frequency frames of the mixture to multiple source signals. This allows dealing with mixtures that contain time-frequency frames in which multiple harmonic sources are active without requiring knowledge of source characteristics.
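
    A minimal sketch of the harmonic-masking step in the spirit of the approach described above, assuming the mixture has been transformed to a magnitude spectrogram and a fundamental-frequency estimate is available per source; the f0 tracking and the paper's exact energy-distribution rule are not reproduced here:

        import numpy as np

        def harmonic_mask(f0, freqs, n_harmonics=20, tol_hz=20.0):
            """Boolean mask over FFT bin frequencies near multiples of f0.

            freqs: bin center frequencies, e.g. np.fft.rfftfreq(n_fft, 1/fs).
            """
            harmonics = f0 * np.arange(1, n_harmonics + 1)
            return np.any(np.abs(freqs[:, None] - harmonics[None, :]) < tol_hz, axis=1)

        def split_frame(mix_mag, f0s, freqs):
            """Distribute one STFT magnitude frame among sources with given f0s."""
            masks = np.stack([harmonic_mask(f0, freqs) for f0 in f0s]).astype(float)
            claims = masks.sum(axis=0)    # how many sources claim each bin
            weights = np.divide(masks, claims, out=np.zeros_like(masks),
                                where=claims > 0)
            return weights * mix_mag[None, :]   # one magnitude frame per source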

  17. Auditory Spatial Discrimination and the Mismatch Negativity Response in Hearing-Impaired Individuals.

    Directory of Open Access Journals (Sweden)

    Yuexin Cai

    Full Text Available The aims of the present study were to investigate the ability of hearing-impaired (HI) individuals with different binaural hearing conditions to discriminate spatial auditory sources at the midline and lateral positions, and to explore the possible central processing mechanisms by measuring the minimum audible angle (MAA) and the mismatch negativity (MMN) response. To measure MAA at the left/right 0°, 45° and 90° positions, 12 normal-hearing (NH) participants and 36 patients with sensorineural hearing loss were recruited; the latter comprised 12 patients with symmetrical hearing loss (SHL) and 24 patients with asymmetrical hearing loss (AHL) [12 with unilateral hearing loss on the left (UHLL) and 12 with unilateral hearing loss on the right (UHLR)]. In addition, 128-electrode electroencephalography was used to record the MMN response in a separate group of 60 patients (20 UHLL, 20 UHLR and 20 SHL patients) and 20 NH participants. The results showed MAA thresholds of the NH participants to be significantly lower than those of the HI participants. Also, a significantly smaller MAA threshold was obtained at the midline position than at the lateral position in both the NH and SHL groups. However, in the AHL group, the MAA threshold for the 90° position on the affected side was significantly smaller than the MAA thresholds obtained at other positions. Significantly reduced amplitudes and prolonged latencies of the MMN were found in the HI groups compared to the NH group. In addition, contralateral activation was found in the UHL group for sounds emanating from the 90° position on the affected side and in the NH group. These findings suggest that the abilities of spatial discrimination at the midline and lateral positions vary significantly in different hearing conditions. A reduced MMN amplitude and prolonged latency together with bilaterally symmetrical cortical activations over the auditory hemispheres indicate possible cortical compensatory changes associated with poor

  18. Grabbing attention without knowing: Automatic capture of attention by subliminal spatial cues

    NARCIS (Netherlands)

    Mulckhuyse, Manon; Talsma, D.; Theeuwes, Jan

    2007-01-01

    The present study shows that an abrupt onset cue that is not consciously perceived can cause attentional facilitation followed by inhibition at the cued location. The observation of this classic biphasic effect of facilitation followed by inhibition of return (IOR) suggests that the subliminal cue captured attention automatically, without being consciously perceived.

  19. Spatial hearing in a child with auditory neuropathy spectrum disorder and bilateral cochlear implants.

    Science.gov (United States)

    Johnstone, Patti M; Yeager, Kelly R; Noss, Emily

    2013-06-01

    The neural dys-synchrony associated with auditory neuropathy spectrum disorder (ANSD) causes a temporal impairment that could degrade spatial hearing, particularly sound localization accuracy (SLA) and spatial release from masking (SRM). Unilateral cochlear implantation has become an accepted treatment for ANSD, but treatment options for the contralateral ear remain controversial. We report spatial hearing measures in a child with ANSD before and after receiving a second cochlear implant (CI). An 11-year-7-month-old boy with ANSD and expressive and receptive language delay received a second CI eight years after his first implant. SLA and SRM were measured four months before sequential bilateral cochlear implantation (with the contralateral ear plugged and unplugged), and again after nine months of using both CIs. Testing done before the second CI, with the first CI alone, suggested that residual hearing in the contralateral ear contributed to sound localization accuracy, but not to word recognition in quiet or noise. Nine months after receiving the second CI, SLA had improved by 12.76° and SRM had increased to 3.8-4.2 dB relative to pre-operative performance. Results were compared to published outcomes for children with bilateral CIs. The addition of a second CI in this child with ANSD improved spatial hearing.

  20. Brain correlates of the orientation of auditory spatial attention onto speaker location in a "cocktail-party" situation.

    Science.gov (United States)

    Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan

    2016-10-01

    Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
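
    A minimal sketch of the difference-waveform computation described for the N2ac, assuming epoched EEG in a trials x channels x time array and a per-trial target-side label; the array layout and names are assumptions for illustration, not the study's analysis code:

        import numpy as np

        def n2ac_difference(epochs, target_side):
            """Left-target minus right-target average ERP, per channel.

            epochs:      array, shape (n_trials, n_channels, n_times)
            target_side: array of 'L'/'R' labels, shape (n_trials,)
            """
            erp_left = epochs[target_side == 'L'].mean(axis=0)
            erp_right = epochs[target_side == 'R'].mean(axis=0)
            return erp_left - erp_right   # the N2ac difference waveform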

  1. Trial-by-trial changes in a priori informational value of external cues and subjective expectancies in human auditory attention.

    Science.gov (United States)

    Arjona, Antonio; Gómez, Carlos M

    2011-01-01

    Preparatory activity based on a priori probabilities generated in previous trials and on subjective expectancies would produce an attentional bias. However, preparation can be correct (valid) or incorrect (invalid) depending on the actual target stimulus. The alternation effect refers to the subjective expectancy that a target will not be repeated in the same position, causing RTs to increase if the target location is repeated. The present experiment, using Posner's central cue paradigm, tries to demonstrate that not only the credibility of the cue, but also the expectancy about the next position of the target, is updated on a trial-by-trial basis. Sequences of trials were analyzed. The results indicated an increase in RT benefits when sequences of two and three valid trials occurred. The analysis of errors indicated an increase in anticipatory behavior which grows as the number of valid trials increases. On the other hand, there was also an RT benefit when a trial was preceded by trials in which the position of the target changed with respect to the current trial (the alternation effect). Sequences of two alternations or two repetitions were faster than sequences of trials in which a pattern of repetition or alternation was broken. Taken together, these results suggest that in Posner's central cue paradigm, with regard to anticipatory activity, the credibility of the external cue and of the endogenously anticipated patterns of target location is constantly updated. The results suggest that Bayesian rules operate in the generation of anticipatory activity as a function of the previous trial's outcome, but also on biases or prior beliefs like the "gambler's fallacy".

  2. Auditory spatial attention to speech and complex non-speech sounds in children with autism spectrum disorder.

    Science.gov (United States)

    Soskey, Laura N; Allen, Paul D; Bennetto, Loisa

    2017-08-01

    One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  3. Auditory Perceptual and Visual-Spatial Characteristics of Gaze-Evoked Tinnitus

    Directory of Open Access Journals (Sweden)

    Jamileh Fattahi

    1996-09-01

    Full Text Available Auditory perceptual and visual-spatial characteristics of subjective tinnitus evoked by eye gaze were studied in two adult human subjects. This uncommon form of tinnitus occurred approximately 4-6 weeks following neurosurgery for gross total excision of space-occupying lesions of the cerebellopontine angle, after which hearing was lost in the operated ear. In both cases, the gaze-evoked tinnitus was characterized as tonal in nature, with pitch and loudness percepts remaining constant as long as the same horizontal or vertical eye directions were maintained. Tinnitus was absent when the eyes were in a neutral, head-referenced position with subjects looking straight ahead. The results and implications of ophthalmological assessment, standard and modified visual field assessment, pure-tone audiometric assessment, spontaneous otoacoustic emission testing, and detailed psychophysical assessment of pitch and loudness are discussed.

  4. The impact of variation in low-frequency interaural cross correlation on auditory spatial imagery in stereophonic loudspeaker reproduction

    Science.gov (United States)

    Martens, William

    2005-04-01

    Several attributes of auditory spatial imagery associated with stereophonic sound reproduction are strongly modulated by variation in interaural cross correlation (IACC) within low-frequency bands. Nonetheless, a standard practice in bass management for two-channel and multichannel loudspeaker reproduction is to mix low-frequency musical content to a single channel for reproduction via a single driver (e.g., a subwoofer). This paper reviews the results of psychoacoustic studies which support the conclusion that reproduction of decorrelated low-frequency signals via multiple drivers significantly affects such important spatial attributes as auditory source width (ASW), auditory source distance (ASD), and listener envelopment (LEV). A variety of methods have been employed in these tests, including forced-choice discrimination and identification, and direct ratings of both global dissimilarity and distinct attributes. Contrary to assumptions that underlie industrial standards established in 1994 by ITU-R Recommendation BS.775-1, these findings imply that substantial stereophonic spatial information exists within audio signals at frequencies below the 80 to 120 Hz range of prescribed subwoofer cutoff frequencies, and that loudspeaker reproduction of decorrelated signals at frequencies as low as 50 Hz can have an impact upon auditory spatial imagery. [Work supported by VRQ.]
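
    A minimal sketch of the IACC measure at issue, assuming two ear (or microphone) signals sampled at fs: both are band-limited to a low-frequency band and the maximum normalized cross-correlation is taken over lags of about +/-1 ms. The band edges and lag range are illustrative, not the paper's analysis settings:

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def iacc(left, right, fs, band=(30.0, 120.0), max_lag_ms=1.0):
            """Maximum normalized interaural cross-correlation in a band."""
            sos = butter(4, band, btype='bandpass', fs=fs, output='sos')
            l = sosfiltfilt(sos, left)
            r = sosfiltfilt(sos, right)
            max_lag = int(fs * max_lag_ms / 1000.0)
            full = np.correlate(l, r, mode='full')        # correlation at all lags
            mid = len(r) - 1                              # index of zero lag
            window = full[mid - max_lag: mid + max_lag + 1]
            denom = np.sqrt(np.sum(l ** 2) * np.sum(r ** 2))
            return np.max(np.abs(window)) / denom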

  5. Attention-shift vs. response-priming explanations for the spatial cueing effect in cross-modal tasks.

    Science.gov (United States)

    Paavilainen, Petri; Illi, Janne; Moisseinen, Nella; Niinisalo, Maija; Ojala, Karita; Reinikainen, Johanna; Vainio, Lari

    2016-06-01

    The task-irrelevant spatial location of a cue stimulus affects the processing of a subsequent target. This "Posner effect" has been explained by an exogenous attention shift to the spatial location of the cue, which improves perceptual processing of the target. We studied whether the left/right location of task-irrelevant and uninformative tones produces cueing effects on the processing of visual targets. Tones were presented randomly from the left or right. In the first condition, the subsequent visual target, requiring a response with either the left or right hand, was presented peripherally to the left or right. In the second condition, the target was a centrally presented left/right-pointing arrow indicating the response hand. In the third condition, the tone and the central arrow were presented simultaneously. Data were recorded on compatible (the tone location and the response hand were the same) and incompatible trials. Reaction times were longer on incompatible than on compatible trials. The results of the second and third conditions are difficult to explain with the attention-shift model, which emphasizes improved perceptual processing at the cued location, as the central target did not require any location-based processing. They instead suggest, as an alternative explanation, response priming of the hand corresponding to the spatial location of the tone. Simultaneous lateralized readiness potential (LRP) recordings were consistent with the behavioral data, the tone cues eliciting on incompatible trials a fast preparation for the incorrect response and on compatible trials preparation for the correct response. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  6. Pressure and particle motion detection thresholds in fish: a re-examination of salient auditory cues in teleosts.

    Science.gov (United States)

    Radford, Craig A; Montgomery, John C; Caiger, Paul; Higgs, Dennis M

    2012-10-01

    The auditory evoked potential technique has been used for the past 30 years to evaluate the hearing ability of fish. The resulting audiograms are typically presented in terms of sound pressure (dB re. 1 μPa), with the particle motion (dB re. 1 m s^-2) component largely ignored until recently. When audiograms have been presented in terms of particle acceleration, one of two approaches has been used for stimulus characterisation: measuring the pressure gradient between two hydrophones or using accelerometers. With rare exceptions these values are presented from experiments using a speaker as the stimulus, making it impossible to truly separate the contributions of direct particle motion and pressure detection to the response. Here, we compared the particle acceleration and pressure auditory thresholds of three species of fish with differing hearing specialisations: goldfish (Carassius auratus, Weberian ossicles), bigeye (Pempheris adspersus, ligamentous hearing specialisation) and a third species with no swim bladder, the common triplefin (Forsterygion lapillum), using three different methods of determining particle acceleration. In terms of particle acceleration, all three fish species have similar hearing thresholds, but when expressed as pressure thresholds goldfish are the most sensitive, followed by bigeye, with triplefin the least sensitive. It is suggested here that all fish have a similar ability to detect the particle motion component of the sound field, and that it is their ability to transduce the pressure component of the sound field to the inner ear via ancillary hearing structures that produces the differences in hearing ability. Therefore, care is needed in stimulus presentation and measurement when determining the hearing ability of fish and when interpreting comparative hearing abilities between species.
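
    The pressure/particle-motion distinction above has a simple closed form only in the ideal free-field case; real speaker-generated tank fields deviate from it, which is why direct measurement matters. A minimal sketch of that plane-wave relation, with seawater constants assumed for illustration:

        import numpy as np

        RHO = 1025.0   # seawater density, kg/m^3 (assumed)
        C = 1500.0     # speed of sound in seawater, m/s (approximate)

        def plane_wave_acceleration(p_rms_pa, freq_hz):
            """RMS particle acceleration (m/s^2) of a plane wave from its RMS
            pressure (Pa) at a given frequency: u = p/(rho*c), a = 2*pi*f*u."""
            u = p_rms_pa / (RHO * C)            # particle velocity, m/s
            return 2 * np.pi * freq_hz * u      # particle acceleration, m/s^2

        # Example: a 200 Hz tone at 100 dB re 1 uPa
        p = 1e-6 * 10 ** (100 / 20.0)           # pressure in Pa
        a = plane_wave_acceleration(p, 200.0)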

  7. Early continuous white noise exposure alters auditory spatial sensitivity and expression of GAD65 and GABAA receptor subunits in rat auditory cortex.

    Science.gov (United States)

    Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde

    2010-04-01

    Sensory experiences have important roles in the functional development of the mammalian auditory cortex. Here, we show how early continuous noise rearing influences spatial sensitivity in the rat primary auditory cortex (A1) and its underlying mechanisms. By rearing infant rat pups under conditions of continuous, moderate-level white noise, we found that noise rearing markedly attenuated the spatial sensitivity of A1 neurons. Compared with rats reared under normal conditions, spike counts of A1 neurons were more poorly modulated by changes in stimulus location, and their preferred locations were distributed over a larger area. We further show that early continuous noise rearing induced significant decreases in glutamic acid decarboxylase 65 (GAD65) and gamma-aminobutyric acid type A (GABA(A)) receptor alpha1 subunit expression, and an increase in GABA(A) receptor alpha3 expression, indicating a return to the juvenile form of the GABA(A) receptor, with no effect on the expression of N-methyl-D-aspartate receptors. These observations indicate that noise rearing has powerful adverse effects on the maturation of cortical GABAergic inhibition, which might be responsible for the reduced spatial sensitivity.

  8. Musical metaphors: evidence for a spatial grounding of non-literal sentences describing auditory events.

    Science.gov (United States)

    Wolter, Sibylla; Dudschig, Carolin; de la Vega, Irmgard; Kaup, Barbara

    2015-03-01

    This study investigated whether the spatial terms high and low, when used in sentence contexts implying a non-literal interpretation, trigger similar spatial associations as would be expected from the literal meaning of the words. In three experiments, participants read sentences describing either a high or a low auditory event (e.g., The soprano sings a high aria vs. The pianist plays a low note). In all experiments, participants were asked to judge (yes/no) whether the sentences were meaningful by means of up/down (Experiments 1 and 2) or left/right (Experiment 3) key-press responses. Contrary to previous studies reporting that metaphorical language understanding differs from literal language understanding with regard to simulation effects, the results show compatibility effects between sentence-implied pitch height and response location. The results are in line with grounded models of language comprehension proposing that sensory-motor experiences are elicited when processing literal as well as non-literal sentences. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Facilitation and inhibition arising from the exogenous orienting of covert attention depends on the temporal properties of spatial cues and targets.

    Science.gov (United States)

    Maruff, P; Yucel, M; Danckert, J; Stuart, G; Currie, J

    1999-06-01

    On the covert orienting of visual attention task (COVAT), responses to targets appearing at the location indicated by a non-predictive spatial cue are faster than responses to targets appearing at uncued locations when the stimulus onset asynchrony (SOA) is less than approximately 200 ms. For longer SOAs, this pattern reverses and RTs to targets appearing at uncued locations become faster than RTs to targets appearing at the cued location. This facilitation followed by inhibition has been termed the biphasic effect of non-predictive peripheral spatial cues. Currently, there is debate about whether these two processes are independent. This issue was addressed in a series of experiments in which the temporal overlap between the peripheral cue and target was manipulated at both short and long SOAs. Results showed that facilitation was present only when the SOA was short and there was temporal overlap between cue and target. Conversely, inhibition occurred only when the SOA was long and there was no temporal overlap between cue and target. The biphasic effect, with an early facilitation followed by a later inhibition, occurred only when the cue duration was fixed such that there was temporal overlap between the cue and target at short but not long SOAs. In a final experiment, the duration of targets, the temporal overlap between cue and target, and the SOA were manipulated factorially. The results showed that facilitation occurred only when the SOA was short, there was temporal overlap between cue and target, and the target remained visible until the subject responded. These results suggest that the facilitation and inhibition found on COVATs that use non-informative peripheral cues are independent processes and that their presence and magnitude are related to the temporal properties of cues and targets.

  10. Express attentional re-engagement but delayed entry into consciousness following invalid spatial cues in visual search.

    Directory of Open Access Journals (Sweden)

    Benoit Brisson

    Full Text Available BACKGROUND: In predictive spatial cueing studies, reaction times (RT) are shorter for targets appearing at cued locations (valid trials) than at other locations (invalid trials). An increase in the amplitude of early P1 and/or N1 event-related potential (ERP) components is also present for items appearing at cued locations, reflecting early attentional sensory gain control mechanisms. However, it is still unknown at which stage in the processing stream these early amplitude effects are translated into latency effects. METHODOLOGY/PRINCIPAL FINDINGS: Here, we measured the latency of two ERP components, the N2pc and the sustained posterior contralateral negativity (SPCN), to evaluate whether visual selection (as indexed by the N2pc) and visual short-term memory processes (as indexed by the SPCN) are delayed in invalid trials compared to valid trials. The P1 was larger contralateral to the cued side, indicating that attention was deployed to the cued location prior to target onset. Despite these early amplitude effects, the N2pc onset latency was unaffected by cue validity, indicating an express, quasi-instantaneous re-engagement of attention in invalid trials. In contrast, latency effects were observed for the SPCN, and these were correlated with the RT effect. CONCLUSIONS/SIGNIFICANCE: Results show that latency differences that could explain the RT cueing effects must occur after the visual selection processes giving rise to the N2pc, but at or before transfer into visual short-term memory, as reflected by the SPCN, at least in discrimination tasks in which the target is presented concurrently with at least one distractor. Given that the SPCN has previously been associated with conscious report, these results further show that entry into consciousness is delayed following invalid cues.

  11. Can basic auditory and cognitive measures predict hearing-impaired listeners' localization and spatial speech recognition abilities?

    Science.gov (United States)

    Neher, Tobias; Laugesen, Søren; Jensen, Niels Søgaard; Kragelund, Louise

    2011-09-01

    This study aimed to clarify the basic auditory and cognitive processes that affect listeners' performance on two spatial listening tasks: sound localization and speech recognition in spatially complex, multi-talker situations. Twenty-three elderly listeners with mild-to-moderate sensorineural hearing impairments were tested on the two spatial listening tasks, a measure of monaural spectral ripple discrimination, a measure of binaural temporal fine structure (TFS) sensitivity, and two (visual) cognitive measures indexing working memory and attention. All auditory test stimuli were spectrally shaped to restore (partial) audibility for each listener on each listening task. Eight younger normal-hearing listeners served as a control group. Data analyses revealed that the chosen auditory and cognitive measures could predict neither sound localization accuracy nor speech recognition when the target and maskers were separated along the front-back dimension. When the competing talkers were separated along the left-right dimension, however, speech recognition performance was significantly correlated with the attentional measure. Furthermore, supplementary analyses indicated additional effects of binaural TFS sensitivity and average low-frequency hearing thresholds. Altogether, these results are in support of the notion that both bottom-up and top-down deficits are responsible for the impaired functioning of elderly hearing-impaired listeners in cocktail party-like situations. © 2011 Acoustical Society of America

  12. Adaptive spatial filtering improves speech reception in noise while preserving binaural cues.

    Science.gov (United States)

    Bissmeyer, Susan R S; Goldsworthy, Raymond L

    2017-09-01

    Hearing loss greatly reduces an individual's ability to comprehend speech in the presence of background noise. Over the past decades, numerous signal-processing algorithms have been developed to improve speech reception in these situations for cochlear implant and hearing aid users. One challenge is to reduce background noise while not introducing interaural distortion that would degrade binaural hearing. The present study evaluates a noise reduction algorithm, referred to as binaural Fennec, that was designed to improve speech reception in background noise while preserving binaural cues. Speech reception thresholds were measured for normal-hearing listeners in a simulated environment with target speech generated in front of the listener and background noise originating 90° to the right of the listener. Lateralization thresholds were also measured in the presence of background noise. These measures were conducted in anechoic and reverberant environments. Results indicate that the algorithm improved speech reception thresholds, even in highly reverberant environments, and that it improved lateralization thresholds in the anechoic environment without affecting them in the reverberant environments. These results provide clear evidence that this algorithm can improve speech reception in background noise while preserving the binaural cues used to lateralize sound.
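
    The internals of binaural Fennec are not given in this abstract, but the cue-preservation constraint it describes is commonly satisfied by computing one real-valued gain per time-frequency bin and applying it identically to the left and right channels, so interaural level and phase differences pass through unchanged. Below is a minimal sketch of that generic principle (a spectral-subtraction-style gain with an assumed noise-only lead-in; all names and parameters are illustrative, and this is not the Fennec algorithm itself).

    ```python
    import numpy as np

    def shared_gain_denoise(left, right, frame=512, hop=256, noise_frames=10, floor=0.1):
        """Noise reduction that applies ONE real-valued gain to both ears in each
        time-frequency bin, leaving interaural level/phase differences intact."""
        win = np.hanning(frame)

        def stft(x):
            n = 1 + (len(x) - frame) // hop
            return np.array([np.fft.rfft(win * x[i * hop:i * hop + frame]) for i in range(n)])

        def istft(X):
            out = np.zeros(frame + hop * (len(X) - 1))
            for i, spec in enumerate(X):
                out[i * hop:i * hop + frame] += win * np.fft.irfft(spec, frame)
            return out

        L, R = stft(np.asarray(left)), stft(np.asarray(right))
        power = (np.abs(L) ** 2 + np.abs(R) ** 2) / 2        # binaural power estimate
        noise = power[:noise_frames].mean(axis=0)            # assumes a noise-only lead-in
        gain = np.clip(1.0 - noise / np.maximum(power, 1e-12), floor, 1.0)
        return istft(L * gain), istft(R * gain)              # same gain -> ILD/IPD preserved

    # Illustrative use: diotic noise plus a right-lateralised tone starting at 0.2 s
    fs = 16_000
    t = np.arange(0, 1.0, 1 / fs)
    rng = np.random.default_rng(1)
    noise = rng.normal(scale=0.1, size=t.size)
    tone = np.where(t > 0.2, np.sin(2 * np.pi * 500 * t), 0.0)
    left_out, right_out = shared_gain_denoise(noise + 0.5 * tone, noise + 1.0 * tone)
    ```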

  13. Goal orientation by geometric and feature cues: spatial learning in the terrestrial toad Rhinella arenarum.

    Science.gov (United States)

    Sotelo, María Inés; Bingman, Verner Peter; Muzio, Rubén N

    2015-01-01

    Although of crucial importance in vertebrate evolution, amphibians are rarely considered in studies of comparative cognition. Using water as a reward, we studied whether the terrestrial toad, Rhinella arenarum, is also capable of encoding geometric and feature information to navigate to a goal location. Experimental toads, partially dehydrated, were trained in either a white rectangular box (Geometry-only, Experiment 1) or in the same box with a removable colored panel (Geometry-Feature, Experiment 2) covering one wall. Four water containers were used, but only one (Geometry-Feature), or two in geometrically equivalent corners (Geometry-only), had water accessible to the trained animals. After the toads learned to successfully locate the water reward, probe trials were carried out by changing the shape of the arena or the location of the feature cue. Probe tests revealed that, under the experimental conditions used, toads can use both geometry and features to locate a goal location, but geometry is more potent as a navigational cue. The results generally agree with findings from other vertebrates and support the idea that, at the behavioral level, geometric orientation is a conserved feature shared by all vertebrates.

  14. Spatial Cueing in Time-Space Synesthetes: An Event-Related Brain Potential Study

    Science.gov (United States)

    Teuscher, Ursina; Brang, David; Ramachandran, Vilayanur S.; Coulson, Seana

    2010-01-01

    Some people report that they consistently and involuntarily associate time events, such as months of the year, with specific spatial locations; a condition referred to as time-space synesthesia. The present study investigated the manner in which such synesthetic time-space associations affect visuo-spatial attention via an endogenous cuing…

  15. Tonal cues modulate line bisection performance: Preliminary evidence for a new rehabilitation prospect?

    Directory of Open Access Journals (Sweden)

    Masami Ishihara

    2013-10-01

    Full Text Available The effect of the presentation of two different auditory pitches (high and low) on manual line-bisection performance was studied to investigate the relationship between space and magnitude representations underlying motor acts. Participants were asked to mark the midpoint of a given line with a pen while listening to a pitch via headphones. In healthy participants, the effect of the presentation order of the auditory stimuli (blocked or alternating) was tested (Exp. 1). The results showed no biasing effect of pitch in blocked-order presentation, whereas alternating presentation modulated line bisection. Lower pitch produced leftward or downward bisection biases whereas higher pitch produced rightward or upward biases, suggesting that visuomotor processing can be spatially modulated by irrelevant auditory cues. In Exp. 2, the effect of such alternating stimulation on line bisection was tested in right-brain-damaged patients with and without unilateral neglect. Similar biasing effects caused by auditory cues were observed, although the white noise presentation also affected the patient's performance. Additionally, the effect of pitch difference was larger for the neglect patient than for the no-neglect patient as well as for healthy participants. The neglect patient's bisection performance gradually improved during the experiment and was maintained even after one week. It is therefore concluded that auditory cues, characterized by both the pitch difference and the dynamic alternation, influence spatial representations. The larger biasing effect seen in the neglect patient compared to the no-neglect patient and healthy participants suggests that auditory cues could modulate the direction of the attentional bias that is characteristic of neglect patients. Thus the alternating presentation of auditory cues could be used as rehabilitation for neglect patients. The space-pitch associations are discussed in terms of a

  16. Effect of working memory load on electrophysiological markers of visuospatial orienting in a spatial cueing task simulating a traffic situation.

    Science.gov (United States)

    Vossen, Alexandra Y; Ross, Veerle; Jongen, Ellen M M; Ruiter, Robert A C; Smulders, Fren T Y

    2016-02-01

    Visuospatial attentional orienting has typically been studied in abstract tasks with low ecological validity. However, real-life tasks such as driving require allocation of working memory (WM) resources to several subtasks over and above orienting in a complex sensory environment. The aims of this study were twofold: firstly, to establish whether electrophysiological signatures of attentional orienting commonly observed under simplified task conditions generalize to a more naturalistic task situation with realistic-looking stimuli, and, secondly, to assess how these signatures are affected by increased WM load under such conditions. Sixteen healthy participants performed a dual task consisting of a spatial cueing paradigm and a concurrent verbal memory task that simulated aspects of an actual traffic situation. Behaviorally, we observed a load-induced detriment of sensitivity to targets. In the EEG, we replicated orienting-related alpha lateralization, the lateralized ERPs ADAN, EDAN, and LDAP, and the P1-N1 attention effect. When WM load was high (i.e., WM resources were reduced), lateralization of oscillatory activity in the lower alpha band was delayed. In the ERPs, we found that ADAN was also delayed, while EDAN was absent. Later ERP correlates were unaffected by load. Our results show that the findings in highly controlled artificial tasks can be generalized to spatial orienting in ecologically more valid tasks, and further suggest that the initiation of spatial orienting is delayed when WM demands of an unrelated secondary task are high. © 2015 Society for Psychophysiological Research.
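
    As background for the alpha-lateralization measure reported here: lateralized oscillatory activity is conventionally summarized by a normalized index contrasting power over posterior sites contra- and ipsilateral to the cued side; the study's exact formulation may differ. A minimal sketch of that assumed convention:

    ```python
    import numpy as np

    def alpha_lateralization_index(power_contra, power_ipsi):
        """Normalized alpha-power lateralization, (contra - ipsi) / (contra + ipsi).
        Negative values indicate the contralateral alpha suppression typically
        observed during covert spatial orienting. An assumed convention, not
        necessarily the exact measure computed in the study."""
        pc, pi = np.asarray(power_contra, float), np.asarray(power_ipsi, float)
        return (pc - pi) / (pc + pi)

    # Illustrative alpha power (uV^2) over posterior electrodes for two subjects
    print(alpha_lateralization_index([1.8, 1.6], [2.4, 2.5]))  # -> negative values
    ```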

  17. The impact of early reflections on binaural cues.

    Science.gov (United States)

    Gourévitch, Boris; Brette, Romain

    2012-07-01

    Animals live in cluttered auditory environments, where sounds arrive at the two ears through several paths. Reflections make sound localization difficult, and it is thought that the auditory system deals with this issue by isolating the first wavefront and suppressing later signals. However, in many situations, reflections arrive too early to be suppressed, for example, reflections from the ground in small animals. This paper examines the implications of these early reflections on binaural cues to sound localization, using realistic models of reflecting surfaces and a spherical model of diffraction by the head. The fusion of direct and reflected signals at each ear results in interference patterns in binaural cues as a function of frequency. These cues are maximally modified at frequencies related to the delay between direct and reflected signals, and therefore to the spatial location of the sound source. Thus, natural binaural cues differ from anechoic cues. In particular, the range of interaural time differences is substantially larger than in anechoic environments. Reflections may potentially contribute binaural cues to distance and polar angle when the properties of the reflecting surface are known and stable, for example, for reflections on the ground.
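
    The interference pattern described is a comb filter: a direct path plus one reflection with delay τ and attenuation a gives the ear-input transfer function H(f) = 1 + a·e^(-i2πfτ), with magnitude notches near f = (2k + 1)/(2τ). A short sketch, with illustrative per-ear delays and attenuation, of how such a reflection perturbs interaural cues as a function of frequency:

    ```python
    import numpy as np

    # Direct sound plus one ground reflection: H(f) = 1 + a * exp(-i*2*pi*f*tau).
    # Each ear generally sees a different reflection delay, so the binaural cues
    # (ILD and ITD) acquire frequency-dependent distortions tied to those delays.
    tau_l, tau_r = 0.45e-3, 0.60e-3          # illustrative reflection delays (s)
    a = 0.7                                  # illustrative reflection attenuation
    f = np.linspace(100.0, 8000.0, 2000)     # frequency axis (Hz)

    H_l = 1 + a * np.exp(-2j * np.pi * f * tau_l)
    H_r = 1 + a * np.exp(-2j * np.pi * f * tau_r)

    ild_shift = 20 * np.log10(np.abs(H_l) / np.abs(H_r))            # dB
    itd_shift = (np.angle(H_l) - np.angle(H_r)) / (2 * np.pi * f)   # s

    print(f"first left-ear notch near {1 / (2 * tau_l):.0f} Hz")
    print(f"max ILD distortion: {np.abs(ild_shift).max():.1f} dB")
    print(f"max ITD distortion: {np.abs(itd_shift).max() * 1e6:.0f} us")
    ```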

  18. Spatially valid proprioceptive cues improve the detection of a visual stimulus

    DEFF Research Database (Denmark)

    Jackson, Carl P T; Miall, R Chris; Balslev, Daniela

    2010-01-01

    Vision and proprioception are the main sensory modalities that convey hand location and direction of movement. Fusion of these sensory signals into a single robust percept is now well documented. However, it is not known whether these modalities also interact in the spatial allocation of attentio...

  19. Audio-visual temporal recalibration can be constrained by content cues regardless of spatial overlap

    Directory of Open Access Journals (Sweden)

    Warrick Roseboom

    2013-04-01

    Full Text Available It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this was necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio-visual speech; Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; Experiment 2), we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  20. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap.

    Science.gov (United States)

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  1. From repulsion to attraction: species- and spatial context-dependent threat sensitive response of the spider mite Tetranychus urticae to predatory mite cues

    Science.gov (United States)

    Fernández Ferrari, M. Celeste; Schausberger, Peter

    2013-06-01

    Prey perceiving predation risk commonly change their behavior to avoid predation. However, antipredator strategies are costly. Therefore, according to the threat-sensitive predator avoidance hypothesis, prey should match the intensity of their antipredator behaviors to the degree of threat, which may depend on the predator species and the spatial context. We assessed threat sensitivity of the two-spotted spider mite, Tetranychus urticae, to the cues of three predatory mites, Phytoseiulus persimilis, Neoseiulus californicus, and Amblyseius andersoni, posing different degrees of risk in two spatial contexts. We first conducted a no-choice test measuring oviposition and activity of T. urticae exposed to chemical traces of predators or traces plus predator eggs. Then, we tested the site preference of T. urticae in choice tests, using artificial cages and leaves. In the no-choice test, T. urticae deposited their first egg later in the presence of cues of P. persimilis than with cues of the other two predators or no cues, indicating interspecific threat sensitivity. T. urticae also laid fewer eggs in the presence of cues of P. persimilis and A. andersoni than with cues of N. californicus or no cues. In the artificial cage test, the spider mites preferred the site with predator traces, whereas in the leaf test, they preferentially resided on leaves without traces. We argue that in a nonplant environment, chemical predator traces do not indicate a risk for T. urticae; instead, these traces function as indirect habitat cues. The spider mites were attracted to these cues because they associated them with the existence of a nearby host plant.

  2. Contributions of Sensory Coding and Attentional Control to Individual Differences in Performance in Spatial Auditory Selective Attention Tasks.

    Science.gov (United States)

    Dai, Lengshi; Shinn-Cunningham, Barbara G

    2016-01-01

    Listeners with normal hearing thresholds (NHTs) differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in the cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials (ERPs) from the scalp (reflecting cortical responses to sound) and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. These results support the hypothesis that behavioral abilities amongst listeners with NHTs can arise due to both subcortical coding differences and differences in attentional control, depending on stimulus characteristics.
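
    For context on the "subcortical coding strength" metric: EFR strength is often quantified as the spectral magnitude of the averaged response at the stimulus envelope (modulation) frequency, and degradation as a dB ratio between masking conditions. The sketch below is an assumed formulation of that idea, not the authors' exact computation.

    ```python
    import numpy as np

    def efr_amplitude(response, fs, mod_freq):
        """Spectral magnitude of an averaged EFR at the envelope frequency --
        one common way to quantify envelope-following strength."""
        spec = np.abs(np.fft.rfft(response)) / len(response)
        freqs = np.fft.rfftfreq(len(response), 1 / fs)
        return spec[np.argmin(np.abs(freqs - mod_freq))]

    def coding_strength_db(partially_masked, fully_masked, fs, mod_freq):
        """EFR degradation under full vs. partial masking, in dB (assumed metric)."""
        return 20 * np.log10(efr_amplitude(partially_masked, fs, mod_freq)
                             / efr_amplitude(fully_masked, fs, mod_freq))

    # Illustrative use with synthetic 100 Hz envelope-following responses
    fs = 16_000
    t = np.arange(0, 0.5, 1 / fs)
    partial = 0.8 * np.sin(2 * np.pi * 100 * t)   # robust EFR
    full = 0.2 * np.sin(2 * np.pi * 100 * t)      # degraded EFR under full masking
    print(f"coding strength: {coding_strength_db(partial, full, fs, 100):.1f} dB")
    ```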

  3. Contributions of sensory coding and attentional control to individual differences in performance in spatial auditory selective attention tasks

    Directory of Open Access Journals (Sweden)

    Lengshi Dai

    2016-10-01

    Full Text Available Listeners with normal hearing thresholds differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials from the scalp (ERPs, reflecting cortical responses to sound), and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. These results support the hypothesis that behavioral abilities amongst listeners with normal hearing thresholds can arise due to both subcortical coding differences and differences in attentional control, depending on

  4. Individual differences in verbal-spatial conflict in rapid spatial-orientation tasks.

    Science.gov (United States)

    Barrow, Jane H; Baldwin, Carryl L

    2015-05-01

    The impact of interference from irrelevant spatial versus verbal cues is investigated in an auditory spatial Stroop task, and individual differences in navigation strategy are examined as a moderating factor. Verbal-spatial cue conflict in the auditory modality has not been extensively studied, and yet the potential for such conflict can be high in certain settings, such as modern aircraft and automobile cockpits, where multiple warning systems and verbally delivered instructions may compete for the operator's spatial attention. Two studies are presented in which participants responded to either the semantic meaning or the spatial location of directional words, which were presented from congruent and incongruent locations. A subset was selected from the larger sample for additional analyses based on their navigation strategy. Results demonstrated greater interference when participants were responding to the spatial location and thus attempting to ignore conflicting semantic information. Participants with a verbal navigation strategy paralleled this finding. Conversely, highly spatial navigators responded faster to spatially relevant information but did not show corresponding interference when trying to ignore spatial information. The findings suggest that people have fundamentally different approaches to the use of auditory spatial information that manifest at the early level of orienting toward a single word or sound. When designing spatial information displays and warning systems, particularly those with an auditory component, designers should ensure that either verbal-directional or nonverbal-spatial information is utilized by all alerts to reduce interference. © 2014, Human Factors and Ergonomics Society.

  5. Glial cell contributions to auditory brainstem development

    Directory of Open Access Journals (Sweden)

    Karina S Cramer

    2016-10-01

    Full Text Available Glial cells, previously thought to have generally supporting roles in the central nervous system, are emerging as essential contributors to multiple aspects of neuronal circuit function and development. This review focuses on the contributions of glial cells to the development of specialized auditory pathways in the brainstem. These pathways display specialized synapses and an unusually high degree of precision in circuitry that enables sound source localization. The development of these pathways thus requires highly coordinated molecular and cellular mechanisms. Several classes of glial cells, including astrocytes, oligodendrocytes, and microglia, have now been explored in these circuits in both avian and mammalian brainstems. Distinct populations of astrocytes are found over the course of auditory brainstem maturation. Early appearing astrocytes are associated with spatial compartments in the avian auditory brainstem. Factors from late appearing astrocytes promote synaptogenesis and dendritic maturation, and astrocytes remain integral parts of specialized auditory synapses. Oligodendrocytes play a unique role in both birds and mammals in highly regulated myelination essential for proper timing to decipher interaural cues. Microglia arise early in brainstem development and may contribute to maturation of auditory pathways. Together these studies demonstrate the importance of non-neuronal cells in the assembly of specialized auditory brainstem circuits.

  6. Spatial Attention Modulates the Precedence Effect

    Science.gov (United States)

    London, Sam; Bishop, Christopher W.; Miller, Lee M.

    2012-01-01

    Communication and navigation in real environments rely heavily on the ability to distinguish objects in acoustic space. However, auditory spatial information is often corrupted by conflicting cues and noise such as acoustic reflections. Fortunately the brain can apply mechanisms at multiple levels to emphasize target information and mitigate such…

  7. Effects of a Combined 3-D Auditory/visual Cueing System on Visual Target Detection Using a Helmet-Mounted Display

    National Research Council Canada - National Science Library

    Pinedo, Carlos; Young, Laurence; Esken, Robert

    2005-01-01

    ..., and the development and evaluation of the NDFR symbology for on/off-boresight viewing. The localized auditory research includes looking at the benefits of augmenting the Terrain Collision Avoidance System (TCAS...

  8. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms

    OpenAIRE

    Rutkowski, Tomasz M.

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain–computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI–lab research group during its association with University of Tsukuba, Japan. The nine novel approaches are discussed in applications to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user intentions are decoded from the brainwaves in realti...

  9. Extending and Applying the EPIC Architecture for Human Cognition and Performance: Auditory and Spatial Components

    Science.gov (United States)

    2013-03-20

    by postulating that an object with a perceived location in space could have both visual and auditory properties. A connection was added between the...fundamental pitch improves the discrimination of simultaneous vowel sounds (surveyed by Darwin, 2008). As a simple way to incorporate this effect, we...simultaneous speakers. J. Acoust. Soc. Am. 110(3), 1101-1109. Darwin, C. J. (2008). Listening to speech in the presence of other sounds. Philosophical

  10. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories

    Directory of Open Access Journals (Sweden)

    Christina M. Karns

    2015-06-01

    Full Text Available Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages.

  11. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories

    Science.gov (United States)

    Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.

    2015-01-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults using a naturalistic dichotic listening paradigm, characterizing the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721

  12. Auditory Spatial Perception: Auditory Localization

    Science.gov (United States)

    2012-05-01

    the presence of primacy and recency effects, resulting in a large number of errors in which listeners erroneously selected the loudspeaker that had...the sound source that produced this sound. As in the previous studies mentioned, pronounced primacy and recency effects were found. Further research...

  13. Auditory Spatial Perception: Auditory Localization

    Science.gov (United States)

    2012-05-01

    the surrounding space and the location and position of our own body within it. Thus, it is the multisensory awareness of being immersed in a specific...improves situational awareness, speech perception, and sound source identification in the presence of other sound sources (e.g., Bronkhorst, 2000; Kidd et...ventriloquism effect (VE) (Howard and Templeton, 1966), in which the listener perceives the ventriloquist's speech as coming from the ventriloquist's dummy. The

  14. The effects of distraction and a brief intervention on auditory and visual-spatial working memory in college students with attention deficit hyperactivity disorder.

    Science.gov (United States)

    Lineweaver, Tara T; Kercood, Suneeta; O'Keeffe, Nicole B; O'Brien, Kathleen M; Massey, Eric J; Campbell, Samantha J; Pierce, Jenna N

    2012-01-01

    Two studies addressed how young adult college students with attention deficit hyperactivity disorder (ADHD) (n = 44) compare to their nonaffected peers (n = 42) on tests of auditory and visual-spatial working memory (WM), how vulnerable they are to auditory and visual distractions, and how they are affected by a simple intervention. Students with ADHD demonstrated worse auditory WM than did controls. A near-significant trend indicated that auditory distractions interfered with the visual WM of both groups and that, whereas controls were also vulnerable to visual distractions, visual distractions improved visual WM in the ADHD group. The intervention was ineffective. Limited correlations emerged between self-reported ADHD symptoms and objective test performance; students with ADHD who perceived themselves as more symptomatic often had better WM and were less vulnerable to distractions than their ADHD peers.

  15. Plasticity in the neural coding of auditory space in the mammalian brain

    Science.gov (United States)

    King, Andrew J.; Parsons, Carl H.; Moore, David R.

    2000-10-01

    Sound localization relies on the neural processing of monaural and binaural spatial cues that arise from the way sounds interact with the head and external ears. Neurophysiological studies of animals raised with abnormal sensory inputs show that the map of auditory space in the superior colliculus is shaped during development by both auditory and visual experience. An example of this plasticity is provided by monaural occlusion during infancy, which leads to compensatory changes in auditory spatial tuning that tend to preserve the alignment between the neural representations of visual and auditory space. Adaptive changes also take place in sound localization behavior, as demonstrated by the fact that ferrets raised and tested with one ear plugged learn to localize as accurately as control animals. In both cases, these adjustments may involve greater use of monaural spectral cues provided by the other ear. Although plasticity in the auditory space map seems to be restricted to development, adult ferrets show some recovery of sound localization behavior after long-term monaural occlusion. The capacity for behavioral adaptation is, however, task dependent, because auditory spatial acuity and binaural unmasking (a measure of the spatial contribution to the "cocktail party effect") are permanently impaired by chronically plugging one ear, both in infancy but especially in adulthood. Experience-induced plasticity allows the neural circuitry underlying sound localization to be customized to individual characteristics, such as the size and shape of the head and ears, and to compensate for natural conductive hearing losses, including those associated with middle ear disease in infancy.

  16. Relationship between postural stability and spatial hearing.

    Science.gov (United States)

    Zhong, Xuan; Yost, William A

    2013-10-01

    Maintaining balance is known to be a multisensory process that uses information from different sensory organs. Although it has been known for a long time that spatial hearing cues provide humans with moderately accurate abilities to localize sound sources, how the auditory system interacts with balance mediated by the vestibular system remains largely a mystery. The primary goal of the current study was to determine whether auditory spatial cues obtained from a fixed sound source can help human participants balance themselves as compared to conditions in which participants use vision. The experiment uses modified versions of conventional clinical tests: the Tandem Romberg test and the Fukuda Stepping test. In the Tandem Romberg test, participants stand with their feet in a heel-to-toe position and try to maintain balance for 40 sec. In the Fukuda Stepping test, a participant is asked to close his or her eyes and to march in place for 100 steps. The sway and angular deviation of each participant was measured with and without vision and spatial auditory cues. An auditory spatial reference was provided by presenting a broadband noise source from a loudspeaker directly in front of the participant, located 1-2 m away. A total of 19 participants (11 women and 8 men; mean age = 27 yr; age range = 18∼52 yr) voluntarily participated in the experiment. All participants had normal vision, hearing, and vestibular function. The primary intervention was the use of a broadband noise source to provide an auditory spatial referent for balance measurements in the Tandem Romberg test and Fukuda Stepping test. Conditions were also tested in which the participants had their eyes opened or closed. A head tracker recorded the position of the participant's head for the Tandem Romberg test. The angular deviation of the feet after 100 steps was measured in the Fukuda Stepping test. An average distance or angle moved by the head or feet was calculated relative to the head or feet resting position.
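
    A sway summary of the kind described, the average deviation of tracked positions from a resting reference, might be computed as in the sketch below; this is a generic illustration, not the authors' exact analysis.

    ```python
    import numpy as np

    def mean_sway(positions, rest):
        """Mean Euclidean deviation of tracked positions from a resting reference.
        positions: (n, 3) array of head (or foot) coordinates; rest: (3,) reference.
        A generic summary statistic, assumed for illustration."""
        return np.linalg.norm(np.asarray(positions) - np.asarray(rest), axis=1).mean()

    # Illustrative use: simulated 60 Hz head-tracker data over a 40 s trial (metres)
    rng = np.random.default_rng(0)
    rest = np.array([0.0, 0.0, 1.70])
    samples = rng.normal(loc=rest, scale=0.01, size=(2400, 3))
    print(f"mean sway: {mean_sway(samples, rest) * 100:.2f} cm")
    ```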

  17. Postural prioritization is differentially altered in healthy older compared to younger adults during visual and auditory coded spatial multitasking.

    Science.gov (United States)

    Liston, Matthew B; Bergmann, Jeroen H; Keating, Niamh; Green, David A; Pavlou, Marousa

    2014-01-01

    Many daily activities require appropriate allocation of attention between postural and cognitive tasks (i.e. dual-tasking) to be carried out effectively. Processing multiple streams of spatial information is important for everyday tasks such as road crossing. Fifteen community-dwelling healthy older (mean age=78.3, male=1) and twenty younger adults (mean age=25.3, male=6) completed a novel bimodal spatial multi-task test providing contextually similar spatial information via separate sensory modalities to investigate effects on postural prioritization. Two tasks, a temporally random visually coded spatial step navigation task (VS) and a regular auditory-coded spatial congruency task (AS), were performed independently (single task) and in combination (multi-task). Response time, accuracy and dual-task costs (% change in multi-task condition) were determined. Results showed a significant 3-way interaction between task type (VS vs. AS), complexity (single vs. multi) and age group for both response time (p ≤ 0.01) and response accuracy (p ≤ 0.05), with older adults performing significantly worse than younger adults. Dual-task costs were significantly greater for older compared to younger adults in the VS step task for both response time (p ≤ 0.01) and accuracy (p ≤ 0.05), indicating prioritization of the AS over the VS stepping task in older adults. Younger adults displayed greater AS task response-time dual-task costs compared to older adults (p ≤ 0.05), indicating VS task prioritization in agreement with the posture first strategy. Findings suggest that novel dual modality spatial testing may lead to adoption of postural strategies that deviate from posture first, particularly in older people. Adoption of previously unreported postural prioritization strategies may influence balance control in older people. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Facilitating Children’s Ability to Distinguish Symbols for Emotions: The Effects of Background Color Cues and Spatial Arrangement of Symbols on Accuracy and Speed of Search

    Science.gov (United States)

    Wilkinson, Krista M.; Snell, Julie

    2012-01-01

    Purpose: Communication about feelings is a core element of human interaction. Aided augmentative and alternative communication systems must therefore include symbols representing these concepts. The symbols must be readily distinguishable in order for users to communicate effectively. However, emotions are represented within most systems by schematic faces in which subtle distinctions are difficult to represent. We examined whether background color cuing and spatial arrangement might help children identify symbols for different emotions. Methods: Thirty nondisabled children searched for symbols representing emotions within an 8-choice array. On some trials, a color cue signaled the valence of the emotion (positive vs. negative). Additionally, symbols were either organized with the negatively-valenced symbols at the top and the positive symbols on the bottom of the display, or the symbols were distributed randomly throughout. Dependent variables were accuracy and speed of responses. Results: The speed with which children could locate a target was significantly faster for displays in which symbols were clustered by valence, but only when the symbols had white backgrounds. Addition of a background color cue did not facilitate responses. Conclusions: Rapid search was facilitated by a spatial organization cue, but not by the addition of background color. Further examination of the situations in which color cues may be useful is warranted. PMID:21813821

  19. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Directory of Open Access Journals (Sweden)

    Eric Olivier Boyer

    2013-04-01

    Full Text Available Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements were studied in normal human subjects as they pointed towards unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short durations of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in the acoustic cues due to changes in head orientation in order to support online motor control. How to design informative acoustic feedback needs to be studied carefully to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space.

  20. Effects of dynamic range compression on spatial selective auditory attention in normal-hearing listeners.

    Science.gov (United States)

    Schwartz, Andrew H; Shinn-Cunningham, Barbara G

    2013-04-01

    Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) restored much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.
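
    The independent-versus-linked contrast studied here can be made concrete with a static compression curve: computing gain separately per ear shrinks the ILD, whereas applying the louder ear's gain to both ears leaves it intact. A minimal sketch with an assumed threshold and ratio (not the study's actual compressor, which was also time-varying):

    ```python
    import numpy as np

    def compressor_gain_db(level_db, threshold=50.0, ratio=3.0):
        """Static compressor gain (dB): above threshold, output level grows at
        1/ratio of the input rate. Threshold/ratio values are illustrative."""
        over = np.maximum(np.asarray(level_db, float) - threshold, 0.0)
        return -over * (1.0 - 1.0 / ratio)

    # A source off to the right: 10 dB more intense at the right ear.
    left_db, right_db = 60.0, 70.0

    # Independent compression: each ear gets its own gain, shrinking the ILD.
    ild_indep = (right_db + compressor_gain_db(right_db)) - (left_db + compressor_gain_db(left_db))

    # Linked compression: both ears get the gain for the louder ear; ILD survives.
    g = compressor_gain_db(max(left_db, right_db))
    ild_linked = (right_db + g) - (left_db + g)

    print(f"ILD in: 10.0 dB | independent: {ild_indep:.1f} dB | linked: {ild_linked:.1f} dB")
    ```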

  1. The Effect of Auditory and Contextual Emotional Cues on the Ability to Recognise Facial Expressions of Emotion in Healthy Adult Aging

    OpenAIRE

    Duncan, Nikki

    2013-01-01

    Previous emotion recognition studies have suggested an age-related decline in the recognition of facial expressions of emotion. However, these studies often lack ecological validity and do not consider the multiple interacting sensory stimuli that are critical to realworld emotion recognition. In the current study, emotion recognition in everyday life was considered to comprise of the interaction between facial expressions, accompanied by an auditory expression and embedded in a situational c...

  2. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms

    Directory of Open Access Journals (Sweden)

    Tomasz Maciej Rutkowski

    2016-12-01

    Full Text Available The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in applications to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline method for boosting BCI classification accuracy, based on a previously proposed information-geometry-derived approach, is also discussed to further support the reviewed robotic and virtual reality thought-based control paradigms.

  3. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms.

    Science.gov (United States)

    Rutkowski, Tomasz M

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in applications to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline method for boosting BCI classification accuracy, based on a previously proposed information-geometry-derived approach, is also discussed to further support the reviewed robotic and virtual reality thought-based control paradigms.

  4. The role of spatial abilities and age in performance in an auditory computer navigation task.

    Science.gov (United States)

    Pak, Richard; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D

    2006-01-01

    Age-related differences in spatial ability have been suggested as a mediator of age-related differences in computer-based task performance. However, the vast majority of tasks studied have primarily used a visual display (e.g., graphical user interfaces). In the current study, the relationship between spatial ability and performance in a non-visual computer-based navigation task was examined in a sample of 196 participants ranging in age from 18 to 91. Participants called into a simulated interactive voice response system and carried out a variety of transactions. They also completed measures of attention, working memory, and spatial abilities. The results showed that age-related differences in spatial ability predicted a significant amount of variance in performance in the non-visual computer task, even after controlling for other abilities. Understanding the abilities that influence performance with technology may provide insight into the source of age-related performance differences in the successful use of technology.

  5. Costs of switching auditory spatial attention in following conversational turn-taking

    Directory of Open Access Journals (Sweden)

    Gaven Lin

    2015-04-01

    Full Text Available Following a multi-talker conversation relies on the ability to rapidly and efficiently shift the focus of spatial attention from one talker to another. The current study investigated the listening costs associated with shifts in spatial attention during conversational turn-taking in 16 normally-hearing listeners using a novel sentence recall task. Three pairs of syntactically fixed but semantically unpredictable matrix sentences, recorded from a single male talker, were presented concurrently through an array of three loudspeakers (directly ahead and +/-30° azimuth). Subjects attended to one spatial location, cued by a tone, and followed the target conversation from one sentence to the next using the call-sign at the beginning of each sentence. Subjects were required to report the last three words of each sentence (speech recall task) or answer multiple choice questions related to the target material (speech comprehension task). The reading span test, attention network test, and trail making test were also administered to assess working memory, attentional control, and executive function. There was a 10.7 ± 1.3% decrease in word recall, a pronounced primacy effect, and a rise in masker confusion errors and word omissions when the target switched location between sentences. Switching costs were independent of the location, direction, and angular size of the spatial shift but did appear to be load dependent and only significant for complex questions requiring multiple cognitive operations. Reading span scores were positively correlated with total words recalled, and negatively correlated with switching costs and word omissions. Task switching speed (Trail-B time) was also significantly correlated with recall accuracy. Overall, this study highlights (i) the listening costs associated with shifts in spatial attention and (ii) the important role of working memory in maintaining goal-relevant information and extracting meaning from dynamic multi

  6. Costs of switching auditory spatial attention in following conversational turn-taking.

    Science.gov (United States)

    Lin, Gaven; Carlile, Simon

    2015-01-01

    Following a multi-talker conversation relies on the ability to rapidly and efficiently shift the focus of spatial attention from one talker to another. The current study investigated the listening costs associated with shifts in spatial attention during conversational turn-taking in 16 normally-hearing listeners using a novel sentence recall task. Three pairs of syntactically fixed but semantically unpredictable matrix sentences, recorded from a single male talker, were presented concurrently through an array of three loudspeakers (directly ahead and +/-30° azimuth). Subjects attended to one spatial location, cued by a tone, and followed the target conversation from one sentence to the next using the call-sign at the beginning of each sentence. Subjects were required to report the last three words of each sentence (speech recall task) or answer multiple choice questions related to the target material (speech comprehension task). The reading span test, attention network test, and trail making test were also administered to assess working memory, attentional control, and executive function. There was a 10.7 ± 1.3% decrease in word recall, a pronounced primacy effect, and a rise in masker confusion errors and word omissions when the target switched location between sentences. Switching costs were independent of the location, direction, and angular size of the spatial shift but did appear to be load dependent and only significant for complex questions requiring multiple cognitive operations. Reading span scores were positively correlated with total words recalled, and negatively correlated with switching costs and word omissions. Task switching speed (Trail-B time) was also significantly correlated with recall accuracy. Overall, this study highlights (i) the listening costs associated with shifts in spatial attention and (ii) the important role of working memory in maintaining goal relevant information and extracting meaning from dynamic multi-talker conversations.

  7. The Influence of Monocular Spatial Cues on Vergence Eye Movements in Monocular and Binocular Viewing of 3-D and 2-D Stimuli.

    Science.gov (United States)

    Batvinionak, Anton A; Gracheva, Maria A; Bolshakov, Andrey S; Rozhkova, Galina I

    2015-01-01

    The influence of monocular spatial cues on the vergence eye movements was studied in two series of experiments: (I) the subjects were viewing a 3-D video and also its 2-D version, binocularly and monocularly; and (II) in binocular and monocular viewing conditions, the subjects were presented with stationary 2-D stimuli containing or not containing some monocular indications of spatial arrangement. The results of the series (I) showed that, in binocular viewing conditions, the vergence eye movements were only present in the case of 3-D but not 2-D video, while in the course of monocular viewing of 2-D video, some regular vergence eye movements could be revealed, suggesting that the occluded eye position could be influenced by the spatial organization of the scene reconstructed on the basis of the monocular depth information provided by the viewing eye. The data obtained in series (II), in general, seem to support this hypothesis. © The Author(s) 2015.

  8. Influence of auditory spatial attention on cross-modal semantic priming effect: evidence from N400 effect.

    Science.gov (United States)

    Wang, Hongyan; Zhang, Gaoyan; Liu, Baolin

    2017-01-01

    Semantic priming is an important research topic in the field of cognitive neuroscience. Previous studies have shown that the uni-modal semantic priming effect can be modulated by attention. However, the influence of attention on cross-modal semantic priming is unclear. To investigate this issue, the present study combined a cross-modal semantic priming paradigm with an auditory spatial attention paradigm, presenting the visual pictures as the prime stimuli and the semantically related or unrelated sounds as the target stimuli. Event-related potentials results showed that when the target sound was attended to, the N400 effect was evoked. The N400 effect was also observed when the target sound was not attended to, demonstrating that the cross-modal semantic priming effect persists even though the target stimulus is not focused on. Further analyses revealed that the N400 effect evoked by the unattended sound was significantly lower than the effect evoked by the attended sound. This contrast provides new evidence that the cross-modal semantic priming effect can be modulated by attention.

  9. Investigating the time course of tactile reflexive attention using a non-spatial discrimination task.

    Science.gov (United States)

    Miles, Eleanor; Poliakoff, Ellen; Brown, Richard J

    2008-06-01

    Peripheral cues are thought to facilitate responses to stimuli presented at the same location because they lead to exogenous attention shifts. Facilitation has been observed in numerous studies of visual and auditory attention, but there have been only four demonstrations of tactile facilitation, all in studies with potential confounds. Three studies used a spatial (finger versus thumb) discrimination task, where the cue could have provided a spatial framework that might have assisted the discrimination of subsequent targets presented on the same side as the cue. The final study circumvented this problem by using a non-spatial discrimination; however, the cues were informative and interspersed with visual cues which may have affected the attentional effects observed. In the current study, therefore, we used a non-spatial tactile frequency discrimination task following a non-informative tactile white noise cue. When the target was presented 150 ms after the cue, we observed faster discrimination responses to targets presented on the same side compared to the opposite side as the cue; by 1000 ms, responses were significantly faster to targets presented on the opposite side to the cue. Thus, we demonstrated that tactile attentional facilitation can be observed in a non-spatial discrimination task, under unimodal conditions and with entirely non-predictive cues. Furthermore, we provide the first demonstration of significant tactile facilitation and tactile inhibition of return within a single experiment.

  10. Real color captures attention and overrides spatial cues in grapheme-color synesthetes but not in controls.

    Science.gov (United States)

    van Leeuwen, Tessa M; Hagoort, Peter; Händel, Barbara F

    2013-08-01

    Grapheme-color synesthetes perceive color when reading letters or digits. We investigated oscillatory brain signals of synesthetes vs. controls using magnetoencephalography. Brain oscillations specifically in the alpha band (∼10 Hz) have two interesting features: alpha has been linked to inhibitory processes and can act as a marker for attention. The possible role of reduced inhibition as an underlying cause of synesthesia, as well as the precise role of attention in synesthesia, is widely discussed. To assess alpha power effects due to synesthesia, synesthetes as well as matched controls viewed synesthesia-inducing graphemes, colored control graphemes, and non-colored control graphemes while brain activity was recorded. Subjects had to report a color change at the end of each trial, which allowed us to assess the strength of synesthesia in each synesthete. Since color (synesthetic or real) might allocate attention, we also included an attentional cue in our paradigm which could direct covert attention. In controls the attentional cue always caused a lateralization of alpha power with a contralateral decrease and ipsilateral alpha increase over occipital sensors. In synesthetes, however, the influence of the cue was overruled by color: independent of the attentional cue, alpha power decreased contralateral to the color (synesthetic or real). This indicates that in synesthetes color guides attention. This was confirmed by reaction time effects due to color, i.e. faster RTs for the color side independent of the cue. Finally, the stronger the observed color-dependent alpha lateralization, the stronger was the manifestation of synesthesia as measured by congruency effects of synesthetic colors on RTs. Behavioral and imaging results indicate that color induces a location-specific, automatic shift of attention towards color in synesthetes but not in controls. We hypothesize that this mechanism can facilitate coupling of grapheme and color during the development of synesthesia.

  11. Different spatio-temporal electroencephalography features drive the successful decoding of binaural and monaural cues for sound localization.

    Science.gov (United States)

    Bednar, Adam; Boland, Francis M; Lalor, Edmund C

    2017-03-01

    The human ability to localize sound is essential for monitoring our environment and helps us to analyse complex auditory scenes. Although the acoustic cues mediating sound localization have been established, it remains unknown how these cues are represented in human cortex. In particular, it is still a point of contention whether binaural and monaural cues are processed by the same or distinct cortical networks. In this study, participants listened to a sequence of auditory stimuli from different spatial locations while we recorded their neural activity using electroencephalography (EEG). The stimuli were presented over a loudspeaker array, which allowed us to deliver realistic, free-field stimuli in both the horizontal and vertical planes. Using a multivariate classification approach, we showed that it is possible to decode sound source location from scalp-recorded EEG. Robust and consistent decoding was shown for stimuli that provide binaural cues (i.e. Left vs. Right stimuli). Decoding location when only monaural cues were available (i.e. Front vs. Rear and elevational stimuli) was successful for a subset of subjects and showed less consistency. Notably, the spatio-temporal pattern of EEG features that facilitated decoding differed based on the availability of binaural and monaural cues. In particular, we identified neural processing of binaural cues at around 120 ms post-stimulus and found that monaural cues are processed later between 150 and 200 ms. Furthermore, different spatial activation patterns emerged for binaural and monaural cue processing. These spatio-temporal dissimilarities suggest the involvement of separate cortical mechanisms in monaural and binaural acoustic cue processing. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
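
    A minimal sketch of the multivariate decoding step (illustrative only; the data shapes, placeholder epochs, and classifier choice are assumptions, not the authors' pipeline):

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score

      # X: trials x (channels * time points) EEG features; y: 0 = Left, 1 = Right.
      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 64 * 50))   # placeholder epochs
      y = rng.integers(0, 2, size=200)          # placeholder location labels

      clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
      scores = cross_val_score(clf, X, y, cv=5)  # chance is ~0.5 for two classes
      print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")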

  12. Development of visuo-auditory integration in space and time

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2012-09-01

    Full Text Available Adults integrate multisensory information optimally (e.g., Ernst & Banks, 2002), while children are not able to integrate multisensory visual-haptic cues until 8-10 years of age (e.g., Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflictual or not) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task in both perceived time and precision thresholds. In contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory comparison of vision in a spatial visuo-audio task and a cross-sensory comparison of audition in a temporal visuo-audio task.
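
    The Bayesian prediction referred to here is the standard maximum-likelihood cue-combination rule: each cue is weighted by its reliability (inverse variance), and the bimodal variance is smaller than either unimodal variance. A minimal sketch with illustrative numbers (not the paper's data):

      import numpy as np

      def optimal_integration(mu_v, sigma_v, mu_a, sigma_a):
          # Reliability-weighted mean and the optimally reduced variance.
          w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
          mu_bimodal = w_v * mu_v + (1 - w_v) * mu_a
          sigma_bimodal = np.sqrt((sigma_v**2 * sigma_a**2) /
                                  (sigma_v**2 + sigma_a**2))
          return mu_bimodal, sigma_bimodal

      # e.g., vision more precise in space: the predicted bimodal
      # threshold is lower than either unimodal threshold.
      print(optimal_integration(mu_v=0.0, sigma_v=1.0, mu_a=2.0, sigma_a=3.0))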

  13. Asymmetry in auditory and spatial attention span in normal elderly genetically at risk for Alzheimer's disease.

    Science.gov (United States)

    Jacobson, Mark W; Delis, Dean C; Bondi, Mark W; Salmon, David P

    2005-02-01

    Some studies of elderly individuals with the ApoE-e4 genotype noted subtle deficits on tests of attention such as the WAIS-R Digit Span subtest, but these findings have not been consistently reported. One possible explanation for the inconsistent results could be the presence of subgroups of e4+ individuals with asymmetric cognitive profiles (i.e., significant discrepancies between verbal and visuospatial skills). Comparing genotype groups with individual, modality-specific tests might obscure subtle differences between verbal and visuospatial attention in these asymmetric subgroups. In this study, we administered the WAIS-R Digit Span and WMS-R Visual Memory Span subtests to 21 nondemented elderly e4+ individuals and 21 elderly e4- individuals matched on age, education, and overall cognitive ability. We hypothesized that (a) the e4+ group would show a higher incidence of asymmetric cognitive profiles when comparing Digit Span/Visual Memory Span performance relative to the e4- group; and (b) an analysis of individual test performance would fail to reveal differences between the two subject groups. Although the groups' performances were comparable on the individual attention span tests, the e4+ group showed a significantly larger discrepancy between digit span and spatial span scores compared to the e4- group. These findings suggest that contrast measures of modality-specific attentional skills may be more sensitive to subtle group differences in at-risk groups, even when the groups do not differ on individual comparisons of standardized test means. The increased discrepancy between verbal and visuospatial attention may reflect the presence of "subgroups" within the ApoE-e4 group that are qualitatively similar to asymmetric subgroups commonly associated with the earliest stages of AD.

  14. Sensitivity to an Illusion of Sound Location in Human Auditory Cortex.

    Science.gov (United States)

    Higgins, Nathan C; McLaughlin, Susan A; Da Costa, Sandra; Stecker, G Christopher

    2017-01-01

    Human listeners place greater weight on the beginning of a sound compared to the middle or end when determining sound location, creating an auditory illusion known as the Franssen effect. Here, we exploited that effect to test whether human auditory cortex (AC) represents the physical vs. perceived spatial features of a sound. We used functional magnetic resonance imaging (fMRI) to measure AC responses to sounds that varied in perceived location due to interaural level differences (ILD) applied to sound onsets or to the full sound duration. Analysis of hemodynamic responses in AC revealed sensitivity to ILD in both full-cue (veridical) and onset-only (illusory) lateralized stimuli. Classification analysis revealed regional differences in the sensitivity to onset-only ILDs, where better classification was observed in posterior compared to primary AC. That is, restricting the ILD to sound onset-which alters the physical but not the perceptual nature of the spatial cue-did not eliminate cortical sensitivity to that cue. These results suggest that perceptual representations of auditory space emerge or are refined in higher-order AC regions, supporting the stable perception of auditory space in noisy or reverberant environments and forming the basis of illusions such as the Franssen effect.
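
    A minimal sketch of the two stimulus classes contrasted in such a design, applying an interaural level difference either over the full sound (veridical cue) or only at its onset (the Franssen-style illusory cue); parameter values are illustrative, not the study's stimulus code:

      import numpy as np

      fs = 44100
      t = np.arange(int(0.5 * fs)) / fs
      mono = np.sin(2 * np.pi * 500 * t)            # 500-Hz tone, 500 ms

      def apply_ild(signal, ild_db, fs, onset_only=False, onset_ms=50):
          # Attenuate the right channel by ild_db, over the whole sound
          # or only its first onset_ms milliseconds.
          gain = 10 ** (-ild_db / 20)
          right = signal.copy()
          if onset_only:
              n = int(onset_ms / 1000 * fs)
              right[:n] *= gain                     # illusory: onset cue only
          else:
              right *= gain                         # veridical: full-duration cue
          return np.stack([signal, right], axis=1)  # (samples, 2) stereo

      full_cue = apply_ild(mono, ild_db=10, fs=fs)
      onset_cue = apply_ild(mono, ild_db=10, fs=fs, onset_only=True)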

  15. A retroactive spatial cue improved VSTM capacity in mild cognitive impairment and medial temporal lobe amnesia but not in healthy older adults.

    Science.gov (United States)

    Newsome, Rachel N; Duarte, Audrey; Pun, Carson; Smith, Victoria M; Ferber, Susanne; Barense, Morgan D

    2015-10-01

    Visual short-term memory (VSTM) is a vital cognitive ability, connecting visual input with conscious awareness. VSTM performance declines with mild cognitive impairment (MCI) and medial temporal lobe (MTL) amnesia. Many studies have shown that providing a spatial retrospective cue ("retrocue") improves VSTM capacity estimates for healthy young adults. However, one study has demonstrated that older adults are unable to use a retrocue to inhibit irrelevant items from memory. It is unknown whether patients with MCI and MTL amnesia will be able to use a retrocue to benefit their memory. We administered a retrocue and a baseline (simultaneous cue, "simucue") task to young adults, older adults, MCI patients, and MTL cases. Consistent with previous findings, young adults showed a retrocue benefit, whereas healthy older adults did not. In contrast, both MCI patients and MTL cases showed a retrocue benefit--the use of a retrocue brought patient performance up to the level of age-matched controls. We speculate that the patients were able to use the spatial information from the retrocue to reduce interference and facilitate binding items to their locations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. The Wellcome Prize Lecture. A map of auditory space in the mammalian brain: neural computation and development.

    Science.gov (United States)

    King, A J

    1993-09-01

    The experiments described in this review have demonstrated that the SC contains a two-dimensional map of auditory space, which is synthesized within the brain using a combination of monaural and binaural localization cues. There is also an adaptive fusion of auditory and visual space in this midbrain nucleus, providing for a common access to the motor pathways that control orientation behaviour. This necessitates a highly plastic relationship between the visual and auditory systems, both during postnatal development and in adult life. Because of the independent mobility of different sense organs, gating mechanisms are incorporated into the auditory representation to provide up-to-date information about the spatial orientation of the eyes and ears. The SC therefore provides a valuable model system for studying a number of important issues in brain function, including the neural coding of sound location, the co-ordination of spatial information between different sensory systems, and the integration of sensory signals with motor outputs.

  17. The Influence of Auditory Information on Visual Size Adaptation

    Directory of Open Access Journals (Sweden)

    Alessia Tonelli

    2017-10-01

    Full Text Available Size perception can be influenced by several visual cues, such as spatial cues (e.g., depth or vergence) and temporal contextual cues (e.g., adaptation to steady visual stimulation). Nevertheless, perception is generally multisensory, and other sensory modalities, such as audition, can contribute to the functional estimation of the size of objects. In this study, we investigate whether auditory stimuli at different sound pitches can influence visual size perception after visual adaptation. To this aim, we used an adaptation paradigm (Pooresmaeili et al., 2013) in three experimental conditions: visual-only, visual-sound at 100 Hz, and visual-sound at 9,000 Hz. We asked participants to judge the size of a test stimulus in a size discrimination task. First, we obtained a baseline for all conditions. In the visual-sound conditions, the auditory stimulus was concurrent with the test stimulus. Second, we repeated the task, presenting an adapter (twice as big as the reference stimulus) before the test stimulus. We replicated the size aftereffect in the visual-only condition: the test stimulus was perceived as smaller than its physical size. The new finding is that the auditory stimuli had an effect on the perceived size of the test stimulus after visual adaptation: the low-frequency sound decreased the effect of visual adaptation, making the stimulus appear bigger compared to the visual-only condition, whereas the high-frequency sound had the opposite effect, making the test size appear even smaller.

  18. Auditory Discrimination of Lexical Stress Patterns in Hearing-Impaired Infants with Cochlear Implants Compared with Normal Hearing: Influence of Acoustic Cues and Listening Experience to the Ambient Language.

    Science.gov (United States)

    Segal, Osnat; Houston, Derek; Kishon-Rabin, Liat

    2016-01-01

    To assess discrimination of lexical stress pattern in infants with cochlear implant (CI) compared with infants with normal hearing (NH). While criteria for cochlear implantation have expanded to infants as young as 6 months, little is known regarding infants' processing of suprasegmental-prosodic cues, which are known to be important for the first stages of language acquisition. Lexical stress is an example of such a cue, which, in hearing infants, has been shown to assist in segmenting words from fluent speech and in distinguishing between words that differ only in the stress pattern. To date, however, there are no data on the ability of infants with CIs to perceive lexical stress. Such information will provide insight into the speech characteristics that are available to these infants in their first steps of language acquisition. This is of particular interest given the known limitations that the CI device has in transmitting speech information that is mediated by changes in fundamental frequency. Two groups of infants participated in this study. The first group included 20 profoundly hearing-impaired infants with CI, 12 to 33 months old, implanted under the age of 2.5 years (median age of implantation = 14.5 months), with 1 to 6 months of CI use (mean = 2.7 months) and no known additional problems. The second group of infants included 48 NH infants, 11 to 14 months old with normal development and no known risk factors for developmental delays. Infants were tested on their ability to discriminate between nonsense words that differed on their stress pattern only (/dóti/ versus /dotí/ and /dotí/ versus /dóti/) using the visual habituation procedure. The measure for discrimination was the change in looking time between the last habituation trial (e.g., /dóti/) and the novel trial (e.g., /dotí/). (1) Infants with CI showed discrimination between lexical stress patterns with only limited auditory experience with their implant device, (2) discrimination of stress

  19. Auditory Display

    DEFF Research Database (Denmark)

    volume. The conference's topics include auditory exploration of data via sonification and audification; real time monitoring of multivariate date; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting...... auditory display creation; data handling for auditory display systems; applications of auditory display....

  20. Auditory motion in depth is preferentially 'captured' by visual looming signals.

    Science.gov (United States)

    Harrison, Neil

    2012-01-01

    The phenomenon of crossmodal dynamic visual capture occurs when the direction of motion of a visual cue causes a weakening or reversal of the perceived direction of motion of a concurrently presented auditory stimulus. It is known that there is a perceptual bias towards looming compared to receding stimuli, and faster bimodal reaction times have recently been observed for looming cues compared to receding cues (Cappe et al., 2009). The current studies aimed to test whether visual looming cues are associated with greater dynamic capture of auditory motion in depth compared to receding signals. Participants judged the direction of an auditory motion cue presented with a visual looming cue (expanding disk), a visual receding cue (contracting disk), or a stationary visual cue (static disk). Visual cues were presented either simultaneously with the auditory cue, or after 500 ms. Looming visual cues produced greater interference than receding visual cues, relative to asynchronous presentation or stationary visual cues. The results could not be explained by the weaker subjective strength of the receding auditory stimulus, as in Experiment 2 the looming and receding auditory cues were matched for perceived strength. These results show that dynamic visual capture of auditory motion in the depth plane is modulated by an adaptive bias for looming compared to receding visual cues.

  1. Listeners' expectation of room acoustical parameters based on visual cues

    Science.gov (United States)

    Valente, Daniel L.

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters (early-to-late reverberant energy ratio and reverberation time) of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that varying the visual cues changed how the acoustic environment was perceived. This included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer.

  2. Preschool children and adults flexibly shift their preferences for auditory versus visual modalities, but do not exhibit auditory dominance

    Science.gov (United States)

    Noles, Nicholaus S.; Gelman, Susan A.

    2012-01-01

    The goal of the present study is to evaluate the claim that young children display preferences for auditory stimuli over visual stimuli. This study is motivated by concerns that the visual stimuli employed in prior studies were considerably more complex and less distinctive than the competing auditory stimuli, resulting in an illusory preference for auditory cues. Across three experiments, preschool children and adults were trained to use paired audio-visual cues to predict the location of a target. At test, the cues were switched so that auditory cues indicated one location and visual cues indicated the opposite location. In contrast to prior studies, preschool age children did not exhibit auditory dominance. Instead, children and adults flexibly shifted their preferences as a function of the degree of contrast within each modality (with high contrast leading to greater use). PMID:22513210

  3. Listen, you are writing! Speeding up online spelling with a dynamic auditory BCI

    Directory of Open Access Journals (Sweden)

    Martijn eSchreuder

    2011-10-01

    Full Text Available Representing an intuitive spelling interface for Brain-Computer Interfaces (BCI) in the auditory domain is not straightforward. In consequence, all existing approaches based on event-related potentials (ERP) rely at least partially on a visual representation of the interface. This online study introduces an auditory spelling interface that eliminates the necessity for such a visualization. In up to two sessions, a group of healthy subjects (N=21) was asked to use a text entry application, utilizing the spatial cues of the AMUSE paradigm (Auditory Multiclass Spatial ERP). The speller relies on the auditory sense both for stimulation and the core feedback. Without prior BCI experience, 76% of the participants were able to write a full sentence during the first session. By exploiting the advantages of a newly introduced dynamic stopping method, a maximum writing speed of 1.41 characters/minute (7.55 bits/minute) could be reached during the second session (average: 0.94 char/min, 5.26 bits/min). For the first time, the presented work shows that an auditory BCI can reach performances similar to state-of-the-art visual BCIs based on covert attention. These results represent an important step towards a purely auditory BCI.

  4. Listen, You are Writing! Speeding up Online Spelling with a Dynamic Auditory BCI.

    Science.gov (United States)

    Schreuder, Martijn; Rost, Thomas; Tangermann, Michael

    2011-01-01

    Representing an intuitive spelling interface for brain-computer interfaces (BCI) in the auditory domain is not straight-forward. In consequence, all existing approaches based on event-related potentials (ERP) rely at least partially on a visual representation of the interface. This online study introduces an auditory spelling interface that eliminates the necessity for such a visualization. In up to two sessions, a group of healthy subjects (N = 21) was asked to use a text entry application, utilizing the spatial cues of the AMUSE paradigm (Auditory Multi-class Spatial ERP). The speller relies on the auditory sense both for stimulation and the core feedback. Without prior BCI experience, 76% of the participants were able to write a full sentence during the first session. By exploiting the advantages of a newly introduced dynamic stopping method, a maximum writing speed of 1.41 char/min (7.55 bits/min) could be reached during the second session (average: 0.94 char/min, 5.26 bits/min). For the first time, the presented work shows that an auditory BCI can reach performances similar to state-of-the-art visual BCIs based on covert attention. These results represent an important step toward a purely auditory BCI.

  5. Sensitivity to an Illusion of Sound Location in Human Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Nathan C. Higgins

    2017-05-01

    Full Text Available Human listeners place greater weight on the beginning of a sound compared to the middle or end when determining sound location, creating an auditory illusion known as the Franssen effect. Here, we exploited that effect to test whether human auditory cortex (AC) represents the physical vs. perceived spatial features of a sound. We used functional magnetic resonance imaging (fMRI) to measure AC responses to sounds that varied in perceived location due to interaural level differences (ILD) applied to sound onsets or to the full sound duration. Analysis of hemodynamic responses in AC revealed sensitivity to ILD in both full-cue (veridical) and onset-only (illusory) lateralized stimuli. Classification analysis revealed regional differences in the sensitivity to onset-only ILDs, where better classification was observed in posterior compared to primary AC. That is, restricting the ILD to sound onset—which alters the physical but not the perceptual nature of the spatial cue—did not eliminate cortical sensitivity to that cue. These results suggest that perceptual representations of auditory space emerge or are refined in higher-order AC regions, supporting the stable perception of auditory space in noisy or reverberant environments and forming the basis of illusions such as the Franssen effect.

  6. Cue validity probability influences neural processing of targets.

    Science.gov (United States)

    Arjona, Antonio; Escudero, Miguel; Gómez, Carlos M

    2016-09-01

    The neural bases of the so-called Spatial Cueing Effect in a visuo-auditory version of the Central Cue Posner's Paradigm (CCPP) are analyzed by means of behavioral patterns (Reaction Times and Errors) and Event-Related Potentials (ERPs), namely the Contingent Negative Variation (CNV), N1, P2a, P2p, P3a, P3b and Negative Slow Wave (NSW). The present version consisted of three types of trial blocks with different validity/invalidity proportions: 50% valid - 50% invalid trials, 68% valid - 32% invalid trials and 86% valid - 14% invalid trials. Thus, ERPs can be analyzed as the proportion of valid trials per block increases. Behavioral (Reaction Times and Incorrect responses) and ERP (lateralized component of CNV, P2a, P3b and NSW) results showed a spatial cueing effect as the proportion of valid trials per block increased. Results suggest a brain activity modulation related to sensory-motor attention and working memory updating, in order to adapt to external unpredictable contingencies. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Home-cage odors spatial cues elicit theta phase/gamma amplitude coupling between olfactory bulb and dorsal hippocampus.

    Science.gov (United States)

    Pena, Roberta Ribas; Medeiros, Daniel de Castro; Guarnieri, Leonardo de Oliveira; Guerra, Julio Boriollo; Carvalho, Vinícius Rezende; Mendes, Eduardo Mazoni Andrade Marçal; Pereira, Grace Schenatto; Moraes, Márcio Flávio Dutra

    2017-11-05

    Brain oscillations may play a critical role in synchronizing neuronal assemblies in order to establish appropriate sensory-motor integration. In fact, studies have demonstrated phase-amplitude coupling of distinct oscillatory rhythms during cognitive processes. Here we investigated whether olfacto-hippocampal coupling occurs when mice are detecting familiar odors located in a spatially restricted area of a new context. The spatial olfactory task (SOT) was designed to expose mice to a new environment in which only one quadrant (target) contains odors provided by its own home-cage bedding. As predicted, mice showed a significantly higher exploration preference for the target quadrant, which was impaired by olfactory epithelium lesion (ZnSO4). Furthermore, mice were able to discriminate odors from a different cage and avoided the quadrant with predator odor 2,4,5-trimethylthiazoline (TMT), reinforcing the specificity of the SOT. The local field potential (LFP) analysis of non-lesioned mice revealed higher gamma activity (35-100 Hz) in the main olfactory bulb (MOB) and a significant theta phase/gamma amplitude coupling between MOB and dorsal hippocampus, only during exploration of home-cage odors (i.e. in the target quadrant). Our results suggest that exploration of familiar odors in a new context involves dynamic coupling between the olfactory bulb and dorsal hippocampus.
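
    A minimal sketch of one common way to quantify theta phase/gamma amplitude coupling (a mean-vector-length modulation index; a generic recipe on placeholder data, not the authors' exact pipeline):

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      fs = 1000
      lfp = np.random.randn(60 * fs)  # stand-in for a 60-s LFP trace

      def bandpass(x, lo, hi, fs, order=4):
          b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
          return filtfilt(b, a, x)

      # Theta phase and gamma amplitude envelopes via the Hilbert transform.
      theta_phase = np.angle(hilbert(bandpass(lfp, 4, 12, fs)))
      gamma_amp = np.abs(hilbert(bandpass(lfp, 35, 100, fs)))

      # Modulation index: length of the mean vector of gamma amplitude
      # distributed over theta phase (0 = no coupling), normalized by
      # the mean amplitude.
      mi = np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase))) / np.mean(gamma_amp)
      print(f"phase-amplitude modulation index: {mi:.3f}")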

  8. Metronome cueing of walking reduces gait variability after a cerebellar stroke

    Directory of Open Access Journals (Sweden)

    Rachel Lindsey Wright

    2016-06-01

    Full Text Available Cerebellar stroke typically results in increased variability during walking. Previous research has suggested that auditory-cueing reduces excessive variability in conditions such as Parkinson’s disease and post-stroke hemiparesis. The aim of this case report was to investigate whether the use of a metronome cue during walking could reduce excessive variability in gait parameters after a cerebellar stroke. An elderly female with a history of cerebellar stroke and recurrent falling undertook 3 standard gait trials and 3 gait trials with an auditory metronome. A Vicon system was used to collect 3-D marker trajectory data. The coefficient of variation was calculated for temporal and spatial gait parameters. Standard deviations of the joint angles were calculated and used to give a measure of joint kinematic variability. Step time, stance time and double support time variability were reduced with metronome cueing. Variability in the sagittal hip, knee and ankle angles were reduced to normal values when walking to the metronome. In summary, metronome cueing resulted in a decrease in variability for step, stance and double support times and joint kinematics. Further research is needed to establish whether a metronome may be useful in gait rehabilitation after cerebellar stroke, and whether this leads to a decreased risk of falling.
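
    The coefficient of variation used here is simply CV = SD / mean x 100%; a minimal sketch with illustrative step times (not the patient's data):

      import numpy as np

      step_times = np.array([0.62, 0.58, 0.71, 0.55, 0.66, 0.60])  # seconds

      # Sample SD over mean, expressed as a percentage; cueing aims to
      # reduce this value toward normative levels.
      cv = step_times.std(ddof=1) / step_times.mean() * 100
      print(f"step-time CV: {cv:.1f}%")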

  9. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation......, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models...... of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent...

  10. The role of social cues in the deployment of spatial attention: Head-body relationships automatically activate directional spatial codes in a Simon task

    Directory of Open Access Journals (Sweden)

    Iwona ePomianowska

    2012-02-01

    Full Text Available The role of body orientation in the orienting and allocation of social attention was examined using an adapted Simon paradigm. Participants categorized the facial expression of forward-facing, computer-generated human figures by pressing one of two response keys, each located left or right of the observers’ body midline, while the orientation of the stimulus figure’s body (trunk, arms, and legs), which was the task-irrelevant feature of interest, was manipulated (oriented towards the left or right visual hemifield) with respect to the spatial location of the required response. We found that when the orientation of the body was compatible with the required response location, responses were slower relative to when body orientation was incompatible with the response location. This reverse compatibility effect suggests that body orientation is automatically processed into a directional spatial code, but that this code is based on an integration of head and body orientation within an allocentric-based frame of reference. Moreover, we argue that this code may be derived from the motion information implied in the image of a figure when head and body orientation are incongruent. Our results have implications for understanding the nature of the information that affects the allocation of attention for social orienting.

  11. The effect of rhythmic somatosensory cueing on gait in patients with Parkinson's disease.

    NARCIS (Netherlands)

    Wegen, E. van; Goede, C. de; Lim, I.; Rietberg, M.B.; Nieuwboer, A.; Willems, A.; Jones, D.; Rochester, L.; Hetherington, V.; Berendse, H.W.; Zijlmans, J.C.M.; Wolters, E.; Kwakkel, G.

    2006-01-01

    BACKGROUND AND AIMS: Gait and gait-related activities in patients with Parkinson's disease (PD) can be improved with rhythmic auditory cueing (e.g. a metronome). In the context of a large European study, a portable prototype cueing device was developed to provide an alternative for rhythmic auditory cueing.

  12. Auditory and visual space maps in the cholinergic nucleus isthmi pars parvocellularis of the barn owl.

    Science.gov (United States)

    Maczko, Kristin A; Knudsen, Phyllis F; Knudsen, Eric I

    2006-12-06

    The nucleus isthmi pars parvocellularis (Ipc) is a midbrain cholinergic nucleus that shares reciprocal, topographic connections with the optic tectum (OT). Ipc neurons project to spatially restricted columns in the OT, contacting essentially all OT layers in a given column. Previous research characterizes the Ipc as a visual processor. We found that, in the barn owl, the Ipc responds to auditory as well as to visual stimuli. Auditory responses were tuned broadly for frequency, but sharply for spatial cues. We measured the tuning of Ipc units to binaural sound localization cues, including interaural timing differences (ITDs) and interaural level differences (ILDs). Units in the Ipc were tuned to specific values of both ITD and ILD and were organized systematically according to their ITD and ILD tuning, forming a map of space. The auditory space map aligned with the visual space map in the Ipc. These results demonstrate that the Ipc encodes the spatial location of objects, independent of stimulus modality. These findings, combined with the precise pattern of projections from the Ipc to the OT, suggest that the role of the Ipc is to regulate the sensitivity of OT neurons in a space-specific manner.

  13. Application of Visual Cues on 3D Dynamic Visualizations for Engineering Technology Students and Effects on Spatial Visualization Ability: A Quasi-Experimental Study

    Science.gov (United States)

    Katsioloudis, Petros; Jovanovic, Vukica; Jones, Mildred

    2016-01-01

    Several theorists believe that different types of visual cues influence cognition and behavior through learned associations; however, research provides inconsistent results. Considering this, a quasi-experimental study was done to determine if there are significant positive effects of visual cues (color blue) and to identify if a positive increase…

  14. Auditory and multisensory responses in the tectofugal pathway of the barn owl.

    Science.gov (United States)

    Reches, Amit; Gutfreund, Yoram

    2009-07-29

    A common visual pathway in all amniotes is the tectofugal pathway connecting the optic tectum with the forebrain. The tectofugal pathway has been suggested to be involved in tasks such as orienting and attention, tasks that may benefit from integrating information across senses. Nevertheless, previous research has characterized the tectofugal pathway as strictly visual. Here we recorded from two stations along the tectofugal pathway of the barn owl: the thalamic nucleus rotundus (nRt) and the forebrain entopallium (E). We report that neurons in E and nRt respond to auditory stimuli as well as to visual stimuli. Visual tuning to the horizontal position of the stimulus and auditory tuning to the corresponding spatial cue (interaural time difference) were generally broad, covering a large portion of the contralateral space. Responses to spatiotemporally coinciding multisensory stimuli were mostly enhanced above the responses to the single modality stimuli, whereas spatially misaligned stimuli were not. Results from inactivation experiments suggest that the auditory responses in E are of tectal origin. These findings support the notion that the tectofugal pathway is involved in multisensory processing. In addition, the findings suggest that the ascending auditory information to the forebrain is not as bottlenecked through the auditory thalamus as previously thought.

  15. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
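
    A minimal sketch of the interaural cross-correlation underlying Ando's spatial-sensation measure: the normalized cross-correlation of the two ear signals over lags up to +/-1 ms, whose peak (the IACC) indexes diffuseness and apparent source width. The ear signals below are placeholders:

      import numpy as np

      fs = 48000
      max_lag = int(0.001 * fs)                      # +/-1 ms in samples
      rng = np.random.default_rng(1)
      left = rng.standard_normal(fs)                 # stand-in ear signals
      right = np.roll(left, 12) + 0.1 * rng.standard_normal(fs)

      # Normalized interaural cross-correlation function over the lag range
      # (np.roll wrap-around is negligible for 1 s of noise at 1-ms lags).
      norm = np.sqrt(np.sum(left**2) * np.sum(right**2))
      lags = range(-max_lag, max_lag + 1)
      iacf = [np.sum(left * np.roll(right, lag)) / norm for lag in lags]
      iacc = max(iacf)                               # peak of the IACF
      print(f"IACC = {iacc:.2f}")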

  16. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  17. Improving target detection in visual search through the augmenting multi-sensory cues

    NARCIS (Netherlands)

    Hancock, P.A.; Mercado, J.E.; Merlo, J.; Erp, J.B.F. van

    2013-01-01

    The present experiment tested 60 individuals on a multiple screen, visual target detection task. Using a within-participant design, individuals received no-cue augmentation, an augmenting tactile cue alone, an augmenting auditory cue alone or both of the latter augmentations in combination. Results

  18. Second spatial derivative analysis of cortical surface potentials recorded in cat primary auditory cortex using thin film surface arrays: Comparisons with multi-unit data.

    Science.gov (United States)

    Fallon, James B; Irving, Sam; Pannu, Satinderpall S; Tooker, Angela C; Wise, Andrew K; Shepherd, Robert K; Irvine, Dexter R F

    2016-07-15

    Current source density analysis of recordings from penetrating electrode arrays has traditionally been used to examine the layer-specific cortical activation and plastic changes associated with changed afferent input. We report on a related analysis, the second spatial derivative (SSD) of surface local field potentials (LFPs) recorded using custom-designed thin-film polyimide substrate arrays. SSD analysis of tone-evoked LFPs generated from the auditory cortex under the recording array demonstrated a stereotypical single local minimum, often flanked by maxima on both the caudal and rostral sides. In contrast, tone-pips at frequencies not represented in the region under the array, but known (on the basis of normal tonotopic organization) to be represented caudal to the recording array, had a more complex pattern of many sources and sinks. Compared to traditional analysis of LFPs, SSD analysis produced a tonotopic map that was more similar to that obtained with multi-unit recordings in a normal-hearing animal. Additionally, the statistically significant decrease in the number of acoustically responsive cortical locations in partially deafened cats following 6 months of cochlear implant use compared to unstimulated cases observed with multi-unit data (p=0.04) was also observed with SSD analysis (p=0.02), but was not apparent using traditional analysis of LFPs (p=0.6). SSD analysis of surface LFPs from the thin-film array provides a rapid and robust method for examining the spatial distribution of cortical activity with improved spatial resolution compared to more traditional LFP recordings. Copyright © 2016 Elsevier B.V. All rights reserved.
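
    A minimal sketch of a second spatial derivative over an evenly spaced surface array, approximated by the discrete Laplacian across contacts; the contact spacing and data shapes are assumptions, not the study's parameters:

      import numpy as np

      d = 0.3e-3                      # assumed contact spacing (m)
      lfp = np.random.randn(16, 500)  # contacts x time, placeholder recording

      # SSD at contact i: (V[i-1] - 2*V[i] + V[i+1]) / d**2; the two edge
      # contacts are lost, leaving SSD traces for the interior contacts.
      ssd = (lfp[:-2] - 2 * lfp[1:-1] + lfp[2:]) / d**2
      print(ssd.shape)                # (14, 500)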

  19. Hemispheric asymmetry in the auditory facilitation effect in dual-stream rapid serial visual presentation tasks.

    Directory of Open Access Journals (Sweden)

    Yasuhiro Takeshima

    Full Text Available Even though auditory stimuli do not directly convey information related to visual stimuli, they often improve visual detection and identification performance. Auditory stimuli often alter visual perception depending on the reliability of the sensory input, with visual and auditory information reciprocally compensating for ambiguity in the other sensory domain. Perceptual processing is characterized by hemispheric asymmetry. While the left hemisphere is more involved in linguistic processing, the right hemisphere dominates spatial processing. In this context, we hypothesized that an auditory facilitation effect would be observed in the right visual field for the target identification task, and a similar effect would be observed in the left visual field for the target localization task. In the present study, we conducted target identification and localization tasks using a dual-stream rapid serial visual presentation. When two targets are embedded in a rapid serial visual presentation stream, the target detection or discrimination performance for the second target is generally lower than for the first target; this deficit is well known as the attentional blink. Our results indicate that auditory stimuli improved target identification performance for the second target within the stream when visual stimuli were presented in the right, but not the left visual field. In contrast, auditory stimuli improved second target localization performance when visual stimuli were presented in the left visual field. An auditory facilitation effect was observed in perceptual processing, depending on the hemispheric specialization. Our results demonstrate a dissociation between the lateral visual hemifield in which a stimulus is projected and the kind of visual judgment that may benefit from the presentation of an auditory cue.

  20. Processing of Horizontal Sound Localization Cues in Newborn Infants.

    Science.gov (United States)

    Németh, Renáta; Háden, Gábor P; Török, Miklós; Winkler, István

    2015-01-01

    By measuring event-related brain potentials (ERPs), the authors tested the sensitivity of the newborn auditory cortex to sound lateralization and to the most common cues of horizontal sound localization. Sixty-eight healthy full-term newborn infants were presented with auditory oddball sequences composed of frequent and rare noise segments in four experimental conditions. The authors tested in them the detection of deviations in the primary cues of sound lateralization (interaural time and level difference) and in actual sound source location (free-field and monaural sound presentation). ERP correlates of deviance detection were measured in two time windows. Deviations in both primary sound localization cues and the ear of stimulation elicited a significant ERP difference in the early (90 to 140 msec) time window. Deviance in actual sound source location (the free-field condition) elicited a significant response in the late (290 to 340 msec) time window. The early differential response may indicate the detection of a change in the respective auditory features. The authors suggest that the late differential response, which was only elicited by actual sound source location deviation, reflects the detection of location deviance integrating the various cues of sound source location. Although the results suggest that all of the tested binaural cues are processed by the neonatal auditory cortex, utilizing these cues for locating sound sources may require maturation and learning.
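
    As a minimal illustration of one of the primary cues tested, an interaural time difference (ITD) can be imposed on a noise segment as a whole-sample delay of one channel; values are illustrative, not the study's stimulus code:

      import numpy as np

      fs = 44100
      itd_s = 500e-6                          # 500 microseconds, a typical ITD
      shift = int(round(itd_s * fs))          # ITD in whole samples (~22)
      noise = np.random.randn(int(0.2 * fs))  # 200-ms noise segment

      # Delaying the left ear makes the right ear lead, lateralizing
      # the sound toward the right.
      left = np.concatenate([np.zeros(shift), noise])
      right = np.concatenate([noise, np.zeros(shift)])
      stereo = np.stack([left, right], axis=1)  # (samples, 2) for playback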

  1. Gaze Cueing by Pareidolia Faces

    Directory of Open Access Journals (Sweden)

    Kohske Takahashi

    2013-12-01

    Full Text Available Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.

  2. Traveling waves on the organ of corti of the chinchilla cochlea: spatial trajectories of inner hair cell depolarization inferred from responses of auditory-nerve fibers.

    Science.gov (United States)

    Temchin, Andrei N; Recio-Spinoso, Alberto; Cai, Hongxue; Ruggero, Mario A

    2012-08-01

    Spatial magnitude and phase profiles for inner hair cell (IHC) depolarization throughout the chinchilla cochlea were inferred from responses of auditory-nerve fibers (ANFs) to threshold- and moderate-level tones and tone complexes. Firing-rate profiles for frequencies ≤2 kHz are bimodal, with the major peak at the characteristic place and a secondary peak at 3-5 mm from the extreme base. Response-phase trajectories are synchronous with peak outward stapes displacement at the extreme cochlear base and accumulate 1.5 period lags at the characteristic places. High-frequency phase trajectories are very similar to the trajectories of basilar-membrane peak velocity toward scala tympani. Low-frequency phase trajectories undergo a polarity flip in a region, 6.5-9 mm from the cochlear base, where traveling-wave phase velocity attains a local minimum and a local maximum and where the onset latencies of near-threshold impulse responses computed from responses to near-threshold white noise exhibit a local minimum. That region is the same where frequency-threshold tuning curves of ANFs undergo a shape transition. Since depolarization of IHCs presumably indicates the mechanical stimulus to their stereocilia, the present results suggest that distinct low-frequency forward waves of organ of Corti vibration are launched simultaneously at the extreme base of the cochlea and at the 6.5-9 mm transition region, from where antiphasic reflections arise.

  3. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    Science.gov (United States)

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
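
    A minimal sketch of the modeling idea: fitting a Gaussian mixture to joint auditory-visual cue values so that categories, and implicit cue weights via the per-dimension variances, are learned from the cue distributions alone. The two synthetic categories below are stand-ins for the paper's simulations, not the authors' code:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # Two hypothetical phonological categories in (auditory, visual) cue space;
      # the visual dimension is noisier (larger scale) and thus less reliable.
      cat_a = rng.normal([0.0, 0.0], [1.0, 2.0], size=(500, 2))
      cat_b = rng.normal([3.0, 1.5], [1.0, 2.0], size=(500, 2))
      X = np.vstack([cat_a, cat_b])

      gmm = GaussianMixture(n_components=2, covariance_type="diag").fit(X)
      print(gmm.means_)          # recovered category centers
      print(gmm.covariances_)    # larger variance => less reliable cue dimension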

  4. Binaural interactions in primary auditory cortex of the awake macaque.

    Science.gov (United States)

    Reser, D H; Fishman, Y I; Arezzo, J C; Steinschneider, M

    2000-06-01

    The functional organization of primary auditory cortex in non-primates is generally modeled as a tonotopic gradient with an orthogonal representation of independently mapped binaural interaction columns along the isofrequency contours. Little information is available regarding the validity of this model in the primate brain, despite the importance of binaural cues for sound localization and auditory scene analysis. Binaural and monaural responses of A1 to pure tone stimulation were studied using auditory evoked potentials, current source density and multiunit activity. Key findings include: (i) differential distribution of binaural responses with respect to best frequency, such that 74% of the sites exhibiting binaural summation had best frequencies below 2000 Hz; (ii) the pattern of binaural responses was variable with respect to cortical depth, with binaural summation often observed in the supragranular laminae of sites showing binaural suppression in thalamorecipient laminae; and (iii) dissociation of binaural responses between the initial and sustained action potential firing of neuronal ensembles in A1. These data support earlier findings regarding the temporal and spatial complexity of responses in A1 in the awake state, and are inconsistent with a simple orthogonal arrangement of binaural interaction columns and best frequency in A1 of the awake primate.

  5. Transferrable Learning of Multisensory Cues in Flight Simulation

    Directory of Open Access Journals (Sweden)

    Georg F Meyer

    2011-10-01

    Flight simulators which provide visual, auditory, and kinematic (physical motion) cues are increasingly used for pilot training. We have previously shown that kinematic cues, but not auditory cues, representing aircraft motion improve target tracking performance for novice ‘pilots’ in a simulated flying task (Meyer et al., IMRF 2010). Here we explore the effect of learning on task performance. Our subjects were first tested on a target tracking task in a helicopter flight simulation. They were then trained in a simulator-simulator, which provided full audio and simplified visuals but no kinematic signals, to test whether learning of auditory cues is possible. After training we evaluated flight performance in the full simulator again. We show that after 2 hours of training, auditory cues are used by our participants as efficiently as kinematic cues to improve target tracking performance. The performance improvement relative to a condition where no audio signals are presented is robust even if the sound environment used during training is replaced by a very different audio signal that is modulated in amplitude and pitch in the same way as the training signal. This shows that training is not signal specific but that our participants learn to extract transferrable information on sound pitch and amplitude to improve their flying performance.

  6. The encoding of auditory objects in auditory cortex: insights from magnetoencephalography.

    Science.gov (United States)

    Simon, Jonathan Z

    2015-02-01

    Auditory objects, like their visual counterparts, are perceptually defined constructs, but nevertheless must arise from underlying neural circuitry. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects listening to complex auditory scenes, we review studies that demonstrate that auditory objects are indeed neurally represented in auditory cortex. The studies use neural responses obtained from different experiments in which subjects selectively listen to one of two competing auditory streams embedded in a variety of auditory scenes. The auditory streams overlap spatially and often spectrally. In particular, the studies demonstrate that selective attentional gain does not act globally on the entire auditory scene, but rather acts differentially on the separate auditory streams. This stream-based attentional gain is then used as a tool to individually analyze the different neural representations of the competing auditory streams. The neural representation of the attended stream, located in posterior auditory cortex, dominates the neural responses. Critically, when the intensities of the attended and background streams are separately varied over a wide intensity range, the neural representation of the attended speech adapts only to the intensity of that speaker, irrespective of the intensity of the background speaker. This demonstrates object-level intensity gain control in addition to the above object-level selective attentional gain. Overall, these results indicate that concurrently streaming auditory objects, even if spectrally overlapping and not resolvable at the auditory periphery, are individually neurally encoded in auditory cortex, as separate objects. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus.

    Science.gov (United States)

    Venezia, Jonathan H; Vaden, Kenneth I; Rong, Feng; Maddox, Dale; Saberi, Kourosh; Hickok, Gregory

    2017-01-01

    The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.

  8. [Visual cues as a therapeutic tool in Parkinson's disease. A systematic review].

    Science.gov (United States)

    Muñoz-Hellín, Elena; Cano-de-la-Cuerda, Roberto; Miangolarra-Page, Juan Carlos

    2013-01-01

    Sensory stimuli or sensory cues are being used as a therapeutic tool for improving gait disorders in Parkinson's disease patients, but most studies seem to focus on auditory stimuli. The aim of this study was to conduct a systematic review of the use of visual cues for gait disorders, dual tasks during gait, freezing, and the incidence of falls in patients with Parkinson's disease, in order to derive therapeutic implications. We conducted a systematic review of the main databases, including the Cochrane Database of Systematic Reviews, TripDataBase, PubMed, Ovid MEDLINE, Ovid EMBASE and Physiotherapy Evidence Database, from 2005 to 2012, according to the recommendations of the Consolidated Standards of Reporting Trials, evaluating the quality of the papers included with the Downs & Black Quality Index. Twenty-one articles were finally included in this systematic review (with a total of 892 participants), of variable methodological quality, achieving an average of 17.27 points on the Downs & Black Quality Index (range: 11-21). Visual cues produce improvements in temporal-spatial gait parameters and turning execution, and reduce the occurrence of freezing and falls in Parkinson's disease patients. Visual cues also appear to benefit dual tasks during gait, reducing the interference of the second task. Further studies are needed to determine the preferred type of stimulus for each stage of the disease. Copyright © 2012 SEGG. Published by Elsevier Espana. All rights reserved.

  9. Assessment of rival males through the use of multiple sensory cues in the fruitfly Drosophila pseudoobscura.

    Directory of Open Access Journals (Sweden)

    Chris P Maguire

    Environments vary stochastically, and animals need to behave in ways that best fit the conditions in which they find themselves. The social environment is particularly variable, and responding appropriately to it can be vital for an animal's success. However, cues of social environment are not always reliable, and animals may need to balance accuracy against the risk of failing to respond if local conditions or interfering signals prevent them detecting a cue. Recent work has shown that many male Drosophila fruit flies respond to the presence of rival males, and that these responses increase their success in acquiring mates and fathering offspring. In Drosophila melanogaster, males detect rivals using auditory, tactile and olfactory cues. However, males fail to respond to rivals if any two of these senses are not functioning: a single cue is not enough to produce a response. Here we examined cue use in the detection of rival males in a distantly related Drosophila species, D. pseudoobscura, in which auditory, olfactory, tactile and visual cues were manipulated to assess the importance of each sensory cue singly and in combination. In contrast to D. melanogaster, male D. pseudoobscura require intact olfactory and tactile cues to respond to rivals. Visual cues were not important for detecting rival D. pseudoobscura, while results on auditory cues appeared puzzling. This difference in cue use between two species in the same genus suggests that cue use is evolutionarily labile, and may evolve in response to ecological or life history differences between species.

  10. Tactile cueing in detecting and controlling pitch and roll motion.

    Science.gov (United States)

    Bouak, Fethi; Kline, Julianne; Cheung, Bob

    2011-10-01

    Tactile cueing has been explored primarily for the detection of linear motion such as vertical, longitudinal, and lateral translation in the laboratory and in flight. The usefulness of tactile cues in detecting roll and pitch motion has not been fully investigated. Twelve subjects (21-56 yr) were exposed to controlled pitch and roll motion generated by a motion platform, with and without tactile cueing. The tactile system consisted of a torso vest with 24 electromechanical tactors, plus a tactor on each shoulder and one under each thigh harness. While deprived of visual and auditory cues, each subject performed three tasks: 1) indicate motion perception without tactile cues (C1); 2) return to vertical from an offset angle (C2); and 3) maintain straight and level while the platform was continuously in motion (C3). Our results indicated that in the absence of visual and auditory cues, subjects reported the tactile system to be useful in the execution of the C2 and C3 maneuvers. Specifically, the presence of tactile cues had a significant impact on accuracy, duration, and perceived workload. In addition, tactile cueing increased accuracy in returning to neutral from an offset position and in maintaining the neutral position while the platform was in continuous motion. Tactile cueing appears to be effective in detecting roll and pitch motion and has the potential to reduce the workload and risks of high-stress and time-sensitive air operations.

  11. The Influence of Visual Cues on Sound Externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    this is due to incongruent auditory cues between the recording and playback room during sound reproduction or to an expectation effect from the visual impression of the room. This study investigated the influence of a priori acoustic and visual knowledge of the playback room on sound externalization...... between recording and playback room was found to be detrimental to virtual sound externalization. The auditory modality governed externalization in terms of perceived distance when cues from the recording and playback room were incongruent, whereby the auditory impression of the room was more critical...... the more reverberant the listening environment was. While the visual impression of the playback room did not affect perceived distance, visual cues helped resolve localization ambiguities and improved compactness perception....

  12. Self-affirmation in auditory persuasion

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    Persuasive health information can be presented through an auditory channel. Curiously enough, the effect of voice cues in health persuasion has hardly been studied. Research concerning visual persuasive messages showed that self-affirmation results in a more open-minded reaction to threatening health information.

  13. The influence of tactile cognitive maps on auditory space perception in sighted persons.

    Directory of Open Access Journals (Sweden)

    Alessia Tonelli

    2016-11-01

    We have recently shown that vision is important for improving spatial auditory cognition. In this study we investigated whether touch is as effective as vision in creating a cognitive map of a soundscape. In particular, we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people – one experimental and one control group – in an auditory space bisection task. In the first group the bisection task was performed three times: the participants explored with their hands the 3D tactile model of the room and were led along the perimeter of the room between the first and second execution of the space bisection, and were then allowed to remove the blindfold for a few minutes and look at the room between the second and third execution. The control group instead performed the space bisection task twice in a row without any environmental exploration in between. Considering the first execution as a baseline, we found an improvement in precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that visual cues added nothing once spatial tactile cues had been internalized. No improvement was found between the first and second execution of the space bisection in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing auditory space task as effectively as visual information. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound localization.

  14. Characterization of auditory synaptic inputs to gerbil perirhinal cortex

    Directory of Open Access Journals (Sweden)

    Vibhakar C Kotak

    2015-08-01

    The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Therefore, our goal was to assess whether PRh receives auditory inputs from the auditory thalamus (MG) and ACx in an auditory thalamocortical brain slice preparation, and to characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from PRh neurons. Blockade of GABA-A receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when Fluoro-Ruby was injected into the ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx, and that auditory-driven inhibition dominates the postsynaptic responses in this non-sensory cortical region downstream from the auditory cortex.

  15. Near-Independent Capacities and Highly Constrained Output Orders in the Simultaneous Free Recall of Auditory-Verbal and Visuo-Spatial Stimuli

    Science.gov (United States)

    Cortis Mack, Cathleen; Dent, Kevin; Ward, Geoff

    2018-01-01

    Three experiments examined the immediate free recall (IFR) of auditory-verbal and visuospatial materials from single-modality and dual-modality lists. In Experiment 1, we presented participants with between 1 and 16 spoken words, with between 1 and 16 visuospatial dot locations, or with between 1 and 16 words "and" dots with synchronized…

  16. Role of Binaural Temporal Fine Structure and Envelope Cues in Cocktail-Party Listening.

    Science.gov (United States)

    Swaminathan, Jayaganesh; Mason, Christine R; Streeter, Timothy M; Best, Virginia; Roverud, Elin; Kidd, Gerald

    2016-08-03

    While conversing in a crowded social setting, a listener is often required to follow a target speech signal amid multiple competing speech signals (the so-called "cocktail party" problem). In such situations, separation of the target speech signal in azimuth from the interfering masker signals can lead to an improvement in target intelligibility, an effect known as spatial release from masking (SRM). This study assessed the contributions of two stimulus properties that vary with separation of sound sources, binaural envelope (ENV) and temporal fine structure (TFS), to SRM in normal-hearing (NH) human listeners. Target speech was presented from the front and speech maskers were either colocated with or symmetrically separated from the target in azimuth. The target and maskers were presented either as natural speech or as "noise-vocoded" speech in which the intelligibility was conveyed only by the speech ENVs from several frequency bands; the speech TFS within each band was replaced with noise carriers. The experiments were designed to preserve the spatial cues in the speech ENVs while retaining/eliminating them from the TFS. This was achieved by using the same/different noise carriers in the two ears. A phenomenological auditory-nerve model was used to verify that the interaural correlations in TFS differed across conditions, whereas the ENVs retained a high degree of correlation, as intended. Overall, the results from this study revealed that binaural TFS cues, especially for frequency regions below 1500 Hz, are critical for achieving SRM in NH listeners. Potential implications for studying SRM in hearing-impaired listeners are discussed. Acoustic signals received by the auditory system pass first through an array of physiologically based band-pass filters. Conceptually, at the output of each filter, there are two principal forms of temporal information: slowly varying fluctuations in the envelope (ENV) and rapidly varying fluctuations in the temporal fine structure (TFS).
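
    The noise-vocoding manipulation described above is straightforward to sketch numerically. The following minimal example, assuming standard NumPy/SciPy, keeps each band's envelope (ENV) while replacing its temporal fine structure (TFS) with a band-limited noise carrier; the band edges, filter orders, and stand-in signal are placeholders rather than the study's actual processing chain.

    ```python
    # Minimal noise-vocoder sketch: preserve each band's ENV, replace its TFS
    # with a noise carrier. Band layout and filter settings are illustrative.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    rng = np.random.default_rng(0)
    speech = rng.standard_normal(t.size)   # stand-in for a recorded speech signal

    def bandpass(x, lo, hi):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, x)

    edges = [100, 300, 700, 1500, 3000, 6000]   # illustrative band edges (Hz)
    carrier_src = rng.standard_normal(t.size)   # broadband noise carrier
    vocoded = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass(speech, lo, hi)
        env = np.abs(hilbert(band))              # slowly varying ENV of the band
        carrier = bandpass(carrier_src, lo, hi)  # noise TFS confined to the band
        vocoded += env * carrier                 # ENV preserved, TFS replaced

    # In the study's logic, using the *same* noise carrier in both ears keeps
    # the TFS interaurally correlated, while *different* carriers decorrelate
    # it, eliminating spatial cues from the TFS but not from the ENVs.
    ```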

  17. A Spatial Modality Effect in Serial Memory

    Science.gov (United States)

    Tremblay, Sebastien; Parmentier, Fabrice B. R.; Guerard, Katherine; Nicholls, Alastair P.; Jones, Dylan M.

    2006-01-01

    In 2 experiments, the authors tested whether the classical modality effect--that is, the stronger recency effect for auditory items relative to visual items--can be extended to the spatial domain. An order reconstruction task was undertaken with four types of material: visual-spatial, auditory-spatial, visual-verbal, and auditory-verbal.…

  18. Effects of an Auditory Lateralization Training in Children Suspected to Central Auditory Processing Disorder

    OpenAIRE

    Lotfi, Yones; Moosavi, Abdollah; Abdollahi, Farzaneh Zamiri; Bakhshi, Enayatollah; Sadjedi, Hamed

    2016-01-01

    Background and Objectives Central auditory processing disorder [(C)APD] refers to a deficit in the processing of auditory stimuli in the nervous system that is not due to higher-order language or cognitive factors. One of the problems in children with (C)APD is spatial difficulty, which has been overlooked despite its significance. Localization is an auditory ability to detect sound sources in space and can help to differentiate the desired speech from other simultaneous sound sources. Aim o...

  19. Interaction of Object Binding Cues in Binaural Masking Pattern Experiments.

    Science.gov (United States)

    Verhey, Jesko L; Lübken, Björn; van de Par, Steven

    2016-01-01

    Object binding cues such as binaural and across-frequency modulation cues are likely to be used by the auditory system to separate sounds from different sources in complex auditory scenes. The present study investigates the interaction of these cues in a binaural masking pattern paradigm where a sinusoidal target is masked by a narrowband noise. It was hypothesised that beating between signal and masker may contribute to signal detection when signal and masker do not spectrally overlap, but that this cue could not be used in combination with interaural cues. To test this hypothesis, an additional sinusoidal interferer was added to the noise masker with a lower frequency than the noise, whereas the target had a higher frequency than the noise. Thresholds increase when the interferer is added. This effect is largest when the spectral interferer-masker and masker-target distances are equal. The result supports the hypothesis that modulation cues contribute to signal detection in the classical masking paradigm and that these are analysed with modulation bandpass filters. A monaural model including an across-frequency modulation process is presented that accounts for this effect. Interestingly, the interferer also affects dichotic thresholds, indicating that modulation cues also play a role in binaural processing.
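
    The beating cue at the heart of this hypothesis is easy to demonstrate numerically. In the sketch below, with purely illustrative parameters, a tone 30 Hz above a narrowband masker produces compound-envelope fluctuations at the target-masker frequency separation, the rate to which a modulation bandpass filter would be tuned.

    ```python
    # Demonstration of envelope "beating" between a narrowband noise masker and
    # a spectrally offset tonal target. Parameters are illustrative only.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    fs, dur = 44100, 0.5
    t = np.arange(int(fs * dur)) / fs
    rng = np.random.default_rng(0)

    def narrowband_noise(fc, bw):
        sos = butter(4, [fc - bw / 2, fc + bw / 2], btype="bandpass",
                     fs=fs, output="sos")
        return sosfiltfilt(sos, rng.standard_normal(t.size))

    masker = narrowband_noise(700.0, 20.0)        # 20-Hz-wide masker at 700 Hz
    masker /= np.sqrt(np.mean(masker ** 2))       # normalize to unit RMS
    target = 0.3 * np.sin(2 * np.pi * 730.0 * t)  # tone 30 Hz above the masker

    env = np.abs(hilbert(masker + target))        # compound envelope
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(env.size, 1 / fs)
    look = (freqs > 22) & (freqs < 100)           # above the masker's intrinsic
    peak = freqs[look][np.argmax(spec[look])]     # envelope-fluctuation rates
    print(f"dominant envelope fluctuation near {peak:.0f} Hz")  # expect ~30 Hz
    ```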

  20. Dynamic oscillatory processes governing cued orienting and allocation of auditory attention

    Science.gov (United States)

    Ahveninen, Jyrki; Huang, Samantha; Belliveau, John W.; Chang, Wei-Tang; Hämäläinen, Matti

    2013-01-01

    In everyday listening situations, we need to constantly switch between alternative sound sources and engage attention according to cues that match our goals and expectations. The exact neuronal bases of these processes are poorly understood. We investigated oscillatory brain networks controlling auditory attention using cortically constrained fMRI-weighted magnetoencephalography/electroencephalography (MEG/EEG) source estimates. During consecutive trials, subjects were instructed to shift attention based on a cue, presented in the ear where a target was likely to follow. To promote audiospatial attention effects, the targets were embedded in streams of dichotically presented standard tones. Occasionally, an unexpected novel sound occurred opposite to the cued ear, to trigger involuntary orienting. According to our cortical power correlation analyses, increased frontoparietal/temporal 30–100 Hz gamma activity at 200–1400 ms after cued orienting predicted fast and accurate discrimination of subsequent targets. This sustained correlation effect, possibly reflecting voluntary engagement of attention after the initial cue-driven orienting, spread from the temporoparietal junction, anterior insula, and inferior frontal (IFC) cortices to the right frontal eye fields. Engagement of attention to one ear resulted in a significantly stronger increase of 7.5–15 Hz alpha in the ipsilateral than contralateral parieto-occipital cortices 200–600 ms after the cue onset, possibly reflecting crossmodal modulation of the dorsal visual pathway during audiospatial attention. Comparisons of cortical power patterns also revealed significant increases of sustained right medial frontal cortex theta power, right dorsolateral prefrontal cortex and anterior insula/IFC beta power, and medial parietal cortex and posterior cingulate cortex gamma activity after cued vs. novelty-triggered orienting (600–1400 ms). Our results reveal sustained oscillatory patterns associated with voluntary orienting and allocation of auditory attention.

  1. Smartphone-based tactile cueing improves motor performance in Parkinson's disease.

    Science.gov (United States)

    Ivkovic, Vladimir; Fisher, Stanley; Paloski, William H

    2016-01-01

    Visual and auditory cueing improve functional performance in Parkinson's disease (PD) patients. However, audiovisual processing shares many cognitive resources used for attention-dependent tasks such as communication, spatial orientation, and balance. Conversely, tactile cues (TC) may be processed faster, with minimal attentional demand, and may be a more efficient means of modulating motor-cognitive performance. In this study we aimed to investigate the efficacy and limitations of TC for modulating simple (heel tapping) and more complex (walking) motor tasks (1) over a range of cueing intervals, (2) with/without a secondary motor task (holding a tray with cups of water). Ten PD patients (71 ± 9 years) and 10 healthy controls (69 ± 7 years) participated in the study. TCs were delivered through a smartphone attached to the subject's dominant arm and were controlled by a custom-developed Android application. PD patients and healthy controls were able to use TC to modulate heel tapping (F(3.8,1866.1) = 1008.1, p < .001). These results support TC usage for movement modulation and motor-cognitive integration in PD patients. The smartphone TC application was validated as a user-friendly movement modulation aid. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds II: single-neuron recordings

    Science.gov (United States)

    Marquardt, Torsten; Stange, Annette; Pecka, Michael; Grothe, Benedikt; McAlpine, David

    2014-01-01

    Recently, with the use of an amplitude-modulated binaural beat (AMBB), in which sound amplitude and interaural-phase difference (IPD) were modulated with a fixed mutual relationship (Dietz et al. 2013b), we demonstrated that the human auditory system uses interaural timing differences in the temporal fine structure of modulated sounds only during the rising portion of each modulation cycle. However, the degree to which peripheral or central mechanisms contribute to the observed strong dominance of the rising slope remains to be determined. Here, by recording responses of single neurons in the medial superior olive (MSO) of anesthetized gerbils and in the inferior colliculus (IC) of anesthetized guinea pigs to AMBBs, we report a correlation between the position within the amplitude-modulation (AM) cycle generating the maximum response rate and the position at which the instantaneous IPD dominates the total neural response. The IPD during the rising segment dominates the total response in 78% of MSO neurons and 69% of IC neurons, with responses of the remaining neurons predominantly coding the IPD around the modulation maximum. The observed diversity of dominance regions within the AM cycle, especially in the IC, and its comparison with the human behavioral data suggest that only the subpopulation of neurons with rising slope dominance codes the sound-source location in complex listening conditions. A comparison of two models to account for the data suggests that emphasis on IPDs during the rising slope of the AM cycle depends on adaptation processes occurring before binaural interaction. PMID:24554782
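
    A minimal sketch of AMBB stimulus generation may make the paradigm concrete: offsetting the right-ear carrier from the left-ear carrier by the AM rate makes the interaural phase difference sweep through a full cycle exactly once per AM cycle, so every instantaneous IPD is locked to a fixed position within the modulation cycle. All parameter values below are illustrative, not those used in the cited studies.

    ```python
    # Sketch of an amplitude-modulated binaural beat (AMBB) stimulus.
    import numpy as np

    fs = 48000
    fc, fm, dur = 500.0, 8.0, 2.0     # carrier (Hz), AM/beat rate (Hz), seconds
    t = np.arange(int(fs * dur)) / fs

    am = 0.5 * (1 - np.cos(2 * np.pi * fm * t))     # raised-cosine AM, 100% depth
    left = am * np.sin(2 * np.pi * fc * t)          # carrier fc in the left ear
    right = am * np.sin(2 * np.pi * (fc + fm) * t)  # carrier fc + fm in the right

    # The instantaneous IPD advances linearly through 2*pi per AM cycle, so a
    # given IPD always occurs at the same segment (e.g., the rising slope) of
    # the AM cycle: this is what lets the paradigm ask which segment of the
    # cycle dominates binaural processing.
    ipd = (2 * np.pi * fm * t) % (2 * np.pi)
    stereo = np.stack([left, right], axis=1)        # ready for playback
    ```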

  3. The use of listening devices to ameliorate auditory deficit in children with autism.

    Science.gov (United States)

    Rance, Gary; Saunders, Kerryn; Carew, Peter; Johansson, Marlin; Tan, Johanna

    2014-02-01

    To evaluate both monaural and binaural processing skills in a group of children with autism spectrum disorder (ASD) and to determine the degree to which personal frequency modulation (FM; radio transmission) listening systems could ameliorate their listening difficulties. Auditory temporal processing (amplitude modulation detection), spatial listening (integration of binaural difference cues), and functional hearing (speech perception in background noise) were evaluated in 20 children with ASD. Ten of these subsequently underwent a 6-week device trial in which they wore the FM system for up to 7 hours per day. Auditory temporal processing and spatial listening ability were poorer in subjects with ASD than in matched controls (temporal: P = .014 [95% CI -6.4 to -0.8 dB]; spatial: P = .003 [1.0 to 4.4 dB]), and performance on both of these basic processing measures was correlated with speech perception ability (temporal: r = -0.44, P = .022; spatial: r = -0.50, P = .015). The provision of FM listening systems resulted in improved discrimination of speech in noise (P < .05) in children with ASD. Copyright © 2014 Mosby, Inc. All rights reserved.

  4. Pupillometry shows the effort of auditory attention switching.

    Science.gov (United States)

    McCloy, Daniel R; Lau, Bonnie K; Larson, Eric; Pratt, Katherine A I; Lee, Adrian K C

    2017-04-01

    Successful speech communication often requires selective attention to a target stream amidst competing sounds, as well as the ability to switch attention among multiple interlocutors. However, auditory attention switching negatively affects both target detection accuracy and reaction time, suggesting that attention switches carry a cognitive cost. Pupillometry is one method of assessing mental effort or cognitive load. Two experiments were conducted to determine whether the effort associated with attention switches is detectable in the pupillary response. In both experiments, pupil dilation, target detection sensitivity, and reaction time were measured; the task required listeners to either maintain or switch attention between two concurrent speech streams. Secondary manipulations explored whether switch-related effort would increase when auditory streaming was harder. In experiment 1, spatially distinct stimuli were degraded by simulating reverberation (compromising across-time streaming cues), and target-masker talker gender match was also varied. In experiment 2, diotic streams separable by talker voice quality and pitch were degraded by noise vocoding, and the time allotted for mid-trial attention switching was varied. All trial manipulations had some effect on target detection sensitivity and/or reaction time; however, only the attention-switching manipulation affected the pupillary response: greater dilation was observed in trials requiring switching attention between talkers.

  5. Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions.

    Science.gov (United States)

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2016-12-01

    Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. The many facets of auditory display

    Science.gov (United States)

    Blattner, Meera M.

    1995-01-01

    In this presentation we will examine some of the ways sound can be used in a virtual world. We make the case that many different types of audio experience are available to us. A full range of audio experiences include: music, speech, real-world sounds, auditory displays, and auditory cues or messages. The technology of recreating real-world sounds through physical modeling has advanced in the past few years allowing better simulation of virtual worlds. Three-dimensional audio has further enriched our sensory experiences.

  7. Visual and cross-modal cues increase the identification of overlapping visual stimuli in Balint's syndrome.

    Science.gov (United States)

    D'Imperio, Daniela; Scandola, Michele; Gobbetto, Valeria; Bulgarelli, Cristina; Salgarello, Matteo; Avesani, Renato; Moro, Valentina

    2017-10-01

    Cross-modal interactions improve the processing of external stimuli, particularly when an isolated sensory modality is impaired. When information from different modalities is integrated, object recognition is facilitated probably as a result of bottom-up and top-down processes. The aim of this study was to investigate the potential effects of cross-modal stimulation in a case of simultanagnosia. We report a detailed analysis of clinical symptoms and an 18F-fluorodeoxyglucose (FDG) brain positron emission tomography/computed tomography (PET/CT) study of a patient affected by Balint's syndrome, a rare and invasive visual-spatial disorder following bilateral parieto-occipital lesions. An experiment was conducted to investigate the effects of visual and nonvisual cues on performance in tasks involving the recognition of overlapping pictures. Four modalities of sensory cues were used: visual, tactile, olfactory, and auditory. Data from neuropsychological tests showed the presence of ocular apraxia, optic ataxia, and simultanagnosia. The results of the experiment indicate a positive effect of the cues on the recognition of overlapping pictures, not only in the identification of the congruent valid-cued stimulus (target) but also in the identification of the other, noncued stimuli. All the sensory modalities analyzed (except the auditory stimulus) were efficacious in terms of increasing visual recognition. Cross-modal integration improved the patient's ability to recognize overlapping figures. However, while in the visual unimodal modality both bottom-up (priming, familiarity effect, disengagement of attention) and top-down processes (mental representation and short-term memory, the endogenous orientation of attention) are involved, in the cross-modal integration it is semantic representations that mainly activate visual recognition processes. These results are potentially useful for the design of rehabilitation training for attentional and visual-perceptual deficits.

  8. Estimating the intended sound direction of the user: toward an auditory brain-computer interface using out-of-head sound localization.

    Directory of Open Access Journals (Sweden)

    Isao Nambu

    The auditory brain-computer interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify from EEG signals whether the subject attended the direction of a presented stimulus. The mean accuracy across subjects was 70.0% in single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables a high-performance, loudspeaker-less, portable BCI system.
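
    The classification step can be sketched in a few lines with scikit-learn. The code below is a toy reconstruction on synthetic data, with invented channel counts, epoch lengths, and a Gaussian P300-like template; it is not the study's actual pipeline, but it illustrates the single-trial versus trial-averaged SVM comparison.

    ```python
    # Toy sketch: classify attended vs. unattended stimuli from EEG epochs with
    # a linear SVM, with and without trial averaging. All data are synthetic.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_trials, n_channels, n_samples = 200, 8, 64

    # Attended trials carry a small P300-like deflection on a few channels.
    X = rng.standard_normal((n_trials, n_channels, n_samples))
    y = rng.integers(0, 2, n_trials)               # 1 = attended direction
    p300 = np.exp(-0.5 * ((np.arange(n_samples) - 32) / 8.0) ** 2)
    X[y == 1, 2:5, :] += 0.5 * p300                # centro-posterior channels

    def trial_average(X, y, k=10):
        """Average k same-class epochs to raise SNR, as in the offline analysis."""
        Xa, ya = [], []
        for label in (0, 1):
            idx = np.flatnonzero(y == label)
            for i in range(0, len(idx) - k + 1, k):
                Xa.append(X[idx[i:i + k]].mean(axis=0))
                ya.append(label)
        return np.array(Xa), np.array(ya)

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    single = cross_val_score(clf, X.reshape(n_trials, -1), y, cv=5)
    Xa, ya = trial_average(X, y)
    averaged = cross_val_score(clf, Xa.reshape(len(Xa), -1), ya, cv=3)
    print("single-trial accuracy:    ", single.mean())
    print("10-trial-average accuracy:", averaged.mean())
    ```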

  9. Learning to Count: Structured Practice with Spatial Cues Supports the Development of Counting Sequence Knowledge in 3-Year-Old English-Speaking Children

    Science.gov (United States)

    Dunbar, Kristina; Ridha, Aala; Cankaya, Ozlem; Jiménez Lira, Carolina; LeFevre, Jo-Anne

    2017-01-01

    Research Findings: Children who speak English are slower to learn the counting sequence between 11 and 20 compared to children who speak Asian languages. In the present research, we examined whether providing children with spatially relevant information during counting would facilitate their acquisition of the counting sequence. Three-year-olds…

  10. [Auditory fatigue].

    Science.gov (United States)

    Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar

    2015-01-01

    Given the relevance of possible hearing losses due to sound overload and the scarcity of objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound-pressure overload stimuli, and to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of the cochlear microphonic response, plus a function generator that provided stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure-tone acoustic stimulation, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB of the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. With complex random-tone maskers or white noise, no fatigue was caused to the sensory receptors, even at levels of 100 dB and over an hour of overstimulation. Deterioration of peripheral perception through intense overstimulation may be due to biochemical changes of desensitisation due to exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue tests found here are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  11. Auditory environmental context affects visual distance perception.

    Science.gov (United States)

    Etchemendy, Pablo E; Abregú, Ezequiel; Calcagno, Esteban R; Eguia, Manuel C; Vechiatti, Nilda; Iasi, Federico; Vergara, Ramiro O

    2017-08-03

    In this article, we show that visual distance perception (VDP) is influenced by the auditory environmental context through reverberation-related cues. We performed two VDP experiments in two dark rooms with extremely different reverberation times: an anechoic chamber and a reverberant room. Subjects assigned to the reverberant room perceived the targets farther than subjects assigned to the anechoic chamber. Also, we found a positive correlation between the maximum perceived distance and the auditorily perceived room size. We next performed a second experiment in which the same subjects of Experiment 1 were interchanged between rooms. We found that subjects preserved the responses from the previous experiment provided they were compatible with the present perception of the environment; if not, perceived distance was biased towards the auditorily perceived boundaries of the room. Results of both experiments show that the auditory environment can influence VDP, presumably through reverberation cues related to the perception of room size.

  12. Auditory Midbrain Implant: A Review

    Science.gov (United States)

    Lim, Hubert H.; Lenarz, Minoo; Lenarz, Thomas

    2009-01-01

    The auditory midbrain implant (AMI) is a new hearing prosthesis designed for stimulation of the inferior colliculus in deaf patients who cannot sufficiently benefit from cochlear implants. The authors have begun clinical trials in which five patients have been implanted with a single-shank AMI array (20 electrodes). The goal of this review is to summarize the development and research that has led to the translation of the AMI from a concept into the first patients. This study presents the rationale and design concept for the AMI as well as a summary of the animal safety and feasibility studies that were required for clinical approval. The authors also present the initial surgical, psychophysical, and speech results from the first three implanted patients. Overall, the results have been encouraging in terms of the safety and functionality of the implant. All patients obtain improvements in hearing capabilities on a daily basis. However, performance varies dramatically across patients depending on the implant location within the midbrain, with the best performer still not able to achieve open-set speech perception without lip-reading cues. Stimulation of the auditory midbrain provides a wide range of level, spectral, and temporal cues, all of which are important for speech understanding, but they do not appear to sufficiently fuse together to enable open-set speech perception with the currently used stimulation strategies. Finally, several issues and hypotheses for why current patients obtain limited speech perception, along with several feasible solutions for improving AMI implementation, are presented. PMID:19762428

  13. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

    Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is, most generally, the perception of a stimulus in its absence. Here we will discuss four definitions of hallucination: (1) perceiving a stimulus without the presence of any object; (2) hallucination proper, i.e., false perceptions that are not falsifications of a real perception, although they manifest as a new object and occur alongside and synchronously with a real perception; (3) an out-of-body perception that has no correspondence with a real object; and (4), in a stricter sense, a perception in a conscious and awake state, in the absence of external stimuli, that has the qualities of real perception, in that it is vivid, substantial, and located in external objective space. We discuss these in detail here.

  14. Early Visual Deprivation Severely Compromises the Auditory Sense of Space in Congenitally Blind Children

    Science.gov (United States)

    Vercillo, Tiziana; Burr, David; Gori, Monica

    2016-01-01

    A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind…

  15. Reduced Sensitivity to Slow-Rate Dynamic Auditory Information in Children with Dyslexia

    Science.gov (United States)

    Poelmans, Hanne; Luts, Heleen; Vandermosten, Maaike; Boets, Bart; Ghesquiere, Pol; Wouters, Jan

    2011-01-01

    The etiology of developmental dyslexia remains widely debated. An appealing theory postulates that the reading and spelling problems in individuals with dyslexia originate from reduced sensitivity to slow-rate dynamic auditory cues. This low-level auditory deficit is thought to provoke a cascade of effects, including inaccurate speech perception…

  16. Age-related deficits in auditory confrontation naming.

    Science.gov (United States)

    Hanna-Pladdy, Brenda; Choi, Hyun

    2010-09-01

    The naming of manipulable objects in older and younger adults was evaluated across auditory, visual, and multisensory conditions. Older adults were less accurate and slower in naming across conditions, and all subjects were more impaired and slower to name action sounds than pictures or audiovisual combinations. Moreover, there was a sensory by age group interaction, revealing lower accuracy and increased latencies in auditory naming for older adults unrelated to hearing insensitivity but modest improvement to multisensory cues. These findings support age-related deficits in object action naming and suggest that auditory confrontation naming may be more sensitive than visual naming. (c) 2010 APA, all rights reserved.

  17. Changes in auditory perceptions and cortex resulting from hearing recovery after extended congenital unilateral hearing loss

    OpenAIRE

    Firszt, Jill B.; Reeder, Ruth M.; Holden, Timothy A.; Burton, Harold; Chole, Richard A.

    2013-01-01

    Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g. cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, ...

  18. Cortical Transformation of Spatial Processing for Solving the Cocktail Party Problem: A Computational Model

    Science.gov (United States)

    Dong, Junzi; Colburn, H. Steven

    2016-01-01

    In multisource, “cocktail party” sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem. PMID:26866056
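
    A toy version of the network's key mechanism, lateral inhibition across spatially tuned channels, is sketched below. The tuning widths and weights are invented for illustration; the published model's architecture and fitted parameters differ.

    ```python
    # Toy lateral-inhibition network over spatially tuned channels: competition
    # between two sources sharpens the spatial representation of each.
    import numpy as np

    azimuths = np.linspace(-90, 90, 37)        # preferred azimuths (deg)

    def spatial_input(source_az, width=15.0):
        """Gaussian spatial tuning of midbrain-like inputs to one source."""
        return np.exp(-0.5 * ((azimuths - source_az) / width) ** 2)

    # Two competing sources: a target at 0 deg and a masker at +45 deg.
    drive = spatial_input(0.0) + spatial_input(45.0)

    # Each cortical unit is inhibited by the activity of its spatial neighbors.
    sigma = 10.0
    dist = np.abs(azimuths[:, None] - azimuths[None, :])
    W = np.exp(-0.5 * (dist / sigma) ** 2)
    np.fill_diagonal(W, 0.0)
    response = np.clip(drive - 0.05 * W @ drive, 0.0, None)  # rectified output

    # Activity concentrates into two narrowed peaks under competition; with a
    # single source the same network leaves a broader profile, letting the
    # array monitor a wider acoustic space when no masker is present.
    peaks = azimuths[response > 0.8 * response.max()]
    print("dominant azimuths (deg):", peaks)
    ```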

  19. The human brain maintains contradictory and redundant auditory sensory predictions.

    Directory of Open Access Journals (Sweden)

    Marika Pieszek

    Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, what has not been addressed so far is how the auditory system handles situations giving rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants' task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability-based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR], respectively). The respective error signals were observed even when the overall probability and the visual symbol predicted different sounds. That is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted by both sources, we observed an additive error signal (in scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates redundant and contradictory predictions that are represented functionally independently. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events.

  20. Single-unit Analysis of Somatosensory Processing in Core Auditory Cortex of Hearing Ferrets

    Science.gov (United States)

    Meredith, M. Alex; Allman, Brian L.

    2014-01-01

    The recent findings in several species that primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (the anterior auditory field, AAF, and the primary auditory field, A1) for tactile responsivity. Multiple single-unit recordings from anesthetized ferret cortex yielded histologically verified neurons (n=311) tested with electronically controlled auditory, visual and tactile stimuli and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone, and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect observed, which occurred in all neuron types, was suppression of the response to a concurrent auditory cue. The presence of tactile effects in core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in auditory cortex are not exclusively visual, that somatosensation plays a significant role in the modulation of acoustic processing, and that crossmodal plasticity following deafness may unmask these existing non-auditory functions. PMID:25728185

  1. Hey1 and Hey2 control the spatial and temporal pattern of mammalian auditory hair cell differentiation downstream of Hedgehog signaling.

    Science.gov (United States)

    Benito-Gonzalez, Ana; Doetzlhofer, Angelika

    2014-09-17

    Mechano-sensory hair cells (HCs), housed in the inner ear cochlea, are critical for the perception of sound. In the mammalian cochlea, differentiation of HCs occurs in a striking basal-to-apical and medial-to-lateral gradient, which is thought to ensure correct patterning and proper function of the auditory sensory epithelium. Recent studies have revealed that Hedgehog signaling opposes HC differentiation and is critical for the establishment of the graded pattern of auditory HC differentiation. However, how Hedgehog signaling interferes with HC differentiation is unknown. Here, we provide evidence that in the murine cochlea, Hey1 and Hey2 control the spatiotemporal pattern of HC differentiation downstream of Hedgehog signaling. It has been recently shown that HEY1 and HEY2, two highly redundant HES-related transcriptional repressors, are highly expressed in supporting cell (SC) and HC progenitors (prosensory cells), but their prosensory function remained untested. Using a conditional double knock-out strategy, we demonstrate that prosensory cells form and proliferate properly in the absence of Hey1 and Hey2 but differentiate prematurely because of precocious upregulation of the pro-HC factor Atoh1. Moreover, we demonstrate that prosensory-specific expression of Hey1 and Hey2 and its subsequent graded downregulation is controlled by Hedgehog signaling in a largely FGFR-dependent manner. In summary, our study reveals a critical role for Hey1 and Hey2 in prosensory cell maintenance and identifies Hedgehog signaling as a novel upstream regulator of their prosensory function in the mammalian cochlea. The regulatory mechanism described here might be a broadly applied mechanism for controlling progenitor behavior in the central and peripheral nervous system. Copyright © 2014 the authors 0270-6474/14/3412865-12$15.00/0.

  2. Nogo stimuli do not receive more attentional suppression or response inhibition than neutral stimuli: evidence from the N2pc, PD and N2 components in a spatial cueing paradigm

    Directory of Open Access Journals (Sweden)

    Caroline eBarras

    2016-05-01

    It has been claimed that stimuli sharing the color of a nogo-target are suppressed because of the strong incentive not to process the nogo-target, but we failed to replicate this finding. Participants searched for a color singleton in the target display and indicated its shape when it was in the go color. If the color singleton in the target display was in the nogo color, they had to withhold the response. The target display was preceded by a cue display that also contained a color singleton (the cue). The cue was either in the color of the go or nogo target, or it was in an unrelated, neutral color. With cues in the go color, reaction times (RTs) were shorter when the cue appeared at the same location as the target than when it appeared at a different location. Also, electrophysiological recordings showed that an index of attentional selection, the N2pc, was elicited by go cues. Surprisingly, we failed to replicate the cueing costs for cues in the nogo color that were originally reported by Anderson and Folk (2012). Consistently, we also failed to find an electrophysiological index of attentional suppression (the PD) for cues in the nogo color. Further, fronto-central ERPs to the cue display showed the same negativity for nogo and neutral stimuli relative to go stimuli, which is at odds with response inhibition and conflict monitoring accounts of the Nogo-N2. Thus, the modified cueing paradigm employed here provides little evidence that features associated with nogo-targets are suppressed at the level of attention or response selection. Rather, nogo stimuli are efficiently ignored and attention is focused on features that require a response.

  3. Slow Temporal Integration Enables Robust Neural Coding and Perception of a Cue to Sound Source Location.

    Science.gov (United States)

    Brown, Andrew D; Tollin, Daniel J

    2016-09-21

    In mammals, localization of sound sources in azimuth depends on sensitivity to interaural differences in sound timing (ITD) and level (ILD). Paradoxically, while typical ILD-sensitive neurons of the auditory brainstem require millisecond synchrony of excitatory and inhibitory inputs for the encoding of ILDs, human and animal behavioral ILD sensitivity is robust to temporal stimulus degradations (e.g., interaural decorrelation due to reverberation), or, in humans, bilateral clinical device processing. Here we demonstrate that behavioral ILD sensitivity is only modestly degraded with even complete decorrelation of left- and right-ear signals, suggesting the existence of a highly integrative ILD-coding mechanism. Correspondingly, we find that a majority of auditory midbrain neurons in the central nucleus of the inferior colliculus (of chinchilla) effectively encode ILDs despite complete decorrelation of left- and right-ear signals. We show that such responses can be accounted for by relatively long windows of bilateral excitatory-inhibitory interaction, which we explicitly measure using trains of narrowband clicks. Neural and behavioral data are compared with the outputs of a simple model of ILD processing with a single free parameter, the duration of excitatory-inhibitory interaction. Behavioral, neural, and modeling data collectively suggest that ILD sensitivity depends on binaural integration of excitation and inhibition within a ≳3 ms temporal window, significantly longer than observed in lower brainstem neurons. This relatively slow integration potentiates a unique role for the ILD system in spatial hearing that may be of particular importance when informative ITD cues are unavailable. In mammalian hearing, interaural differences in the timing (ITD) and level (ILD) of impinging sounds carry critical information about source location. However, natural sounds are often decorrelated between the ears by reverberation and background noise, degrading the fidelity of
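
    As an illustration of the windowed excitatory-inhibitory (EI) interaction described above, the following Python/NumPy sketch drives a model EI unit with click trains and mimics interaural decorrelation by jittering click times independently at each ear. It is a hypothetical reconstruction for intuition only, not the authors' model: all names (click_train, ei_response) and parameter values are assumptions. With the longer integration window, the output tracks the interaural level difference and changes comparatively little when the ears are decorrelated.

        import numpy as np

        fs = 40000                                   # sample rate (Hz)
        n_samp = int(0.5 * fs)                       # 500 ms of signal
        rng = np.random.default_rng(0)

        def click_train(rate_hz, jitter_ms, gain):
            """Periodic clicks, optionally jittered in time, scaled by level."""
            times = np.arange(0.01, 0.49, 1.0 / rate_hz)
            times = np.clip(times + rng.normal(0.0, jitter_ms / 1000.0, times.size),
                            0.0, 0.499)
            x = np.zeros(n_samp)
            x[(times * fs).astype(int)] = gain
            return x

        def ei_response(exc_in, inh_in, tau_ms):
            """EI unit: excitation minus inhibition, each integrated over a
            sliding rectangular window of duration tau_ms, then rectified."""
            n = max(1, int(fs * tau_ms / 1000.0))
            win = np.ones(n) / n
            exc = np.convolve(exc_in, win, mode="same")
            inh = np.convolve(inh_in, win, mode="same")
            return np.maximum(exc - inh, 0.0).mean()

        for tau_ms in (0.5, 3.0):                    # short vs long window
            for jitter_ms in (0.0, 2.0):             # correlated vs decorrelated
                left = click_train(100, jitter_ms, gain=1.5)   # ILD toward left
                right = click_train(100, jitter_ms, gain=1.0)
                r = ei_response(left, right, tau_ms)
                print(f"tau={tau_ms} ms, jitter={jitter_ms} ms -> response {r:.5f}")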

  4. Efficacy of the LiSN & Learn Auditory Training Software: randomized blinded controlled study

    Directory of Open Access Journals (Sweden)

    Sharon Cameron

    2012-01-01

    Background: Children with a spatial processing disorder (SPD) require a more favorable signal-to-noise ratio in the classroom because they have difficulty perceiving sound source location cues. Previous research has shown that a novel training program - LiSN & Learn - employing spatialized sound, overcomes this deficit. Here we investigate whether improvements in spatial processing ability are specific to the LiSN & Learn training program. Materials and methods: Participants were ten children (aged between 6;0 [years;months] and 9;9) with normal peripheral hearing who were diagnosed as having SPD using the Listening in Spatialized Noise - Sentences test (LiSN-S). In a blinded controlled study, the participants were randomly allocated to train with either the LiSN & Learn or another auditory training program - Earobics - for approximately 15 minutes per day for twelve weeks. Results: There was a significant improvement post-training on the conditions of the LiSN-S that evaluate spatial processing ability for the LiSN & Learn group (p=0.03 to 0.0008, η2=0.75 to 0.95, n=5), but not for the Earobics group (p=0.5 to 0.7, η2=0.1 to 0.04, n=5). Results from questionnaires completed by the participants and their parents and teachers revealed improvements in real-world listening performance post-training were greater in the LiSN & Learn group than the Earobics group. Conclusions: LiSN & Learn training improved binaural processing ability in children with SPD, enhancing their ability to understand speech in noise. Exposure to non-spatialized auditory training does not produce similar outcomes, emphasizing the importance of deficit-specific remediation.

  5. Efficacy of the LiSN & Learn auditory training software: randomized blinded controlled study

    Directory of Open Access Journals (Sweden)

    Sharon Cameron

    2012-09-01

    Children with a spatial processing disorder (SPD) require a more favorable signal-to-noise ratio in the classroom because they have difficulty perceiving sound source location cues. Previous research has shown that a novel training program - LiSN & Learn - employing spatialized sound, overcomes this deficit. Here we investigate whether improvements in spatial processing ability are specific to the LiSN & Learn training program. Participants were ten children (aged between 6;0 [years;months] and 9;9) with normal peripheral hearing who were diagnosed as having SPD using the Listening in Spatialized Noise - Sentences test (LiSN-S). In a blinded controlled study, the participants were randomly allocated to train with either the LiSN & Learn or another auditory training program - Earobics - for approximately 15 min per day for twelve weeks. There was a significant improvement post-training on the conditions of the LiSN-S that evaluate spatial processing ability for the LiSN & Learn group (P=0.03 to 0.0008, η2=0.75 to 0.95, n=5), but not for the Earobics group (P=0.5 to 0.7, η2=0.1 to 0.04, n=5). Results from questionnaires completed by the participants and their parents and teachers revealed improvements in real-world listening performance post-training were greater in the LiSN & Learn group than the Earobics group. LiSN & Learn training improved binaural processing ability in children with SPD, enhancing their ability to understand speech in noise. Exposure to non-spatialized auditory training does not produce similar outcomes, emphasizing the importance of deficit-specific remediation.

  6. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany Plakke

    2014-07-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  7. Auditory connections and functions of prefrontal cortex

    Science.gov (United States)

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  8. Anticipatory cortical activation precedes auditory events in sleeping infants.

    Directory of Open Access Journals (Sweden)

    Tamami Nakano

    BACKGROUND: Behavioral studies have shown that infants can form associations between environmental events and produce anticipatory actions for the predictable event, but the neural mechanisms for the learning and anticipation of events in infants are not known. Recent neuroimaging studies revealed that the association cortices of infants show activation related to auditory-stimulus discrimination and novelty detection during sleep. In the present study, we expected that when an auditory cue (beeps) predicted an auditory event (a female voice), specific regions of the infant cortex would show anticipatory activation before the event onset even while sleeping. METHODOLOGY/PRINCIPAL FINDINGS: We examined the cortical activation of 3-month-old infants during delays between the cue and the event by using multi-channel near-infrared spectroscopy. To investigate spatiotemporal changes in cortical activation over the experimental session, we divided the session into two phases (early and late phases) and analyzed each phase separately. In the early phase, the frontal regions showed activation in response to the cue that was followed by the event compared with another cue that was not followed by any event. In the late phase, the temporoparietal region, in addition to the frontal region, showed prominent activation in response to the cue followed by the event. In contrast, when the cue was followed by an event and no-event in equal proportions, cortical activation in response to the cue was not observed in any phase. CONCLUSIONS: Sleeping 3-month-old infants showed anticipatory cortical activation in the temporoparietal and frontal regions only in response to the cue predicting the event, suggesting that infants can implicitly form associations between temporally separated events and generate the anticipatory activation before the predictable event. Furthermore, the different time evolution of activation in the temporoparietal and frontal regions suggests

  9. Color to gray: visual cue preservation.

    Science.gov (United States)

    Song, Mingli; Tao, Dacheng; Chen, Chun; Li, Xuelong; Chen, Chang Wen

    2010-09-01

    Both commercial and scientific applications often need to transform color images into gray-scale images, e.g., to reduce the publication cost in printing color images or to help color blind people see visual cues of color images. However, conventional color-to-gray algorithms are not ready for practical applications because they encounter the following problems: 1) Visual cues are not well defined so it is unclear how to preserve important cues in the transformed gray-scale images; 2) some algorithms have extremely high time cost for computation; and 3) some require human-computer interactions to have a reasonable transformation. To solve or at least reduce these problems, we propose a new algorithm based on a probabilistic graphical model with the assumption that the image is defined over a Markov random field. Thus, the color-to-gray procedure can be regarded as a labeling process to preserve the newly well-defined visual cues of a color image in the transformed gray-scale image. Visual cues are measurements that can be extracted from a color image by a perceiver. They indicate the state of some properties of the image that the perceiver is interested in perceiving. Different people may perceive different cues from the same color image and three cues are defined in this paper, namely, color spatial consistency, image structure information, and color channel perception priority. We cast color-to-gray as a visual cue preservation procedure based on a probabilistic graphical model and optimize the model based on an integral minimization problem. We apply the new algorithm to both natural color images and artificial pictures, and demonstrate that the proposed approach outperforms representative conventional algorithms in terms of effectiveness and efficiency. In addition, it requires no human-computer interactions.
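
    The cue-preservation idea can be made concrete with a toy least-squares version of contrast-preserving decolorization. The Python/NumPy sketch below is not the authors' Markov-random-field formulation: it simply starts from luminance and nudges gray values so that differences between neighboring pixels match the corresponding color differences, using plain gradient descent. Function names and constants are illustrative assumptions.

        import numpy as np

        def color_to_gray(img, iters=200, lr=0.1):
            """Toy contrast-preserving decolorization for an (H, W, 3) image
            in [0, 1]: fit gray values whose neighbor differences match the
            signed color differences of the input."""
            def signed_color_diff(a, b):
                d = np.linalg.norm(a - b, axis=-1)           # color contrast
                return np.sign(a.mean(-1) - b.mean(-1) + 1e-12) * d
            tx = signed_color_diff(img[:, 1:], img[:, :-1])  # horizontal targets
            ty = signed_color_diff(img[1:, :], img[:-1, :])  # vertical targets
            g = img @ np.array([0.299, 0.587, 0.114])        # luminance init
            for _ in range(iters):
                gx = g[:, 1:] - g[:, :-1]
                gy = g[1:, :] - g[:-1, :]
                grad = np.zeros_like(g)                      # d/dg of squared error
                grad[:, 1:] += gx - tx
                grad[:, :-1] -= gx - tx
                grad[1:, :] += gy - ty
                grad[:-1, :] -= gy - ty
                g -= lr * grad
            return np.clip(g, 0.0, 1.0)

        gray = color_to_gray(np.random.rand(64, 64, 3))      # smoke test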

  10. Perception of binaural cues develops in children who are deaf through bilateral cochlear implantation.

    Science.gov (United States)

    Gordon, Karen A; Deighton, Michael R; Abbasalipour, Parvaneh; Papsin, Blake C

    2014-01-01

    There are significant challenges to restoring binaural hearing to children who have been deaf from an early age. The uncoordinated and poor temporal information available from cochlear implants distorts perception of interaural timing differences normally important for sound localization and listening in noise. Moreover, binaural development can be compromised by bilateral and unilateral auditory deprivation. Here, we studied perception of both interaural level and timing differences in 79 children/adolescents using bilateral cochlear implants and 16 peers with normal hearing. They were asked on which side of their head they heard unilaterally or bilaterally presented click or electrical pulse trains. Interaural level cues were identified by most participants including adolescents with long periods of unilateral cochlear implant use and little bilateral implant experience. Interaural timing cues were not detected by new bilateral adolescent users, consistent with previous evidence. Evidence of binaural timing detection was, for the first time, found in children who had much longer implant experience but it was marked by poorer than normal sensitivity and abnormally strong dependence on current level differences between implants. In addition, children with prior unilateral implant use showed a higher proportion of responses to their first implanted sides than children implanted simultaneously. These data indicate that there are functional repercussions of developing binaural hearing through bilateral cochlear implants, particularly when provided sequentially; nonetheless, children have an opportunity to use these devices to hear better in noise and gain spatial hearing.

  11. Temporal prediction errors in visual and auditory cortices.

    Science.gov (United States)

    Lee, Hweeling; Noppeney, Uta

    2014-04-14

    To form a coherent percept of the environment, the brain needs to bind sensory signals emanating from a common source, but to segregate those from different sources [1]. Temporal correlations and synchrony act as prominent cues for multisensory integration [2-4], but the neural mechanisms by which such cues are identified remain unclear. Predictive coding suggests that the brain iteratively optimizes an internal model of its environment by minimizing the errors between its predictions and the sensory inputs [5,6]. This model enables the brain to predict the temporal evolution of natural audiovisual inputs and their statistical (for example, temporal) relationship. A prediction of this theory is that asynchronous audiovisual signals violating the model's predictions induce an error signal that depends on the directionality of the audiovisual asynchrony. As the visual system generates the dominant temporal predictions for visual leading asynchrony, the delayed auditory inputs are expected to generate a prediction error signal in the auditory system (and vice versa for auditory leading asynchrony). Using functional magnetic resonance imaging (fMRI), we measured participants' brain responses to synchronous, visual leading and auditory leading movies of speech, sinewave speech or music. In line with predictive coding, auditory leading asynchrony elicited a prediction error in visual cortices and visual leading asynchrony in auditory cortices. Our results reveal predictive coding as a generic mechanism to temporally bind signals from multiple senses into a coherent percept.

  12. Auditory capture of visual motion: effects on perception and discrimination.

    Science.gov (United States)

    McCourt, Mark E; Leone, Lynnette M

    2016-09-28

    We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.

  13. Loss of form vision impairs spatial imagery.

    Science.gov (United States)

    Occelli, Valeria; Lin, Jonathan B; Lacey, Simon; Sathian, K

    2014-01-01

    Previous studies have reported inconsistent results when comparing spatial imagery performance in the blind and the sighted, with some, but not all, studies demonstrating deficits in the blind. Here, we investigated the effect of visual status and individual preferences ("cognitive style") on performance of a spatial imagery task. Participants with blindness resulting in the loss of form vision at or after age 6, and age- and gender-matched sighted participants, performed a spatial imagery task requiring memorization of a 4 × 4 lettered matrix and subsequent mental construction of shapes within the matrix from four-letter auditory cues. They also completed the Santa Barbara Sense of Direction Scale (SBSoDS) and a self-evaluation of cognitive style. The sighted participants also completed the Object-Spatial Imagery and Verbal Questionnaire (OSIVQ). Visual status affected performance on the spatial imagery task: the blind performed significantly worse than the sighted, independently of the age at which form vision was completely lost. Visual status did not affect the distribution of preferences based on self-reported cognitive style. Across all participants, self-reported verbalizer scores were significantly negatively correlated with accuracy on the spatial imagery task. There was a positive correlation between the SBSoDS score and accuracy on the spatial imagery task, across all participants, indicating that a better sense of direction is related to a more proficient spatial representation and that the imagery task indexes ecologically relevant spatial abilities. Moreover, the older the participants were, the worse their performance was, indicating a detrimental effect of age on spatial imagery performance. Thus, spatial skills represent an important target for rehabilitative approaches to visual impairment, and individual differences, which can modulate performance, should be taken into account in such approaches.

  14. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise it as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity with three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  15. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech.

  16. Auditory distance perception in humans : A summary of past and present research

    NARCIS (Netherlands)

    Zahorik, P.; Brungart, D.S.; Bronkhorst, A.W.

    2005-01-01

    Although auditory distance perception is a critical component of spatial hearing, it has received substantially less scientific attention than the directional aspects of auditory localization. Here we summarize current knowledge on auditory distance perception, with special emphasis on recent

  17. The ability for cocaine and cocaine-associated cues to compete for attention.

    Science.gov (United States)

    Pitchers, Kyle K; Wood, Taylor R; Skrzynski, Cari J; Robinson, Terry E; Sarter, Martin

    2017-03-01

    In humans, reward cues, including drug cues in individuals experiencing addiction, are especially effective in biasing attention towards them, so much so they can disrupt ongoing task performance. It is not known, however, whether this happens in rats. To address this question, we developed a behavioral paradigm to assess the capacity of an auditory drug (cocaine) cue to evoke cocaine-seeking behavior, thus distracting thirsty rats from performing a well-learned sustained attention task (SAT) to obtain a water reward. First, it was determined that an auditory cocaine cue (tone-CS) reinstated drug-seeking equally in sign-trackers (STs) and goal-trackers (GTs), which otherwise vary in the propensity to attribute incentive salience to a localizable drug cue. Next, we tested the ability of an auditory cocaine cue to disrupt performance on the SAT in STs and GTs. Rats were trained to self-administer cocaine intravenously using an Intermittent Access self-administration procedure known to produce a progressive increase in motivation for cocaine, escalation of intake, and strong discriminative stimulus control over drug-seeking behavior. When presented alone, the auditory discriminative stimulus elicited cocaine-seeking behavior while rats were performing the SAT, but it was not sufficiently disruptive to impair SAT performance. In contrast, if cocaine was available in the presence of the cue, or when administered non-contingently, SAT performance was severely disrupted. We suggest that performance on a relatively automatic, stimulus-driven task, such as the basic version of the SAT used here, may be difficult to disrupt with a drug cue alone. A task that requires more top-down cognitive control may be needed.

  18. The influence of spectral distinctiveness on acoustic cue weighting in children's and adults' speech perception

    Science.gov (United States)

    Mayo, Catherine; Turk, Alice

    2005-09-01

    Children and adults appear to weight some acoustic cues differently in perceiving certain speech contrasts. One possible explanation for this difference is that children and adults make use of different strategies in the way that they process speech. An alternative explanation is that adult-child cue weighting differences are due to more general sensory (auditory) processing differences between the two groups. It has been proposed that children may be less able to deal with incomplete or insufficient acoustic information than are adults, and thus may require cues that are longer, louder, or more spectrally distinct to identify or discriminate between auditory stimuli. The current study tested this hypothesis by examining adults' and 3- to 7-year-old children's cue weighting for contrasts in which vowel-onset formant transitions varied from spectrally distinct (/no/-/mo/, /do/-/bo/, and /ta/-/da/) to spectrally similar (/ni/-/mi/, /de/-/be/, and /ti/-/di/). Spectrally distinct cues were more likely to yield different consonantal responses than were spectrally similar cues, for all listeners. Furthermore, as predicted by a sensory hypothesis, children were less likely to give different consonantal responses to stimuli distinguished by spectrally similar transitional cues than were adults. However, this pattern of behavior did not hold for all contrasts. Implications for theories of adult-child cue weighting differences are discussed.

  19. Reactivity to nicotine cues over repeated cue reactivity sessions

    Science.gov (United States)

    LaRowe, Steven D.; Saladin, Michael E.; Carpenter, Matthew J.; Upadhyaya, Himanshu P.

    2009-01-01

    The present study investigated whether reactivity to nicotine-related cues would attenuate across four experimental sessions held one week apart. Participants were nineteen non-treatment seeking, nicotine-dependent males. Cue reactivity sessions were performed in an outpatient research center using in vivo cues consisting of standardized smoking-related paraphernalia (e.g., cigarettes) and neutral comparison paraphernalia (e.g., pencils). Craving ratings were collected before and after both cue presentations while physiological measures (heart rate, skin conductance) were collected before and during the cue presentations. Although craving levels decreased across sessions, smoking-related cues consistently evoked significantly greater increases in craving relative to neutral cues over all four experimental sessions. Skin conductance was higher in response to smoking cues, though this effect was not as robust as that observed for craving. Results suggest that, under the described experimental parameters, craving can be reliably elicited over repeated cue reactivity sessions. PMID:17537583

  20. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues ... of training, which seemed to be specific to the stimuli containing resolved harmonics. Finally, a functional magnetic resonance imaging paradigm was used to examine the response of the auditory cortex to resolved and unresolved harmonics in musicians and non-musicians. The neural responses in musicians were enhanced relative to the non-musicians for both resolved and unresolved harmonics in the right auditory cortex, right frontal regions and inferior colliculus. However, the increase in neural activation in the right auditory cortex of musicians was predictive of the increased pitch ...
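
    The resolved/unresolved distinction at the center of this work is easy to make concrete in stimulus code. The Python/NumPy sketch below is only an illustration of the stimulus class (the harmonic ranges and names are assumptions, not the thesis' exact stimuli): both complexes share a 220 Hz fundamental, but the second contains only high-numbered harmonics that cochlear filters cannot resolve individually, so its pitch must be derived from temporal envelope cues.

        import numpy as np

        def harmonic_complex(f0, harmonics, dur=0.5, fs=44100):
            """Equal-amplitude cosine-phase harmonic complex tone."""
            t = np.arange(int(dur * fs)) / fs
            return sum(np.cos(2 * np.pi * f0 * h * t) for h in harmonics)

        resolved = harmonic_complex(220.0, range(2, 11))     # low, resolved harmonics
        unresolved = harmonic_complex(220.0, range(13, 21))  # high, unresolved harmonics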

  1. Cues and expressions

    Directory of Open Access Journals (Sweden)

    Thorbjörg Hróarsdóttir

    2005-02-01

    A number of European languages have undergone a change from object-verb to verb-object order. We focus on the change in English and Icelandic, showing that while the structural change was the same, it took place at different times and in different ways in the two languages, triggered by different E-language changes. As seen from the English viewpoint, low-level facts of inflectional morphology may express the relevant cue for parameters, and so the loss of inflection may lead to a grammar change. This analysis does not carry over to Icelandic, as the loss of OV there took place despite rich case morphology. We aim to show how this can be explained within a cue-style approach, arguing for a universal set of cues. However, the relevant cue may be expressed differently among languages: while it may have been expressed through morphology in English, it was expressed through information structure in Icelandic. In both cases, external effects led to fewer expressions of the relevant (universal) cue and a grammar change took place.

  2. Age differences in visual-auditory self-motion perception during a simulated driving task

    Directory of Open Access Journals (Sweden)

    Robert Ramkhalawansingh

    2016-04-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e. optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e. engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.

  3. Acquisition of Conditioning between Methamphetamine and Cues in Healthy Humans.

    Directory of Open Access Journals (Sweden)

    Joel S Cavallo

    Environmental stimuli repeatedly paired with drugs of abuse can elicit conditioned responses that are thought to promote future drug seeking. We recently showed that healthy volunteers acquired conditioned responses to auditory and visual stimuli after just two pairings with methamphetamine (MA, 20 mg, oral). This study extended these findings by systematically varying the number of drug-stimuli pairings. We expected that more pairings would result in stronger conditioning. Three groups of healthy adults were randomly assigned to receive 1, 2 or 4 pairings (Groups P1, P2 and P4, Ns = 13, 16, 16, respectively) of an auditory-visual stimulus with MA, and another stimulus with placebo (PBO). Drug-cue pairings were administered in an alternating, counterbalanced order, under double-blind conditions, during 4 hr sessions. MA produced prototypic subjective effects (mood, ratings of drug effects) and alterations in physiology (heart rate, blood pressure). Although subjects did not exhibit increased behavioral preference for, or emotional reactivity to, the MA-paired cue after conditioning, they did exhibit an increase in attentional bias (initial gaze toward the drug-paired stimulus). Further, subjects who had four pairings reported "liking" the MA-paired cue more than the PBO cue after conditioning. Thus, the number of drug-stimulus pairings, varying from one to four, had only modest effects on the strength of conditioned responses. Further studies investigating the parameters under which drug conditioning occurs will help to identify risk factors for developing drug abuse, and provide new treatment strategies.

  4. Effect of Unilateral Temporal Lobe Resection on Short‐Term Memory for Auditory Object and Sound Location

    National Research Council Canada - National Science Library

    Lancelot, Céline; Samson, Séverine; Ahad, Pierre; Baulac, Michel

    2003-01-01

    Abstract: To investigate auditory spatial and nonspatial short-term memory, a sound location discrimination task and an auditory object discrimination task were used in patients with medial temporal lobe resection...

  5. Viewpoint-independent contextual cueing effect

    Directory of Open Access Journals (Sweden)

    Taiga Tsuchiai

    2012-06-01

    We usually perceive things in our surroundings as unchanged despite viewpoint changes caused by self-motion. The visual system therefore must have a function to process objects independently of viewpoint. In this study, we examined whether viewpoint-independent spatial layout can be obtained implicitly. For this purpose, we used a contextual cueing effect, a learning effect of spatial layout in visual search displays known to be an implicit effect. We compared the transfer of the contextual cueing effect between cases with and without self-motion by using visual search displays for 3D objects, which changed according to the participant's assumed location for viewing the stimuli. The contextual cueing effect was obtained with self-motion but disappeared when the display changed without self-motion. This indicates that there is an implicit learning effect in spatial coordinates and suggests that the spatial representation of object layouts or scenes can be obtained and updated implicitly. We also showed that binocular disparity plays an important role in the layout representations.

  6. Virtual reality cues for binge drinking in college students.

    Science.gov (United States)

    Ryan, Joseph J; Kreiner, David S; Chapman, Marla D; Stark-Wroblewski, Kim

    2010-04-01

    We investigated the ability of virtual reality (VR) cue exposure to trigger a desire for alcohol among binge-drinking students. Fifteen binge-drinking college students and eight students who were nonbingers were immersed into a neutral-cue environment or room (underwater scenes), followed by four alcohol-cue rooms (bar, party, kitchen, argument), followed by a repeat of the neutral room. The virtual rooms were computer generated via head-mounted visual displays with associated auditory and olfactory stimuli. In each room, participants reported their subjective cravings for alcohol, the amount of attention given to the sight and smell of alcohol, and how much they were thinking of drinking. A 2 x 6 (type of drinker by VR room) repeated measures ANOVA was conducted on the responses to each question. After alcohol exposure, binge drinkers reported significantly higher cravings for and thoughts of alcohol than nonbinge drinkers, whereas differences between the groups following the neutral rooms were not significant.

  7. Predicting speech release from masking through spatial separation in distance

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; Dau, Torsten

    2014-01-01

    Speech intelligibility models typically consist of a preprocessing part that transforms stimuli into some internal (auditory) representation and a decision metric that relates the internal representation to speech intelligibility. This study investigated speech intelligibility in conditions ... -term monaural model based on the SNRenv metric predicted a small SRM only in the noise-masker condition. The results suggest that true binaural processing is not always crucial to account for speech intelligibility in spatial conditions and that an SNR metric in the envelope domain appears to be more appropriate in conditions of on-axis spatial speech segregation than the conventional SNR. Additionally, none of the models considered grouping cues, which seem to play an important role in the conditions studied.

  8. Perceptual Plasticity for Auditory Object Recognition

    Science.gov (United States)

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples

  9. Perceptual Plasticity for Auditory Object Recognition

    Directory of Open Access Journals (Sweden)

    Shannon L. M. Heald

    2017-05-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as "noise" in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we

  10. Persistent fluctuations in stride intervals under fractal auditory stimulation.

    Science.gov (United States)

    Marmelat, Vivien; Torre, Kjerstin; Beek, Peter J; Daffertshofer, Andreas

    2014-01-01

    Stride sequences of healthy gait are characterized by persistent long-range correlations, which become anti-persistent in the presence of an isochronous metronome. The latter phenomenon is of particular interest because auditory cueing is generally considered to reduce stride variability and may hence be beneficial for stabilizing gait. Complex systems tend to match their correlation structure when synchronizing. In gait training, can one capitalize on this tendency by using a fractal metronome rather than an isochronous one? We examined whether auditory cues with fractal variations in inter-beat intervals yield similar fractal inter-stride interval variability as isochronous auditory cueing in two complementary experiments. In Experiment 1, participants walked on a treadmill while being paced by either an isochronous or a fractal metronome with different variation strengths between beats in order to test whether participants managed to synchronize with a fractal metronome and to determine the necessary amount of variability for participants to switch from anti-persistent to persistent inter-stride intervals. Participants did synchronize with the metronome despite its fractal randomness. The corresponding coefficient of variation of inter-beat intervals was fixed in Experiment 2, in which participants walked on a treadmill while being paced by non-isochronous metronomes with different scaling exponents. As expected, inter-stride intervals showed persistent correlations similar to self-paced walking only when cueing contained persistent correlations. Our results open up a new window to optimize rhythmic auditory cueing for gait stabilization by integrating fractal fluctuations in the inter-beat intervals.
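
    A fractal metronome of the kind used in these experiments can be synthesized by shaping the power spectrum of the inter-beat interval series. The Python/NumPy sketch below is a generic spectral-synthesis recipe rather than the authors' implementation; the tempo, variability, and names are illustrative assumptions. beta = 0 yields white (uncorrelated) intervals, while beta near 1 yields the persistent, 1/f-like structure characteristic of self-paced walking.

        import numpy as np

        def fractal_intervals(n, beta, mean_ms=1000.0, sd_ms=25.0, seed=0):
            """Inter-beat intervals whose power spectrum follows 1/f**beta."""
            rng = np.random.default_rng(seed)
            f = np.fft.rfftfreq(n)[1:]                       # positive frequencies
            amp = f ** (-beta / 2.0)                         # shape the spectrum
            phase = rng.uniform(0.0, 2.0 * np.pi, f.size)    # random phases
            spec = np.concatenate(([0.0], amp * np.exp(1j * phase)))
            x = np.fft.irfft(spec, n)                        # correlated series
            x = (x - x.mean()) / x.std()                     # standardize
            return mean_ms + sd_ms * x                       # scale to gait tempo

        onsets_ms = np.cumsum(fractal_intervals(512, beta=1.0))  # beat times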

  11. A configural dominant account of contextual cueing: Configural cues are stronger than colour cues.

    Science.gov (United States)

    Kunar, Melina A; John, Rebecca; Sweetman, Hollie

    2014-01-01

    Previous work has shown that reaction times to find a target in displays that have been repeated are faster than those for displays that have never been seen before. This learning effect, termed "contextual cueing" (CC), has been shown using contexts such as the configuration of the distractors in the display and the background colour. However, it is not clear how these two contexts interact to facilitate search. We investigated this here by comparing the strengths of these two cues when they appeared together. In Experiment 1, participants searched for a target that was cued by both colour and distractor configural cues, compared with when the target was only predicted by configural information. The results showed that the addition of a colour cue did not increase contextual cueing. In Experiment 2, participants searched for a target that was cued by both colour and distractor configuration compared with when the target was only cued by colour. The results showed that adding a predictive configural cue led to a stronger CC benefit. Experiments 3 and 4 tested the disruptive effects of removing either a learned colour cue or a learned configural cue and whether there was cue competition when colour and configural cues were presented together. Removing the configural cue was more disruptive to CC than removing colour, and configural learning was shown to overshadow the learning of colour cues. The data support a configural dominant account of CC, where configural cues act as the stronger cue in comparison to colour when they are presented together.

  12. Frequency encoded auditory display of the critical tracking task

    Science.gov (United States)

    Stevenson, J.

    1984-01-01

    The use of auditory displays for selected cockpit instruments was examined. Auditory, visual, and combined auditory-visual compensatory displays of a vertical-axis critical tracking task were studied. The visual display encoded vertical error as the position of a dot on a 17.78 cm, center-marked CRT. The auditory display encoded vertical error as log frequency with a six-octave range; the center point at 1 kHz was marked by a 20-dB amplitude notch, one-third octave wide. At asymptote, performance on the critical tracking task was slightly, but significantly, better with the combined display than with the visual-only mode. The maximum controllable bandwidth using the auditory mode was only 60% of the maximum controllable bandwidth using the visual mode. Redundant cueing increased the rate of improvement of tracking performance and the asymptotic performance level. This enhancement increases with the amount of redundant cueing used. This effect appears most prominent when the bandwidth of the forcing function is substantially less than the upper limit of controllability frequency.
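
    The display's error-to-frequency mapping (a six-octave log-frequency scale centered at 1 kHz) reduces to a one-line formula. The Python sketch below is a plausible reconstruction that assumes tracking error is normalized to [-1, 1]; the study's exact scaling is not given here, so treat the mapping details as assumptions.

        import numpy as np

        def error_to_frequency(error, f_center=1000.0, octaves=6.0):
            """Map normalized vertical error in [-1, 1] onto a log-frequency
            axis: zero error sits at the 1 kHz center (marked by the notch),
            and the extremes span three octaves up or down."""
            return f_center * 2.0 ** (np.clip(error, -1.0, 1.0) * octaves / 2.0)

        print(error_to_frequency(0.0))    # 1000.0 Hz (center)
        print(error_to_frequency(1.0))    # 8000.0 Hz (three octaves up)
        print(error_to_frequency(-1.0))   # 125.0 Hz (three octaves down)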

  13. Part-set cueing impairment & facilitation in semantic memory.

    Science.gov (United States)

    Kelley, Matthew R; Parihar, Sushmeena A

    2018-01-19

    The present study explored the influence of part-set cues in semantic memory using tests of "free" recall, reconstruction of order, and serial recall. Nine distinct categories of information were used (e.g., Zodiac signs, Harry Potter books, Star Wars films, planets). The results showed part-set cueing impairment for all three "free" recall sets, whereas part-set cueing facilitation was evident for five of the six ordered sets. Generally, the present results parallel those often observed across episodic tasks, which could indicate that similar mechanisms contribute to part-set cueing effects in both episodic and semantic memory. A novel anchoring explanation of part-set cueing facilitation in order and spatial tasks is provided.

  14. An examination of cue redundancy theory in cross-cultural decoding of emotions in music.

    Science.gov (United States)

    Kwoun, Soo-Jin

    2009-01-01

    The present study investigated the effects of structural features of music (i.e., variations in tempo, loudness, or articulation, etc.) and cultural and learning factors in the assignment of emotional meaning in music. Four participant groups, young Koreans, young Americans, older Koreans, and older Americans, rated emotional expressions of Korean folksongs with three adjective scales: happiness, sadness and anger. The results of the study are in accordance with the Cue Redundancy model of emotional perception in music, indicating that expressive music embodies both universal auditory cues that communicate the emotional meanings of music across cultures and culture-specific cues that result from cultural convention.

  15. Music and metronome cues produce different effects on gait spatiotemporal measures but not gait variability in healthy older adults.

    Science.gov (United States)

    Wittwer, Joanne E; Webster, Kate E; Hill, Keith

    2013-02-01

    Rhythmic auditory cues including music and metronome beats have been used, sometimes interchangeably, to improve disordered gait arising from a range of clinical conditions. There has been limited investigation into whether there are optimal cue types. Different cue types have produced inconsistent effects across groups which differed in both age and clinical condition. The possible effect of normal ageing on response to different cue types has not been reported for gait. The aim of this study was to determine the effects of both rhythmic music and metronome cues on gait spatiotemporal measures (including variability) in healthy older people. Twelve women and seven men (>65 years) walked on an instrumented walkway at comfortable pace and then in time to each of rhythmic music and metronome cues at comfortable-pace stepping frequency. Music but not metronome cues produced a significant increase in group mean gait velocity of 4.6 cm/s, due mostly to a significant increase in group mean stride length of 3.1 cm. Both cue types produced a significant but small increase in cadence of 1 step/min. Mean spatiotemporal variability was low at baseline and did not increase with either cue type, suggesting cues did not disrupt gait timing. Study findings suggest music and metronome cues may not be used interchangeably, and cue type as well as frequency should be considered when evaluating effects of rhythmic auditory cueing on gait. Further work is required to determine whether optimal cue types and frequencies to improve walking in different clinical groups can be identified.

  16. Composition: Cue Wheel

    DEFF Research Database (Denmark)

    Bergstrøm-Nielsen, Carl

    2014-01-01

    Cue Rondo is an open composition to be realised by improvising musicians. See more about my composition practice in the entry "Composition - General Introduction". This work is licensed under a Creative Commons "by-nc" License. You may for non-commercial purposes use and distribute it, performanc...

  17. Neural cross-correlation and signal decorrelation: insights into coding of auditory space.

    Science.gov (United States)

    Saberi, Kourosh; Petrosyan, Agavni

    2005-07-07

    The auditory systems of humans and many other species use the difference in the time of arrival of acoustic signals at the two ears to compute the lateral position of sound sources. This computation is assumed to initially occur in an assembly of neurons organized along a frequency-by-delay surface. Mathematically, the computations are equivalent to a two-dimensional cross-correlation of the input signals at the two ears, with the position of the peak activity along this surface designating the position of the source in space. In this study, partially correlated signals to the two ears are used to probe the mechanisms for encoding spatial cues in stationary or dynamic (moving) signals. It is demonstrated that a cross-correlation model of the auditory periphery coupled with statistical decision theory can predict the patterns of performance by human subjects for both stationary and motion stimuli as a function of stimulus decorrelation. Implications of these findings for the existence of a unique cortical motion system are discussed.
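
    The core computation of such a model is simple to sketch. The Python/NumPy example below is a minimal illustration, not the authors' model (which also includes peripheral frequency analysis and a decision stage): it estimates ITD as the peak lag of the normalized interaural cross-correlation within physiological limits, and mimics partial decorrelation by mixing independent noise into each ear.

        import numpy as np

        def itd_by_crosscorr(left, right, fs, max_lag_us=700.0):
            """ITD estimate: lag (in microseconds) of the peak of the
            normalized interaural cross-correlation."""
            max_lag = int(fs * max_lag_us / 1e6)
            lags = np.arange(-max_lag, max_lag + 1)
            denom = np.sqrt(np.dot(left, left) * np.dot(right, right))
            cc = [np.dot(left[max(0, -k):len(left) - max(0, k)],
                         right[max(0, k):len(right) - max(0, -k)]) / denom
                  for k in lags]
            return lags[int(np.argmax(cc))] / fs * 1e6

        fs = 48000
        rng = np.random.default_rng(1)
        src = rng.standard_normal(fs)                 # 1 s noise source
        d = 12                                        # 12 samples = 250 us
        left, right = src[d:], src[:-d]               # left ear leads
        print(itd_by_crosscorr(left, right, fs))      # recovers ~250 us

        rho = 0.5                                     # interaural decorrelation
        nl, nr = rng.standard_normal((2, left.size))
        left_d = rho * left + np.sqrt(1 - rho**2) * nl
        right_d = rho * right + np.sqrt(1 - rho**2) * nr
        print(itd_by_crosscorr(left_d, right_d, fs))  # peak lower, less reliable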

  18. Timing the events of directional cueing.

    Science.gov (United States)

    Girardi, Giovanna; Antonucci, Gabriella; Nico, Daniele

    2015-11-01

    To explore the role of temporal context on voluntary orienting of attention, we submitted healthy participants to a spatial cueing task in which cue-target stimulus onset asynchronies (SOAs) were organized according to two parameters: range and central value. Three ranges of SOAs organized around two central SOA values were presented to six groups of participants. Results showed a complex pattern of responses in terms of spatial validity (faster responses to correctly cued targets) and preparatory effect (faster responses at longer SOAs). Responses to validly and neutrally cued targets were affected by the increase in SOA duration if the difference between the longer and shorter SOA was large. On the contrary, responses to invalidly cued targets did not vary according to SOA manipulations. The observed pattern of cueing effects does not fit the typical description of spatial attention working as a mandatory disengaging-shifting-engaging routine. Instead, results suggest a mechanism based on the interaction between context-sensitive top-down processes and bottom-up attentional processes.

  19. BAER - brainstem auditory evoked response

    Science.gov (United States)

    Alternative names: Auditory potentials; Brainstem auditory evoked potentials; Evoked response audiometry; Auditory brainstem response; ABR; BAEP. Normal results vary. Results will depend on the person and the instruments used to perform the test.

  20. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... role. Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  1. Analysis of Parallel and Transverse Visual Cues on the Gait of Individuals with Idiopathic Parkinson's Disease

    Science.gov (United States)

    de Melo Roiz, Roberta; Azevedo Cacho, Enio Walker; Cliquet, Alberto, Jr.; Barasnevicius Quagliato, Elizabeth Maria Aparecida

    2011-01-01

    Idiopathic Parkinson's disease (IPD) has been defined as a chronic progressive neurological disorder with characteristics that generate changes in gait pattern. Several studies have reported that appropriate external influences, such as visual or auditory cues may improve the gait pattern of patients with IPD. Therefore, the objective of this…

  2. Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues

    Science.gov (United States)

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2009-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…

  3. Functional dissociation of transient and sustained fMRI BOLD components in human auditory cortex revealed with a streaming paradigm based on interaural time differences.

    Science.gov (United States)

    Schadwinkel, Stefan; Gutschalk, Alexander

    2010-12-01

    A number of physiological studies suggest that feature-selective adaptation is relevant to the pre-processing for auditory streaming, the perceptual separation of overlapping sound sources. Most of these studies are focused on spectral differences between streams, which are considered most important for streaming. However, spatial cues also support streaming, alone or in combination with spectral cues, but physiological studies of spatial cues for streaming remain scarce. Here, we investigate whether the tuning of selective adaptation for interaural time differences (ITD) coincides with the range where streaming perception is observed. FMRI activation that has been shown to adapt depending on the repetition rate was studied with a streaming paradigm where two tones were differently lateralized by ITD. Listeners were presented with five different ΔITD conditions (62.5, 125, 187.5, 343.75, or 687.5 μs) out of an active baseline with no ΔITD during fMRI. The results showed reduced adaptation for conditions with ΔITD ≥ 125 μs, reflected by enhanced sustained BOLD activity. The percentage of streaming perception for these stimuli increased from approximately 20% for ΔITD = 62.5 μs to > 60% for ΔITD = 125 μs. No further sustained BOLD enhancement was observed when the ΔITD was increased beyond ΔITD = 125 μs, whereas the streaming probability continued to increase up to 90% for ΔITD = 687.5 μs. Conversely, the transient BOLD response, at the transition from baseline to ΔITD blocks, increased most prominently as ΔITD was increased from 187.5 to 343.75 μs. These results demonstrate a clear dissociation of transient and sustained components of the BOLD activity in auditory cortex. © 2010 The Authors. European Journal of Neuroscience © 2010 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  4. Listeners use speaker identity to access representations of spatial perspective during online language comprehension.

    Science.gov (United States)

    Ryskin, Rachel A; Wang, Ranxiao Frances; Brown-Schmidt, Sarah

    2016-02-01

    Little is known about how listeners represent another person's spatial perspective during language processing (e.g., two people looking at a map from different angles). Can listeners use contextual cues such as speaker identity to access a representation of the interlocutor's spatial perspective? In two eye-tracking experiments, participants received auditory instructions to move objects around a screen from two randomly alternating spatial perspectives (45° vs. 315° or 135° vs. 225° rotations from the participant's viewpoint). Instructions were spoken either by one voice, where the speaker's perspective switched at random, or by two voices, where each speaker maintained one perspective. Analysis of participant eye-gaze showed that interpretation of the instructions improved when each viewpoint was associated with a different voice. These findings demonstrate that listeners can learn mappings between individual talkers and viewpoints, and use these mappings to guide online language processing. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Experience with speech sounds is not necessary for cue trading by budgerigars (Melopsittacus undulatus).

    Science.gov (United States)

    Flaherty, Mary; Dent, Micheal L; Sawusch, James R

    2017-01-01

    The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable initial stop consonants and if this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with "d" or "t" and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal.

  6. Experience with speech sounds is not necessary for cue trading by budgerigars (Melopsittacus undulatus).

    Directory of Open Access Journals (Sweden)

    Mary Flaherty

    Full Text Available The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable initial stop consonants and if this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with "d" or "t" and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal.

  7. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer’s work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory...

  8. Cue conflicts in context

    DEFF Research Database (Denmark)

    Boeg Thomsen, Ditte; Poulsen, Mads

    2015-01-01

    When learning their first language, children develop strategies for assigning semantic roles to sentence structures, depending on morphosyntactic cues such as case and word order. Traditionally, comprehension experiments have presented transitive clauses in isolation, and crosslinguistically...... preschoolers. However, object-first clauses may be context-sensitive structures, which are infelicitous in isolation. In a second act-out study we presented OVS clauses in supportive and unsupportive discourse contexts and in isolation and found that five-to-six-year-olds’ OVS comprehension was enhanced...... in discourse-pragmatically felicitous contexts. Our results extend previous findings of preschoolers’ sensitivity to discourse-contextual cues in sentence comprehension (Hurewitz, 2001; Song & Fisher, 2005) to the basic task of assigning agent and patient roles....

  9. Auditory localisation of conventional and electric cars : laboratory results and implications for cycling safety.

    OpenAIRE

    Stelling-Konczak, A. & Hagenzieker, M.P.

    2016-01-01

    When driven at low speeds, cars operating in electric mode have been found to be quieter than conventional cars. As a result, the auditory cues which pedestrians and cyclists use to assess the presence, proximity and location of oncoming traffic may be reduced, posing a safety hazard. This laboratory study examined auditory localisation of conventional and electric cars including vehicle motion paths relevant for cycling activity. Participants (N = 65) in three age groups (16–18, 30–40 and 65–70...

  10. Mind your pricing cues.

    Science.gov (United States)

    Anderson, Eric; Simester, Duncan

    2003-09-01

    For most of the items they buy, consumers don't have an accurate sense of what the price should be. Ask them to guess how much a four-pack of 35-mm film costs, and you'll get a variety of wrong answers: Most people will underestimate; many will only shrug. Research shows that consumers' knowledge of the market is so far from perfect that it hardly deserves to be called knowledge at all. Yet people happily buy film and other products every day. Is this because they don't care what kind of deal they're getting? No. Remarkably, it's because they rely on retailers to tell them whether they're getting a good price. In subtle and not-so-subtle ways, retailers send signals to customers, telling them whether a given price is relatively high or low. In this article, the authors review several common pricing cues retailers use--"sale" signs, prices that end in 9, signpost items, and price-matching guarantees. They also offer some surprising facts about how--and how well--those cues work. For instance, the authors' tests with several mail-order catalogs reveal that including the word "sale" beside a price can increase demand by more than 50%. The practice of using a 9 at the end of a price to denote a bargain is so common, you'd think customers would be numb to it. Yet in a study the authors did involving a women's clothing catalog, they increased demand by a third just by changing the price of a dress from $34 to $39. Pricing cues are powerful tools for guiding customers' purchasing decisions, but they must be applied judiciously. Used inappropriately, the cues may breach customers' trust, reduce brand equity, and give rise to lawsuits.

  11. Measuring Auditory Selective Attention using Frequency Tagging

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Full Text Available Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory

  12. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  13. Effect of harmonicity on the detection of a signal in a complex masker and on spatial release from masking.

    Directory of Open Access Journals (Sweden)

    Astrid Klinge

    Full Text Available The amount of masking of sounds from one source (signals) by sounds from a competing source (maskers) heavily depends on the sound characteristics of the masker and the signal and on their relative spatial location. Numerous studies have investigated the ability to detect a signal in a speech or a noise masker, or the effect of spatial separation of signal and masker on the amount of masking, but there is a lack of studies investigating the combined effects of many cues on masking, as is typical of natural listening situations. The current study, using free-field listening, systematically evaluates the combined effects of harmonicity and inharmonicity cues in multi-tone maskers and of cues resulting from spatial separation of target signal and masker on the detection of a pure tone in a multi-tone or a noise masker. A linear binaural processing model was implemented to predict the masked thresholds in order to estimate whether the observed thresholds can be accounted for by energetic masking in the auditory periphery or whether other effects are involved. Thresholds were determined for combinations of two target frequencies (1 and 8 kHz), two spatial configurations (masker and target either co-located or spatially separated by 90 degrees azimuth), and five different masker types (four complex multi-tone stimuli, one noise masker). A spatial separation of target and masker resulted in a release from masking for all masker types. The amount of masking significantly depended on the masker type and frequency range. The various harmonic and inharmonic relations between target and masker, or between components of the masker, resulted in a complex pattern of increased or decreased masked thresholds in comparison to the predicted energetic masking. The results indicate that harmonicity cues affect the detectability of a tonal target in a complex masker.

  14. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.

  15. Effects of sequential streaming on auditory masking using psychoacoustics and auditory evoked potentials.

    Science.gov (United States)

    Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat

    2012-03-01

    The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed that adding precursors comprising only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects, preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands may also contribute to the reduction of CMR in the streaming conditions; however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in the EEG data using the MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in the CMR experiments. Dipole source analysis indicated that different cortical regions are involved in processing auditory streaming and modulation detection. In particular, neural sources for processing auditory streaming include cortical regions involved in decision-making. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. The effects of speech motor preparation on auditory perception

    Science.gov (United States)

    Myers, John

    Perception and action are coupled via bidirectional relationships between sensory and motor systems. Motor systems influence sensory areas by imparting a feedforward influence on sensory processing termed "motor efference copy" (MEC). MEC is suggested to occur in humans because speech preparation and production modulate neural measures of auditory cortical activity. However, it is not known if MEC can affect auditory perception. We tested the hypothesis that during speech preparation auditory thresholds will increase relative to a control condition, and that the increase would be most evident for frequencies that match the upcoming vocal response. Participants performed trials in a speech condition that contained a visual cue indicating a vocal response to prepare (one of two frequencies), followed by a go signal to speak. To determine threshold shifts, voice-matched or -mismatched pure tones were presented at one of three time points between the cue and target. The control condition was the same except the visual cues did not specify a response and subjects did not speak. For each participant, we measured f0 thresholds in isolation from the task in order to establish baselines. Results indicated that auditory thresholds were highest during speech preparation, relative to baselines and a non-speech control condition, especially at suprathreshold levels. Thresholds for tones that matched the frequency of planned responses gradually increased over time, but sharply declined for the mismatched tones shortly before targets. Findings support the hypothesis that MEC influences auditory perception by modulating thresholds during speech preparation, with some specificity relative to the planned response. The threshold increase in tasks vs. baseline may reflect attentional demands of the tasks.

  17. Sensitivity of cochlear nucleus neurons to spatio-temporal changes in auditory nerve activity.

    Science.gov (United States)

    Wang, Grace I; Delgutte, Bertrand

    2012-12-01

    The spatio-temporal pattern of auditory nerve (AN) activity, representing the relative timing of spikes across the tonotopic axis, contains cues to perceptual features of sounds such as pitch, loudness, timbre, and spatial location. These spatio-temporal cues may be extracted by neurons in the cochlear nucleus (CN) that are sensitive to relative timing of inputs from AN fibers innervating different cochlear regions. One possible mechanism for this extraction is "cross-frequency" coincidence detection (CD), in which a central neuron converts the degree of coincidence across the tonotopic axis into a rate code by preferentially firing when its AN inputs discharge in synchrony. We used Huffman stimuli (Carney LH. J Neurophysiol 64: 437-456, 1990), which have a flat power spectrum but differ in their phase spectra, to systematically manipulate relative timing of spikes across tonotopically neighboring AN fibers without changing overall firing rates. We compared responses of CN units to Huffman stimuli with responses of model CD cells operating on spatio-temporal patterns of AN activity derived from measured responses of AN fibers with the principle of cochlear scaling invariance. We used the maximum likelihood method to determine the CD model cell parameters most likely to produce the measured CN unit responses, and thereby could distinguish units behaving like cross-frequency CD cells from those consistent with same-frequency CD (in which all inputs would originate from the same tonotopic location). We find that certain CN unit types, especially those associated with globular bushy cells, have responses consistent with cross-frequency CD cells. A possible functional role of a cross-frequency CD mechanism in these CN units is to increase the dynamic range of binaural neurons that process cues for sound localization.
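
    As a toy illustration of the cross-frequency coincidence-detection idea, the sketch below (all rates, windows, and stimulus parameters invented for illustration) feeds a model cell with two Poisson "AN fibres" phase-locked to the same tone and counts near-simultaneous spikes; the output rate falls as the relative spike timing of the inputs drifts apart, converting timing across the tonotopic axis into a rate code.

        import numpy as np

        rng = np.random.default_rng(1)
        dt, dur = 1e-4, 1.0                          # 0.1 ms bins, 1 s of "stimulus"
        t = np.arange(0, dur, dt)

        def an_fibre(rate_hz, phase):
            """Toy AN fibre: Poisson spikes phase-locked to a 500 Hz tone."""
            drive = rate_hz * (1 + np.cos(2 * np.pi * 500 * t + phase)) * dt
            return rng.random(t.size) < drive

        def cd_rate(phase_offset, window_bins=3):
            """Coincidence-detector output rate for two inputs from neighbouring CFs;
            the phase offset stands in for a cochlear travelling-wave delay."""
            a, b = an_fibre(100, 0.0), an_fibre(100, phase_offset)
            k = np.ones(window_bins)
            near_a = np.convolve(a, k, "same") > 0   # a spike within +/-0.15 ms
            near_b = np.convolve(b, k, "same") > 0
            return (near_a & near_b).sum() / dur

        for phi in (0.0, np.pi / 2, np.pi):
            print(f"input phase offset {phi:.2f} rad -> CD output {cd_rate(phi):.0f} Hz")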

  18. fMRI of the auditory system: understanding the neural basis of auditory gestalt.

    Science.gov (United States)

    Di Salle, Francesco; Esposito, Fabrizio; Scarabino, Tommaso; Formisano, Elia; Marciano, Elio; Saulino, Claudio; Cirillo, Sossio; Elefante, Raffaele; Scheffler, Klaus; Seifritz, Erich

    2003-12-01

    Functional magnetic resonance imaging (fMRI) has rapidly become the most widely used imaging method for studying brain functions in humans. This is a result of its extreme flexibility of use and of the astonishingly detailed spatial and temporal information it provides. Nevertheless, until very recently, the study of the auditory system has progressed at a considerably slower pace compared to other functional systems. Several factors have limited fMRI research in the auditory field, including some intrinsic features of auditory functional anatomy and some peculiar interactions between the fMRI technique and audition. A well-known difficulty arises from the high-intensity acoustic noise produced by gradient switching in echo-planar imaging (EPI), as well as in other fMRI sequences more similar to conventional MR sequences. The acoustic noise interacts in an unpredictable way with the experimental stimuli, both perceptually and in the evoked hemodynamics. To overcome this problem, different approaches have been proposed recently that generally require careful tailoring of the experimental design and the fMRI methodology to the specific requirements posed by auditory research. These novel methodological approaches can make the fMRI exploration of auditory processing much easier and more reliable, and may thus permit closing the gap with other fields of neuroscience research. As a result, some fundamental neural underpinnings of audition are being clarified, and the way sound stimuli are integrated into the auditory gestalt is beginning to be understood.

  19. Self-grounding visual, auditory and olfactory autobiographical memories.

    Science.gov (United States)

    Knez, Igor; Ljunglöf, Louise; Arshamian, Artin; Willander, Johan

    2017-07-01

    Given that autobiographical memory provides a cognitive foundation for the self, we investigated the relative importance of visual, auditory and olfactory autobiographical memories for the self. Thirty subjects, with a mean age of 35.4 years, participated in a study involving a three × three within-subject design containing nine different types of autobiographical memory cues: pictures, sounds and odors presented with neutral, positive and negative valences. It was shown that visual autobiographical memories, compared to auditory and olfactory ones, involved higher cognitive and emotional constituents for the self. Furthermore, there was a trend for positive autobiographical memories to contribute an increasing proportion of both cognitive and emotional components of the self, from olfactory to auditory to visually cued autobiographical memories, with a reverse trend for negative autobiographical memories. Finally, and independently of modality, positive affective states were shown to be more involved in autobiographical memory than negative ones. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  1. Review: Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Ja'fari

    2003-01-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  2. Prolonged Walking with a Wearable System Providing Intelligent Auditory Input in People with Parkinson's Disease.

    Science.gov (United States)

    Ginis, Pieter; Heremans, Elke; Ferrari, Alberto; Dockx, Kim; Canning, Colleen G; Nieuwboer, Alice

    2017-01-01

    Rhythmic auditory cueing is a well-accepted tool for gait rehabilitation in Parkinson's disease (PD), which can now be applied in a performance-adapted fashion due to technological advances. This study investigated the immediate effects on gait of a prolonged (30 min) walk with performance-adapted (intelligent) auditory cueing and verbal feedback provided by a wearable sensor-based system, as alternatives to traditional cueing. Additionally, potential effects on self-perceived fatigue were assessed. Twenty-eight people with PD and 13 age-matched healthy elderly (HE) performed four 30 min walks with a wearable cue and feedback system. In randomized order, participants received: (1) continuous auditory cueing; (2) intelligent cueing (10 metronome beats triggered by a deviating walking rhythm); (3) intelligent feedback (verbal instructions triggered by a deviating walking rhythm); and (4) no external input. Fatigue was self-scored at rest and after walking during each session. The results showed that while HE were able to maintain cadence for 30 min in all conditions, cadence in PD declined significantly without input. With continuous cueing and intelligent feedback, people with PD were able to maintain cadence (p = 0.04), although they were more physically fatigued than HE. Furthermore, cadence deviated significantly more in people with PD than in HE without input and particularly with intelligent feedback (both: p = 0.04). In PD, continuous and intelligent cueing induced significantly fewer cadence deviations (p = 0.006). Altogether, this suggests that intelligent cueing is a suitable alternative to the continuous mode during prolonged walking in PD, as it induced similar effects on gait without generating levels of fatigue beyond those of HE.
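
    The "intelligent" mode above delivers a short burst of metronome beats only when the walking rhythm drifts. A minimal sketch of how such a trigger rule could work is given below; the smoothing window, 5% tolerance, and cadence estimator are assumptions for illustration, not the authors' algorithm.

        from collections import deque

        def intelligent_cue(step_times, target_cadence, tol=0.05, n_beats=10):
            """Yield cue events when smoothed cadence deviates from target by > tol.
            step_times: heel-strike timestamps in seconds; target_cadence in steps/min."""
            recent, prev = deque(maxlen=4), None
            for t in step_times:
                if prev is not None:
                    recent.append(t - prev)
                prev = t
                if len(recent) == recent.maxlen:
                    cadence = 60.0 / (sum(recent) / len(recent))
                    if abs(cadence - target_cadence) / target_cadence > tol:
                        yield t, f"play {n_beats} metronome beats at {target_cadence:.0f} bpm"
                        recent.clear()   # suppress retriggering while the walker re-entrains

        # Example: a walker at 110 steps/min who slows to 95 steps/min mid-walk.
        steps = [i * 60 / 110 for i in range(20)]
        steps += [steps[-1] + i * 60 / 95 for i in range(1, 15)]
        for event in intelligent_cue(steps, target_cadence=110):
            print(event)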

  3. Auditory short-term memory activation during score reading.

    Science.gov (United States)

    Simoens, Veerle L; Tervaniemi, Mari

    2013-01-01

    Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion: during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.

  4. Auditory short-term memory activation during score reading.

    Directory of Open Access Journals (Sweden)

    Veerle L Simoens

    Full Text Available Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion: during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.

  5. Auditory Short-Term Memory Activation during Score Reading

    Science.gov (United States)

    Simoens, Veerle L.; Tervaniemi, Mari

    2013-01-01

    Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion: during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback. PMID:23326487

  6. Effects of interaural pitch matching and auditory image centering on binaural sensitivity in cochlear implant users.

    Science.gov (United States)

    Kan, Alan; Litovsky, Ruth Y; Goupell, Matthew J

    2015-01-01

    In bilateral cochlear implant users, electrodes mapped to the same frequency range in each ear may stimulate different places in each cochlea due to an insertion depth difference of electrode arrays. This interaural place of stimulation mismatch can lead to problems with auditory image fusion and sensitivity to binaural cues, which may explain the large localization errors seen in many patients. Previous work has shown that interaural place of stimulation mismatch can lead to off-centered auditory images being perceived even though interaural time and level differences (ITD and ILD, respectively) were zero. Large interaural mismatches reduced the ability to use ITDs for auditory image lateralization. In contrast, lateralization with ILDs was still possible but the mapping of ILDs to spatial locations was distorted. This study extends the previous work by systematically investigating the effect of interaural place of stimulation mismatch on ITD and ILD sensitivity directly and examining whether "centering" methods can be used to mitigate some of the negative effects of interaural place of stimulation mismatch. Interaural place of stimulation mismatch was deliberately introduced for this study. Interaural pitch-matching techniques were used to identify a pitch-matched pair of electrodes across the ears approximately at the center of the array. Mismatched pairs were then created by maintaining one of the pitch-matched electrodes constant, and systematically varying the contralateral electrode by two, four, or eight electrode positions (corresponding to approximately 1.5, 3, and 6 mm of interaural place of excitation differences). The stimuli were 300 msec, constant amplitude pulse trains presented at 100 pulses per second. ITD and ILD just noticeable differences (JNDs) were measured using a method of constant stimuli with a two-interval, two-alternative forced choice task. The results were fit with a psychometric function to obtain the JNDs. In experiment I, ITD and
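
    The record is cut off above, but the analysis it describes (fitting a psychometric function to two-interval, two-alternative forced-choice data gathered with the method of constant stimuli) can be sketched as follows. The data points are invented, not values from the study, and the JND is read off at the 75%-correct point, one common criterion.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical 2I-2AFC data: proportion correct vs. ITD in microseconds.
        itd_us = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
        p_correct = np.array([0.55, 0.62, 0.78, 0.92, 0.99])

        def psychometric(x, alpha, beta):
            """Logistic rising from chance (0.5) to 1.0; alpha = midpoint, beta = slope."""
            return 0.5 + 0.5 / (1.0 + np.exp(-(x - alpha) / beta))

        (alpha, beta), _ = curve_fit(psychometric, itd_us, p_correct, p0=(200.0, 100.0))

        target = 0.75                                            # criterion defining the JND
        jnd = alpha - beta * np.log(0.5 / (target - 0.5) - 1.0)  # equals alpha at 75% correct
        print(f"ITD JND ~ {jnd:.0f} us")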

  7. Behavioral assessment of auditory processing disorder in children with non-syndromic cleft lip and/or palate.

    Science.gov (United States)

    Ma, Xiaoran; McPherson, Bradley; Ma, Lian

    2015-03-01

    Peripheral hearing disorders have been frequently described in children with non-syndromic cleft lip and/or palate (NSCL/P). However, auditory processing problems are rarely considered for children with NSCL/P despite their generally poorer academic performance compared to their craniofacially normal peers. This study aimed to compare auditory processing skills, using behavioral assessment techniques, in school-age children with and without NSCL/P. One hundred and forty-one Mandarin-speaking children with NSCL/P aged from 6.00 to 15.67 years, and 60 age-matched, craniofacially normal children, were recruited. Standard hearing health tests were conducted to evaluate peripheral hearing. Behavioral auditory processing assessment included adaptive tests of temporal resolution (ATTR), and the Mandarin pediatric lexical tone and disyllabic-word picture identification test in noise (MAPPID-N). Age effects were found in children with cleft disorder but not in the control group for gap detection thresholds with ATTR narrow band noise in the across-channel stimuli condition, with a significant difference in test performance between the 6 to 8 year group and the 12 to 15 year group of children with NSCL/P. For MAPPID-N, the bilateral cleft lip and palate subgroup showed significantly poorer SNR-50% scores than the control group in the condition where speech was spatially separated from noise. Also, the cleft palate participants showed a significantly smaller spatial separation advantage for speech recognition in noise compared to the control group children. ATTR gap detection test results indicated that maturation of temporal resolution abilities was not achieved in children with NSCL/P until approximately 8 years of age, compared to approximately 6 years for craniofacially normal children. For speech recognition in noisy environments, poorer abilities to use timing and intensity cues were found in children with cleft palate and children with bilateral cleft lip and palate

  8. Helmets: conventional to cueing

    Science.gov (United States)

    Sedillo, Michael R.; Dixon, Sharon A.

    2003-09-01

    Aviation helmets have always served as an interface between technology and flyers. The functional evolution of helmets continued with the advent of radio, when helmets were modified to accept communication components and, later, oxygen masks. As development matured, interest in safety increased, as evident in more robust designs. Designing helmets became a balance between adding new capabilities and reducing the helmet's weight. As the research community better defined acceptable weight-tolerance limits with tools such as the "Knox Box" criteria, system developers added and subtracted technologies while remaining within these limits. With most helmet-mounted technologies being independent of each other, the level of precision in mounting them was not as significant a concern as it is today: the attachment of new components was acceptable as long as the components served their purpose. However, this independence has become obsolete with the dawn of modern helmet-mounted displays. These complex systems are interrelated and demand precision in their attachment to the helmet. The helmet's role now extends beyond serving as a means of mounting technologies on the head; it is instrumental in the critical visual alignment of complex night-vision and missile-cueing technologies. These new technologies demand a level of helmet fit and component alignment not seen in past helmet designs. This paper presents some of the design, integration and logistical issues gleaned during the development of the Joint Helmet Mounted Cueing System (JHMCS), including the application of head-tracking technologies in forensic investigations.

  9. Activating Teaching in Lecture Halls (Aktiverende Undervisning i auditorier)

    DEFF Research Database (Denmark)

    Parus, Judith

    Workshop on experiences with and the use of activating methods for teaching in lecture halls and to large classes. Which methods have worked well and which poorly? What considerations should one make?

  10. Auditory reafferences: The influence of real-time feedback on movement control

    Directory of Open Access Journals (Sweden)

    Christian eKennel

    2015-01-01

    Full Text Available Auditory reafferences are real-time auditory products created by a person’s own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with nonartificial auditory cues. Our results support the existing theoretical understanding of action–perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.
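
    The delayed-auditory-feedback condition used above can be produced with a ring-buffer delay line in the audio path. Below is a minimal sketch; the delay length and toy sample rate are illustrative, and a real setup would run the process() method inside a low-latency audio callback.

        import numpy as np

        class DelayedFeedback:
            """Ring-buffer delay line: write the incoming microphone block and
            read back the samples captured delay_s seconds earlier."""

            def __init__(self, fs=44100, delay_s=0.2):
                self.buf = np.zeros(int(fs * delay_s) + 1, dtype=np.float32)
                self.pos = 0

            def process(self, block):
                out = np.empty_like(block)
                for i, sample in enumerate(block):
                    read = (self.pos + 1) % self.buf.size   # oldest sample = delayed output
                    out[i] = self.buf[read]
                    self.buf[self.pos] = sample
                    self.pos = read
                return out

        # A unit impulse re-emerges one delay later (5 samples at fs=10, delay_s=0.5).
        daf = DelayedFeedback(fs=10, delay_s=0.5)
        x = np.zeros(12, dtype=np.float32)
        x[0] = 1.0
        print(daf.process(x))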

  11. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences.

    Science.gov (United States)

    Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael

    2014-01-01

    Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called "cocktail-party" problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.

  12. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences

    Directory of Open Access Journals (Sweden)

    Stephan eGetzmann

    2014-12-01

    Full Text Available Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called cocktail-party problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.

  13. Cue reactivity towards shopping cues in female participants.

    Science.gov (United States)

    Starcke, Katrin; Schlereth, Berenike; Domass, Debora; Schöler, Tobias; Brand, Matthias

    2013-03-01

    Background and aims: It is currently under debate whether pathological buying can be considered as a behavioural addiction. Addictions have often been investigated with cue-reactivity paradigms to assess subjective, physiological and neural craving reactions. The current study aims at testing whether cue reactivity towards shopping cues is related to pathological buying tendencies. Methods: A sample of 66 non-clinical female participants rated shopping-related pictures concerning valence, arousal, and subjective craving. In a subgroup of 26 participants, electrodermal reactions towards those pictures were additionally assessed. Furthermore, all participants were screened concerning pathological buying tendencies and baseline craving for shopping. Results: Results indicate a relationship between the subjective ratings of the shopping cues and pathological buying tendencies, even when baseline craving for shopping was controlled for. Electrodermal reactions were partly related to the subjective ratings of the cues. Conclusions: Cue reactivity may be a potential correlate of pathological buying tendencies. Thus, pathological buying may be accompanied by craving reactions towards shopping cues. Results support the assumption that pathological buying can be considered as a behavioural addiction. From a methodological point of view, results support the view that the cue-reactivity paradigm is suited for the investigation of craving reactions in pathological buying, and future studies should implement this paradigm in clinical samples.

  14. Modeling the utility of binaural cues for underwater sound localization.

    Science.gov (United States)

    Schneider, Jennifer N; Lloyd, David R; Banks, Patchouly N; Mercado, Eduardo

    2014-06-01

    The binaural cues used by terrestrial animals for sound localization in azimuth may not always suffice for accurate sound localization underwater. The purpose of this research was to examine the theoretical limits of interaural timing and level differences available underwater using computational and physical models. A paired-hydrophone system was used to record sounds transmitted underwater and recordings were analyzed using neural networks calibrated to reflect the auditory capabilities of terrestrial mammals. Estimates of source direction based on temporal differences were most accurate for frequencies between 0.5 and 1.75 kHz, with greater resolution toward the midline (2°), and lower resolution toward the periphery (9°). Level cues also changed systematically with source azimuth, even at lower frequencies than expected from theoretical calculations, suggesting that binaural mechanical coupling (e.g., through bone conduction) might, in principle, facilitate underwater sound localization. Overall, the relatively limited ability of the model to estimate source position using temporal and level difference cues underwater suggests that animals such as whales may use additional cues to accurately localize conspecifics and predators at long distances. Copyright © 2014 Elsevier B.V. All rights reserved.
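
    The two classic cues the model evaluates can be estimated from a paired-hydrophone recording roughly as below (a Python sketch with simulated signals). The sensor spacing and the underwater sound speed of about 1500 m/s, versus roughly 340 m/s in air, bound the physically possible ITDs, which is one reason timing cues are so compressed underwater.

        import numpy as np

        def binaural_cues(left, right, fs, spacing_m=0.2, c=1500.0):
            """Estimate ITD (cross-correlation peak within the physically possible
            lags for the sensor spacing) and ILD (broadband level ratio in dB)."""
            max_lag = int(np.ceil(spacing_m / c * fs))   # ~0.13 ms for 0.2 m underwater
            lags = np.arange(-max_lag, max_lag + 1)
            xc = [np.dot(left, np.roll(right, -k)) for k in lags]
            itd = lags[int(np.argmax(xc))] / fs          # positive: left channel leads
            ild_db = 10 * np.log10(np.mean(left**2) / np.mean(right**2))
            return itd, ild_db

        fs = 96000
        sig = np.random.default_rng(2).standard_normal(fs // 10)
        left, right = sig, 0.8 * np.roll(sig, 6)         # 6-sample delay, ~2 dB level cue
        itd, ild = binaural_cues(left, right, fs)
        print(f"ITD = {itd * 1e6:.1f} us, ILD = {ild:.1f} dB")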

  15. Binaural cues provide for a release from informational masking.

    Science.gov (United States)

    Tolnai, Sandra; Dolležal, Lena-Vanessa; Klump, Georg M

    2015-10-01

    Informational masking (IM) describes the insensitivity of detecting a change in sound features in a complex acoustical environment when such a change could easily be detected in the absence of distracting sounds. IM occurs because of the similarity between the deviant sound and distracting sounds (so-called similarity-based IM) and/or stimulus uncertainty stemming from trial-to-trial variability (so-called uncertainty-based IM). IM can be abolished if similarity-based or uncertainty-based IM are minimized. Here, we modulated similarity-based IM using binaural cues. Standard/deviant tones and distracting tones were presented sequentially, and level-increment thresholds were measured. Deviant tones differed from standard tones by a higher sound level. Distracting tones covered a wide range of levels. Standard/deviant tones and distracting tones were characterized by their interaural time difference (ITD), interaural level difference (ILD), or both ITD and ILD. The larger the ITD or ILD was, the better similarity-based IM was overcome. If both interaural differences were applied to standard/deviant tones, the release from IM was larger than when either interaural difference was used alone. The results show that binaural cues are potent cues for abolishing similarity-based IM and that the auditory system makes use of multiple available cues. (c) 2015 APA, all rights reserved.

  16. Discrimination and streaming of speech sounds based on differences in interaural and spectral cues.

    Science.gov (United States)

    David, Marion; Lavandier, Mathieu; Grimault, Nicolas; Oxenham, Andrew J

    2017-09-01

    Differences in spatial cues, including interaural time differences (ITDs), interaural level differences (ILDs) and spectral cues, can lead to stream segregation of alternating noise bursts. It is unknown how effective such cues are for streaming sounds with realistic spectro-temporal variations. In particular, it is not known whether the high-frequency spectral cues associated with elevation remain sufficiently robust under such conditions. To answer these questions, sequences of consonant-vowel tokens were generated and filtered by non-individualized head-related transfer functions to simulate the cues associated with different positions in the horizontal and median planes. A discrimination task showed that listeners could discriminate changes in interaural cues both when the stimulus remained constant and when it varied between presentations. However, discrimination of changes in spectral cues was much poorer in the presence of stimulus variability. A streaming task, based on the detection of repeated syllables in the presence of interfering syllables, revealed that listeners can use both interaural and spectral cues to segregate alternating syllable sequences, despite the large spectro-temporal differences between stimuli. However, only the full complement of spatial cues (ILDs, ITDs, and spectral cues) resulted in obligatory streaming in a task that encouraged listeners to integrate the tokens into a single stream.
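
    Generating such stimuli amounts to convolving each token with the left- and right-ear head-related impulse responses (HRIRs) for the desired direction. The sketch below uses crude stand-in HRIRs (a pure delay plus attenuation); an actual experiment would load measured, non-individualized HRIRs, which additionally carry the high-frequency spectral (pinna) cues discussed above.

        import numpy as np
        from scipy.signal import fftconvolve

        def spatialize(mono, hrir_left, hrir_right):
            """Render a mono token at the direction encoded by an HRIR pair."""
            return np.stack([fftconvolve(mono, hrir_left),
                             fftconvolve(mono, hrir_right)], axis=-1)

        fs = 44100
        token = np.random.default_rng(3).standard_normal(fs // 2)  # stand-in for a CV token

        # Stand-in HRIRs for a source on the left: the right ear receives the sound
        # later (ITD) and attenuated (ILD); measured HRIRs would also shape the spectrum.
        hrir_l = np.zeros(64)
        hrir_l[0] = 1.0
        hrir_r = np.zeros(64)
        hrir_r[26] = 0.5                                 # ~0.6 ms later, -6 dB

        binaural = spatialize(token, hrir_l, hrir_r)
        print(binaural.shape)                            # (samples, 2): a stereo signal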

  17. Phonetic matching of auditory and visual speech develops during childhood : Evidence from sine-wave speech

    NARCIS (Netherlands)

    Baart, M.; Bortfeld, H.; Vroomen, J.

    2015-01-01

    The correspondence between auditory speech and lip-read information can be detected based on a combination of temporal and phonetic cross-modal cues. Here, we determined the point in developmental time at which children start to effectively use phonetic information to match a speech sound with one

  18. Auditory localisation of conventional and electric cars: laboratory results and implications for cycling safety

    NARCIS (Netherlands)

    Stelling-Konczak, A.; Hagenzieker, M.P.; Commandeur, J.J.F.; Agterberg, M.J.H.; van Wee, B.

    2016-01-01

    When driven at low speeds, cars operating in electric mode have been found to be quieter than conventional cars. As a result, the auditory cues which pedestrians and cyclists use to assess the presence, proximity and location of oncoming traffic may be reduced, posing a safety hazard. This laboratory

  20. Auditory detectability of hybrid electric vehicles by pedestrians who are blind

    Science.gov (United States)

    2010-11-15

    Quieter cars such as electric vehicles (EVs) and hybrid electric vehicles (HEVs) may reduce auditory cues used by pedestrians to assess the state of nearby traffic and, as a result, their use may have an adverse impact on pedestrian safety. In order ...

  1. Encoding of virtual acoustic space stimuli by neurons in ferret primary auditory cortex.

    Science.gov (United States)

    Mrsic-Flogel, Thomas D; King, Andrew J; Schnupp, Jan W H

    2005-06-01

    Recent studies from our laboratory have indicated that the spatial response fields (SRFs) of neurons in the ferret primary auditory cortex (A1) with best frequencies ≥4 kHz may arise from a largely linear processing of binaural level and spectral localization cues. Here we extend this analysis to investigate how well the linear model can predict the SRFs of neurons with different binaural response properties and the manner in which SRFs change with increases in sound level. We also consider whether temporal features of the response (e.g., response latency) vary with sound direction and whether such variations can be explained by linear processing. In keeping with previous studies, we show that A1 SRFs, which we measured with individualized virtual acoustic space stimuli, expand and shift in direction with increasing sound level. We found that these changes are, in most cases, in good agreement with predictions from a linear threshold model. However, changes in spatial tuning with increasing sound level were generally less well predicted for neurons whose binaural frequency-time receptive field (FTRF) exhibited strong excitatory inputs from both ears than for those in which the binaural FTRF revealed either a predominantly inhibitory effect or no clear contribution from the ipsilateral ear. Finally, we found (in agreement with other authors) that many A1 neurons exhibit systematic response latency shifts as a function of sound-source direction, although these temporal details could usually not be predicted from the neuron's binaural FTRF.
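
    The linear threshold model itself is not reproduced in the record; the toy sketch below, with invented cue spectra and weights, only demonstrates the qualitative behavior reported above: under a threshold-linear read-out of binaural level spectra, raising the overall sound level lifts the linear drive everywhere, so more directions cross threshold and the predicted SRF expands.

        import numpy as np

        rng = np.random.default_rng(5)
        n_dirs, n_bins = 72, 32  # sound directions x frequency bins

        # Stand-in localization-cue spectra: level at the ear per direction (dB).
        cues = rng.normal(0.0, 5.0, size=(n_dirs, n_bins))
        w = np.abs(rng.normal(0.0, 0.2, n_bins))  # all-excitatory spectral weights
        b = -15.0                                 # threshold term

        def srf(level_db):
            """Threshold-linear prediction of the spatial response field."""
            return np.maximum(0.0, cues @ w + level_db * w.sum() + b)

        for level in (0.0, 2.0, 4.0):
            active = int((srf(level) > 0).sum())
            print(f"+{level:.0f} dB: {active} of {n_dirs} directions active")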

  2. Linear Processing of Interaural Level Difference Underlies Spatial Tuning in the Nucleus of the Brachium of the Inferior Colliculus

    Science.gov (United States)

    Slee, Sean J.; Young, Eric D.

    2013-01-01

    The spatial location of sounds is an important aspect of auditory perception, but the ways in which space is represented are not fully understood. No space map has been found within the primary auditory pathway. However, a space map has been found in the nucleus of the brachium of the inferior colliculus (BIN), which provides a major auditory projection to the superior colliculus. We measured the spectral processing underlying auditory spatial tuning in the BIN of unanesthetized marmoset monkeys. Because neurons in the BIN respond poorly to tones and are broadly tuned, we used a broadband stimulus with random spectral shapes (RSS) from which both spatial receptive fields and frequency sensitivity can be derived. Responses to virtual space (VS) stimuli, based on the animal’s own ear acoustics, were compared with the predictions of a weight-function model of responses to the RSS stimuli. First-order (linear) weight functions had broad spectral tuning (~3 octaves), were excitatory in the contralateral ear, inhibitory in the ipsilateral ear, and biased towards high frequencies. Responses to interaural time differences and spectral cues were relatively weak. In cross-validation tests, the first-order RSS model accurately predicted the measured VS tuning curves in the majority of neurons but was inaccurate in 25% of neurons. In some cases second-order weighting functions led to significant improvements. Finally, we found a significant correlation between the degree of binaural weight asymmetry and the best azimuth. Overall, the results suggest that linear processing of interaural level difference underlies spatial tuning in the BIN. PMID:23447600
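
    As an illustrative sketch of the weight-function idea (assumptions: Gaussian-distributed RSS bin levels, additive response noise, and plain least squares rather than whatever estimator the authors used), first-order spectral weights can be recovered from responses to random spectral shapes like this:

        import numpy as np

        rng = np.random.default_rng(1)
        n_trials, n_bins = 400, 24

        # Random spectral shapes: per-bin levels (dB) around a flat mean.
        S = rng.normal(0.0, 10.0, size=(n_trials, n_bins))

        # Simulated BIN-like neuron: weights biased toward high frequencies,
        # a baseline rate, and additive noise.
        w_true = np.linspace(0.0, 0.8, n_bins)
        r = 20.0 + S @ w_true + rng.normal(0.0, 3.0, n_trials)

        # First-order (linear) weights by least squares with an intercept.
        X = np.column_stack([np.ones(n_trials), S])
        coef, *_ = np.linalg.lstsq(X, r, rcond=None)
        baseline, w_hat = coef[0], coef[1:]
        print(f"baseline ~ {baseline:.1f}; max weight error "
              f"{np.abs(w_hat - w_true).max():.3f}")

    The fitted weights can then be applied to the spectra of virtual-space stimuli to predict spatial tuning curves, which is the cross-validation logic the abstract describes.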

  3. Temporal aspects of cue combination

    NARCIS (Netherlands)

    van Mierlo, C.M.; Brenner, E.; Smeets, J.B.J.

    2007-01-01

    The human brain processes different kinds of information (or cues) independently with different neural latencies. How does the brain deal with these differences in neural latency when it combines cues into one estimate? To find out, we introduced artificial asynchronies between the moments that

  4. Auditory hallucinations induced by trazodone

    Science.gov (United States)

    Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji

    2014-01-01

    A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048

  5. Functional imaging reveals numerous fields in the monkey auditory cortex.

    Directory of Open Access Journals (Sweden)

    Christopher I Petkov

    2006-07-01

    Anatomical studies propose that the primate auditory cortex contains more fields than have actually been functionally confirmed or described. Spatially resolved functional magnetic resonance imaging (fMRI) with carefully designed acoustical stimulation could be ideally suited to extend our understanding of the processing within these fields. However, after numerous experiments in humans, many auditory fields remain poorly characterized. Imaging the macaque monkey is of particular interest as this species has a richer set of anatomical and neurophysiological data to clarify the source of the imaged activity. We functionally mapped the auditory cortex of behaving and of anesthetized macaque monkeys with high resolution fMRI. By optimizing our imaging and stimulation procedures, we obtained robust activity throughout auditory cortex using tonal and band-passed noise sounds. Then, by varying the frequency content of the sounds, spatially specific activity patterns were observed over this region. As a result, the activity patterns could be assigned to many auditory cortical fields, including those whose functional properties were previously undescribed. The results provide an extensive functional tessellation of the macaque auditory cortex and suggest that 11 fields contain neurons tuned for the frequency of sounds. This study provides functional support for a model where three fields in primary auditory cortex are surrounded by eight neighboring "belt" fields in non-primary auditory cortex. The findings can now guide neurophysiological recordings in the monkey to expand our understanding of the processing within these fields. Additionally, this work will improve fMRI investigations of the human auditory cortex.

  6. Medial parietal cortex activation related to attention control involving alcohol cues

    NARCIS (Netherlands)

    Gladwin, Thomas E.; ter Mors-Schulte, Mieke H. J.; Ridderinkhof, K. Richard; Wiers, Reinout W.

    2013-01-01

    Automatic attentional engagement toward and disengagement from alcohol cues play a role in alcohol use and dependence. In the current study, social drinkers performed a spatial cueing task designed to evoke conflict between such automatic processes and task instructions, a potentially important task

  7. Show me your opinion : Perceptual cues in creating and reading argument diagrams

    NARCIS (Netherlands)

    van Amelsvoort, Marije; Maes, Alfons

    2016-01-01

    In argument diagrams, perceptual cues are important to aid understanding. However, we do not know what perceptual cues are used and produced to aid understanding. We present two studies in which we investigate (1) which spatial, graphical and textual elements people spontaneously use in creating

  8. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual

  9. Octave effect in auditory attention

    National Research Council Canada - National Science Library

    Tobias Borra; Huib Versnel; Chantal Kemner; A. John van Opstal; Raymond van Ee

    2013-01-01

    ... tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone...

  10. Feasibility of external rhythmic cueing with the Google Glass for improving gait in people with Parkinson’s disease

    NARCIS (Netherlands)

    Zhao, Yan; Nonnekes, Johan Hendrik; Storcken, Erik J.M.; Janssen, Sabine; van Wegen, Erwin E.H.; Bloem, Bastiaan R.; Dorresteijn, Lucille D.A.; van Vugt, Jeroen P.P.; Heida, Tjitske; van Wezel, Richard Jack Anton

    2016-01-01

    New mobile technologies like smartglasses can deliver external cues that may improve gait in people with Parkinson’s disease in their natural environment. However, the potential of these devices must first be assessed in controlled experiments. Therefore, we evaluated rhythmic visual and auditory

  12. Owl monkeys (Aotus nigriceps and A. infulatus) follow routes instead of food-related cues during foraging in captivity.

    Directory of Open Access Journals (Sweden)

    Renata Souza da Costa

    Foraging at night imposes different challenges from those faced during daylight, including the reliability of sensory cues. Owl monkeys (Aotus spp.) are ideal models among anthropoids to study the information used during foraging at low light levels because they are unique in having a nocturnal lifestyle. Six Aotus nigriceps and four A. infulatus individuals distributed into five enclosures were studied to test their ability to rely on olfactory, visual, auditory, or spatial and quantitative information for locating food rewards and to evaluate the use of routes to navigate among five visually similar artificial feeding boxes mounted in each enclosure. During most experiments only a single box was baited with a food reward in each session. The baited box changed randomly throughout the experiment. In the spatial and quantitative information experiment there were two baited boxes varying in the amount of food provided. These baited boxes remained the same throughout the experiment. A total of 45 sessions (three sessions per night during 15 consecutive nights per enclosure) was conducted in each experiment. Only one female showed a performance suggestive of learning of the usefulness of sight to locate the food reward in the visual information experiment. Subjects showed a chance performance in the remaining experiments. All owl monkeys showed a preference for one box or a subset of boxes to inspect upon the beginning of each experimental session and consistently followed individual routes among feeding boxes.

  13. The role of reverberation-related binaural cues in the externalization of speech.

    Science.gov (United States)

    Catic, Jasmina; Santurette, Sébastien; Dau, Torsten

    2015-08-01

    The perception of externalization of speech sounds was investigated with respect to the monaural and binaural cues available at the listeners' ears in a reverberant environment. Individualized binaural room impulse responses (BRIRs) were used to simulate externalized sound sources via headphones. The measured BRIRs were subsequently modified such that the proportion of the response containing binaural vs monaural information was varied. Normal-hearing listeners were presented with speech sounds convolved with such modified BRIRs. Monaural reverberation cues were found to be sufficient for the externalization of a lateral sound source. In contrast, for a frontal source, an increased amount of binaural cues from reflections was required in order to obtain well externalized sound images. It was demonstrated that the interaction between the interaural cues of the direct sound and the reverberation strongly affects the perception of externalization. An analysis of the short-term binaural cues showed that the amount of fluctuations of the binaural cues corresponded well to the externalization ratings obtained in the listening tests. The results further suggested that the precedence effect is involved in the auditory processing of the dynamic binaural cues that are utilized for externalization perception.
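
    The study's exact BRIR modifications are not reproduced here; the sketch below only illustrates the general mechanics of such simulations, convolving a mono signal with a two-channel BRIR and rescaling everything after an assumed 2.5 ms direct-sound window. The stand-in BRIR and all parameter values are invented for the example.

        import numpy as np
        from scipy.signal import fftconvolve

        def render_with_brir(mono, brir, fs, direct_ms=2.5, reverb_gain=1.0):
            """Convolve a mono signal with a two-channel BRIR.

            The BRIR is split at direct_ms after its start into a direct and
            a reverberant part; reverb_gain rescales the reverberant part
            (1.0 leaves the BRIR unchanged).
            """
            split = int(fs * direct_ms / 1000.0)
            shaped = brir.copy()
            shaped[split:, :] *= reverb_gain
            return np.stack(
                [fftconvolve(mono, shaped[:, ch]) for ch in (0, 1)], axis=1
            )

        fs = 48_000
        rng = np.random.default_rng(2)
        # Stand-in BRIR: a direct-sound impulse plus a decaying noise tail.
        n = fs // 2
        brir = rng.standard_normal((n, 2)) * 0.05
        brir *= np.exp(-np.arange(n) / (0.07 * fs))[:, None]
        brir[0, :] = 1.0
        speech = rng.standard_normal(fs)  # stand-in for a speech token
        binaural = render_with_brir(speech, brir, fs, reverb_gain=0.5)
        print(binaural.shape)  # (len(speech) + len(brir) - 1, 2)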

  14. A preliminary investigation of a novel design of visual cue glasses that aid gait in Parkinson's disease.

    Science.gov (United States)

    McAuley, J H; Daly, P M; Curtis, C R

    2009-08-01

    Parkinson's disease is a relatively common progressive neurodegenerative disorder, one of whose main features is difficulty with walking. This can be partially corrected by providing cues for the placement of each step. We piloted the potential benefit of simple custom-designed 'walking glasses' worn by the patient that provide visual and auditory cues to aid in step placement. We used a repeated-measures design to compare gait performance when unaided and when using the walking glasses with different patterns of visual and auditory stimulation, by timing patients' walking over a 'real-life' predefined 30-m course. Setting: hospital outpatient clinic. Participants: fifteen patients with idiopathic Parkinson's disease who had significant gait problems and no other condition affecting gait performance. Main outcome measure: timed walk. Using the glasses, 8 of 15 patients achieved a significant and meaningful average improvement in walking time of at least 10% (mean (95% confidence interval) improvement in these patients was 21.5% (3.9%)), while a further 2 had subjective and modest objective benefit. Different patterns of visual and auditory cues suited different patients. Visual cueing alone, with a fixed horizontal cue line present at all times, produced the greatest improvement in walking time. This pilot study shows promising improvement in the gait of a significant proportion of Parkinson's disease patients through the use of a simple, inexpensive and robust design of walking glasses, suggesting practical applicability in a therapy setting to large numbers of such patients.

  15. Auditory processing in the brainstem and audiovisual integration in humans studied with fMRI

    NARCIS (Netherlands)

    Slabu, Lavinia Mihaela

    2008-01-01

    Functional magnetic resonance imaging (fMRI) is a powerful technique because of the high spatial resolution and the noninvasiveness. The applications of the fMRI to the auditory pathway remain a challenge due to the intense acoustic scanner noise of approximately 110 dB SPL. The auditory system

  16. The effects of rhythm and melody on auditory stream segregation.

    Science.gov (United States)

    Szalárdy, Orsolya; Bendixen, Alexandra; Böhm, Tamás M; Davies, Lucy A; Denham, Susan L; Winkler, István

    2014-03-01

    While many studies have assessed the efficacy of similarity-based cues for auditory stream segregation, much less is known about whether and how the larger-scale structure of sound sequences support stream formation and the choice of sound organization. Two experiments investigated the effects of musical melody and rhythm on the segregation of two interleaved tone sequences. The two sets of tones fully overlapped in pitch range but differed from each other in interaural time and intensity. Unbeknownst to the listener, separately, each of the interleaved sequences was created from the notes of a different song. In different experimental conditions, the notes and/or their timing could either follow those of the songs or they could be scrambled or, in case of timing, set to be isochronous. Listeners were asked to continuously report whether they heard a single coherent sequence (integrated) or two concurrent streams (segregated). Although temporal overlap between tones from the two streams proved to be the strongest cue for stream segregation, significant effects of tonality and familiarity with the songs were also observed. These results suggest that the regular temporal patterns are utilized as cues in auditory stream segregation and that long-term memory is involved in this process.

  17. Smell facilitates auditory contagious yawning in stranger rats.

    Science.gov (United States)

    Moyaho, Alejandro; Rivas-Zamudio, Xaman; Ugarte, Araceli; Eguibar, José R; Valencia, Jaime

    2015-01-01

    Most vertebrates yawn in situations ranging from relaxation to tension, but only humans and other primate species that show mental state attribution skills have been convincingly shown to display yawn contagion. Whether complex forms of empathy are necessary for yawn contagion to occur is still unclear. As empathy is a phylogenetically continuous trait, simple forms of empathy, such as emotional contagion, might be sufficient for non-primate species to show contagious yawning. In this study, we exposed pairs of male rats, which were selected for high yawning, to each other through a perforated wall and found that olfactory cues stimulated yawning, whereas visual cues inhibited it. Unexpectedly, cage-mate rats failed to show yawn contagion, although they did show correlated emotional reactivity. In contrast, stranger rats showed auditory contagious yawning and greater rates of smell-facilitated auditory contagious yawning, although they did not show correlated emotional reactivity. Strikingly, they did not show contagious yawning to rats from a low-yawning strain. These findings indicate that contagious yawning may be a widespread trait amongst vertebrates and that mechanisms other than empathy may be involved. We suggest that a communicatory function of yawning may be the mechanism responsible for yawn contagion in rats, as contagiousness was strain-specific and increased with olfactory cues, which are involved in mutual recognition.

  18. Expect the unexpected: a paradoxical effect of cue validity on the orienting of attention.

    Science.gov (United States)

    Jollie, Ashley; Ivanoff, Jason; Webb, Nicole E; Jamieson, Andrew S

    2016-10-01

    Predictive central cues generate location-based expectancies, voluntary shifts of attention, and facilitate target processing. Often, location-based expectancies and voluntary attention are confounded in cueing tasks. Here we vary the predictability of central cues to determine whether they can evoke the inhibition of target processing in three go/no-go experiments. In the first experiment, the central cue was uninformative and did not predict the target's location. Importantly, these cues did not seem to affect target processing. In the second experiment, the central cue indicated the most or the least likely location of the target. Surprisingly, both types of cues facilitated target processing at the cued location. In the third experiment, the central cue predicted the most likely location of a no-go target, but it did not provide relevant information pertaining to the location of the go target. Again, the central cue facilitated processing of the go target. These results suggest that efforts to strategically allocate inhibition may be thwarted by the paradoxical monitoring of the cued location. The current findings highlight the need to further explore the relationship between location-based expectancies and spatial attention in cueing tasks.

  19. The Cue-Approach Task as a General Mechanism for Long-Term Non-Reinforced Behavioral Change.

    Science.gov (United States)

    Salomon, Tom; Botvinik-Nezer, Rotem; Gutentag, Tony; Gera, Rani; Iwanir, Roni; Tamir, Maya; Schonberg, Tom

    2018-02-26

    Recent findings show that preferences for food items can be modified without external reinforcements using the cue-approach task. In the task, the mere association of food item images with a neutral auditory cue and a speeded button press resulted in enhanced preferences for the associated stimuli. In a series of 10 independent samples with a total of 255 participants, we show for the first time that using this non-reinforced method we can enhance preferences for faces, fractals and affective images, as well as snack foods, using auditory, visual and even aversive cues. This change was highly durable in follow-up sessions performed one to six months after training. Preferences were successfully enhanced for all conditions, except for negative valence items. These findings promote our understanding of non-reinforced change, suggest a boundary condition for the effect and lay the foundation for development of novel applications.

  20. Incidental auditory category learning.

    Science.gov (United States)

    Gabay, Yafit; Dick, Frederic K; Zevin, Jason D; Holt, Lori L

    2015-08-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in 1 of 4 possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from 1 of 4 distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. (c) 2015 APA, all rights reserved.

  1. Modelling auditory attention.

    Science.gov (United States)

    Kaya, Emine Merve; Elhilali, Mounya

    2017-02-19

    Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information, a phenomenon referred to as the 'cocktail party problem'. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by 'bottom-up' sensory-driven factors, as well as 'top-down' task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape, with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  2. Auditory Channel Problems.

    Science.gov (United States)

    Mann, Philip H.; Suiter, Patricia A.

    This teacher's guide contains a list of general auditory problem areas where students have the following problems: (a) inability to find or identify source of sound; (b) difficulty in discriminating sounds of words and letters; (c) difficulty with reproducing pitch, rhythm, and melody; (d) difficulty in selecting important from unimportant sounds;…

  3. How hearing aids, background noise, and visual cues influence objective listening effort.

    Science.gov (United States)

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2013-09-01

    The purpose of this article was to evaluate factors that influence the listening effort experienced when processing speech for people with hearing loss. Specifically, the change in listening effort resulting from introducing hearing aids, visual cues, and background noise was evaluated. An additional exploratory aim was to investigate the possible relationships between the magnitude of listening effort change and individual listeners' working memory capacity, verbal processing speed, or lipreading skill. Twenty-seven participants with bilateral sensorineural hearing loss were fitted with linear behind-the-ear hearing aids and tested using a dual-task paradigm designed to evaluate listening effort. The primary task was monosyllable word recognition and the secondary task was a visual reaction time task. The test conditions varied by hearing aids (unaided, aided), visual cues (auditory-only, auditory-visual), and background noise (present, absent). For all participants, the signal to noise ratio was set individually so that speech recognition performance in noise was approximately 60% in both the auditory-only and auditory-visual conditions. In addition to measures of listening effort, working memory capacity, verbal processing speed, and lipreading ability were measured using the Automated Operational Span Task, a Lexical Decision Task, and the Revised Shortened Utley Lipreading Test, respectively. In general, the effects measured using the objective measure of listening effort were small (~10 msec). Results indicated that background noise increased listening effort, and hearing aids reduced listening effort, while visual cues did not influence listening effort. With regard to the individual variables, verbal processing speed was negatively correlated with hearing aid benefit for listening effort; faster processors were less likely to derive benefit. Working memory capacity, verbal processing speed, and lipreading ability were related to benefit from visual cues. No

  4. Human Perception of Ambiguous Inertial Motion Cues

    Science.gov (United States)

    Zhang, Guan-Lu

    2010-01-01

    Human daily activities on Earth involve motions that elicit both tilt and translation components of the head (i.e. gazing and locomotion). With otolith cues alone, tilt and translation can be ambiguous since both motions can potentially displace the otolithic membrane by the same magnitude and direction. Transitions between gravity environments (i.e. Earth, microgravity and lunar) have been demonstrated to alter the functions of the vestibular system and exacerbate the ambiguity between tilt and translational motion cues. Symptoms of motion sickness and spatial disorientation can impair human performance during critical mission phases. Specifically, Space Shuttle landing records show that particular cases of tilt-translation illusions have impaired the performance of seasoned commanders. This sensorimotor condition is one of many operational risks that may have dire implications on future human space exploration missions. The neural strategy with which the human central nervous system distinguishes ambiguous inertial motion cues remains the subject of intense research. A prevailing theory in the neuroscience field proposes that the human brain is able to formulate a neural internal model of ambiguous motion cues such that tilt and translation components can be perceptually decomposed in order to elicit the appropriate bodily response. The present work uses this theory, known as the GIF resolution hypothesis, as the framework for its experimental hypotheses. Specifically, two novel motion paradigms are employed to validate the neural capacity of ambiguous inertial motion decomposition in ground-based human subjects. The experimental setup involves the Tilt-Translation Sled at the Neuroscience Laboratory of NASA JSC. This two degree-of-freedom motion system is able to tilt subjects in the pitch plane and translate the subject along the fore-aft axis. Perception data will be gathered through subject verbal reports. Preliminary analysis of perceptual data does not indicate that

  5. Directional hearing: from biophysical binaural cues to directional hearing outdoors.

    Science.gov (United States)

    Römer, Heiner

    2015-01-01

    When insects communicate by sound, or use acoustic cues to escape predators or to detect prey or hosts, they have to localize the sound in most cases to perform adaptive behavioral responses. In the case of particle velocity receivers such as the antennae of mosquitoes, directionality is no problem because such receivers are inherently directional. Insects equipped with bilateral pairs of tympanate ears could in principle make use of binaural cues for sound localization, like all other animals with two ears. However, their small size makes it difficult to create sufficiently large binaural cues, with respect both to interaural time differences (ITDs, because interaural distances are so small) and to interaural intensity differences (IIDs), since the ratio of body size to the wavelength of sound is rather unfavorable for diffractive effects. In my review, I only briefly cover these biophysical aspects of directional hearing. Instead, I focus on aspects of directional hearing which received relatively little attention previously: the evolution of a pressure difference receiver, 3D-hearing, directional hearing outdoors, and directional hearing for auditory scene analysis.
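
    A back-of-the-envelope calculation makes the size problem concrete: the largest available ITD is bounded by the sound travel time across the body, and appreciable IIDs require the body to be comparable in size to the wavelength. The body sizes below are rough illustrative values, not data from the review.

        C_AIR = 343.0  # speed of sound in air, m/s

        def max_itd_us(separation_m):
            """Largest possible interaural arrival-time difference (us)."""
            return 1e6 * separation_m / C_AIR

        def diffraction_onset_khz(body_size_m):
            """Frequency (kHz) above which the body spans about one
            wavelength, i.e. where diffraction starts producing IIDs."""
            return C_AIR / body_size_m / 1000.0

        for name, size in [("human head", 0.18), ("cricket", 0.01),
                           ("parasitoid fly", 0.0005)]:
            print(f"{name:14s} max ITD ~{max_itd_us(size):8.1f} us, "
                  f"IID onset ~{diffraction_onset_khz(size):7.1f} kHz")

    For a millimeter-scale insect the available ITDs fall in the microsecond range and IIDs only arise at ultrasonic frequencies, which is why mechanisms such as pressure difference receivers become necessary.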

  6. The time course of attention modulation elicited by spatial uncertainty.

    Science.gov (United States)

    Huang, Dan; Liang, Huilou; Xue, Linyan; Wang, Meijian; Hu, Qiyi; Chen, Yao

    2017-09-01

    Uncertainty regarding the target location is an influential factor for spatial attention. Modulation in spatial uncertainty can lead to adjustments in attention scope and variations in attention effects. Hence, investigating spatial uncertainty modulation is important for understanding the underlying mechanism of spatial attention. However, the temporal dynamics of this modulation remains unclear. To evaluate the time course of spatial uncertainty modulation, we adopted a Posner-like attention orienting paradigm with central or peripheral cues. Different numbers of cues were used to indicate the potential locations of the target and thereby manipulate the spatial uncertainty level. The time interval between the onsets of the cue and the target (stimulus onset asynchrony, SOA) varied from 50 to 2000 ms. We found that under central cueing, the effect of spatial uncertainty modulation could be detected from 200 to 2000 ms after the presence of the cues. Under peripheral cueing, the effect of spatial uncertainty modulation was observed from 50 to 2000 ms after cueing. Our results demonstrate that spatial uncertainty modulation produces robust and sustained effects on target detection speed. The time course of this modulation is influenced by the cueing method, which suggests that discrepant processing procedures are involved under different cueing conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Path integration absent in scent-tracking fimbria-fornix rats: evidence for hippocampal involvement in "sense of direction" and "sense of distance" using self-movement cues.

    Science.gov (United States)

    Whishaw, I Q; Gorny, B

    1999-06-01

    Allothetic and idiothetic navigation strategies use very different cue constellations and computational processes. Allothetic navigation requires the use of the relationships between relatively stable external (visual, olfactory, auditory) cues, whereas idiothetic navigation requires the integration of cues generated by self-movement and/or efferent copy of movement commands. The flexibility with which animals can switch between these strategies and the neural structures that support these strategies are not well understood. By capitalizing on the proclivity of foraging rats to carry large food pellets back to a refuge for eating, the present study examined the contribution of the hippocampus to the use of allothetic versus idiothetic navigation strategies. Control rats and fimbria-fornix-ablated rats were trained to follow linear, polygonal, and octagonal scent trails that led to a piece of food. The ability of the rats to return to the refuge with the food via the shortest route using allothetic cues (visual cues and/or the odor trail available) or using idiothetic cues (the odor trail removed and the rats blindfolded or tested in infrared light) was examined. Control rats "closed the polygon" by returning directly home in all cue conditions. Fimbria-fornix rats successfully used allothetic cues (closed the polygon using visual cues or tracked back on the string) but were insensitive to the direction and distance of the refuge and were lost when restricted to idiothetic cues. The results support the hypothesis that the hippocampal formation is necessary for navigation requiring the integration of idiothetic cues.

  8. Reconstructing spectral cues for sound localization from responses to rippled noise stimuli

    Science.gov (United States)

    Vliegen, Joyce; Van Esch, Thamar

    2017-01-01

    Human sound localization in the mid-sagittal plane (elevation) relies on an analysis of the idiosyncratic spectral shape cues provided by the head and pinnae. However, because the actual free-field stimulus spectrum is a priori unknown to the auditory system, the problem of extracting the elevation angle from the sensory spectrum is ill-posed. Here we test different spectral localization models by eliciting head movements toward broad-band noise stimuli with randomly shaped, rippled amplitude spectra emanating from a speaker at a fixed location, while varying the ripple bandwidth between 1.5 and 5.0 cycles/octave. Six listeners participated in the experiments. From the distributions of localization responses toward the individual stimuli, we estimated the listeners’ spectral-shape cues underlying their elevation percepts, by applying maximum-likelihood estimation. The reconstructed spectral cues proved to be invariant to the considerable variation in ripple bandwidth, and for each listener they had a remarkable resemblance to the idiosyncratic head-related transfer functions (HRTFs). These results are not in line with models that rely on the detection of a single peak or notch in the amplitude spectrum, nor with a local analysis of first- and second-order spectral derivatives. Instead, our data support a model in which the auditory system performs a cross-correlation between the sensory input at the eardrum-auditory nerve, and stored representations of HRTF spectral shapes, to extract the perceived elevation angle. PMID:28333967
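
    The template-correlation model the data support can be illustrated in miniature (this is not the authors' implementation; the notch-shaped toy HRTFs, the ripple, and all numbers are assumptions): a rippled source spectrum is added to one stored spectral shape, and elevation is read out as the template correlating best with the resulting sensory spectrum.

        import numpy as np

        n_bins = 64
        freqs = np.arange(n_bins)              # abstract frequency bins
        elevations = np.linspace(-42, 42, 15)  # degrees, one per template

        # Stand-in HRTF spectra (dB): an elevation-dependent notch as a
        # crude proxy for idiosyncratic pinna cues.
        centers = 8 + 3 * np.arange(15)
        hrtfs = np.array([-12.0 * np.exp(-(freqs - c) ** 2 / 18.0)
                          for c in centers])

        # Sensory spectrum = unknown rippled source spectrum + HRTF at 0 deg.
        ripple = 3.0 * np.sin(2 * np.pi * freqs / 16.0)
        sensory = ripple + hrtfs[7]

        def corr(x, y):
            x, y = x - x.mean(), y - y.mean()
            return (x @ y) / np.sqrt((x @ x) * (y @ y))

        scores = [corr(sensory, h) for h in hrtfs]
        print("estimated elevation:", elevations[int(np.argmax(scores))])  # 0.0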

  10. Cross-Sensory Transfer of Reference Frames in Spatial Memory

    Science.gov (United States)

    Kelly, Jonathan W.; Avraamides, Marios N.

    2011-01-01

    Two experiments investigated whether visual cues influence spatial reference frame selection for locations learned through touch. Participants experienced visual cues emphasizing specific environmental axes and later learned objects through touch. Visual cues were manipulated and haptic learning conditions were held constant. Imagined perspective…

  11. A Transient Auditory Signal Shifts the Perceived Offset Position of a Moving Visual Object

    Directory of Open Access Journals (Sweden)

    Sung-En eChien

    2013-02-01

    Information received from different sensory modalities profoundly influences human perception. For example, changes in the auditory flutter rate induce changes in the apparent flicker rate of a flashing light (Shipley, 1964). In the present study, we investigated whether auditory information would affect the perceived offset position of a moving object. In Experiment 1, a visual object moved toward the center of the computer screen and disappeared abruptly. A transient auditory signal was presented at different times relative to the moment when the object disappeared. The results showed that if the auditory signal was presented before the abrupt offset of the moving object, the perceived final position was shifted backward, implying that the perceived offset position was affected by the transient auditory information. In Experiment 2, we presented the transient auditory signal to either the left or the right ear. The results showed that the perceived offset shifted backward more strongly when the auditory signal was presented to the same side from which the moving object originated. In Experiment 3, we found that the perceived timing of the visual offset was not affected by the spatial relation between the auditory signal and the visual offset. The present results are interpreted as indicating that an auditory signal may influence the offset position of a moving object through both spatial and temporal processes.

  12. The influence of imagery vividness on cognitive and perceptual cues in circular auditorily-induced vection.

    Science.gov (United States)

    Väljamäe, Aleksander; Sell, Sara

    2014-01-01

    In the absence of other congruent multisensory motion cues, sound contribution to illusions of self-motion (vection) is relatively weak and often attributed to purely cognitive, top-down processes. The present study addressed the influence of cognitive and perceptual factors in the experience of circular, yaw auditorily-induced vection (AIV), focusing on participants' imagery vividness scores. We used different rotating sound sources (acoustic landmark vs. movable types) and their filtered versions that provided different binaural cues (interaural time or level differences, ITD vs. ILD) when delivering via loudspeaker array. The significant differences in circular vection intensity showed that (1) AIV was stronger for rotating sound fields containing auditory landmarks as compared to movable sound objects; (2) ITD based acoustic cues were more instrumental than ILD based ones for horizontal AIV; and (3) individual differences in imagery vividness significantly influenced the effects of contextual and perceptual cues. While participants with high scores of kinesthetic and visual imagery were helped by vection "rich" cues, i.e., acoustic landmarks and ITD cues, the participants from the low-vivid imagery group did not benefit from these cues automatically. Only when specifically asked to use their imagination intentionally did these external cues start influencing vection sensation in a similar way to high-vivid imagers. These findings are in line with the recent fMRI work which suggested that high-vivid imagers employ automatic, almost unconscious mechanisms in imagery generation, while low-vivid imagers rely on more schematic and conscious framework. Consequently, our results provide an additional insight into the interaction between perceptual and contextual cues when experiencing purely auditorily or multisensory induced vection.

  14. Adults' implicit associations to infant positive and negative acoustic cues: Moderation by empathy and gender.

    Science.gov (United States)

    Senese, Vincenzo Paolo; Venuti, Paola; Giordano, Francesca; Napolitano, Maria; Esposito, Gianluca; Bornstein, Marc H

    2017-09-01

    In this study a novel auditory version of the Single Category Implicit Association Test (SC-IAT-A) was developed to investigate (a) the valence of adults' associations to infant cries and laughs, (b) moderation of implicit associations by gender and empathy, and (c) the robustness of implicit associations controlling for auditory sensitivity. Eighty adults (50% females) were administered two SC-IAT-As, the Empathy Quotient, and the Weinstein Noise Sensitivity Scale. Adults showed positive implicit associations to infant laugh and negative ones to infant cry; only the implicit associations with the infant laugh were negatively related to empathy scores, and no gender differences were observed. Finally, implicit associations to infant cry were affected by noise sensitivity. The SC-IAT-A is useful to evaluate the valence of implicit reactions to infant auditory cues and could provide fresh insights into understanding processes that regulate the quality of adult-infant relationships.

  15. Nocturnal activity positively correlated with auditory sensitivity in noctuoid moths.

    Science.gov (United States)

    ter Hofstede, Hannah M; Ratcliffe, John M; Fullard, James H

    2008-06-23

    We investigated the relationship between predator detection threshold and antipredator behaviour in noctuoid moths. Moths with ears sensitive to the echolocation calls of insectivorous bats use avoidance manoeuvres in flight to evade these predators. Earless moths generally fly less than eared species as a primary defence against predation by bats. For eared moths, however, there is interspecific variation in auditory sensitivity. At the species level, and when controlling for shared evolutionary history, nocturnal flight time and auditory sensitivity were positively correlated in moths, a relationship that most likely reflects selection pressure from aerial-hawking bats. We suggest that species-specific differences in the detection of predator cues are important but often overlooked factors in the evolution and maintenance of antipredator behaviour.

  16. Tuned with a tune: Talker normalization via general auditory processes

    Directory of Open Access Journals (Sweden)

    Erika J C Laing

    2012-06-01

    Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker’s speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS) of a talker’s speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences’ LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by nonspeech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results, suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.
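
    As a rough sketch of an LTAS comparison of the kind implied above (these are not the study's stimuli), two "talkers" are simulated as differently tilted noises and their long-term average spectra are compared within an assumed cue-relevant band; scipy's Welch estimator stands in for whatever spectral analysis the authors used.

        import numpy as np
        from scipy.signal import lfilter, welch

        fs = 16_000
        rng = np.random.default_rng(4)

        def talker(tilt, seconds=3.0):
            """Noise through a one-pole lowpass; tilt in [0, 1) sets the
            spectral slope, standing in for a talker's voice signature."""
            x = rng.standard_normal(int(fs * seconds))
            return lfilter([1.0], [1.0, -tilt], x)

        def ltas_db(x):
            """Long-term average spectrum via Welch's method, in dB."""
            f, pxx = welch(x, fs=fs, nperseg=1024)
            return f, 10.0 * np.log10(pxx)

        f, ltas_a = ltas_db(talker(0.2))
        _, ltas_b = ltas_db(talker(0.9))

        # Compare mean level in a band assumed to carry the relevant cue.
        band = (f >= 500.0) & (f <= 1500.0)
        diff = ltas_a[band].mean() - ltas_b[band].mean()
        print(f"LTAS difference, 0.5-1.5 kHz band: {diff:.1f} dB")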

  17. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex.

    Science.gov (United States)

    Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate "auditory objects" with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas.
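
    A minimal sketch of the stimulus construction described above (the F0s, formant frequencies, and Gaussian formant envelopes are assumed values, not the study's synthesis parameters): two harmonic vowels with different fundamental frequencies are built and summed into a double vowel.

        import numpy as np

        fs, dur = 16_000, 0.5
        t = np.arange(int(fs * dur)) / fs

        def vowel(f0, formants, bw=80.0):
            """Sum of harmonics of f0, each weighted by a crude formant
            envelope (Gaussian bumps of width bw at the formant centers)."""
            def env(f):
                return sum(np.exp(-0.5 * ((f - fc) / bw) ** 2)
                           for fc in formants)
            harmonics = np.arange(f0, fs / 2, f0)
            return sum(env(h) * np.sin(2 * np.pi * h * t) for h in harmonics)

        # /a/- and /i/-like spectra with a four-semitone F0 difference.
        vowel_a = vowel(100.0, formants=(700.0, 1200.0, 2500.0))
        vowel_i = vowel(100.0 * 2 ** (4 / 12), formants=(300.0, 2300.0, 3000.0))
        double_vowel = vowel_a + vowel_i
        print(double_vowel.shape)  # (8000,)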

  18. Gender differences in identifying emotions from auditory and visual stimuli.

    Science.gov (United States)

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to gain a better understanding of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or shared native language of the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual stimuli than from auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.

  19. Auditory pathways: anatomy and physiology.

    Science.gov (United States)

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external, middle ears, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.

  20. [Application of simultaneous auditory evoked potentials and functional magnetic resonance recordings for examination of central auditory system--preliminary results].

    Science.gov (United States)

    Milner, Rafał; Rusiniak, Mateusz; Wolak, Tomasz; Piatkowska-Janko, Ewa; Naumczyk, Patrycja; Bogorodzki, Piotr; Senderski, Andrzej; Ganc, Małgorzata; Skarzyński, Henryk

    2011-01-01

    Processing of auditory information in the central nervous system is based on a series of quickly occurring neural processes that cannot be monitored separately using fMRI registration alone. Simultaneous recording of auditory evoked potentials, characterized by good temporal resolution, and functional magnetic resonance imaging, with its excellent spatial resolution, allows higher auditory functions to be studied with precision both in time and space. The aim of the study was to implement the simultaneous AEP-fMRI recording method for the investigation of information processing at different levels of the central auditory system. Five healthy volunteers, aged 22-35 years, participated in the experiment. The study was performed using a high-field (3T) MR scanner from Siemens and the 64-channel electrophysiological system Neuroscan from Compumedics. Auditory evoked potentials generated by acoustic stimuli (standard and deviant tones) were registered using a modified odd-ball procedure. Functional magnetic resonance recordings were performed using a sparse acquisition paradigm. The results of the electrophysiological registrations were worked out by determining the voltage distributions of the AEPs on the skull and modeling their bioelectrical intracerebral generators (dipoles). FMRI activations were determined on the basis of deviant-to-standard and standard-to-deviant functional contrasts. Results obtained from the electrophysiological studies were integrated with the functional outcomes. The morphology, amplitude, latency and voltage distribution of auditory evoked potentials (P1, N1, P2) to standard stimuli presented during simultaneous AEP-fMRI registrations were very similar to the responses obtained outside the scanner room. Significant fMRI activations to standard stimuli were found mainly in the auditory cortex. Activations in these regions corresponded with N1-wave dipoles modeled on the basis of auditory potentials generated by standard tones. Auditory evoked potentials to deviant stimuli were recorded only outside the MRI

  1. Auditory object cognition in dementia

    Science.gov (United States)

    Goll, Johanna C.; Kim, Lois G.; Hailstone, Julia C.; Lehmann, Manja; Buckley, Aisling; Crutch, Sebastian J.; Warren, Jason D.

    2011-01-01

    The cognition of nonverbal sounds in dementia has been relatively little explored. Here we undertook a systematic study of nonverbal sound processing in patient groups with canonical dementia syndromes comprising clinically diagnosed typical amnestic Alzheimer's disease (AD; n = 21), progressive nonfluent aphasia (PNFA; n = 5), logopenic progressive aphasia (LPA; n = 7) and aphasia in association with a progranulin gene mutation (GAA; n = 1), and in healthy age-matched controls (n = 20). Based on a cognitive framework treating complex sounds as ‘auditory objects’, we designed a novel neuropsychological battery to probe auditory object cognition at early perceptual (sub-object), object representational (apperceptive) and semantic levels. All patients had assessments of peripheral hearing and general neuropsychological functions in addition to the experimental auditory battery. While a number of aspects of auditory object analysis were impaired across patient groups and were influenced by general executive (working memory) capacity, certain auditory deficits had some specificity for particular dementia syndromes. Patients with AD had a disproportionate deficit of auditory apperception but preserved timbre processing. Patients with PNFA had salient deficits of timbre and auditory semantic processing, but intact auditory size and apperceptive processing. Patients with LPA had a generalised auditory deficit that was influenced by working memory function. In contrast, the patient with GAA showed substantial preservation of auditory function, but a mild deficit of pitch direction processing and a more severe deficit of auditory apperception. The findings provide evidence for separable stages of auditory object analysis and separable profiles of impaired auditory object cognition in different dementia syndromes. PMID:21689671

  2. Children use visual speech to compensate for non-intact auditory speech.

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F; Tye-Murray, Nancy; Abdi, Hervé

    2014-10-01

    We investigated whether visual speech fills in non-intact auditory speech (excised consonant onsets) in typically developing children from 4 to 14 years of age. Stimuli with the excised auditory onsets were presented in the audiovisual (AV) and auditory-only (AO) modes. A visual speech fill-in effect occurs when listeners experience hearing the same non-intact auditory stimulus (e.g., /-b/ag) as different depending on the presence/absence of visual speech such as hearing /bag/ in the AV mode but hearing /ag/ in the AO mode. We quantified the visual speech fill-in effect by the difference in the number of correct consonant onset responses between the modes. We found that easy visual speech cues /b/ provided greater filling in than difficult cues /g/. Only older children benefited from difficult visual speech cues, whereas all children benefited from easy visual speech cues, although 4- and 5-year-olds did not benefit as much as older children. To explore task demands, we compared results on our new task with those on the McGurk task. The influence of visual speech was uniquely associated with age and vocabulary abilities for the visual speech fill-in effect but was uniquely associated with speechreading skills for the McGurk effect. This dissociation implies that visual speech, as processed by children, is a complicated and multifaceted phenomenon underpinned by heterogeneous abilities. These results emphasize that children perceive a speaker's utterance rather than the auditory stimulus per se. In children, as in adults, there is more to speech perception than meets the ear. Copyright © 2014 Elsevier Inc. All rights reserved.
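
    The fill-in effect itself is a simple difference score, as the abstract notes: the number of correct consonant-onset responses in the AV mode minus the number in the AO mode. A minimal sketch with hypothetical counts:

```python
# Hypothetical counts for one child; the study scored correct consonant-onset
# responses to excised-onset stimuli (e.g., /-b/ag) in each presentation mode.
correct_av = 18  # audiovisual mode: visual speech available
correct_ao = 7   # auditory-only mode

# A positive score means visual speech "filled in" the missing auditory onset.
fill_in_effect = correct_av - correct_ao
print(f"visual speech fill-in effect: {fill_in_effect} responses")
```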

  3. Auditory Reserve and the Legacy of Auditory Experience

    OpenAIRE

    Skoe, Erika; Kraus, Nina

    2014-01-01

    Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence o...

  4. Treefrogs as animal models for research on auditory scene analysis and the cocktail party problem.

    Science.gov (United States)

    Bee, Mark A

    2015-02-01

    The perceptual analysis of acoustic scenes involves binding together sounds from the same source and separating them from other sounds in the environment. In large social groups, listeners experience increased difficulty performing these tasks due to high noise levels and interference from the concurrent signals of multiple individuals. While a substantial body of literature on these issues pertains to human hearing and speech communication, few studies have investigated how nonhuman animals may be evolutionarily adapted to solve biologically analogous communication problems. Here, I review recent and ongoing work aimed at testing hypotheses about perceptual mechanisms that enable treefrogs in the genus Hyla to communicate vocally in noisy, multi-source social environments. After briefly introducing the genus and the methods used to study hearing in frogs, I outline several functional constraints on communication posed by the acoustic environment of breeding "choruses". Then, I review studies of sound source perception aimed at uncovering how treefrog listeners may be adapted to cope with these constraints. Specifically, this review covers research on the acoustic cues used in sequential and simultaneous auditory grouping, spatial release from masking, and dip listening. Throughout the paper, I attempt to illustrate how broad-scale, comparative studies of carefully considered animal models may ultimately reveal an evolutionary diversity of underlying mechanisms for solving cocktail-party-like problems in communication. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Acoustic cues to Nehiyawewin constituency

    Science.gov (United States)

    Cook, Clare; Muehlbauer, Jeff

    2005-04-01

    This study examines how speakers use acoustic cues, e.g., pitch and pausing, to establish syntactic and semantic constituents in Nehiyawewin, an Algonquian language. Two Nehiyawewin speakers' autobiographies, which have been recorded, transcribed, and translated by H. C. Wolfart in collaboration with a native speaker of Nehiyawewin, provide natural-speech data for the study. Since it is difficult for a non-native speaker to reliably distinguish Nehiyawewin constituents, an intermediary is needed. The transcription provides this intermediary through punctuation marks (commas, semi-colons, em-dashes, periods), which have been shown to consistently mark constituency structure [Nunberg, CSLI 1990]. The acoustic cues are thus mapped onto the punctuated constituents, and then similar constituents are compared to see what acoustic cues they share. Preliminarily, the clearest acoustic signal to a constituent boundary is a pitch drop preceding the boundary and/or a pitch reset on the syllable following the boundary. Further, constituent boundaries marked by a period consistently end on a low pitch, are followed by a pitch reset of 30-90 Hz, and have an average pause of 1.9 seconds. I also discuss cross-speaker cues, and prosodic cues that do not correlate to punctuation, with implications for the transcriptional view of orthography [Marckwardt, Oxford 1942].
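
    The reported boundary signature, a pitch reset of roughly 30-90 Hz and/or a long pause, lends itself to a simple detector over an F0 track. The sketch below is a hypothetical illustration of that criterion, not the authors' procedure; the function name and threshold values are assumptions.

```python
import numpy as np

def candidate_boundaries(f0, times, reset_hz=30.0, pause_s=1.0):
    """Flag frame indices that look like constituent boundaries: an upward
    F0 jump of at least `reset_hz` between successive voiced frames, or a
    silent gap of at least `pause_s` seconds between them. `f0` holds
    per-frame F0 estimates in Hz (NaN for unvoiced frames); `times` holds
    the frame times in seconds."""
    voiced = np.flatnonzero(~np.isnan(f0))
    boundaries = []
    for a, b in zip(voiced[:-1], voiced[1:]):
        pitch_reset = f0[b] - f0[a] >= reset_hz
        long_pause = times[b] - times[a] >= pause_s
        if pitch_reset or long_pause:
            boundaries.append(int(b))
    return boundaries
```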

  6. Maintaining realism in auditory length-perception experiments

    DEFF Research Database (Denmark)

    Kirkwood, Brent Christopher

    2005-01-01

    Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length-estimation experiment. A comparison of the length-estimation accuracy for the three presentation methods indicates that the choice of presentation method is important for maintaining realism and for maintaining the acoustic cues utilized by listeners in perceiving length.

  7. Assessment of Spectral and Temporal Resolution in Cochlear Implant Users Using Psychoacoustic Discrimination and Speech Cue Categorization.

    Science.gov (United States)

    Winn, Matthew B; Won, Jong Ho; Moon, Il Joon

    This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). The authors hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. The authors further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Nineteen cochlear implant listeners and 10 listeners with normal hearing participated in a suite of tasks that included spectral ripple discrimination, temporal modulation detection, and syllable categorization, which was split into a spectral cue-based task (targeting the /ba/-/da/ contrast) and a timing cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for cochlear implant listeners. Cochlear implant users were generally less successful at utilizing both spectral and temporal cues for categorization compared with listeners with normal hearing. For the cochlear implant listener group, spectral ripple discrimination was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. Temporal modulation detection using 100- and 10-Hz-modulated noise was not correlated either with the cochlear implant subjects' categorization of
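
    The quantification step described above, logistic regression over categorization responses, amounts to fitting a psychometric function whose slope indexes perceptual sensitivity to the acoustic cue. A minimal sketch with simulated data; the 7-step continuum, trial counts, and underlying slope are illustrative assumptions, not the study's stimuli.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: a 7-step /ba/-/da/ formant-transition continuum,
# 10 trials per step; response 1 = "da", 0 = "ba".
steps = np.repeat(np.arange(1, 8), 10).astype(float).reshape(-1, 1)
rng = np.random.default_rng(1)
true_p_da = 1.0 / (1.0 + np.exp(-1.5 * (steps.ravel() - 4.0)))
responses = rng.binomial(1, true_p_da)

# The fitted coefficient is the psychometric slope: steeper means greater
# perceptual sensitivity to the spectral (formant) cue. The boundary is
# the continuum step at which "ba" and "da" responses are equally likely.
model = LogisticRegression().fit(steps, responses)
slope = model.coef_[0][0]
boundary = -model.intercept_[0] / slope
print(f"cue sensitivity (slope): {slope:.2f}, category boundary: {boundary:.2f}")
```

    A shallower fitted slope indicates weaker use of the cue, which is the general pattern the study reports for cochlear implant users relative to listeners with normal hearing.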

  8. Auditory Space Perception in Left- and Right-Handers

    Science.gov (United States)

    Ocklenburg, Sebastian; Hirnstein, Marco; Hausmann, Markus; Lewald, Jorg

    2010-01-01

    Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via…

  9. Reduced P50 Auditory Sensory Gating Response in Professional Musicians

    Science.gov (United States)

    Kizkin, Sibel; Karlidag, Rifat; Ozcan, Cemal; Ozisik, Handan Isin

    2006-01-01

    Evoked potential studies have demonstrated that musicians have the ability to distinguish musical sounds preattentively and automatically at the temporal, spectral, and spatial levels in more detail. It is however not known whether there is a difference in the early processes of auditory data processing of musicians. The most emphasized and…

  10. Early hominin auditory capacities.

    Science.gov (United States)

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G; Thackeray, J Francis; Arsuaga, Juan Luis

    2015-09-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats.

  11. Early hominin auditory capacities

    Science.gov (United States)

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J.; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G.; Thackeray, J. Francis; Arsuaga, Juan Luis

    2015-01-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats. PMID:26601261

  12. How does spatial hearing affect cocktail party conversations?

    Science.gov (United States)

    Shinn-Cunningham, Barbara G.

    2003-04-01

    Although spatial cues are not a dominant factor in auditory scene analysis by humans, spatial separation of a target talker and an interfering sound source often increases the intelligibility of the target. A large portion of this improvement arises from a simple acoustic effect: separating the target and interferer generally increases the target-to-interferer energy ratio at one ear. When the target is near threshold, binaural processing provides an additional small, but important improvement. Finally, in conditions where both target and interferer are audible but are difficult to segregate, spatial separation can improve the ability to stream the sources and interpret the target message. The echoes and reverberation present in everyday environments influence all of these factors, altering the target-to-interferer energy ratio, the effectiveness of binaural processing, and the ability to stream the sources. A number of studies will be reviewed to demonstrate how these different factors influence speech intelligibility and affect cocktail party conversations in everyday environments. [Work supported by the Air Force Office of Scientific Research and the Alfred P. Sloan Foundation.]
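
    The "simple acoustic effect" mentioned above, head shadow raising the target-to-interferer energy ratio (TIR) at one ear when the sources are separated, can be made concrete with a small sketch. The levels and the 6 dB head-shadow attenuation below are illustrative assumptions.

```python
def tir_db(target_db, interferer_db):
    """Target-to-interferer energy ratio (dB) at one ear."""
    return target_db - interferer_db

# Hypothetical levels: target straight ahead (equal at both ears); interferer
# at +90 degrees azimuth, attenuated ~6 dB at the far (left) ear by head shadow.
tir_left = tir_db(60.0, 54.0)    # far ear: interferer shadowed
tir_right = tir_db(60.0, 60.0)   # near ear: no shadow benefit

# The listener can exploit whichever ear has the higher ratio (the "better ear").
print(f"TIR left = {tir_left:.1f} dB, TIR right = {tir_right:.1f} dB")
print(f"better-ear TIR = {max(tir_left, tir_right):.1f} dB")
```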

  13. From maps to navigation: the role of cues in finding locations in a virtual environment.

    Science.gov (United States)

    Hutcheson, Adam T; Wedell, Douglas H

    2012-08-01

    In two experiments, participants navigated through a large arena within a virtual environment (VE) to a location encoded in memory from a map. In both experiments, participants recalled locations by navigating through the VE, but in Experiment 2, they additionally recalled the locations on the original map. Two cues were located outside and above the walls of the arena at either north-south locations or east-west locations. The pattern of angular bias was used to infer how the cues affected the creation of spatial categories influencing memory for location in the two tasks. When participants navigated to remembered locations in the VE, two cue-based spatial categories were inferred, with cues serving to demarcate the boundaries of the categories. When participants remembered locations on the original map, two cue-based categories were again formed, but with cues serving as category prototypes. The pattern of results implies that cue-based spatial categorization schemes may be formulated differently at the memory retrieval stage depending on task constraints.

  14. Auditory Grouping Mechanisms Reflect a Sound’s Relative Position in a Sequence

    Directory of Open Access Journals (Sweden)

    Kevin Thomas Hill

    2012-06-01

    Full Text Available The human brain uses acoustic cues to decompose complex auditory scenes into their components. For instance, to improve communication, a listener can select an individual stream, such as a talker in a crowded room, based on cues such as pitch or location. Despite numerous investigations into auditory streaming, few have demonstrated clear correlates of perception; instead, in many studies perception covaries with changes in physical stimulus properties (e.g., frequency separation). In the current report, we employ a classic ABA streaming paradigm and human electroencephalography (EEG) to disentangle the individual contributions of stimulus properties from changes in auditory perception. We find that changes in perceptual state – that is, the perception of one versus two auditory streams with physically identical stimuli – and changes in physical stimulus properties are reflected independently in the event-related potential (ERP) during overlapping time windows. These findings emphasize the necessity of controlling for stimulus properties when studying perceptual effects of streaming. Furthermore, the independence of the perceptual effect from stimulus properties suggests the neural correlates of streaming reflect a tone's relative position within a larger sequence (1st, 2nd, 3rd) rather than its acoustics. By clarifying the role of stimulus attributes along with perceptual changes, this study helps explain precisely how the brain is able to distinguish a sound source of interest in an auditory scene.
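
    The ABA paradigm referred to above presents repeating A-B-A_ triplets (the underscore marking a silent gap); small A-B frequency separations tend to be heard as one stream and large ones as two. A minimal synthesis sketch, with all parameter values as illustrative assumptions:

```python
import numpy as np

def aba_sequence(f_a=500.0, semitones=6, tone_ms=100, n_triplets=10, fs=44100):
    """Synthesize an ABA_ tone-triplet sequence: B lies `semitones` above A,
    and each triplet ends with a silent gap of one tone duration."""
    f_b = f_a * 2.0 ** (semitones / 12.0)
    n = int(fs * tone_ms / 1000)
    t = np.arange(n) / fs
    ramp = np.hanning(n)                       # onset/offset ramps avoid clicks
    tone = lambda f: ramp * np.sin(2 * np.pi * f * t)
    triplet = np.concatenate([tone(f_a), tone(f_b), tone(f_a), np.zeros(n)])
    return np.tile(triplet, n_triplets)

# Separations of 1-3 semitones are typically heard as a single galloping
# stream; the 6-semitone default here usually splits into two streams.
signal = aba_sequence()
```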

  15. Head direction cell representations maintain internal coherence during conflicting proximal and distal cue rotations: Comparison with hippocampal place cells

    OpenAIRE

    Yoganarasimha, D.; Yu, Xintian; Knierim, James J.

    2006-01-01

    Place cells of the hippocampal formation encode a spatial representation of the environment, and the orientation of this representation is apparently governed by the head direction cell system. The representation of a well-explored environment by CA1 place cells can be split when there is conflicting information from salient proximal and distal cues, as some place fields rotate to follow the distal cues while others rotate to follow the proximal cues (Knierim, 2002a). In contrast, the CA3 rep...

  16. The complementary roles of auditory and motor information evaluated in a Bayesian perceptuo-motor model of speech perception.

    Science.gov (United States)

    Laurent, Raphaël; Barnaud, Marie-Lou; Schwartz, Jean-Luc; Bessière, Pierre; Diard, Julien

    2017-10-01

    There is a consensus concerning the view that both auditory and motor representations intervene in the perceptual processing of speech units. However, the question of the functional role of each of these systems remains seldom addressed and poorly understood. We capitalized on the formal framework of Bayesian Programming to develop COSMO (Communicating Objects using Sensory-Motor Operations), an integrative model that allows principled comparisons of purely motor or purely auditory implementations of a speech perception task and tests the gain of efficiency provided by their Bayesian fusion. Here, we show 3 main results: (a) In a set of precisely defined "perfect conditions," auditory and motor theories of speech perception are indistinguishable; (b) When a learning process that mimics speech development is introduced into COSMO, it departs from these perfect conditions. Then auditory recognition becomes more efficient than motor recognition in dealing with learned stimuli, while motor recognition is more efficient in adverse conditions. We interpret this result as a general "auditory-narrowband versus motor-wideband" property; and (c) Simulations of plosive-vowel syllable recognition reveal possible cues from motor recognition for the invariant specification of the place of plosive articulation in context that are lacking in the auditory pathway. This provides COSMO with a second property, where auditory cues would be more efficient for vowel decoding and motor cues for plosive articulation decoding. These simulations provide several predictions, which are in good agreement with experimental data and suggest that there is natural complementarity between auditory and motor processing within a perceptuo-motor theory of speech perception. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
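
    The Bayesian fusion at the heart of COSMO can be caricatured in a few lines: assuming the auditory and motor routes provide conditionally independent evidence about the speech category, their likelihoods multiply. The numbers below are invented for illustration and are not the model's actual distributions.

```python
import numpy as np

# Hypothetical likelihoods P(stimulus | category) from an auditory decoder
# and a motor decoder for three categories, plus a flat prior.
categories = ["ba", "da", "ga"]
p_auditory = np.array([0.60, 0.30, 0.10])  # sharp ("narrowband") evidence
p_motor    = np.array([0.40, 0.35, 0.25])  # broader ("wideband") evidence
prior      = np.ones(3) / 3

# Under conditional independence of the two routes given the category,
# Bayesian fusion multiplies the likelihoods and renormalizes.
posterior = prior * p_auditory * p_motor
posterior /= posterior.sum()

print(dict(zip(categories, np.round(posterior, 3))))
```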

  17. Temporal coherence sensitivity in auditory cortex.

    Science.gov (United States)

    Barbour, Dennis L; Wang, Xiaoqin

    2002-11-01

    Natural sounds often contain energy over a broad spectral range and consequently overlap in frequency when they occur simultaneously; however, such sounds under normal circumstances can be distinguished perceptually (e.g., the cocktail party effect). Sound components arising from different sources have distinct (i.e., incoherent) modulations, and incoherence appears to be one important cue used by the auditory system to segregate sounds into separately perceived acoustic objects. Here we show that, in the primary auditory cortex of awake marmoset monkeys, many neurons responsive to amplitude- or frequency-modulated tones at a particular carrier frequency [the characteristic frequency (CF)] also demonstrate sensitivity to the relative modulation phase between two otherwise identically modulated tones: one at CF and one at a different carrier frequency. Changes in relative modulation phase reflect alterations in temporal coherence between the two tones, and the most common neuronal response was found to be a maximum of suppression for the coherent condition. Coherence sensitivity was generally found in a narrow frequency range in the inhibitory portions of the frequency response areas (FRA), indicating that only some off-CF neuronal inputs into these cortical neurons interact with on-CF inputs on the same time scales. Over the population of neurons studied, carrier frequencies showing coherence sensitivity were found to coincide with the carrier frequencies of inhibition, implying that inhibitory inputs create the effect. The lack of strong coherence-induced facilitation also supports this interpretation. Coherence sensitivity was found to be greatest for modulation frequencies of 16-128 Hz, which is higher than the phase-locking capability of most cortical neurons, implying that subcortical neurons could play a role in the phenomenon. Collectively, these results reveal that auditory cortical neurons receive some off-CF inputs temporally matched and some temporally
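
    The stimulus manipulation described above, two identically modulated tones whose relative modulation phase sets their temporal coherence, is straightforward to synthesize. A sketch under assumed parameter values (the carrier frequencies, modulation rate, and duration are illustrative, not the study's):

```python
import numpy as np

def am_tone_pair(f_cf=4000.0, f_off=2000.0, fmod=32.0, phase_deg=0.0,
                 dur=1.0, fs=44100):
    """Two sinusoidally amplitude-modulated tones, one at the characteristic
    frequency (CF) and one off-CF; `phase_deg` is the relative modulation
    phase (0 = temporally coherent, 180 = maximally incoherent)."""
    t = np.arange(int(fs * dur)) / fs
    env_cf = 0.5 * (1 + np.sin(2 * np.pi * fmod * t))
    env_off = 0.5 * (1 + np.sin(2 * np.pi * fmod * t + np.deg2rad(phase_deg)))
    return (env_cf * np.sin(2 * np.pi * f_cf * t)
            + env_off * np.sin(2 * np.pi * f_off * t))

coherent = am_tone_pair(phase_deg=0.0)      # condition with maximal
incoherent = am_tone_pair(phase_deg=180.0)  # suppression in many neurons
```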

  18. Compression of auditory space during forward self-motion.

    Directory of Open Access Journals (Sweden)

    Wataru Teramoto

    Full Text Available BACKGROUND: Spatial inputs from the auditory periphery can be changed with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain, while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. METHODOLOGY/PRINCIPAL FINDINGS: Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated in which direction the sound was presented, forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. CONCLUSIONS/SIGNIFICANCE: These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial

  19. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks that were found superior for musicians. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant
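
    Difference limens such as the DLF are commonly estimated with adaptive staircases; a standard choice is the 2-down/1-up rule, which converges near 70.7% correct. The sketch below illustrates that general procedure, not necessarily the exact method used in this study; all names and parameter values are assumptions.

```python
import random

def estimate_dlf(respond, initial_delta=50.0, step=0.8, n_reversals=8):
    """2-down/1-up adaptive staircase, converging near 70.7% correct.
    `respond(delta)` returns True when the listener correctly detects a
    frequency difference of `delta` Hz on one trial. Returns the mean of
    the deltas at the first `n_reversals` direction reversals (the DLF)."""
    delta, streak, last_dir, reversals = initial_delta, 0, None, []
    while len(reversals) < n_reversals:
        if respond(delta):
            streak += 1
            if streak < 2:
                continue                      # need two in a row to go down
            direction, streak, new_delta = "down", 0, delta * step
        else:
            direction, streak, new_delta = "up", 0, delta / step
        if last_dir is not None and direction != last_dir:
            reversals.append(delta)           # record size at each reversal
        last_dir, delta = direction, new_delta
    return sum(reversals) / len(reversals)

# Simulated listener whose accuracy improves with larger frequency differences.
simulated = lambda d: random.random() < min(0.98, 0.5 + d / 40.0)
print(f"estimated DLF: {estimate_dlf(simulated):.1f} Hz")
```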

  20. Heightened fear in response to a safety cue and extinguished fear cue in a rat model of maternal immune activation

    Directory of Open Access Journals (Sweden)

    Susan Sangha

    2014-05-01

    Full Text Available Maternal immune activation during pregnancy is an environmental risk factor for psychiatric illnesses such as schizophrenia and autism in the offspring. Hence, changes in an array of behaviors, including behavioral flexibility, consistent with altered functioning of cortico-limbic circuits have been reported in rodent models of maternal immune activation. Surprisingly, previous studies have not examined the effect of maternal immune activation on the extinction of fear conditioning, which depends on cortico-limbic circuits. Thus, we tested the effects of treating pregnant Long Evans rats with the viral mimetic polyI:C (gestational day 15; 4 mg/kg; i.v.) on fear conditioning and extinction in the male offspring using two different tasks. In the first experiment, we observed no effect of polyI:C treatment on the acquisition or extinction of a classically conditioned fear memory in a non-discriminative auditory cue paradigm. However, polyI:C-treated offspring did increase contextual freezing during the recall of fear extinction in this non-discriminative paradigm. The second experiment utilized a recently developed task to explicitly test the ability of rats to discriminate among cues signifying fear, reward, and safety; a task that requires behavioral flexibility. To our surprise, polyI:C-treated rats acquired the task in a manner similar to saline-treated rats. However, upon subsequent extinction training, they showed significantly faster extinction of the freezing response to the fear cue. In contrast, during the extinction recall test, polyI:C-treated offspring showed enhanced freezing behavior before and after presentation of the fear cue, suggesting an impairment in their ability to regulate fear behavior. These behavioral results are integrated into the literature suggesting impairments in cortico-limbic brain function in the offspring of rats treated with polyI:C during pregnancy.

  1. Impaired cognitive performance in subjects with methamphetamine dependence during exposure to neutral versus methamphetamine-related cues.

    Science.gov (United States)

    Tolliver, Bryan K; Price, Kimber L; Baker, Nathaniel L; LaRowe, Steven D; Simpson, Annie N; McRae-Clark, Aimee L; Saladin, Michael E; DeSantis, Stacia M; Chapman, Elizabeth; Garrett, Margaret; Brady, Kathleen T

    2012-05-01

    Chronic methamphetamine abuse is associated with cognitive deficits that may impede treatment in methamphetamine-dependent patients. Exposure to methamphetamine-related cues can elicit intense craving in chronic users of the drug, but the effects of exposure to drug cues on cognitive performance in these individuals are unknown. This study assessed whether exposure to methamphetamine-related visual cues can elicit craving and/or alter dual task cognitive performance in 30 methamphetamine-dependent subjects and 30 control subjects in the laboratory. Reaction time, response errors, and inhibition errors were assessed on an auditory Go-No Go task performed by adult participants (total N = 60) while watching neutral versus methamphetamine-related video cues. Craving was assessed with the Within-Session Rating Scale modified for methamphetamine-dependent subjects. Exposure to methamphetamine-related cues elicited craving only in methamphetamine-dependent subjects. Even in the absence of methamphetamine cues, methamphetamine-dependent subjects exhibited slower reaction times and higher rates of both inhibition and response errors than control subjects did. Upon exposure to methamphetamine cues, rates of both response errors and inhibition errors increased significantly in methamphetamine-dependent subjects. Control subjects exhibited no increase in inhibition errors and only slightly increased rates of response errors upon exposure to methamphetamine cues. Response error rates, but not inhibition error rates or reaction times, during methamphetamine cue exposure were significantly associated with craving scores in methamphetamine-dependent subjects. Methamphetamine-dependent individuals exhibit cognitive performance deficits that are more pronounced during exposure to methamphetamine-related cues. Interventions that reduce cue reactivity may have utility in the treatment of methamphetamine dependence.

  2. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    Science.gov (United States)

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  3. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular

  4. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    Science.gov (United States)

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  5. Methylphenidate attenuates limbic brain inhibition after cocaine-cues exposure in cocaine abusers.

    Energy Technology Data Exchange (ETDEWEB)

    Volkow, N.D.; Wang, G.; Volkow, N.D.; Wang, G.-J.; Tomasi, D.; Telang, F.; Fowler, J.S.; Pradhan, K.; Jayne, M.; Logan, J.; Goldstein, R.Z.; Alia-Klein, N.; Wong, C.T.

    2010-07-01

    Dopamine (phasic release) is implicated in conditioned responses. Imaging studies in cocaine abusers show decreases in striatal dopamine levels, which we hypothesize may enhance conditioned responses since tonic dopamine levels modulate phasic dopamine release. To test this we assessed the effects of increasing tonic dopamine levels (using oral methylphenidate) on brain activation induced by cocaine-cues in cocaine abusers. Brain metabolism (marker of brain function) was measured with PET and ¹⁸FDG in 24 active cocaine abusers tested four times; twice watching a Neutral video (nature scenes) and twice watching a Cocaine-cues video; each video was preceded once by placebo and once by methylphenidate (20 mg). The Cocaine-cues video increased craving to the same extent with placebo (68%) and with methylphenidate (64%). In contrast, SPM analysis of metabolic images revealed that differences between Neutral versus Cocaine-cues conditions were greater with placebo than methylphenidate; whereas with placebo the Cocaine-cues decreased metabolism (p<0.005) in left limbic regions (insula, orbitofrontal, accumbens) and right parahippocampus, with methylphenidate it only decreased in auditory and visual regions, which also occurred with placebo. Decreases in metabolism in these regions were not associated with craving; in contrast the voxel-wise SPM analysis identified significant correlations with craving in anterior orbitofrontal cortex (p<0.005), amygdala, striatum and middle insula (p<0.05). This suggests that methylphenidate's attenuation of brain reactivity to Cocaine-cues is distinct from that involved in craving. Cocaine-cues decreased metabolism in limbic regions (reflects activity over 30 minutes), which contrasts with activations reported by fMRI studies (reflects activity over 2-5 minutes) that may reflect long-lasting limbic inhibition following activation. Studies to evaluate the clinical significance of methylphenidate's blunting of cue

  6. Methylphenidate attenuates limbic brain inhibition after cocaine-cues exposure in cocaine abusers.

    Directory of Open Access Journals (Sweden)

    Nora D Volkow

    2010-07-01

    Full Text Available Dopamine (phasic release) is implicated in conditioned responses. Imaging studies in cocaine abusers show decreases in striatal dopamine levels, which we hypothesize may enhance conditioned responses since tonic dopamine levels modulate phasic dopamine release. To test this we assessed the effects of increasing tonic dopamine levels (using oral methylphenidate) on brain activation induced by cocaine-cues in cocaine abusers. Brain metabolism (marker of brain function) was measured with PET and ¹⁸FDG in 24 active cocaine abusers tested four times; twice watching a Neutral video (nature scenes) and twice watching a Cocaine-cues video; each video was preceded once by placebo and once by methylphenidate (20 mg). The Cocaine-cues video increased craving to the same extent with placebo (68%) and with methylphenidate (64%). In contrast, SPM analysis of metabolic images revealed that differences between Neutral versus Cocaine-cues conditions were greater with placebo than methylphenidate; whereas with placebo the Cocaine-cues decreased metabolism (p<0.005) in left limbic regions (insula, orbitofrontal, accumbens) and right parahippocampus, with methylphenidate it only decreased in auditory and visual regions, which also occurred with placebo. Decreases in metabolism in these regions were not associated with craving; in contrast the voxel-wise SPM analysis identified significant correlations with craving in anterior orbitofrontal cortex (p<0.005), amygdala, striatum and middle insula (p<0.05). This suggests that methylphenidate's attenuation of brain reactivity to Cocaine-cues is distinct from that involved in craving. Cocaine-cues decreased metabolism in limbic regions (reflects activity over 30 minutes), which contrasts with activations reported by fMRI studies (reflects activity over 2-5 minutes) that may reflect long-lasting limbic inhibition following activation. Studies to evaluate the clinical significance of methylphenidate's blunting of cue-induced limbic

  7. Western-style diet impairs stimulus control by food deprivation state cues: Implications for obesogenic environments.

    Science.gov (United States)

    Sample, Camille H; Martin, Ashley A; Jones, Sabrina; Hargrave, Sara L; Davidson, Terry L

    2015-10-01

    In western and westernized societies, large portions of the population live in what are considered to be "obesogenic" environments. Among other things, obesogenic environments are characterized by a high prevalence of external cues that are associated with highly palatable, energy-dense foods. One prominent hypothesis suggests that these external cues become such powerful conditioned elicitors of appetitive and eating behavior that they overwhelm the internal, physiological mechanisms that serve to maintain energy balance. The present research investigated a learning mechanism that may underlie this loss of internal relative to external control. In Experiment 1, rats were provided with both auditory cues (external stimuli) and varying levels of food deprivation (internal stimuli) that they could use to solve a simple discrimination task. Despite having access to clearly discriminable external cues, we found that the deprivation cues gained substantial discriminative control over conditioned responding. Experiment 2 found that, compared to standard chow, maintenance on a "western-style" diet high in saturated fat and sugar weakened discriminative control by food deprivation cues, but did not impair learning when external cues were also trained as relevant discriminative signals for sucrose. Thus, eating a western-style diet contributed to a loss of internal control over appetitive behavior relative to external cues. We discuss how this relative loss of control by food deprivation signals may result from interference with hippocampal-dependent learning and memory processes, forming the basis of a vicious-cycle of excessive intake, body weight gain, and progressive cognitive decline that may begin very early in life. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Western-style diet impairs stimulus control by food deprivation state cues. Implications for obesogenic environments

    Science.gov (United States)

    Sample, Camille H.; Martin, Ashley A.; Jones, Sabrina; Hargrave, Sara L.; Davidson, Terry L.

    2015-01-01

    In western and westernized societies, large portions of the population live in what are considered to be “obesogenic” environments. Among other things, obesogenic environments are characterized by a high prevalence of external cues that are associated with highly palatable, energy-dense foods. One prominent hypothesis suggests that these external cues become such powerful conditioned elicitors of appetitive and eating behavior that they overwhelm the internal, physiological mechanisms that serve to maintain energy balance. The present research investigated a learning mechanism that may underlie this loss of internal relative to external control. In Experiment 1, rats were provided with both auditory cues (external stimuli) and varying levels of food deprivation (internal stimuli) that they could use to solve a simple discrimination task. Despite having access to clearly discriminable external cues, we found that the deprivation cues gained substantial discriminative control over conditioned responding. Experiment 2 found that, compared to standard chow, maintenance on a “western-style” diet high in saturated fat and sugar weakened discriminative control by food deprivation cues, but did not impair learning when external cues were also trained as relevant discriminative signals for sucrose. Thus, eating a western-style diet contributed to a loss of internal control over appetitive behavior relative to external cues. We discuss how this relative loss of control by food deprivation signals may result from interference with hippocampal-dependent learning and memory processes, forming the basis of a vicious-cycle of excessive intake, body weight gain, and progressive cognitive decline that may begin very early in life. PMID:26002280

  9. Evaluation of multimodal ground cues

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Lecuyer, Anatole; Serafin, Stefania

    2012-01-01

    This chapter presents an array of results on the perception of ground surfaces via multiple sensory modalities, with special attention to non-visual perceptual cues, notably those arising from audition and haptics, as well as interactions between them. It also reviews approaches to combining...

  10. Optimal assessment of multiple cues

    NARCIS (Netherlands)

    Fawcett, Tim W; Johnstone, Rufus A

    2003-01-01

    In a wide range of contexts from mate choice to foraging, animals are required to discriminate between alternative options on the basis of multiple cues. How should they best assess such complex multicomponent stimuli? Here, we construct a model to investigate this problem, focusing on a simple case

  11. Revisiting the "enigma" of musicians with dyslexia: Auditory sequencing and speech abilities.

    Science.gov (United States)

    Zuk, Jennifer; Bishop-Liebler, Paula; Ozernov-Palchik, Ola; Moore, Emma; Overy, Katie; Welch, Graham; Gaab, Nadine

    2017-04-01

    Previous research has suggested a link between musical training and auditory processing skills. Musicians have shown enhanced perception of auditory features critical to both music and speech, suggesting that this link extends beyond basic auditory processing. It remains unclear to what extent musicians who also have dyslexia show these specialized abilities, considering often-observed persistent deficits that coincide with reading impairments. The present study evaluated auditory sequencing and speech discrimination in 52 adults comprised of musicians with dyslexia, nonmusicians with dyslexia, and typical musicians. An auditory sequencing task measuring perceptual acuity for tone sequences of increasing length was administered. Furthermore, subjects were asked to discriminate synthesized syllable continua varying in acoustic components of speech necessary for intraphonemic discrimination, which included spectral (formant frequency) and temporal (voice onset time [VOT] and amplitude envelope) features. Results indicate that musicians with dyslexia did not significantly differ from typical musicians and performed better than nonmusicians with dyslexia for auditory sequencing as well as discrimination of spectral and VOT cues within syllable continua. However, typical musicians demonstrated superior performance relative to both groups with dyslexia for discrimination of syllables varying in amplitude information. These findings suggest a distinct profile of speech processing abilities in musicians with dyslexia, with specific weaknesses in discerning amplitude cues within speech. Because these difficulties seem to remain persistent in adults with dyslexia despite musical training, this study only partly supports the potential for musical training to enhance the auditory processing skills known to be crucial for literacy in individuals with dyslexia. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Orienting of Attention to Gaze Direction Cues in Rhesus Macaques: Species-specificity, and Effects of Cue Motion and Reward Predictiveness

    Directory of Open Access Journals (Sweden)

    Dian Yu

    2012-06-01

    Full Text Available Primates live in complex social groups and rely on social cues to direct their attention. For example, primates react faster to an unpredictable stimulus after seeing a conspecific looking in the direction of that stimulus. In the current study we tested the specificity of facial cues (gaze direction) for orienting attention and their interaction with other cues that are known to guide attention. In particular, we tested whether macaque monkeys only respond to gaze cues from conspecifics or if the effect generalizes across species. We found an attentional advantage of conspecific faces over that of other human and cartoon faces. Because gaze cues are often conveyed by gesture, we also explored the effect of image motion (a simulated glance) on the orienting of attention in monkeys. We found that the simulated glance did not significantly enhance the speed of orienting for monkey face stimuli, but had a significant effect for images of human faces. Finally, because gaze cues presumably guide attention towards relevant or rewarding stimuli, we explored whether orienting of attention was modulated by reward predictiveness. When the cue predicted reward location, face and non-face cues were effective in speeding responses towards the cued location. This effect was strongest for conspecific faces. In sum, our results suggest that while conspecific gaze cues activate an intrinsic process that reflexively directs spatial attention, its effect is relatively small in comparison to other features including motion and reward predictiveness. It is possible that gaze cues are more important for decision-making and voluntary orienting than for reflexive orienting.

  13. The acoustic and perceptual cues affecting melody segregation for listeners with a cochlear implant.

    Directory of Open Access Journals (Sweden)

    Jeremy Marozeau

    2013-11-01

    Full Text Available Our ability to listen selectively to single sound sources in complex auditory environments is termed ‘auditory stream segregation.’ This ability is affected by peripheral disorders such as hearing loss, as well as by plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from distracter notes. Four physical properties of the distracter notes were changed. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared to normal-hearing listeners without musical backgrounds, a greater difference on the perceptual dimension correlated to the temporal envelope is needed for stream segregation in CI users. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, the results show that differences in training as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device) influence the way that listeners use different acoustic cues for segregating interleaved musical streams.
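
    The analysis pipeline sketched in the abstract, multidimensional scaling of dissimilarity ratings into perceptual distances, can be illustrated with scikit-learn. The dissimilarity matrix below is invented for illustration; only the shape of the computation reflects the study.

```python
import numpy as np
from sklearn.manifold import MDS

# Invented symmetric dissimilarity matrix for four melody patterns, averaged
# across listeners (the study varied four physical properties of the notes).
d = np.array([[0.0, 2.1, 3.4, 4.0],
              [2.1, 0.0, 1.9, 3.1],
              [3.4, 1.9, 0.0, 1.2],
              [4.0, 3.1, 1.2, 0.0]])

# MDS places the patterns in a low-dimensional space so that inter-point
# distances approximate the rated dissimilarities ("perceptual distances").
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(d)

# Perceptual distance between patterns 0 and 3; regressing such distances
# on the physical cue differences would follow, as in the study.
print(np.linalg.norm(coords[0] - coords[3]))
```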

  14. The footprints of visual attention during search with 100% valid and 100% invalid cues.

    Science.gov (United States)

    Eckstein, Miguel P; Pham, Binh T; Shimozaki, Steven S

    2004-06-01

    Human performance during visual search typically improves when spatial cues indicate the possible target locations. In many instances, the performance improvement is quantitatively predicted by a Bayesian or quasi-Bayesian observer in which visual attention simply selects the information at the cued locations without changing the quality of processing or sensitivity, and ignores the information at the uncued locations. Aside from the general good agreement between the effect of the cue on model and human performance, there has been little independent confirmation that humans are effectively selecting the relevant information. In this study, we used the classification image technique to assess the effectiveness of spatial cues in the attentional selection of relevant locations and suppression of irrelevant locations indicated by spatial cues. Observers searched for a bright target among dimmer distractors that might appear (with 50% probability) in one of eight locations in visual white noise. The possible target location was indicated using a 100% valid box cue or seven 100% invalid box cues in which the only potential target location was the one left uncued. For both conditions, we found statistically significant perceptual templates shaped as differences of Gaussians at the relevant locations, with no perceptual templates at the irrelevant locations. We did not find statistically significant differences between the shapes of the inferred perceptual templates for the 100% valid and 100% invalid cue conditions. The results confirm the idea that during search visual attention allows the observer to effectively select relevant information and ignore irrelevant information. The results for the 100% invalid cues condition suggest that the selection process is not drawn automatically to the cue but can be under the observer's voluntary control.
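
    The classification image technique referred to above derives a perceptual template by averaging the per-trial noise fields according to each trial's stimulus-response outcome. A minimal sketch of the standard computation, with simulated placeholder data (the responses here are random, so the resulting image is flat; a real observer's responses produce template-shaped structure):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, h, w = 2000, 32, 32

# Placeholder trial data: a white-noise field per trial, whether the target
# was present, and the observer's yes/no response.
noise = rng.normal(0.0, 1.0, (n_trials, h, w))
present = rng.integers(0, 2, n_trials).astype(bool)
said_yes = rng.integers(0, 2, n_trials).astype(bool)

def mean_noise(stim_present, response_yes):
    """Mean noise field over trials in one stimulus-response cell."""
    sel = (present == stim_present) & (said_yes == response_yes)
    return noise[sel].mean(axis=0)

# Classic combination (Ahumada): within each stimulus class, subtract the
# mean noise on "no" trials from the mean noise on "yes" trials, then sum.
classification_image = (mean_noise(True, True) - mean_noise(True, False)
                        + mean_noise(False, True) - mean_noise(False, False))
```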

  15. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  16. Cues of control modulate the ascription of object ownership.

    Science.gov (United States)

    Scorolli, Claudia; Borghi, Anna M; Tummolini, Luca

    2017-06-06

    Knowing whether an object is owned and by whom is essential to avoid costly conflicts. We hypothesize that everyday interactions around objects are influenced by a minimal sense of object ownership grounded on respect of possession. In particular, we hypothesize that tracking object ownership can be influenced by any cue that predicts the establishment of individual physical control over objects. To test this hypothesis we used an indirect method to determine whether visual cues of physical control like spatial proximity to an object, temporal priority in seeing it, and touching it influence this minimal sense of object ownership. In Experiment 1 participants were shown a neutral object located on a table, in the reaching space of one of two characters. In Experiment 2 one character was the first to find the object then another character appeared and saw the object. In Experiments 3 and 4, spatial proximity, temporal priority, and touch are pitted against each other to assess their relative weight. After having seen the scenes, participants were required to judge the sensibility of sentences in which ownership of the object was ascribed to one of the two characters. Responses were faster when the objects were located in the reaching space of the character to whom ownership was ascribed in the sentence and when ownership was ascribed to the character who was the first to find the object. When contrasting the relevant cues, results indicate that touch is stronger than temporal priority in modulating the ascription of object ownership. However, all these effects were also influenced by contextual social cues like the gender of both characters and participants, the presence of a third-party observer, and the co-presence of characters. Consistently with our hypothesis, results indicate that many different cues of physical control influence the ascription of ownership in daily social contexts.

  17. The adaptation of visual and auditory integration in the barn owl superior colliculus with Spike Timing Dependent Plasticity.

    Science.gov (United States)

    Huo, Juan; Murray, Alan

    2009-09-01

    To localize a seen object, the superior colliculus of the barn owl integrates visual and auditory localization cues received from the brain's sensory systems. These cues are formed as visual and auditory maps. The alignment between the visual and auditory maps is very important for accurate localization in prey behavior. Blindness or prism wearing may interfere with this alignment. The juvenile barn owl can adapt its auditory map to this mismatch after several weeks of training. Here we investigate this process by building a computational model of auditory and visual integration in the deep superior colliculus (SC). The adaptation of the map alignment is based on activity-dependent axon development in the inferior colliculus (IC). This axon growth process is instructed by an inhibitory network in the SC, while the strength of the inhibition is adjusted by spike-timing-dependent plasticity (STDP). The simulation results of this model are in line with the biological experiments and support the idea that STDP is involved in the alignment of sensory maps. This model also provides a new spiking-neuron-based mechanism capable of eliminating the disparity in visual and auditory map integration.
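
    A pair-based STDP rule of the kind invoked above changes a synaptic weight according to the interval between pre- and postsynaptic spikes. The sketch below shows the canonical exponential learning window; the parameter values are illustrative assumptions, not those of the owl model.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for dt_ms = t_post - t_pre (ms):
    pre-before-post (dt >= 0) potentiates; post-before-pre depresses,
    each with an exponentially decaying window."""
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

# A spike pairing at +5 ms strengthens the connection; at -5 ms it weakens it.
print(stdp_dw([5.0, -5.0]))
```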

  18. Amodal brain activation and functional connectivity in response to high-energy-density food cues in obesity.

    Science.gov (United States)

    Carnell, Susan; Benson, Leora; Pantazatos, Spiro P; Hirsch, Joy; Geliebter, Allan

    2014-11-01

    The obesogenic environment is pervasive, yet only some people become obese. The aim of this study was to investigate whether obese individuals show differential neural responses to visual and auditory food cues, independent of cue modality. Obese (BMI 29-41, n = 10) and lean (BMI 20-24, n = 10) females underwent fMRI scanning during the presentation of auditory (spoken word) and visual (photograph) cues representing high-energy-density (high-ED) and low-ED foods. The effect of obesity on whole-brain activation, and on functional connectivity with the midbrain/VTA, was examined. Obese compared with lean women showed greater modality-independent activation of the midbrain/VTA and putamen in response to high-ED (vs. low-ED) cues, as well as relatively greater functional connectivity between the midbrain/VTA and cerebellum (P < 0.05). Greater neural responses to food cues within the midbrain/VTA and putamen, and altered functional connectivity between the midbrain/VTA and cerebellum, could contribute to excessive food intake in obese individuals. © 2014 The Obesity Society.
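
    In outline, the connectivity measure reported here is a seed-based correlation. The sketch below shows the generic form of such an analysis, with a simulated BOLD array standing in for real data; the study's actual preprocessing and group statistics are not reproduced:

      import numpy as np

      def seed_connectivity(bold, seed_mask):
          """Correlate the mean seed time course with every voxel.

          bold: array (n_voxels, n_timepoints); seed_mask: boolean (n_voxels,).
          Returns a Pearson r per voxel. Generic illustration only.
          """
          seed_ts = bold[seed_mask].mean(axis=0)
          seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
          vox = bold - bold.mean(axis=1, keepdims=True)
          vox /= vox.std(axis=1, keepdims=True)
          return vox @ seed_z / bold.shape[1]

      # Toy data: 100 voxels x 200 volumes; "seed" = first 5 voxels.
      rng = np.random.default_rng(0)
      bold = rng.standard_normal((100, 200))
      mask = np.zeros(100, dtype=bool)
      mask[:5] = True
      r = seed_connectivity(bold, mask)
      print(r.shape)  # (100,)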

  19. Deficits in auditory processing contribute to impairments in vocal affect recognition in autism spectrum disorders: A MEG study.

    Science.gov (United States)

    Demopoulos, Carly; Hopkins, Joyce; Kopald, Brandon E; Paulson, Kim; Doyle, Lauren; Andrews, Whitney E; Lewine, Jeffrey David

    2015-11-01

    The primary aim of this study was to examine whether there is an association between magnetoencephalography-based (MEG) indices of basic cortical auditory processing and vocal affect recognition (VAR) ability in individuals with autism spectrum disorder (ASD). MEG data were collected from 25 children/adolescents with ASD and 12 control participants using a paired-tone paradigm to measure quality of auditory physiology, sensory gating, and rapid auditory processing. Group differences in auditory processing and vocal affect recognition ability were examined, and the relationship between auditory processing differences and vocal affect recognition deficits was assessed within the ASD group. Replicating prior studies, participants with ASD showed longer M1n latencies and impaired rapid processing compared with control participants. These variables were significantly related to VAR, with the linear combination of auditory processing variables accounting for approximately 30% of the variability after controlling for age and language skills in participants with ASD. VAR deficits in ASD are typically interpreted as part of a core, higher-order dysfunction of the "social brain"; however, these results suggest they may also reflect basic deficits in auditory processing that compromise the extraction of socially relevant cues from the auditory environment. As such, therapeutic targeting of sensory dysfunction in ASD may have additional positive implications for other functional deficits. (c) 2015 APA, all rights reserved.
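
    The reported ~30% figure corresponds to a hierarchical regression in which the MEG variables are entered after the covariates. A minimal sketch of that logic, using simulated data and hypothetical variable names (not the study's dataset), might look like:

      import numpy as np
      import statsmodels.api as sm

      # Step 1: covariates only; Step 2: add the auditory variables.
      # The gain in R-squared estimates their unique contribution.
      rng = np.random.default_rng(1)
      n = 25
      age, lang = rng.normal(size=n), rng.normal(size=n)
      m1_latency, rapid_proc = rng.normal(size=n), rng.normal(size=n)
      var_score = 0.5 * m1_latency + 0.4 * rapid_proc + rng.normal(size=n)

      step1 = sm.OLS(var_score,
                     sm.add_constant(np.column_stack([age, lang]))).fit()
      step2 = sm.OLS(var_score,
                     sm.add_constant(np.column_stack(
                         [age, lang, m1_latency, rapid_proc]))).fit()
      print(f"delta R^2 = {step2.rsquared - step1.rsquared:.2f}")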

  20. The Perception of Auditory Motion

    Science.gov (United States)

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms for exploring the perception of auditory motion. At the same time, the deployment of these technologies in command-and-control as well as entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves rapidly disentangling the changes in sound-source location produced by rotations and translations of the head in space (self-motion) from those produced by actual source motion. The fact that we perceive our auditory world as stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029
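
    The self-motion compensation the review describes can be made concrete with a toy 2-D computation: adding the head's yaw back to a source's head-relative azimuth recovers a world-centred direction, so a static source is perceived as static despite head movement. The geometry below is a deliberate simplification of what listeners accomplish with noisy cues:

      def world_azimuth(head_rel_az_deg, head_yaw_deg):
          """World-centred azimuth from head-relative azimuth and head yaw,
          wrapped to (-180, 180]. Toy 2-D model only."""
          return (head_rel_az_deg + head_yaw_deg + 180.0) % 360.0 - 180.0

      # A static source at +30 deg in the world: as the head turns from
      # 0 to +30 deg, the head-relative azimuth sweeps 30 -> 0, but the
      # recovered world azimuth stays constant.
      for yaw in (0.0, 10.0, 20.0, 30.0):
          print(yaw, world_azimuth(30.0 - yaw, yaw))  # always 30.0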

  1. Sex difference in cue strategy in a modified version of the Morris water task: correlations between brain and behaviour.

    Science.gov (United States)

    Keeley, Robin J; Tyndall, Amanda V; Scott, Gavin A; Saucier, Deborah M

    2013-01-01

    Sex differences in spatial memory function have been reported with mixed results in the literature, with some studies showing male advantages and others showing no differences. When the estrus cycle is considered in females, results are likewise mixed as to whether high or low circulating estradiol confers an advantage in spatial navigation tasks. Research involving humans and rodents has demonstrated that males preferentially employ Euclidean strategies and geometric cues to navigate, whereas females employ landmark strategies and cues. This study used the water-based snowcone maze to assess male and female preference for landmark or geometric cues, with specific emphasis on the effects of estrus cycle phase in female rats. Performance and preference for the geometric cue were examined in relation to total hippocampal and hippocampal subregion (CA1/2, CA3, and dentate gyrus) volumes and entorhinal cortex thickness, in order to determine the relation between strategy, spatial performance, and brain area size. The study revealed that males outperformed females overall during training trials, relied on the geometric cue when the platform was moved, and showed significant correlations between entorhinal cortex thickness and spatial memory performance. No gross differences in behavioural performance were observed within females when accounting for cyclicity, and only total hippocampal volume was correlated with performance during the learning trials. This study demonstrates the sex-specific use of cues and brain areas in a spatial learning task.
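
    The brain-behaviour correlations reported here are standard Pearson correlations between a structural measure and a performance score. A sketch with simulated data (no values from the study are reproduced) looks like:

      import numpy as np
      from scipy.stats import pearsonr

      # Hypothetical: entorhinal cortex thickness vs. a latency-like
      # spatial memory score, where thicker cortex predicts lower latency.
      rng = np.random.default_rng(2)
      thickness = rng.normal(0.6, 0.05, size=20)           # mm, simulated
      memory = 40 - 30 * thickness + rng.normal(0, 1, 20)  # simulated score
      r, p = pearsonr(thickness, memory)
      print(f"r = {r:.2f}, p = {p:.3f}")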

  2. Musicians’ Online Performance during Auditory and Visual Statistical Learning Tasks

    Science.gov (United States)

    Mandikal Vasuki, Pragati R.; Sharma, Mridula; Ibrahim, Ronny K.; Arciuli, Joanne

    2017-01-01

    Musicians’ brains are considered a functional model of neuroplasticity due to the structural and functional changes associated with long-term musical training. In this study, we examined the implicit extraction of statistical regularities from a continuous stream of stimuli, that is, statistical learning (SL). We investigated whether long-term musical training is associated with better extraction of statistical cues in an auditory SL (aSL) task and a visual SL (vSL) task, both using the embedded triplet paradigm. Online measures, characterized by event-related potentials (ERPs), were recorded during a familiarization phase while participants were exposed to a continuous stream of individually presented pure tones in the aSL task or individually presented cartoon figures in the vSL task. Unbeknownst to participants, the stream was composed of triplets. Musicians showed advantages over non-musicians in the online measure (early N1 and N400 triplet onset effects) during the aSL task, whereas no differences between musicians and non-musicians emerged in the vSL task. These results show that musical training is associated with enhanced extraction of statistical cues in the auditory domain only. PMID:28352223
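
    The embedded triplet paradigm used in the familiarization phase can be sketched as follows: fixed triplets are concatenated in randomized order, so transitional probability is 1.0 within a triplet and lower at triplet boundaries. The tokens below are placeholders for the pure tones or cartoon figures:

      import random

      TRIPLETS = [("A", "B", "C"), ("D", "E", "F"),
                  ("G", "H", "I"), ("J", "K", "L")]

      def make_stream(n_blocks, rng=random.Random(0)):
          """Concatenate shuffled triplet blocks into one continuous stream,
          avoiding immediate repetition of the same triplet."""
          stream, prev = [], None
          for _ in range(n_blocks):
              order = TRIPLETS[:]
              rng.shuffle(order)
              if order[0] == prev:
                  order[0], order[1] = order[1], order[0]
              for triplet in order:
                  stream.extend(triplet)
              prev = order[-1]
          return stream

      print(make_stream(2)[:9])  # first three triplets of the stream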

  3. A perceptual study of how rapidly and accurately audiovisual cues to utterance-final boundaries can be interpreted in Chinese and English

    NARCIS (Netherlands)

    Bi, Ran; Swerts, Marc

    2017-01-01

    Speakers and their addressees make use of both auditory and visual features as cues to the end of a speaking turn. Prior work, mostly based on analyses of languages like Dutch and English, has shown that intonational markers such as melodic boundary tones as well as variation in eye gaze behaviour

  4. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  5. Use of binaural and monaural cues to identify the lateral position of a virtual object using echoes.

    Science.gov (United States)

    Rowan, Daniel; Papadopoulos, Timos; Edwards, David; Allen, Robert

    2015-05-01

    Under certain conditions, sighted and blind humans can use echoes to discern the characteristics of otherwise silent objects. Previous research concluded that robust horizontal-plane object localisation ability, without the use of head movement, depends on information above 2 kHz. While a strong interaural level difference (ILD) cue is available, it was not clear whether listeners were using that cue or the monaural level cue that necessarily accompanies ILD. In this experiment, 13 sighted and normal-hearing listeners were asked to identify the right-vs.-left position of an object in virtual auditory space. Sounds were manipulated to remove binaural cues (binaural vs. diotic presentation) and to prevent the use of monaural level cues (using level roving). With high- (>2 kHz), but not low- (<2 kHz), frequency bands of noise, performance with binaural presentation and level rove exceeded that expected from the use of monaural level cues and that with diotic presentation. It is argued that a high-frequency binaural cue (most likely ILD), and not a monaural level cue, is crucial for robust object localisation without head movement. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
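
    The two stimulus manipulations described, imposing an ILD and roving overall level so that the monaural level cue becomes unreliable, can be sketched as below. All parameter values are illustrative assumptions, not the study's:

      import numpy as np

      def make_trial(n=4410, ild_db=10.0, rove_db=10.0,
                     rng=np.random.default_rng(3)):
          """Toy stereo noise burst: an ILD favouring the left ear is
          imposed, then the overall level is roved so that neither ear's
          absolute level reliably indicates the object's side."""
          noise = rng.standard_normal(n)
          rove = rng.uniform(-rove_db / 2, rove_db / 2)      # dB, both ears
          left = noise * 10 ** ((+ild_db / 2 + rove) / 20)   # louder ear
          right = noise * 10 ** ((-ild_db / 2 + rove) / 20)  # quieter ear
          return np.column_stack([left, right])

      # The rove shifts both channels together, so the ILD survives intact:
      stim = make_trial()
      rms = np.sqrt(np.mean(stim ** 2, axis=0))
      print(round(20 * np.log10(rms[0] / rms[1]), 1))  # 10.0 dB, every trial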

  6. Responses of mink to auditory stimuli: Prerequisites for applying the ‘cognitive bias’ approach

    DEFF Research Database (Denmark)

    Svendsen, Pernille Maj; Malmkvist, Jens; Halekoh, Ulrich

    2012-01-01

    The aim of the study was to determine and validate prerequisites for applying a cognitive (judgement) bias approach to assessing welfare in farmed mink (Neovison vison). We investigated discrimination ability and associative learning ability using auditory