WorldWideScience

Sample records for auditory spatial cues

  1. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    DEFF Research Database (Denmark)

    Gil Carvajal, Juan Camilo; Cubick, Jens; Santurette, Sébastien;

    2016-01-01

    …whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings… decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between…

  2. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    Science.gov (United States)

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in…

  3. Preconditioning of Spatial and Auditory Cues: Roles of the Hippocampus, Frontal Cortex, and Cue-Directed Attention

    Directory of Open Access Journals (Sweden)

    Andrew C. Talk

    2016-12-01

    Full Text Available Loss of function of the hippocampus or frontal cortex is associated with reduced performance on memory tasks, in which subjects are incidentally exposed to cues at specific places in the environment and are subsequently asked to recollect the location at which the cue was experienced. Here, we examined the roles of the rodent hippocampus and frontal cortex in cue-directed attention during encoding of memory for the location of a single incidentally experienced cue. During a spatial sensory preconditioning task, rats explored an elevated platform while an auditory cue was incidentally presented at one corner. The opposite corner acted as an unpaired control location. The rats demonstrated recollection of location by avoiding the paired corner after the auditory cue was in turn paired with shock. Damage to either the dorsal hippocampus or the frontal cortex impaired this memory ability. However, we also found that hippocampal lesions enhanced attention directed towards the cue during the encoding phase, while frontal cortical lesions reduced cue-directed attention. These results suggest that the deficit in spatial sensory preconditioning caused by frontal cortical damage may be mediated by inattention to the location of cues during the latent encoding phase, while deficits following hippocampal damage must be related to other mechanisms such as generation of neural plasticity.

  4. Proprioceptive cues modulate further processing of spatially congruent auditory information: a high-density EEG study.

    Science.gov (United States)

    Simon-Dack, S L; Teder-Sälejärvi, W A

    2008-07-18

    Multisensory integration and interaction occur when bimodal stimuli are presented as either spatially congruent or incongruent, but temporally coincident. We investigated whether proprioceptive cues interact with auditory attention to one of two sound sources in free-field. The participant's task was to attend to either the left or right speaker and to respond to occasional increased-bandwidth targets via a footswitch. We recorded high-density EEG in three experimental conditions: the participants either held the speakers in their hands (Hold), reached out close to them (Reach), or had their hands in their lap (Lap). In the last two conditions, the auditory event-related potentials (ERPs) revealed a prominent negativity around 200 ms post-stimulus (N2 wave) over fronto-central areas, which is a reliable index of further processing of spatial stimulus features in free-field. The N2 wave was markedly attenuated in the 'Hold' condition, which suggests that proprioceptive cues solidify spatial information computed by the auditory system, thereby alleviating the need for further processing of spatial coordinates based solely on auditory information.

  5. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    Directory of Open Access Journals (Sweden)

    Alexandre Lehmann

    Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.

  6. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    Science.gov (United States)

    Lehmann, Alexandre; Schönwiesner, Marc

    2014-01-01

    Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.
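The phase locking reported above is commonly quantified across trials at a frequency of interest. The sketch below computes a phase-locking value (PLV) from per-trial FFT phases; this is an illustrative metric with synthetic signals, not the authors' actual analysis pipeline, and all signal parameters are invented.

```python
import numpy as np

def phase_locking_value(trials, fs, freq):
    """Phase-locking value of a set of trials at one frequency.

    trials: array of shape (n_trials, n_samples)
    fs:     sampling rate in Hz
    freq:   frequency of interest in Hz
    """
    n = trials.shape[1]
    bins = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(bins - freq))           # nearest FFT bin
    spectra = np.fft.rfft(trials, axis=1)[:, k]  # complex value per trial
    phasors = spectra / np.abs(spectra)          # unit phasors (phase only)
    return np.abs(phasors.mean())                # 1 = perfect phase locking

# Synthetic check: identical 100-Hz phase across trials gives PLV near 1,
# random phases give a PLV near 0.
fs = 1000
t = np.arange(0.0, 0.5, 1.0 / fs)
rng = np.random.default_rng(0)
locked = np.vstack([np.sin(2 * np.pi * 100 * t) for _ in range(50)])
jittered = np.vstack([np.sin(2 * np.pi * 100 * t + rng.uniform(0, 2 * np.pi))
                      for _ in range(50)])
print(round(phase_locking_value(locked, fs, 100), 2))  # 1.0
print(phase_locking_value(jittered, fs, 100) < 0.4)
```

With 50 trials of uniformly random phase, the expected PLV is roughly 1/sqrt(50) ≈ 0.14, so the contrast with the perfectly locked case is large.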

  7. Potential for using visual, auditory, and olfactory cues to manage foraging behaviour and spatial distribution of rangeland livestock

    Science.gov (United States)

    This paper reviews the literature and reports on the current state of knowledge regarding the potential for managers to use visual (VC), auditory (AC), and olfactory (OC) cues to manage foraging behavior and spatial distribution of rangeland livestock. We present evidence that free-ranging livestock...

  8. ERP Indications for Sustained and Transient Auditory Spatial Attention with Different Lateralization Cues

    Science.gov (United States)

    Widmann, Andreas; Schröger, Erich

    The presented study was designed to investigate ERP effects of auditory spatial attention in a sustained attention condition (where the to-be-attended location is defined in a blockwise manner) and in a transient attention condition (where the to-be-attended location is defined in a trial-by-trial manner). Lateralization in the azimuth plane was manipulated (a) via monaural presentation of left- and right-ear sounds, (b) via interaural intensity differences, (c) via interaural time differences, (d) via an artificial-head recording, and (e) via free-field stimulation. Ten participants were presented with frequent Nogo stimuli and infrequent Go stimuli. In one half of the experiment participants were instructed to press a button if they detected a Go stimulus at a predefined side (sustained attention); in the other half they were required to detect Go stimuli following an arrow cue at the cued side (transient attention). Results revealed negative differences (Nd) between ERPs elicited by to-be-attended and to-be-ignored sounds in all conditions. These Nd effects were larger for the sustained than for the transient attention condition, indicating that attentional selection according to spatial criteria is improved when subjects can focus on one and the same location for a series of stimuli.
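The interaural time differences mentioned in (c) can be approximated with the classic Woodworth rigid-sphere head model; this is a standard textbook formula, not necessarily the one used in the study above, and the head radius is a typical assumed value:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference (seconds) for a distant source.

    Woodworth model: ITD = (a / c) * (sin(theta) + theta), with theta the
    azimuth in radians from the median plane, a the head radius, and c the
    speed of sound. Assumes a rigid spherical head and a far-field source.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (math.sin(theta) + theta)

# A source at 90 degrees azimuth yields the maximum ITD, roughly 0.66 ms
# for an 8.75 cm head radius; a source straight ahead yields zero.
print(round(woodworth_itd(90) * 1e3, 2))  # 0.66
print(woodworth_itd(0))                   # 0.0
```

Interaural intensity differences, in contrast, depend strongly on frequency (head shadowing) and have no comparably simple closed form.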

  9. Competition between auditory and visual spatial cues during visual task performance

    NARCIS (Netherlands)

    Koelewijn, T.; Bronkhorst, A.; Theeuwes, J.

    2009-01-01

    There is debate in the crossmodal cueing literature as to whether capture of visual attention by means of sound is a fully automatic process. Recent studies show that when visual attention is endogenously focused, sound still captures attention. The current study investigated whether there is interaction…

  10. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 'training' steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration, corresponding to training) with all feedback removed. Mean percentage error was 11.5% (SD ± 7.0%) for visual cues and 12.9% (SD ± 11.8%) for auditory cues. Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, their mean accuracy approached that for visual cues, and initial results suggest that a limited amount of practice using auditory cues can improve performance.
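The accuracy measure reported above (mean percentage error with its standard deviation) can be computed directly from reproduced step values. A minimal sketch; the step-length data below are invented for illustration, not taken from the study:

```python
import numpy as np

def mean_percentage_error(observed, target):
    """Mean absolute percentage error (and sample SD) of reproduced
    steps relative to a target value."""
    observed = np.asarray(observed, dtype=float)
    errors = np.abs(observed - target) / target * 100.0
    return errors.mean(), errors.std(ddof=1)

# Hypothetical step lengths (cm) reproduced from a 60 cm visual target.
steps = [55.2, 63.1, 58.4, 66.0, 61.5]
mean_err, sd = mean_percentage_error(steps, 60.0)
print(f"mean error = {mean_err:.1f}%, SD = {sd:.1f}%")
```

Using the absolute error, as here, measures accuracy regardless of whether subjects over- or under-shoot; the signed mean would instead reveal a systematic bias.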

  11. Cross-modal cueing in audiovisual spatial attention

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias

    2015-01-01

    Visual processing is most effective at the location of our attentional focus. It has long been known that various spatial cues can direct visuospatial attention and influence the detection of auditory targets. Cross-modal cueing, however, seems to depend on the type of the visual cue: facilitation… that the perception of multisensory signals is modulated by a single, supramodal system operating in a top-down manner (Experiment 1). In contrast, bottom-up control of attention, as observed in the exogenous cueing task of Experiment 2, mainly exerts its influence through modality-specific subsystems. Experiment 3…

  12. Retrosplenial Cortex Is Required for the Retrieval of Remote Memory for Auditory Cues

    Science.gov (United States)

    Todd, Travis P.; Mehlman, Max L.; Keene, Christopher S.; DeAngeli, Nicole E.; Bucci, David J.

    2016-01-01

    The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of…

  13. Designing auditory cues for Parkinson's disease gait rehabilitation.

    Science.gov (United States)

    Cancela, Jorge; Moreno, Eugenio M; Arredondo, Maria T; Bonato, Paolo

    2014-01-01

    Recent work has shown that Parkinson's disease (PD) patients can benefit greatly from performing rehabilitation exercises based on audio cueing and music therapy. In particular, gait can benefit from repetitive sessions of exercises using auditory cues. Nevertheless, all the experiments to date have been based on the use of a metronome as the auditory stimulus. Within this work, Human-Computer Interaction methodologies have been used to design new cues that could improve the long-term engagement of PD patients in these repetitive routines. The study has also been extended to commercial music and musical pieces by analyzing features and characteristics that could strengthen the engagement of PD patients in rehabilitation tasks.

  14. The plastic ear and perceptual relearning in auditory spatial perception.

    Science.gov (United States)

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.

  15. The plastic ear and perceptual relearning in auditory spatial perception.

    Directory of Open Access Journals (Sweden)

    Simon eCarlile

    2014-08-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear moulds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localisation, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear moulds or through virtual auditory space stimulation using non-individualised spectral cues. The work with ear moulds demonstrates that a relatively short period of training involving sensory-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide a spatial code but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.

  16. When Symbolic Spatial Cues Go before Numbers

    Science.gov (United States)

    Herrera, Amparo; Macizo, Pedro

    2011-01-01

    This work explores the effect of spatial cueing on number processing. Participants performed a parity judgment task. However, shortly before the target number, a cue (arrow pointing to left, arrow pointing to right or a cross) was centrally presented. In Experiment 1, in which responses were lateralized, the cue direction modulated the interaction…

  17. An auditory cue-depreciation effect.

    Science.gov (United States)

    Gibson, J M; Watkins, M J

    1991-01-01

    An experiment is reported in which subjects first heard a list of words and then tried to identify these same words from degraded utterances. Paralleling previous findings in the visual modality, the probability of identifying a given utterance was reduced when the utterance was immediately preceded by other, more degraded, utterances of the same word. A second experiment replicated this "cue-depreciation effect" and in addition found the effect to be weakened, if not eliminated, when the target word was not included in the initial list or when the test was delayed by two days.

  18. Spatial audition in a static virtual environment: the role of auditory-visual interaction

    Directory of Open Access Journals (Sweden)

    Isabelle Viaud-Delmon

    2009-04-01

    The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysics studies that auditory-visual interactions obey complex rules and that multisensory conflicts may disrupt the participant's engagement with the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones in static conditions. The auditory localization performances observed in the present study are in line with those reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of spatially accurate auditory and visual rendering for VR setups.

  19. The effect of visual and auditory cues on seat preference in an opera theater.

    Science.gov (United States)

    Jeon, Jin Yong; Kim, Yong Hee; Cabrera, Densil; Bassett, John

    2008-06-01

    Opera performance conveys both visual and auditory information to an audience, and so opera theaters should be evaluated in both domains. This study investigates the effect of static visual and auditory cues on seat preference in an opera theater. Acoustical parameters were measured and visibility was analyzed for nine seats. Subjective assessments for visual-only, auditory-only, and auditory-visual preferences for these seat positions were made through paired-comparison tests. In the cases of visual-only and auditory-only subjective evaluations, preference judgment tests on a rating scale were also employed. Visual stimuli were based on still photographs, and auditory stimuli were based on binaural impulse responses convolved with a solo tenor recording. For the visual-only experiment, preference is predicted well by measurements related to the angle of seats from the theater midline at the center of the stage, the size of the photographed stage view, the visual obstruction, and the distance from the stage. Sound pressure level was the dominant predictor of auditory preference in the auditory-only experiment. In the cross-modal experiments, both auditory and visual preferences were shown to contribute to overall impression, but auditory cues were more influential than the static visual cues. The results show that both positive visual-only and positive auditory-only evaluations contribute positively to assessments of seat quality.
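The auditory stimuli above were produced by convolving binaural impulse responses with a recording. That rendering step can be sketched as follows; the impulse responses below are toy delta functions standing in for measured ones, and all parameters are illustrative:

```python
import numpy as np

def binaural_render(mono, ir_left, ir_right):
    """Convolve a mono recording with left/right binaural impulse
    responses to produce a two-channel (stereo) signal."""
    left = np.convolve(mono, ir_left)
    right = np.convolve(mono, ir_right)
    return np.stack([left, right], axis=-1)  # shape (n_samples, 2)

# Toy impulse responses: the right ear receives the sound 10 samples
# later and attenuated, crudely mimicking a source on the listener's left.
fs = 44100
mono = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s, 440 Hz tone
ir_left = np.zeros(64)
ir_left[0] = 1.0
ir_right = np.zeros(64)
ir_right[10] = 0.5
stereo = binaural_render(mono, ir_left, ir_right)
print(stereo.shape)  # (44163, 2)
```

Measured binaural room impulse responses additionally encode reflections and reverberation, which is what carries the room-dependent cues discussed in several of the records above.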

  20. Early, but not late visual distractors affect movement synchronization to a temporal-spatial visual cue.

    Science.gov (United States)

    Booth, Ashley J; Elliott, Mark T

    2015-01-01

    The ease of synchronizing movements to a rhythmic cue is dependent on the modality of the cue presentation: timing accuracy is much higher when synchronizing with discrete auditory rhythms than an equivalent visual stimulus presented through flashes. However, timing accuracy is improved if the visual cue presents spatial as well as temporal information (e.g., a dot following an oscillatory trajectory). Similarly, when synchronizing with an auditory target metronome in the presence of a second visual distracting metronome, the distraction is stronger when the visual cue contains spatial-temporal information rather than temporal only. The present study investigates individuals' ability to synchronize movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli impacted on maintaining synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centered on a large projection screen. The target dot was surrounded by 2, 8, or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100, or 200 ms. We found participants' timing performance was only affected in the phase-lead conditions and when there were large numbers of distractors present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronize movements. Subsequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.

  1. Auditory-visual spatial interaction and modularity

    Science.gov (United States)

    Radeau, M

    1994-02-01

    Results concerning the conditions for pairing visual and auditory data coming from spatially separate locations argue for cognitive impenetrability and computational autonomy, the pairing rules being the Gestalt principles of common fate and proximity. Other data provide evidence that pairing exhibits several properties of modular functioning. Arguments for domain specificity are inferred from comparison with audio-visual speech. Suggestion of innate specification can be found in developmental data indicating that the grouping of visual and auditory signals is supported very early in life by the same principles that operate in adults. Support for a specific neural architecture comes from neurophysiological studies of the bimodal (auditory-visual) neurons of the cat superior colliculus. Auditory-visual pairing thus seems to present the four main properties of the Fodorian module.

  2. The possible price of auditory cueing: influence on obstacle avoidance in Parkinson's disease.

    NARCIS (Netherlands)

    Nanhoe-Mahabier, S.W.; Delval, A.; Snijders, A.H.; Weerdesteijn, V.G.; Overeem, S.; Bloem, B.R.

    2012-01-01

    BACKGROUND: Under carefully controlled conditions, rhythmic auditory cueing can improve gait in patients with Parkinson's disease (PD). In complex environments, attention paid to cueing might adversely affect gait, for example when a simultaneous task, such as avoiding obstacles, has to be executed…

  3. Disentangling attention from action in the emotional spatial cueing task.

    Science.gov (United States)

    Mulckhuyse, Manon; Crombez, Geert

    2014-01-01

    In the emotional spatial cueing task, a peripheral cue--either emotional or non-emotional--is presented before target onset. A stronger cue validity effect with an emotional relative to a non-emotional cue (i.e., more efficient responding to validly cued targets relative to invalidly cued targets) is taken as an indication of emotional modulation of attentional processes. However, results from previous emotional spatial cueing studies are not consistent. Some studies find an effect at the validly cued location (shorter reaction times compared to a non-emotional cue), whereas other studies find an effect at the invalidly cued location (longer reaction times compared to a non-emotional cue). In the current paper, we explore which parameters affect emotional modulation of the cue validity effect in the spatial cueing task. Results from five experiments in healthy volunteers led to the conclusion that a threatening spatial cue did not affect attentional processes but rather indicated that motor processes are affected. A possible mechanism might be that a strongly aversive cue stimulus decreases reaction times by means of stronger action preparation. Consequently, in the case of a spatially congruent response with the peripheral cue, a stronger cue validity effect could be obtained due to stronger response priming. The implications for future research are discussed.

  4. Auditory Spatial Perception without Vision

    Science.gov (United States)

    Voss, Patrice

    2016-01-01

    Valuable insights into the role played by visual experience in shaping spatial representations can be gained by studying the effects of visual deprivation on the remaining sensory modalities. For instance, it has long been debated how spatial hearing evolves in the absence of visual input. While several anecdotal accounts tend to associate complete blindness with exceptional hearing abilities, experimental evidence supporting such claims is, however, matched by nearly equal amounts of evidence documenting spatial hearing deficits. The purpose of this review is to summarize the key findings which support either enhancements or deficits in spatial hearing observed following visual loss and to provide a conceptual framework that isolates the specific conditions under which they occur. Available evidence will be examined in terms of spatial dimensions (horizontal, vertical, and depth perception) and in terms of frames of reference (egocentric and allocentric). Evidence suggests that while early blind individuals show superior spatial hearing in the horizontal plane, they also show significant deficits in the vertical plane. Potential explanations underlying these contrasting findings will be discussed. Early blind individuals also show spatial hearing impairments when performing tasks that require the use of an allocentric frame of reference. Results obtained with late-onset blind individuals suggest that early visual experience plays a key role in the development of both spatial hearing enhancements and deficits. PMID:28066286

  5. Simultaneous EEG-fMRI brain signatures of auditory cue utilization

    Directory of Open Access Journals (Sweden)

    Mathias eScharinger

    2014-06-01

    Optimal utilization of acoustic cues during auditory categorization is a vital skill, particularly when informative cues become occluded or degraded. Consequently, the acoustic environment requires flexible choosing and switching amongst available cues. The present study targets the brain functions underlying such changes in cue utilization. Participants performed a categorization task with immediate feedback on acoustic stimuli from two categories that varied in duration and spectral properties, while we simultaneously recorded Blood Oxygenation Level Dependent (BOLD) responses in fMRI and electroencephalograms (EEGs). In the first half of the experiment, categories could be best discriminated by spectral properties. Halfway through the experiment, spectral degradation rendered the stimulus duration the more informative cue. Behaviorally, degradation decreased the likelihood of utilizing spectral cues. Spectrally degrading the acoustic signal led to increased alpha power compared to nondegraded stimuli. The EEG-informed fMRI analyses revealed that alpha power correlated with BOLD changes in inferior parietal cortex and right posterior superior temporal gyrus (including planum temporale). In both areas, spectral degradation led to a weaker coupling of the BOLD response to behavioral utilization of the spectral cue. These data provide converging evidence from behavioral modeling, electrophysiology, and hemodynamics that (a) increased alpha power mediates the inhibition of uninformative (here: spectral) stimulus features, and that (b) the parietal attention network supports optimal cue utilization in auditory categorization. The results highlight the complex cortical processing of auditory categorization under realistic listening challenges.
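Alpha power, the key EEG measure above, is band-limited power in roughly the 8-12 Hz range. A minimal periodogram-based sketch (the study's own spectral estimation method is not specified here; signal, sampling rate, and band limits below are illustrative):

```python
import numpy as np

def band_power(signal, fs, band=(8.0, 12.0)):
    """Mean periodogram power of a signal within a frequency band
    (default: the alpha band, 8-12 Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# A 10 Hz oscillation plus noise carries far more alpha power than
# the same noise alone.
fs = 250
t = np.arange(0.0, 4.0, 1.0 / fs)
rng = np.random.default_rng(1)
alpha_signal = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
noise = 0.2 * rng.standard_normal(t.size)
print(band_power(alpha_signal, fs) > 10 * band_power(noise, fs))  # True
```

In practice, averaging over segments (Welch's method) gives a lower-variance estimate than a single periodogram.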

  6. Valid cues for auditory or somatosensory targets affect their perception: a signal detection approach.

    Science.gov (United States)

    Van Hulle, Lore; Van Damme, Stefaan; Crombez, Geert

    2013-01-01

    We investigated the effects of focusing attention towards auditory or somatosensory stimuli on perceptual sensitivity and response bias using a signal detection task. Participants (N = 44) performed an unspeeded detection task in which weak (individually calibrated) somatosensory or auditory stimuli were delivered. The focus of attention was manipulated by the presentation of a visual cue at the start of each trial. The visual cue consisted of the word "warmth" or the word "tone". This word cue was predictive of the corresponding target on two-thirds of the trials. As hypothesised, the results showed that cueing attention to a specific sensory modality resulted in a higher perceptual sensitivity for validly cued targets than for invalidly cued targets, as well as in a more liberal response criterion for reporting stimuli in the valid modality than in the invalid modality. The value of this experimental paradigm for investigating excessive attentional focus or hypervigilance in various non-clinical and clinical populations is discussed.
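The perceptual sensitivity and response criterion reported in this record come from standard signal-detection formulas. A minimal sketch of that computation (the trial counts are made up and the log-linear correction is one common convention, not details taken from the study):

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity d' and criterion c from raw trial counts.

    A log-linear correction (adding 0.5 to each cell) avoids infinite
    z-scores when a rate would otherwise be exactly 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # lower c = more liberal
    return d_prime, criterion

# Hypothetical counts: valid cueing should raise d' and lower c
# (a more liberal criterion) relative to invalid cueing.
d_valid, c_valid = sdt_measures(40, 10, 5, 45)
d_invalid, c_invalid = sdt_measures(28, 22, 5, 45)
```

With these illustrative counts, the validly cued condition yields both a higher d' and a lower (more liberal) criterion, mirroring the pattern the study reports.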

  7. Task-dependent calibration of auditory spatial perception through environmental visual observation.

    Science.gov (United States)

    Tonelli, Alessia; Brayda, Luca; Gori, Monica

    2015-01-01

    Visual information is paramount to space perception, and vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve the precision of the final multisensory estimate. However, the amount or temporal extent of visual information sufficient to influence auditory perception is still unknown. It is therefore interesting to know whether vision can improve auditory precision through a short-term environmental observation preceding the audio task, and whether this influence is task-specific, environment-specific, or both. To test this, we investigated possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (a normal room and an anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task, but not in the MAA task, after observation of the normal room. No improvement was found when performing the same task in the anechoic chamber. In addition, no difference was found between a condition of short environmental observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration, mediating the transfer of information from the visual to the auditory system.

  8. Early, but not late visual distractors affect movement synchronization to a temporal-spatial visual cue

    Directory of Open Access Journals (Sweden)

    Ashley J Booth

    2015-06-01

    Full Text Available The ease of synchronising movements to a rhythmic cue depends on the modality of the cue presentation: timing accuracy is much higher when synchronising with discrete auditory rhythms than with an equivalent visual stimulus presented through flashes. However, timing accuracy improves if the visual cue presents spatial as well as temporal information (e.g. a dot following an oscillatory trajectory). Similarly, when synchronising with an auditory target metronome in the presence of a second, visual distracting metronome, the distraction is stronger when the visual cue contains spatial-temporal information rather than temporal information only. The present study investigates individuals' ability to synchronise movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli affected the ability to maintain synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centred on a large projection screen. The target dot was surrounded by 2, 8 or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100 or 200 ms. We found that participants' timing performance was only affected in the phase-lead conditions and when large numbers of distractors were present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronise movements. Consequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.

  9. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    Science.gov (United States)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. Sound has considerable potential both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals. This article reports on our research into the possibility of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. The results, although not conclusive, indicate that the studied auditory cues can influence the audience's judgement of depth in cinematic S3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
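The two distance cues under study map onto simple signal operations. As a rough illustration (the constants below are illustrative assumptions, not values measured in the article), distance can be rendered with an inverse-distance gain plus a distance-dependent low-pass cutoff:

```python
import math

def distance_gain_db(distance_m, ref_distance_m=1.0):
    """Overall volume attenuation: the inverse-distance law gives
    roughly -6 dB per doubling of source distance."""
    return -20.0 * math.log10(distance_m / ref_distance_m)

def highcut_hz(distance_m, base_cutoff_hz=20000.0, loss_hz_per_m=50.0):
    """High-frequency loss: lower a low-pass filter cutoff with
    distance to mimic air absorption. loss_hz_per_m is an assumed,
    illustrative rate, not a measured acoustic constant."""
    return max(1000.0, base_cutoff_hz - loss_hz_per_m * distance_m)
```

A sound designer would then apply the gain and the low-pass to a source positioned at the intended perceived depth; the experiments in the article effectively vary these two parameters independently.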

  10. Auditory gist: recognition of very short sounds from timbre cues.

    Science.gov (United States)

    Suied, Clara; Agus, Trevor R; Thorpe, Simon J; Mesgarani, Nima; Pressnitzer, Daniel

    2014-03-01

    Sounds such as the voice or musical instruments can be recognized on the basis of timbre alone. Here, sound recognition was investigated with severely reduced timbre cues. Short snippets of naturally recorded sounds were extracted from a large corpus. Listeners were asked to report a target category (e.g., sung voices) among other sounds (e.g., musical instruments). All sound categories covered the same pitch range, so the task had to be solved on timbre cues alone. The minimum duration for which performance was above chance was found to be short, on the order of a few milliseconds, with the best performance for voice targets. Performance was independent of pitch and was maintained when stimuli contained less than a full waveform cycle. Recognition was not generally better when the sound snippets were time-aligned with the sound onset compared to when they were extracted with a random starting time. Finally, performance did not depend on feedback or training, suggesting that the cues used by listeners in the artificial gating task were similar to those relevant for longer, more familiar sounds. The results show that timbre cues for sound recognition are available at a variety of time scales, including very short ones.

  11. Auditory and visual spatial impression: Recent studies of three auditoria

    Science.gov (United States)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
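The auralization step described above, convolving an anechoic recording with a measured binaural impulse response, can be sketched in its naive direct form (a production pipeline would use FFT-based fast convolution, applied separately to the left- and right-ear impulse responses):

```python
def convolve(signal, impulse_response):
    """Direct-form convolution: the output at each sample is the sum of
    the input weighted by the (time-reversed) impulse response. Applied
    per ear, this imposes a room's acoustics on a dry recording."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out
```

For example, convolving with a unit impulse returns the signal unchanged, while a measured binaural room impulse response adds that room's reflections and reverberant tail to the anechoic music fragment.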

  12. Tactile Cueing as a Gravitational Substitute for Spatial Navigation During Parabolic Flight

    Science.gov (United States)

    Montgomery, K. L.; Beaton, K. H.; Barba, J. M.; Cackler, J. M.; Son, J. H.; Horsfield, S. P.; Wood, S. J.

    2010-01-01

    INTRODUCTION: Spatial navigation requires an accurate awareness of orientation in one's environment. The purpose of this experiment was to examine how spatial awareness was impaired by changing gravitational cues during parabolic flight, and the extent to which vibrotactile feedback of orientation could be used to improve performance. METHODS: Six subjects were restrained in a chair tilted relative to the plane floor, and placed at random positions at the start of the microgravity phase. Subjects reported their orientation verbally, and used a hand-held controller to point to a desired target location presented using a virtual reality video mask. This task was repeated with and without constant tactile cueing of the "down" direction using a belt of 8 tactors placed around the mid-torso. Control measures were obtained during ground testing using both upright and tilted conditions. RESULTS: Perceptual estimates of orientation and pointing accuracy were impaired during microgravity and during rotation about an upright axis in 1g. The amount of error was proportional to the amount of chair displacement. Perceptual errors were reduced during movement about a tilted axis on earth. CONCLUSIONS: Reduced perceptual errors during tilts in 1g indicate the importance of otolith and somatosensory cues for maintaining spatial awareness. Tactile cueing may improve navigation in operational environments or clinical populations by providing non-visual, non-auditory feedback of orientation or desired heading.

  13. Deceptive body movements reverse spatial cueing in soccer.

    Directory of Open Access Journals (Sweden)

    Michael J Wright

    Full Text Available The purpose of the experiments was to analyse the spatial cueing effects of the movements of soccer players executing normal and deceptive (step-over) turns with the ball. Stimuli comprised normal-resolution or point-light video clips of soccer players dribbling a football towards the observer then turning right or left with the ball. Clips were curtailed before or on the turn (-160, -80, 0 or +80 ms) to examine the time course of direction prediction and spatial cueing effects. Participants were divided into higher-skilled (HS) and lower-skilled (LS) groups according to soccer experience. In Experiment 1, accuracy on full video clips was higher than on point-light clips, but results followed the same overall pattern. Both HS and LS groups correctly identified direction on normal moves at all occlusion levels. For deceptive moves, LS participants were significantly worse than chance and HS participants were somewhat more accurate but nevertheless substantially impaired. In Experiment 2, point-light clips were used to cue a lateral target. HS and LS groups showed faster reaction times to targets that were congruent with the direction of normal turns, and to targets incongruent with the direction of deceptive turns. The reversed cueing by deceptive moves coincided with earlier kinematic events than cueing by normal moves. It is concluded that the body kinematics of soccer players generate spatial cueing effects when viewed from an opponent's perspective. This could create a reaction-time advantage when anticipating the direction of a normal move. A deceptive move is designed to turn this cueing advantage into a disadvantage: acting on the basis of advance information, observers of deceptive moves are primed to respond in the wrong direction, which may be only partly mitigated by delaying a response until veridical cues emerge.

  14. Positive effects of auditory cue in locomotor pattern of people with Parkinson’s disease (off and on medication

    Directory of Open Access Journals (Sweden)

    Natalia Madalena Rinaldi

    2014-12-01

    Full Text Available Gait disorders are identified in people with Parkinson's disease (PD). The aim of this study was to investigate the effect of auditory cues and medication on kinematic, kinetic and EMG parameters during different gait phases in people with PD and in healthy elderly adults. Thirty subjects, distributed in two groups (Group 1: PD patients off and on medication; Group 2: healthy elderly), participated in this study and were instructed to walk in two experimental conditions: non-cued and cued. Kinematic, kinetic and electromyographic analyses were used to investigate the locomotor pattern. Changes in the locomotor pattern (greater muscular activity) with the auditory cue were observed for PD patients. Regarding medication, locomotor parameters improved after levodopa intake in association with the auditory cue. These results support the hypothesis that external-cue therapy could be used as a complement to drug therapy to improve the locomotor pattern of PD patients.

  15. The effect of visual cues on auditory stream segregation in musicians and non-musicians.

    Directory of Open Access Journals (Sweden)

    Jeremy Marozeau

    Full Text Available BACKGROUND: The ability to separate two interleaved melodies is an important factor in music appreciation. This ability is greatly reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues, musical training or musical context could have an effect on this ability, and potentially improve music appreciation for the hearing impaired. METHODS: Musicians (N = 18) and non-musicians (N = 19) were asked to rate the difficulty of segregating a four-note repeating melody from interleaved random distracter notes. Visual cues were provided on half the blocks, and two musical contexts were tested, with the overlap between melody and distracter notes either gradually increasing or decreasing. CONCLUSIONS: Visual cues, musical training, and musical context all affected the difficulty of extracting the melody from a background of interleaved random distracter notes. Visual cues were effective in reducing the difficulty of segregating the melody from distracter notes, even in individuals with no musical training. These results are consistent with theories that indicate an important role for central (top-down) processes in auditory streaming mechanisms, and suggest that visual cues may help the hearing-impaired enjoy music.

  16. Auditory and Cross-Modal Spatial Attention

    Science.gov (United States)

    2007-01-01

    interaural level and interaural envelope timing (weak cues for left-right direction). This work was published in Acta Acustica united with Acustica: Durlach NI, Mason CR, Gallun FJ, Shinn-Cunningham BG, Colburn HS, and Kidd G Jr. Informational masking for... Acta Acust united Acustica 2005; 91:967-9.

  17. Development of auditory localization accuracy and auditory spatial discrimination in children and adolescents.

    Science.gov (United States)

    Kühnle, S; Ludwig, A A; Meuret, S; Küttner, C; Witte, C; Scholbach, J; Fuchs, M; Rübsamen, R

    2013-01-01

    The present study investigated the development of two parameters of spatial acoustic perception in children and adolescents with normal hearing, aged 6-18 years. Auditory localization accuracy was quantified by means of a sound source identification task, and auditory spatial discrimination acuity by measuring minimum audible angles (MAA). Both low- and high-frequency noise bursts were employed in the tests, thereby separately addressing auditory processing based on interaural time and intensity differences. The setup consisted of 47 loudspeakers mounted in the frontal azimuthal hemifield, ranging from 90° left to 90° right (-90°, +90°). Target signals were presented from 8 loudspeaker positions in the left and right hemifields (±4°, ±30°, ±60° and ±90°). Localization accuracy and spatial discrimination acuity showed different developmental courses. Localization accuracy remained stable from the age of 6 onwards. In contrast, MAA thresholds and the interindividual variability of spatial discrimination decreased significantly with increasing age. Across all age groups, localization was most accurate and MAA thresholds were lowest for frontal rather than lateral sound sources, and for low-frequency compared to high-frequency noise bursts. The study also shows better performance in spatial hearing based on interaural time differences rather than on intensity differences throughout development. These findings confirm that specific aspects of central auditory processing show continuous development during childhood up to adolescence.
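The interaural time differences that dominate low-frequency localization can be approximated with Woodworth's classic spherical-head formula. A small sketch (the head radius and speed of sound are assumed textbook values, not parameters from this study):

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average adult head radius
SPEED_OF_SOUND = 343.0   # m/s in air at roughly room temperature

def woodworth_itd(azimuth_deg):
    """Woodworth's spherical-head approximation of the interaural time
    difference (ITD) for a distant source at a given azimuth: the extra
    path to the far ear is r*(sin(theta) + theta)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(theta) + theta)
```

The formula predicts an ITD of 0 for a frontal source and roughly 650 µs for a source at 90° azimuth, which is why MAA thresholds are lowest in front: near 0° a small angular change produces the largest change in ITD per degree.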

  18. Encoding of sound localization cues by an identified auditory interneuron: effects of stimulus temporal pattern.

    Science.gov (United States)

    Samson, Annie-Hélène; Pollack, Gerald S

    2002-11-01

    An important cue for sound localization is binaural comparison of stimulus intensity. Two features of neuronal responses, response strength, i.e., spike count and/or rate, and response latency, vary with stimulus intensity, and binaural comparison of either or both might underlie localization. Previous studies at the receptor-neuron level showed that these response features are affected by the stimulus temporal pattern. When sounds are repeated rapidly, as occurs in many natural sounds, response strength decreases and latency increases, resulting in altered coding of localization cues. In this study we analyze binaural cues for sound localization at the level of an identified pair of interneurons (the left and right AN2) in the cricket auditory system, with emphasis on the effects of stimulus temporal pattern on binaural response differences. AN2 spike count decreases with rapidly repeated stimulation and latency increases. Both effects depend on stimulus intensity. Because of the difference in intensity at the two ears, binaural differences in spike count and latency change as stimulation continues. The binaural difference in spike count decreases, whereas the difference in latency increases. The proportional changes in response strength and in latency are greater at the interneuron level than at the receptor level, suggesting that factors in addition to decrement of receptor responses are involved. Intracellular recordings reveal that a slowly building, long-lasting hyperpolarization is established in AN2. At the same time, the level of depolarization reached during the excitatory postsynaptic potential (EPSP) resulting from each sound stimulus decreases. Neither these effects on membrane potential nor the changes in spiking response are accounted for by contralateral inhibition. Based on comparison of our results with earlier behavioral experiments, it is unlikely that crickets use the binaural difference in latency of AN2 responses as the main cue for

  19. Nonlinear dynamics of human locomotion: effects of rhythmic auditory cueing on local dynamic stability

    Directory of Open Access Journals (Sweden)

    Philippe eTerrier

    2013-09-01

    Full Text Available It has been observed that time series of gait parameters (stride length (SL), stride time (ST) and stride speed (SS)) exhibit long-term persistence and fractal-like properties. Synchronizing steps with rhythmic auditory stimuli modifies the persistent fluctuation pattern to anti-persistence. Another nonlinear method estimates the degree of resilience of gait control to small perturbations, i.e. the local dynamic stability (LDS). The method makes use of the maximal Lyapunov exponent, which estimates how fast a nonlinear system embedded in a reconstructed state space (attractor) diverges after an infinitesimal perturbation. We propose to use an instrumented treadmill to simultaneously measure basic gait parameters (time series of SL, ST and SS, from which the statistical persistence among consecutive strides can be assessed) and the trajectory of the center of pressure (from which the LDS can be estimated). In 20 healthy participants, the responses of LDS and of statistical persistence (assessed with detrended fluctuation analysis, DFA) to rhythmic auditory cueing (RAC) were compared. By analyzing the divergence curves, we observed that long-term LDS (computed as the inverse of the average logarithmic rate of divergence between the 4th and the 10th strides downstream from nearest neighbors in the reconstructed attractor) was strongly enhanced (relative change +47%), likely the indication of more dampened dynamics. The change in short-term LDS (divergence over one step) was smaller (+3%). DFA results (scaling exponents) confirmed an anti-persistent pattern in ST, SL and SS. Long-term LDS (but not short-term LDS) and the scaling exponents exhibited a significant correlation between them (r = 0.7). Both phenomena probably result from the more conscious/voluntary gait control that is required by RAC. We suggest that LDS and statistical persistence should be used to evaluate the efficiency of cueing therapy in patients with neurological gait disorders.
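The DFA scaling exponent used above quantifies persistence in a stride-time series: alpha near 0.5 indicates uncorrelated noise, above 0.5 persistence, below 0.5 anti-persistence. A self-contained sketch (a simplified, non-overlapping-window DFA for illustration, not the authors' implementation):

```python
import math
import random

def dfa_alpha(series, scales=(4, 8, 16, 32)):
    """Detrended fluctuation analysis, simplified. Returns the scaling
    exponent alpha: the slope of log F(n) vs log n, where F(n) is the
    RMS residual after linearly detrending the integrated series in
    non-overlapping windows of length n."""
    mean = sum(series) / len(series)
    profile, total = [], 0.0
    for x in series:                      # integrated, mean-centred profile
        total += x - mean
        profile.append(total)
    log_n, log_f = [], []
    for n in scales:
        sq, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            window = profile[start:start + n]
            tm, wm = (n - 1) / 2.0, sum(window) / n
            cov = sum((t - tm) * (w - wm) for t, w in enumerate(window))
            var = sum((t - tm) ** 2 for t in range(n))
            slope = cov / var             # least-squares linear trend
            for t, w in enumerate(window):
                resid = w - (wm + slope * (t - tm))
                sq += resid * resid
            count += n
        log_n.append(math.log(n))
        log_f.append(0.5 * math.log(sq / count))
    lm, fm = sum(log_n) / len(log_n), sum(log_f) / len(log_f)
    num = sum((l - lm) * (f - fm) for l, f in zip(log_n, log_f))
    den = sum((l - lm) ** 2 for l in log_n)
    return num / den

# Demo: white noise scales with alpha ~ 0.5; its running sum (a random
# walk) is strongly persistent, with a much larger alpha.
random.seed(42)
noise = [random.gauss(0.0, 1.0) for _ in range(1024)]
walk, s = [], 0.0
for x in noise:
    s += x
    walk.append(s)
alpha_noise = dfa_alpha(noise)
alpha_walk = dfa_alpha(walk)
```

In the study's terms, RAC pushes the stride-time exponent below the uncued, persistent value, toward the anti-persistent regime.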

  20. Verbal Auditory Cueing of Improvisational Dance: A Proposed Method for Training Agency in Parkinson's Disease.

    Science.gov (United States)

    Batson, Glenna; Hugenschmidt, Christina E; Soriano, Christina T

    2016-01-01

    Dance is a non-pharmacological intervention that helps maintain functional independence and quality of life in people with Parkinson's disease (PPD). Results from controlled studies on group-delivered dance for people with mild-to-moderate stage Parkinson's have shown statistically and clinically significant improvements in gait, balance, and psychosocial factors. Tested interventions include non-partnered dance forms (ballet and modern dance) and partnered forms (tango). In all of these dance forms, specific movement patterns are initially learned through repetition and performed in time to music. Once the basic steps are mastered, students may be encouraged to improvise on the learned steps as they perform them in rhythm with the music. Here, we summarize a method of teaching improvisational dance that advances previously reported benefits of dance for people with Parkinson's disease (PD). The method relies primarily on improvisational verbal auditory cueing, with less emphasis on directed movement instruction. It builds on the idea that daily living requires flexible, adaptive responses to real-life challenges. In PD, movement disorders not only limit mobility but also impair spontaneity of thought and action. Dance improvisation demands open and immediate interpretation of verbally delivered movement cues, potentially fostering the formation of spontaneous movement strategies. We present an introduction to the proposed method, detailing its methodological specifics and pointing to future directions. The viewpoint advances an embodied cognitive approach that has ecological validity in helping PPD meet the changing demands of daily living.

  1. Processing of spatial sounds in the impaired auditory system

    DEFF Research Database (Denmark)

    Arweiler, Iris

    Understanding speech in complex acoustic environments presents a challenge for most hearing-impaired listeners. In conditions where normal-hearing listeners effortlessly utilize spatial cues to improve speech intelligibility, hearing-impaired listeners often struggle. In this thesis, the influenc...... implications for speech perception models and the development of compensation strategies in future generations of hearing instruments....

  2. Spatial organization of tettigoniid auditory receptors: insights from neuronal tracing.

    Science.gov (United States)

    Strauß, Johannes; Lehmann, Gerlind U C; Lehmann, Arne W; Lakes-Harlan, Reinhard

    2012-11-01

    The auditory sense organ of Tettigoniidae (Insecta, Orthoptera) is located in the foreleg tibia and consists of scolopidial sensilla which form a row termed crista acustica. The crista acustica is associated with the tympana and the auditory trachea. This ear is a highly ordered, tonotopic sensory system. As the neuroanatomy of the crista acustica has been documented for several species, the most distal somata and dendrites of receptor neurons have occasionally been described as forming an alternating or double row. We investigate the spatial arrangement of receptor cell bodies and dendrites by retrograde tracing with cobalt chloride solution. In six tettigoniid species studied, distal receptor neurons are consistently arranged in double-rows of somata rather than a linear sequence. This arrangement of neurons is shown to affect 30-50% of the overall auditory receptors. No strict correlation of somata positions between the anterio-posterior and dorso-ventral axis was evident within the distal crista acustica. Dendrites of distal receptors occasionally also occur in a double row or are even massed without clear order. Thus, a substantial part of auditory receptors can deviate from a strictly straight organization into a more complex morphology. The linear organization of dendrites is not a morphological criterion that allows hearing organs to be distinguished from nonhearing sense organs serially homologous to ears in all species. Both the crowded arrangement of receptor somata and dendrites may result from functional constraints relating to frequency discrimination, or from developmental constraints of auditory morphogenesis in postembryonic development.

  3. Listenmee and Listenmee smartphone application: synchronizing walking to rhythmic auditory cues to improve gait in Parkinson's disease.

    Science.gov (United States)

    Lopez, William Omar Contreras; Higuera, Carlos Andres Escalante; Fonoff, Erich Talamoni; Souza, Carolina de Oliveira; Albicker, Ulrich; Martinez, Jairo Alberto Espinoza

    2014-10-01

    Evidence supports the use of rhythmic external auditory signals to improve gait in PD patients (Arias & Cudeiro, 2008; Kenyon & Thaut, 2000; McIntosh, Rice & Thaut, 1994; McIntosh et al., 1997; Morris, Iansek, & Matyas, 1994; Thaut, McIntosh, & Rice, 1997; Suteerawattananon, Morris, Etnyre, Jankovic, & Protas, 2004; Willems, Nieuwboer, Chavert, & Desloovere, 2006). However, few prototypes are available for daily use and, to our knowledge, none utilize a smartphone application allowing individualized sounds and cadence. We therefore analyzed the effects on gait of Listenmee®, an intelligent glasses system with a portable auditory device. We also present its smartphone application, the Listenmee app®, which offers over 100 different sounds and an adjustable metronome to individualize the cueing rate, as well as its smartwatch, whose accelerometer detects the magnitude and direction of acceleration and tracks calorie count, sleep patterns, step count and daily distances. The present study included patients with idiopathic PD who presented gait disturbances, including freezing. Auditory rhythmic cues were delivered through Listenmee®. Performance was analyzed in a motion and gait analysis laboratory. The results revealed significant improvements in gait performance in three major dependent variables: walking speed by 38.1%, cadence by 28.1% and stride length by 44.5%. Our findings suggest that auditory cueing through Listenmee® may significantly enhance gait performance. Further studies are needed to elucidate the potential role and maximize the benefits of these portable devices.

  4. Domestic pigs' (Sus scrofa domestica) use of direct and indirect visual and auditory cues in an object choice task.

    Science.gov (United States)

    Nawroth, Christian; von Borell, Eberhard

    2015-05-01

    Recently, foraging strategies have been linked to the ability to use indirect visual information. More selective feeders should express a higher aversion to losses than non-selective feeders and should therefore be more prone to avoid empty food locations. To extend these findings, we present a series of experiments investigating the use of direct and indirect visual and auditory information by an omnivorous but selective feeder: the domestic pig. Subjects had to choose between two buckets, with only one containing a reward. Before making a choice, the subjects in Experiment 1 (N = 8) received full information regarding both the baited and non-baited location, in either the visual or the auditory domain. In this experiment, the subjects were able to use visual, but not auditory, cues to infer the location of the reward spontaneously. Additionally, four individuals learned to use auditory cues after a period of training. In Experiment 2 (N = 8), the pigs were given different amounts of visual information about the content of the buckets: lifting either both of the buckets (full information), the baited bucket (direct information), the empty bucket (indirect information) or no bucket at all (no information). The subjects as a group were able to use direct and indirect visual cues. However, over the course of the experiment, performance dropped to chance level when indirect information was provided. A final experiment (N = 3) provided preliminary results for pigs' use of indirect auditory information to infer the location of a reward. We conclude that pigs at a very young age are able to make decisions based on indirect information in the visual domain, whereas their use of indirect auditory information warrants further investigation.

  5. Speed on the dance floor: Auditory and visual cues for musical tempo.

    Science.gov (United States)

    London, Justin; Burger, Birgitta; Thompson, Marc; Toiviainen, Petri

    2016-02-01

    Musical tempo is most strongly associated with the rate of the beat or "tactus," which may be defined as the most prominent rhythmic periodicity present in the music, typically in a range of 1.67-2 Hz. However, other factors such as rhythmic density, mean rhythmic inter-onset interval, metrical (accentual) structure, and rhythmic complexity can affect perceived tempo (Drake, Gros, & Penel, 1999; London, 2011). Visual information can also give rise to a perceived beat/tempo (Iversen et al., 2015), and auditory and visual temporal cues can interact and mutually influence each other (Soto-Faraco & Kingstone, 2004; Spence, 2015). A five-part experiment was performed to assess the integration of auditory and visual information in judgments of musical tempo. Participants rated the speed of six classic R&B songs on a seven-point scale while observing an animated figure dancing to them. Participants were presented with original and time-stretched (±5%) versions of each song in audio-only, audio+video (A+V), and video-only conditions. In some videos the animations were of spontaneous movements to the different time-stretched versions of each song, and in other videos the animations were of "vigorous" versus "relaxed" interpretations of the same auditory stimulus. Two main results were observed. First, in all conditions with audio, even though participants were able to correctly rank the original vs. time-stretched versions of each song, a song-specific tempo-anchoring effect was observed, such that sped-up versions of slower songs were judged to be faster than slowed-down versions of faster songs, even when their objective beat rates were the same. Second, when viewing a vigorous dancing figure in the A+V condition, participants gave faster tempo ratings than from the audio alone or when viewing the same audio with a relaxed dancing figure. The implications of this illusory tempo percept for cross-modal sensory integration and
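The quantities in this record relate by simple arithmetic: the 1.67-2 Hz tactus range corresponds to roughly 100-120 BPM, and a ±5% time stretch shifts the beat rate by the reciprocal factor. A small sketch of those conversions (illustrative helpers, not the authors' analysis code):

```python
def beat_rate_hz(bpm):
    """Convert beats per minute to the tactus rate in Hz."""
    return bpm / 60.0

def time_stretch_bpm(bpm, stretch_percent):
    """Effective tempo after time-stretching the audio: a +5% stretch
    lengthens the recording and lowers the tempo; -5% raises it."""
    return bpm / (1.0 + stretch_percent / 100.0)
```

So a 120 BPM song (2 Hz tactus) stretched by +5% plays back at about 114.3 BPM, the kind of objective difference participants could rank even while showing the song-specific anchoring effect.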

  6. Hand proximity facilitates spatial discrimination of auditory tones

    Directory of Open Access Journals (Sweden)

    Philip Tseng

    2014-06-01

    Full Text Available The effect of hand proximity on vision and visual attention has been well documented. In this study we tested whether such effect(s) would also be present in the auditory modality. With hands placed either near or away from the audio sources, participants performed an auditory-spatial discrimination task (Exp 1: left or right side), a pitch discrimination task (Exp 2: high, med, or low tone), and a spatial-plus-pitch discrimination task (Exp 3: left or right; high, med, or low). In Exp 1, when hands were away from the audio source, participants consistently responded faster with their right hand regardless of stimulus location. This right-hand advantage, however, disappeared in the hands-near condition because of a significant improvement in the left hand's reaction time. No effect of hand proximity was found in Exp 2 or 3, where a choice reaction time task requiring pitch discrimination was used. Together, these results suggest that the effect of hand proximity is not exclusive to vision alone, but is also present in audition, though in a much weaker form. Most important, these findings provide evidence from auditory attention that supports the multimodal account originally raised by Reed et al. in 2006.

  7. Quadri-stability of a spatially ambiguous auditory illusion

    Directory of Open Access Journals (Sweden)

    Constance May Bainbridge

    2015-01-01

    Full Text Available In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound’s motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. 
This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual

  8. Selective importance of the rat anterior thalamic nuclei for configural learning involving distal spatial cues.

    Science.gov (United States)

    Dumont, Julie R; Amin, Eman; Aggleton, John P

    2014-01-01

    To test potential parallels between hippocampal and anterior thalamic function, rats with anterior thalamic lesions were trained on a series of biconditional learning tasks. The anterior thalamic lesions did not disrupt learning two biconditional associations in operant chambers where a specific auditory stimulus (tone or click) had a differential outcome depending on whether it was paired with a particular visual context (spot or checkered wallpaper) or a particular thermal context (warm or cool). Likewise, rats with anterior thalamic lesions successfully learnt a biconditional task when they were reinforced for digging in one of two distinct cups (containing either beads or shredded paper), depending on the particular appearance of the local context on which the cup was placed (one of two textured floors). In contrast, the same rats were severely impaired at learning the biconditional rule to select a specific cup when in a particular location within the test room. Place learning was then tested with a series of go/no-go discriminations. Rats with anterior thalamic nuclei lesions could learn to discriminate between two locations when they were approached from a constant direction. They could not, however, use this acquired location information to solve a subsequent spatial biconditional task where those same places dictated the correct choice of digging cup. Anterior thalamic lesions produced a selective, but severe, biconditional learning deficit when the task incorporated distal spatial cues. This deficit mirrors that seen in rats with hippocampal lesions, so extending potential interdependencies between the two sites.

  9. Visual spatial cue use for guiding orientation in two-to-three-year-old children

    OpenAIRE

    Danielle van den Brink; Gabriele Janzen

    2013-01-01

    In spatial development, representations of the environment and the use of spatial cues change over time. To date, the influence of individual differences in skills relevant for orientation and navigation has not received much attention. The current study investigated orientation abilities on the basis of visual spatial cues in two-to-three-year-old children, and assessed factors that possibly influence spatial task performance. Thirty-month and 35-month-olds performed an on-screen Virtual Real...

  10. The role of different cues in the brain mechanism on visual spatial attention

    Institute of Scientific and Technical Information of China (English)

    SONG Weiqun; LUO Yuejia; CHI Song; JI Xunming; LING Feng; ZHAO Lun; WANG Maobin; SHI Jiannong

    2006-01-01

    The visual spatial attention mechanism in the brain was studied in 16 young subjects through the visual search paradigm of precue-target by the event-related potential (ERP) technique, with the attentive ranges cued by different scales of Chinese character and region cues. The results showed that the response time for Chinese character cues was much longer than that for region cues, especially for small region cues. With the exterior interferences, the target stimuli recognition under region cues was much quicker than that under Chinese character cues. Compared with that under region cues, targets under Chinese character cues could lead to an increase of the posterior P1, a decrease of the N1, and an increase of the P2. It should also be noted that the differences between region cues and Chinese character cues were affected by the interference types. Under exterior interferences, no significant difference was found between region cues and Chinese character cues; however, it was not the case under the interior interferences. Considering the difference between the exterior interferences and the interior interferences, we could conclude that with the increase of difficulty in target recognition there was an obvious difference in the consumption of anterior frontal resources by target stimuli under the two kinds of cues.

  11. Three-dimensional motion analysis of the effects of auditory cueing on gait pattern in patients with Parkinson's disease: a preliminary investigation.

    Science.gov (United States)

    Picelli, Alessandro; Camin, Maruo; Tinazzi, Michele; Vangelista, Antonella; Cosentino, Alessandro; Fiaschi, Antonio; Smania, Nicola

    2010-08-01

    Auditory cueing enhances gait in parkinsonian patients. Our aim was to evaluate its effects on spatiotemporal (stride length, stride time, cadence, gait speed, single and double support duration), kinematic (range of amplitude of the hip, knee, and ankle joint angles registered in the sagittal plane), and kinetic (maximal values of the hip and ankle joint power) gait parameters using three-dimensional motion analysis. Eight parkinsonian patients performed 12 walking tests: 3 repetitions of 4 conditions (normal walking; cued walking at 90, 100, and 110% of the mean cadence at preferred pace). Subjects were asked to match their cadence to the cueing rhythm. In the presence of auditory cues, stride length, cadence, gait speed, and the single/double support duration ratio increased. Range of motion of the ankle joint decreased and the maximal values within the pull-off phase of the hip joint power increased. Thus, auditory cues could improve gait by modifying motor strategy in parkinsonian patients.

  12. Spatial scale of motion segmentation from speed cues

    Science.gov (United States)

    Mestre, D. R.; Masson, G. S.; Stone, L. S.

    2001-01-01

    For the accurate perception of multiple, potentially overlapping, surfaces or objects, the visual system must distinguish different local motion vectors and selectively integrate similar motion vectors over space to segment the retinal image properly. We recently showed that large differences in speed are required to yield a percept of motion transparency. In the present study, to investigate the spatial scale of motion segmentation from speed cues alone, we measured the speed-segmentation threshold (the minimum speed difference required for 75% performance accuracy) for 'corrugated' random-dot patterns, i.e. patterns in which dots with two different speeds were alternately placed in adjacent bars of variable width. In a first experiment, we found that, at large bar widths, a smaller speed difference was required to segment and perceive the corrugated pattern of moving dots, while at small bar widths, a larger speed difference was required to segment the two speeds and perceive two transparent surfaces of moving dots. Both the perceptual and segmentation performance transitions occurred at a bar width of around 0.4 degrees. In a second experiment, speed-segmentation thresholds were found to increase sharply when dots with different speeds were paired within a local pooling area. The critical pairing distance was about 0.2 degrees in the fovea and increased linearly with stimulus eccentricity. However, across the range of eccentricities tested (up to 15 degrees), the critical pairing distance did not change much and remained close to the receptive field size of neurons within the primate primary visual cortex. In a third experiment, increasing dot density changed the relationship between speed-segmentation thresholds and bar width. Thresholds decreased for large bar widths, but increased for small bar widths. All of these results are well fit by a simple stochastic model, which estimates the probabilities of having identical or different motion vectors within a
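The "simple stochastic model" mentioned above is not spelled out in the abstract. As an assumed stand-in (not the authors' actual formulation), a one-dimensional Monte-Carlo sketch can estimate how often a local pooling area of a given radius straddles adjacent bars carrying different speeds:

```python
import random

# Monte-Carlo stand-in for a stochastic pooling model (1-D approximation):
# how often does a pooling interval of half-width pool_radius straddle
# two adjacent bars carrying different speeds?

def prob_mixed_pooling(bar_width, pool_radius, n_trials=50000, field=8.0):
    """Estimate the probability that a randomly placed pooling interval
    covers dots from bars with different speeds (all units in degrees)."""
    mixed = 0
    for _ in range(n_trials):
        x = random.uniform(0.0, field)       # pooling-centre position
        lo, hi = x - pool_radius, x + pool_radius
        # Different bar indices at the two edges => both speeds are pooled.
        if int(lo // bar_width) != int(hi // bar_width):
            mixed += 1
    return mixed / n_trials

random.seed(1)
# Pooling diameter (0.4 deg) equal to half the bar width (0.8 deg):
# analytically the crossing probability is 2 * 0.2 / 0.8 = 0.5.
print(prob_mixed_pooling(bar_width=0.8, pool_radius=0.2))
```

In this toy geometry the mixed-pooling probability rises as bar width shrinks toward the pooling diameter, qualitatively matching the finding that small bar widths demand larger speed differences for segmentation.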

  13. Switching of auditory attention in "cocktail-party" listening: ERP evidence of cueing effects in younger and older adults.

    Science.gov (United States)

    Getzmann, Stephan; Jasny, Julian; Falkenstein, Michael

    2017-02-01

    Verbal communication in a "cocktail-party situation" is a major challenge for the auditory system. In particular, changes in target speaker usually result in declined speech perception. Here, we investigated whether speech cues indicating a subsequent change in target speaker reduce the costs of switching in younger and older adults. We employed event-related potential (ERP) measures and a speech perception task, in which sequences of short words were simultaneously presented by four speakers. Changes in target speaker were either unpredictable or semantically cued by a word within the target stream. Cued changes resulted in a smaller performance decline than uncued changes in both age groups. The ERP analysis revealed shorter latencies in the change-related N400 and late positive complex (LPC) after cued changes, suggesting an acceleration in context updating and attention switching. Thus, both younger and older listeners used semantic cues to prepare for changes in speaker setting.

  14. Neural Mechanisms of Attentional Shifts Due to Irrelevant Spatial and Numerical Cues

    Science.gov (United States)

    Ranzini, Mariagrazia; Dehaene, Stanislas; Piazza, Manuela; Hubbard, Edward M.

    2009-01-01

    Studies of endogenous (cue-directed) attention have traditionally assumed that such shifts must be volitional. However, recent behavioural experiments have shown that participants make automatic endogenous shifts of attention when presented with symbolic cues that are systematically associated with particular spatial directions, such as arrows and…

  15. Visual spatial cue use for guiding orientation in two-to-three-year-old children.

    Science.gov (United States)

    van den Brink, Danielle; Janzen, Gabriele

    2013-01-01

    In spatial development, representations of the environment and the use of spatial cues change over time. To date, the influence of individual differences in skills relevant for orientation and navigation has not received much attention. The current study investigated orientation abilities on the basis of visual spatial cues in 2-3-year-old children, and assessed factors that possibly influence spatial task performance. Thirty-month and 35-month-olds performed an on-screen Virtual Reality (VR) orientation task searching for an animated target in the presence of visual self-movement cues and landmark information. Results show that, in contrast to 30-month-old children, 35-month-olds were successful in using visual spatial cues for maintaining orientation. Neither age group benefited from landmarks present in the environment, suggesting that successful task performance relied on the use of optic flow cues, rather than object-to-object relations. Analysis of individual differences revealed that 2-year-olds who were relatively more independent in comparison to their peers, as measured by the daily living skills scale of the parental questionnaire Vineland-Screener, were most successful at the orientation task. These results support previous findings indicating that the use of various spatial cues gradually improves during early childhood. Our data show that a developmental transition in spatial cue use can be witnessed within a relatively short period of only 5 months. Furthermore, this study indicates that rather than chronological age, individual differences may play a role in successful use of visual cues for spatial updating in an orientation task. Future studies are necessary to assess the exact nature of these individual differences.

  16. Visual spatial cue use for guiding orientation in two-to-three-year-old children

    Directory of Open Access Journals (Sweden)

    Danielle van den Brink

    2013-12-01

    Full Text Available In spatial development, representations of the environment and the use of spatial cues change over time. To date, the influence of individual differences in skills relevant for orientation and navigation has not received much attention. The current study investigated orientation abilities on the basis of visual spatial cues in two-to-three-year-old children, and assessed factors that possibly influence spatial task performance. Thirty-month and 35-month-olds performed an on-screen Virtual Reality orientation task searching for an animated target in the presence of visual self-movement cues and landmark information. Results show that, in contrast to 30-month-old children, 35-month-olds were successful in using visual spatial cues for maintaining orientation. Neither age group benefited from landmarks present in the environment, suggesting that successful task performance relied on the use of optic flow cues, rather than object-to-object relations. Analysis of individual differences revealed that two-year-olds who were relatively more independent in comparison to their peers, as measured by the daily living skills scale of the parental questionnaire Vineland-Screener, were most successful at the orientation task. These results support previous findings indicating that the use of various spatial cues gradually improves during early childhood. Our data show that a developmental transition in spatial cue use can be witnessed within a relatively short period of only 5 months. Furthermore, this study indicates that rather than chronological age, individual differences may play a role in successful use of visual cues for spatial updating in an orientation task. Future studies are necessary to assess the exact nature of these individual differences.

  17. Effect of combined motor and spatial cues on mathematical reasoning: a polarity correspondence account.

    Science.gov (United States)

    Verselder, Hélène; Freddi, Sébastien; Dru, Vincent

    2016-08-27

    We examined whether combined motor or spatial polarities could influence accuracy in two mathematical operations. Four experiments were conducted and showed that, when two corresponding polarities were activated, accuracy in multiplicative operations was greater than when non-corresponding polarities were activated, whereas no effect was found for additive operations. These results were established with motor cues (Left/Right and Arm Extension/Flexion, as behavioral approach-avoidance tendencies) and perceptual spatial cues (Left/Right and Down/Up cues). A polarity correspondence effect was established and proposed for multiplication. A combination of polarities was associated with a corresponding combination of numerical digits, assessed with mathematical operations, such as multiplication.

  18. Microhabitat use affects goby (Gobiidae) cue choice in spatial learning task.

    Science.gov (United States)

    White, G E; Brown, C

    2015-04-01

    This study investigated whether spatial learning ability and cue use of gobies (Gobiidae) from two contrasting habitats differed in a spatial task. Gobies were collected from the spatially complex rock pools and dynamic, homogeneous sandy shores. Fishes were trained to locate a shelter under the simulated threat of predation and it was determined whether they used local or extra-maze (global) and geometric cues to do so. It was hypothesized that fishes from rock pools would outperform fishes from sandy shores in their ability to relocate shelter and the two groups would differ in their cue use. It was found that rock-pool species learnt the location of the correct shelter much faster, made fewer errors and used a combination of all available cues to locate the shelter, while sand species relied significantly more on extra-maze and geometric cues for orientation. The results reported here support the hypothesis that fishes living in complex habitats have enhanced capacity for spatial learning and are more likely to rely on local landmarks as directional cues than fishes living in homogeneous habitats where local cues such as visual landmarks are unreliable.

  19. Spatial selective auditory attention in the presence of reverberant energy: individual differences in normal-hearing listeners.

    Science.gov (United States)

    Ruggles, Dorea; Shinn-Cunningham, Barbara

    2011-06-01

    Listeners can selectively attend to a desired target by directing attention to known target source features, such as location or pitch. Reverberation, however, reduces the reliability of the cues that allow a target source to be segregated and selected from a sound mixture. Given this, it is likely that reverberant energy interferes with selective auditory attention. Anecdotal reports suggest that the ability to focus spatial auditory attention degrades even with early aging, yet there is little evidence that middle-aged listeners have behavioral deficits on tasks requiring selective auditory attention. The current study was designed to look for individual differences in selective attention ability and to see if any such differences correlate with age. Normal-hearing adults, ranging in age from 18 to 55 years, were asked to report a stream of digits located directly ahead in a simulated rectangular room. Simultaneous, competing masker digit streams were simulated at locations 15° left and right of center. The level of reverberation was varied to alter task difficulty by interfering with localization cues (increasing localization blur). Overall, performance was best in the anechoic condition and worst in the high-reverberation condition. Listeners nearly always reported a digit from one of the three competing streams, showing that reverberation did not render the digits unintelligible. Importantly, inter-subject differences were extremely large. These differences, however, were not significantly correlated with age, memory span, or hearing status. These results show that listeners with audiometrically normal pure tone thresholds differ in their ability to selectively attend to a desired source, a task important in everyday communication. Further work is necessary to determine if these differences arise from differences in peripheral auditory function or in more central function.

  20. Auditory spatial resolution in horizontal, vertical, and diagonal planes.

    Science.gov (United States)

    Grantham, D Wesley; Hornsby, Benjamin W Y; Erpenbeck, Eric A

    2003-08-01

    Minimum audible angle (MAA) and minimum audible movement angle (MAMA) thresholds were measured for stimuli in horizontal, vertical, and diagonal (60 degrees) planes. A pseudovirtual technique was employed in which signals were recorded through KEMAR's ears and played back to subjects through insert earphones. Thresholds were obtained for wideband, high-pass, and low-pass noises. Only 6 of 20 subjects obtained wideband vertical-plane MAAs less than 10 degrees, and only these 6 subjects were retained for the complete study. For all three filter conditions thresholds were lowest in the horizontal plane, slightly (but significantly) higher in the diagonal plane, and highest for the vertical plane. These results were similar in magnitude and pattern to those reported by Perrott and Saberi [J. Acoust. Soc. Am. 87, 1728-1731 (1990)] and Saberi and Perrott [J. Acoust. Soc. Am. 88, 2639-2644 (1990)], except that these investigators generally found that thresholds for diagonal planes were as good as those for the horizontal plane. The present results are consistent with the hypothesis that diagonal-plane performance is based on independent contributions from a horizontal-plane system (sensitive to interaural differences) and a vertical-plane system (sensitive to pinna-based spectral changes). Measurements of the stimuli recorded through KEMAR indicated that sources presented from diagonal planes can produce larger interaural level differences (ILDs) in certain frequency regions than would be expected based on the horizontal projection of the trajectory. Such frequency-specific ILD cues may underlie the very good performance reported in previous studies for diagonal spatial resolution. Subjects in the present study could apparently not take advantage of these cues in the diagonal-plane condition, possibly because they did not externalize the images to their appropriate positions in space or possibly because of the absence of a patterned visual field.

  1. Rapid cortical dynamics associated with auditory spatial attention gradients.

    Science.gov (United States)

    Mock, Jeffrey R; Seay, Michael J; Charney, Danielle R; Holmes, John L; Golob, Edward J

    2015-01-01

    Behavioral and EEG studies suggest spatial attention is allocated as a gradient in which processing benefits decrease away from an attended location. Yet the spatiotemporal dynamics of cortical processes that contribute to attentional gradients are unclear. We measured EEG while participants (n = 35) performed an auditory spatial attention task that required a button press to sounds at one target location on either the left or right. Distractor sounds were randomly presented at four non-target locations evenly spaced up to 180° from the target location. Attentional gradients were quantified by regressing ERP amplitudes elicited by distractors against their spatial location relative to the target. Independent component analysis was applied to each subject's scalp channel data, allowing isolation of distinct cortical sources. Results from scalp ERPs showed a tri-phasic response with gradient slope peaks at ~300 ms (frontal, positive), ~430 ms (posterior, negative), and a plateau starting at ~550 ms (frontal, positive). Corresponding to the first slope peak, a positive gradient was found within a central component when attending to both target locations and for two lateral frontal components when contralateral to the target location. Similarly, a central posterior component had a negative gradient that corresponded to the second slope peak regardless of target location. A right posterior component had both an ipsilateral followed by a contralateral gradient. Lateral posterior clusters also had decreases in α and β oscillatory power with a negative slope and contralateral tuning. Only the left posterior component (120-200 ms) corresponded to absolute sound location. The findings indicate a rapid, temporally-organized sequence of gradients thought to reflect interplay between frontal and parietal regions. We conclude these gradients support a target-based saliency map exhibiting aspects of both right-hemisphere dominance and opponent process models.
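The gradient quantification described above (regressing distractor-evoked ERP amplitudes on distance from the attended location) reduces to an ordinary least-squares slope. A minimal pure-Python sketch, with invented amplitude values for illustration only:

```python
# Sketch of the attentional-gradient quantification: OLS slope of
# distractor-evoked ERP amplitude on angular distance from the target.
# The amplitude values below are hypothetical, not the study's data.

def gradient_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

separation_deg = [45, 90, 135, 180]   # distractor offsets from the target
amplitude_uv = [3.1, 2.4, 1.8, 1.2]   # hypothetical mean ERP amplitudes

# A negative slope means the processing benefit falls off with distance:
print(round(gradient_slope(separation_deg, amplitude_uv), 3))  # → -0.014
```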

  2. Role of gravitational versus egocentric cues for human spatial orientation.

    Science.gov (United States)

    Bury, Nils; Bock, Otmar

    2016-04-01

    Our perception of the vertical depends on allocentric information about the visual surrounds, egocentric information about the own body axis and gravicentric information about the pull of gravity. Previous work has documented that some individuals rely strongly on allocentric information, while others do not, and the present work scrutinizes the existence of yet another dichotomy: We hypothesize that in the absence of allocentric cues, some individuals rely strongly on gravicentric information, while others do not. Twenty-four participants were tested at three angles of body pitch (0° = upright, -90° = supine, -110° = head down) after eliminating visual orientation cues. When asked to adjust a rotating tree '…such that the tree looks right,' nine persons set the tree consistently parallel to gravity, eight consistently parallel to their longitudinal axis and seven switched between these two references; responses mid-between gravity and body axis were rare. The outcome was similar when tactile cues were masked by body vibration, as well as when participants were asked to adjust the tree '… such that leaves are at the top and roots are at the bottom'; the incidence of gravicentric responses increased with the instruction to set the tree '… such that leaves are at the top and roots are at the bottom in space, irrespective of your own position.' We conclude that the perceived vertical can be anchored in gravicentric or in egocentric space, depending on instructions and individual preference.

  3. Multimodal information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays

    Science.gov (United States)

    Begault, Durand R.; Bittner, Rachel M.; Anderson, Mark R.

    2012-01-01

    Auditory communication displays within the NextGen data link system may use multiple synthetic speech messages replacing traditional ATC and company communications. The design of an interface for selecting amongst multiple incoming messages can impact both performance (time to select, audit and release a message) and preference. Two design factors were evaluated: physical pressure-sensitive switches versus flat panel "virtual switches", and the presence or absence of auditory feedback from switch contact. Performance with stimuli using physical switches was 1.2 s faster than virtual switches (2.0 s vs. 3.2 s); auditory feedback provided a 0.54 s performance advantage (2.33 s vs. 2.87 s). There was no interaction between these variables. Preference data were highly correlated with performance.

  4. Visual spatial cue use for guiding orientation in two-to-three-year-old children

    NARCIS (Netherlands)

    Brink, D. van den; Janzen, G.

    2013-01-01

    In spatial development, representations of the environment and the use of spatial cues change over time. To date, the influence of individual differences in skills relevant for orientation and navigation has not received much attention. The current study investigated orientation abilities on the basi

  5. The Effect of Tactile Cues on Auditory Stream Segregation Ability of Musicians and Nonmusicians

    DEFF Research Database (Denmark)

    Slater, Kyle D.; Marozeau, Jeremy

    2016-01-01

    Difficulty perceiving music is often cited as one of the main problems facing hearing-impaired listeners. It has been suggested that musical enjoyment could be enhanced if sound information absent due to impairment is transmitted via other sensory modalities such as vision or touch. In this study, we test whether tactile cues can be used to segregate 2 interleaved melodies. Twelve musicians and 12 nonmusicians were asked to detect changes in a 4-note repeated melody interleaved with a random melody. In order to perform this task, the listener must be able to segregate the target melody from the random melody. Tactile cues were applied to the listener’s fingers on half of the blocks. Results showed that tactile cues can significantly improve the melodic segregation ability in both musician and nonmusician groups in challenging listening conditions. Overall, the musician group performance

  6. Recognizing Visual and Auditory Cues in the Detection of Foreign-Language Anxiety

    Science.gov (United States)

    Gregersen, Tammy

    2009-01-01

    This study examines whether nonverbal visual and/or auditory channels are more effective in detecting foreign-language anxiety. Recent research suggests that language teachers are often able to successfully decode the nonverbal behaviors indicative of foreign-language anxiety; however, relatively little is known about whether visual and/or…

  7. Rehabilitation treatment of gait in patients with Parkinson's disease with freezing: a comparison between two physical therapy protocols using visual and auditory cues with or without treadmill training.

    Science.gov (United States)

    Frazzitta, Giuseppe; Maestri, Roberto; Uccellini, Davide; Bertotti, Gabriella; Abelli, Paola

    2009-06-15

    Freezing is a disabling symptom in patients with Parkinson's disease. We investigated the effectiveness of a new rehabilitation strategy based on treadmill training associated with auditory and visual cues. Forty Parkinsonian patients with freezing were randomly assigned to two groups: Group 1 underwent a rehabilitation program based on treadmill training associated with auditory and visual cues, while Group 2 followed a rehabilitation protocol using the same cues but without treadmill training. Functional evaluation was based on the Unified Parkinson's Disease Rating Scale Motor Section (UPDRS III), Freezing of Gait Questionnaire (FOGQ), 6-minute walking test (6MWT), gait speed, and stride cycle. Patients in both groups had significant improvements in all variables considered by the end of the rehabilitation program (all P = 0.0001). Patients treated with the protocol including treadmill training had more improvement than patients in Group 2 on most functional indicators (P = 0.007, P = 0.0004, P = 0.0126, and P = 0.0263 for FOGQ, 6MWT, gait speed, and stride cycle, respectively). The most striking result was obtained for the 6MWT, with a mean increase of 130 m in Group 1 compared with 57 m in Group 2. Our results suggest that treadmill training associated with auditory and visual cues might give better results than more conventional treatments. Treadmill training probably acts as a supplementary external cue.

  8. Attention Cueing and Activity Equally Reduce False Alarm Rate in Visual-Auditory Associative Learning through Improving Memory.

    Science.gov (United States)

    Nikouei Mahani, Mohammad-Ali; Haghgoo, Hojjat Allah; Azizi, Solmaz; Nili Ahmadabadi, Majid

    2016-01-01

    In our daily life, we continually exploit already-learned multisensory associations and form new ones when facing novel situations. Improving our associative learning results in higher cognitive capabilities. We experimentally and computationally studied the learning performance of healthy subjects in a visual-auditory associative learning task across active learning, attention-cueing learning, and passive learning modes. According to our results, the learning mode had no significant effect on learning associations of congruent pairs. In addition, subjects' performance in learning congruent samples was not correlated with their vigilance score. Nevertheless, vigilance score was significantly correlated with the learning performance for non-congruent pairs. Moreover, in the last block of the passive learning mode, subjects made significantly more mistakes, taking non-congruent pairs as associated, and consciously reported lower confidence. These results indicate that attention and activity equally enhanced visual-auditory associative learning for non-congruent pairs, while the false alarm rate in the passive learning mode did not decrease after the second block. We investigated the cause of the higher false alarm rate in the passive learning mode by using a computational model composed of a reinforcement learning module and a memory-decay module. The results suggest that a higher rate of memory decay is the source of making more mistakes and reporting lower confidence on non-congruent pairs in the passive learning mode.
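    The two-module model the abstract describes (a reinforcement-learning update interleaved with memory decay) can be sketched minimally as follows. The learning rate, decay constants, and trial count here are illustrative assumptions, not the authors' fitted parameters:

    ```python
    import math

    def simulate(trials, alpha=0.3, decay=0.05, reward=1.0):
        """Associative strength after interleaved learning and memory decay.

        alpha : reinforcement-learning rate (illustrative)
        decay : per-trial exponential memory-decay rate (illustrative)
        """
        v = 0.0
        for _ in range(trials):
            v += alpha * (reward - v)   # RL update toward the trial outcome
            v *= math.exp(-decay)       # memory decay between trials
        return v

    # A higher decay rate (as inferred for the passive mode) leaves a weaker
    # memory trace, i.e. more confusable associations and more false alarms.
    active = simulate(40, decay=0.02)
    passive = simulate(40, decay=0.20)
    ```

    The asymptotic strength is alpha / (e^decay - 1 + alpha), so the passive-mode trace saturates well below the active-mode one, matching the reported pattern of more mistakes and lower confidence.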

  9. Two Persons with Multiple Disabilities Use Orientation Technology with Auditory Cues to Manage Simple Indoor Traveling

    Science.gov (United States)

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Campodonico, Francesca; Oliva, Doretta

    2010-01-01

    This study was an effort to extend the evaluation of orientation technology for promoting independent indoor traveling in persons with multiple disabilities. Two participants (adults) were included, who were to travel to activity destinations within occupational settings. The orientation system involved (a) cueing sources only at the destinations…

  10. Verbal Auditory Cueing of Improvisational Dance: A Proposed Method for Training Agency in Parkinson’s Disease

    Science.gov (United States)

    Batson, Glenna; Hugenschmidt, Christina E.; Soriano, Christina T.

    2016-01-01

    Dance is a non-pharmacological intervention that helps maintain functional independence and quality of life in people with Parkinson's disease (PPD). Results from controlled studies on group-delivered dance for people with mild-to-moderate stage Parkinson's have shown statistically and clinically significant improvements in gait, balance, and psychosocial factors. Tested interventions include non-partnered dance forms (ballet and modern dance) and partnered forms (tango). In all of these dance forms, specific movement patterns are initially learned through repetition and performed in time to music. Once the basic steps are mastered, students may be encouraged to improvise on the learned steps as they perform them in rhythm with the music. Here, we summarize a method of teaching improvisational dance that advances previously reported benefits of dance for people with Parkinson's disease (PD). The method relies primarily on improvisational verbal auditory cueing, with less emphasis on directed movement instruction. This method builds on the idea that daily living requires flexible, adaptive responses to real-life challenges. In PD, movement disorders not only limit mobility but also impair spontaneity of thought and action. Dance improvisation demands open and immediate interpretation of verbally delivered movement cues, potentially fostering the formation of spontaneous movement strategies. Here, we present an introduction to the proposed method, detailing its methodological specifics and pointing to future directions. The viewpoint advances an embodied cognitive approach that has ecological validity in helping PPD meet the changing demands of daily living. PMID:26925029

  11. Verbal auditory cueing of improvisational dance: A proposed method for training agency in Parkinson’s disease

    Directory of Open Access Journals (Sweden)

    Glenna Batson

    2016-02-01

    Full Text Available Dance is a non-pharmacological intervention that helps maintain functional independence and quality of life in people with Parkinson's disease (PPD). Results from controlled studies on group-delivered dance for people with mild-to-moderate stage Parkinson's have shown statistically and clinically significant improvements in gait, balance, and psychosocial factors. Tested interventions include non-partnered dance forms (ballet and modern dance) and partnered forms (tango). In all of these dance forms, specific movement patterns are initially learned through repetition and performed in time to music. Once the basic steps are mastered, students may be encouraged to improvise on the learned steps as they perform them in rhythm with the music. Here, we summarize a method of teaching improvisational dance that advances previously reported benefits of dance for people with PD. The method relies primarily on improvisational verbal auditory cueing (VAC), with less emphasis on directed movement instruction. This method builds on the idea that daily living requires flexible, adaptive responses to real-life challenges. In PD, movement disorders not only limit mobility but also impair spontaneity of thought and action. Dance improvisation trains spontaneity of thought, fostering open and immediate interpretation of verbally delivered movement cues. Here we present an introduction to the proposed method, detailing its methodological specifics and pointing to future directions. The viewpoint advances an embodied cognitive approach that has ecological validity in helping PPD meet the changing demands of daily living.

  12. The Effect of Attentional Cueing and Spatial Uncertainty in Visual Field Testing.

    Directory of Open Access Journals (Sweden)

    Jack Phu

    Full Text Available To determine the effect of reducing spatial uncertainty by attentional cueing on contrast sensitivity at a range of spatial locations and with different stimulus sizes. Six observers underwent perimetric testing with the Humphrey Visual Field Analyzer (HFA) full threshold paradigm, and the output thresholds were compared to conditions in which the stimulus location was verbally cued to the observer. We varied the number of points cued, the eccentric and spatial location, and the stimulus size (Goldmann sizes I, III and V). Subsequently, four observers underwent laboratory-based psychophysical testing on a custom computer program using the Method of Constant Stimuli to determine frequency-of-seeing (FOS) curves with similar variables. We found that attentional cueing increased contrast sensitivity when measured using the HFA. We report a difference of approximately 2 dB with size I at peripheral and mid-peripheral testing locations. For size III, cueing had a greater effect for points presented in the periphery than in the mid-periphery. There was an exponential decay of the effect of cueing with increasing number of elements cued. Cueing a size V stimulus led to no change. FOS curves generated from laboratory-based psychophysical testing confirmed an increase in contrast detection sensitivity under the same conditions. We found that the FOS curve steepened when spatial uncertainty was reduced. We show that attentional cueing increases contrast sensitivity when using a size I or size III test stimulus on the HFA when up to 8 points are cued, but not when a size V stimulus is cued. We also show that this cueing alters the slope of the FOS curve. This suggests that at least 8 points should be used to minimise potential attentional factors that may affect measurement of contrast sensitivity in the visual field.
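    A frequency-of-seeing curve of the kind measured here is commonly modelled as a cumulative-Gaussian psychometric function of stimulus intensity. This sketch uses assumed thresholds and slopes (not the study's fitted values) to illustrate the two reported effects: a sensitivity gain of roughly 2 dB and a steeper slope when spatial uncertainty is reduced by cueing:

    ```python
    import math

    def fos(intensity_db, threshold_db, sigma_db):
        """Frequency of seeing: probability of detecting a stimulus of the
        given intensity, modelled as a cumulative Gaussian (illustrative)."""
        z = (intensity_db - threshold_db) / sigma_db
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # Cueing shifts the threshold (~2 dB gain, illustrative) and steepens
    # the curve (smaller sigma).
    levels = range(20, 33)
    uncued = [fos(c, threshold_db=26.0, sigma_db=3.0) for c in levels]
    cued = [fos(c, threshold_db=28.0, sigma_db=1.5) for c in levels]
    ```

    At threshold the detection probability is exactly 0.5; the smaller sigma makes the cued curve rise faster around its threshold, which is what "steepened FOS curve" means operationally.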

  13. Flexible spatial perspective-taking: Conversational partners weigh multiple cues in collaborative tasks

    Directory of Open Access Journals (Sweden)

    Alexia Galati

    2013-09-01

    Full Text Available Research on spatial perspective-taking often focuses on the cognitive processes of isolated individuals as they adopt or maintain imagined perspectives. Collaborative studies of spatial perspective-taking typically examine speakers' linguistic choices, while overlooking their underlying processes and representations. We review evidence from two collaborative experiments that examine the contribution of social and representational cues to spatial perspective choices in both language and the organization of spatial memory. Across experiments, speakers organized their memory representations according to the convergence of various cues. When layouts were randomly configured and did not afford intrinsic cues, speakers encoded their partner's viewpoint in memory, if available, but did not use it as an organizing direction. On the other hand, when the layout afforded an intrinsic structure, speakers organized their spatial memories according to the person-centered perspective reinforced by the layout's structure. Similarly, in descriptions, speakers considered multiple cues, whether available a priori or at the interaction. They used partner-centered expressions (e.g., "to your right") more frequently when the partner's viewpoint was misaligned by a small offset or coincided with the layout's structure. Conversely, they used egocentric expressions more frequently when their own viewpoint coincided with the intrinsic structure or when the partner was misaligned by a computationally difficult, oblique offset. Based on these findings we advocate for a framework for flexible perspective-taking: people weigh multiple cues (including social ones) to make attributions about the relative difficulty of perspective-taking for each partner, and adapt behavior to minimize their collective effort. This framework is not specialized for spatial reasoning but instead emerges from the same principles and memory-dependent processes that govern perspective-taking in

  14. Overshadowing of geometric cues by a beacon in a spatial navigation task.

    Science.gov (United States)

    Redhead, Edward S; Hamilton, Derek A; Parker, Matthew O; Chan, Wai; Allison, Craig

    2013-06-01

    In three experiments, we examined whether overshadowing of geometric cues by a discrete landmark (beacon) is due to the relative saliences of the cues. Using a virtual water maze task, human participants were required to locate a platform marked by a beacon in a distinctively shaped pool. In Experiment 1, the beacon overshadowed geometric cues in a trapezium, but not in an isosceles triangle. The longer escape latencies during acquisition in the trapezium control group with no beacon suggest that the geometric cues in the trapezium were less salient than those in the triangle. In Experiment 2, we evaluated whether generalization decrement, caused by the removal of the beacon at test, could account for overshadowing. An additional beacon was placed in an alternative corner. For the control groups, the beacons were identical; for the overshadow groups, they were visually unique. Overshadowing was again found in the trapezium. In Experiment 3, we tested whether the absence of overshadowing in the triangle was due to the geometric cues being more salient than the beacon. Following training, the beacon was relocated to a different corner. Participants approached the beacon rather than the trained platform corner, suggesting that the beacon was more salient. These results suggest that associative processes do not fully explain cue competition in the spatial domain.
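    Cue competition of the kind tested here is classically modelled with the Rescorla-Wagner rule, under which cues trained in compound divide the available associative strength in proportion to their salience, producing overshadowing of the less salient cue. The salience values below are illustrative assumptions, not estimates from the study (which, as the abstract notes, argues that associative processes alone do not fully explain the spatial results):

    ```python
    def rescorla_wagner(saliences, trials=50, beta=0.2, lam=1.0):
        """Compound conditioning: all cues share a single prediction error,
        so associative strength divides in proportion to cue salience."""
        v = [0.0] * len(saliences)
        for _ in range(trials):
            error = lam - sum(v)  # shared prediction error for the compound
            v = [vi + a * beta * error for vi, a in zip(v, saliences)]
        return v

    # Salient beacon vs. weakly salient trapezium geometry (assumed values):
    # the geometry cue is overshadowed, acquiring little strength.
    beacon_v, geom_v = rescorla_wagner([0.8, 0.2])
    ```

    At asymptote the strengths sum to lambda and stand in the ratio of the saliences, which is the baseline associative prediction the experiments probe.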

  15. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    Science.gov (United States)

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
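    The tau variable referred to throughout is the ratio of an object's instantaneous optical size (or sound intensity) to its instantaneous rate of change; for a constant approach speed and small visual angles, visual tau approximates the true TTC. A minimal sketch, with an assumed object width, distance, and speed:

    ```python
    import math

    def optical_angle(width_m, distance_m):
        """Visual angle (radians) subtended by an object of the given width."""
        return 2.0 * math.atan(width_m / (2.0 * distance_m))

    def tau(theta, dtheta_dt):
        """Tau: instantaneous optical size over its rate of expansion."""
        return theta / dtheta_dt

    # A vehicle 1.8 m wide, 30 m away, approaching at 10 m/s (assumed values),
    # with the expansion rate approximated by a finite difference over 10 ms.
    d, v, dt = 30.0, 10.0, 0.01
    theta_now = optical_angle(1.8, d)
    theta_next = optical_angle(1.8, d - v * dt)
    ttc_estimate = tau(theta_now, (theta_next - theta_now) / dt)
    ```

    For small angles theta is roughly width/distance, so tau reduces to distance/speed (3 s here); the heuristic cues the abstract contrasts with tau, such as final optical size, would instead track theta_now directly.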

  16. Assessing implicit odor localization in humans using a cross-modal spatial cueing paradigm.

    Directory of Open Access Journals (Sweden)

    Carolin Moessnang

    Full Text Available BACKGROUND: Navigation based on chemosensory information is one of the most important skills in the animal kingdom. Studies on odor localization suggest that humans have lost this ability. However, the experimental approaches used so far were limited to explicit judgements, which might ignore a residual ability for directional smelling on an implicit level without conscious appraisal. METHODS: A novel cueing paradigm was developed in order to determine whether an implicit ability for directional smelling exists. Participants performed a visual two-alternative forced choice task in which the target was preceded either by a side-congruent or a side-incongruent olfactory spatial cue. An explicit odor localization task was implemented in a second experiment. RESULTS: No effect of cue congruency on mean reaction times could be found. However, a time by condition interaction emerged, with significantly slower responses to congruently compared to incongruently cued targets at the beginning of the experiment. This cueing effect gradually disappeared throughout the course of the experiment. In addition, participants performed at chance level in the explicit odor localization task, thus confirming the results of previous research. CONCLUSION: The implicit cueing task suggests the existence of spatial information processing in the olfactory system. Response slowing after a side-congruent olfactory cue is interpreted as a cross-modal attentional interference effect. In addition, habituation might have led to a gradual disappearance of the cueing effect. It is concluded that under immobile conditions with passive monorhinal stimulation, humans are unable to explicitly determine the location of a pure odorant. Implicitly, however, odor localization seems to exert an influence on human behaviour. To our knowledge, these data are the first to show implicit effects of odor localization on overt human behaviour and thus support the hypothesis of residual

  17. Self-Generated Auditory Feedback as a Cue to Support Rhythmic Motor Stability

    Directory of Open Access Journals (Sweden)

    Gopher Daniel

    2011-12-01

    Full Text Available A goal of the SKILLS project is to develop Virtual Reality (VR)-based training simulators for different application domains, one of which is juggling. Within this context the value of multimodal VR environments for skill acquisition is investigated. In this study, we investigated whether it is necessary to render the sounds of virtual balls hitting virtual hands within the juggling training simulator. First, we recorded sounds at the jugglers' ears and found the sound of ball hitting hands to be audible. Second, we asked 24 jugglers to juggle under normal conditions (Audible) or while listening to pink noise intended to mask the juggling sounds (Inaudible). We found that although the jugglers themselves reported no difference in their juggling across these two conditions, external juggling experts rated rhythmic stability worse in the Inaudible condition than in the Audible condition. This result suggests that auditory information should be rendered in the VR juggling training simulator.

  18. Use of local visual cues for spatial orientation in terrestrial toads (Rhinella arenarum): The role of distance to a goal.

    Science.gov (United States)

    Daneri, M Florencia; Casanave, Emma B; Muzio, Rubén N

    2015-08-01

    The use of environmental visual cues for navigation is an ability present in many groups of animals. The effect of spatial proximity between a visual cue and a goal on reorientation in an environment has been studied in several vertebrate groups, but never previously in amphibians. In this study, we tested the use of local visual cues (beacons) to orient in an open field in the terrestrial toad (Rhinella arenarum). Experiment 1 showed that toads could orient in space using 2 cues located near the rewarded container. Experiment 2 used only 1 cue placed at different distances to the goal and revealed that learning speed was affected by the proximity to the goal (the closer the cue was to the goal, the faster toads learned its location). Experiment 3 showed that the position of a cue results in a different predictive value. Toads preferred cues located closer to the goal more than those located farther away as a reference for orientation. Present results revealed, for the first time, that (a) toads can learn to orient in an open space using visual cues, and that (b) the effect of spatial proximity between a cue and a goal, a learning phenomenon previously observed in other groups of animals such as mammals, birds, fish, and invertebrates, also affects orientation in amphibians. Thus, our results suggest that toads are able to employ spatial strategies that closely parallel those described in other vertebrate groups, supporting an early evolutionary origin for these spatial orientation skills.

  19. A randomised controlled trial evaluating the effect of an individual auditory cueing device on freezing and gait speed in people with Parkinson's disease

    Directory of Open Access Journals (Sweden)

    Lynch Deirdre

    2008-12-01

    Full Text Available Abstract Background Parkinson's disease is a progressive neurological disorder resulting from a degeneration of dopamine-producing cells in the substantia nigra. Clinical symptoms typically affect gait pattern and motor performance. Evidence suggests that individual auditory cueing devices may be used effectively for the management of gait and freezing in people with Parkinson's disease. The primary aim of the randomised controlled trial is to evaluate the effect of an individual auditory cueing device on freezing and gait speed in people with Parkinson's disease. Methods A prospective multi-centre randomised cross-over design trial will be conducted. Forty-seven subjects will be randomised into either Group A or Group B, each with a control and intervention phase. Baseline measurements will be recorded using the Freezing of Gait Questionnaire as the primary outcome measure and 3 secondary outcome measures: the 10 m Walk Test, Timed "Up & Go" Test and the Modified Falls Efficacy Scale. Assessments are taken three times over a 3-week period. A follow-up assessment will be completed after three months. A secondary aim of the study is to evaluate the impact of such a device on the quality of life of people with Parkinson's disease using a qualitative methodology. Conclusion The Apple iPod-Shuffle™ and similar devices provide a cost-effective and innovative platform for integration of individual auditory cueing devices into clinical, social and home environments, and are shown to have an immediate effect on gait, with improvements in walking speed, stride length and freezing. It is evident that individual auditory cueing devices are of benefit to people with Parkinson's disease, and the aim of this randomised controlled trial is to maximise the benefits by allowing the individual to use devices in both a clinical and social setting, with minimal disruption to their daily routine. Trial registration The protocol for this study is registered

  20. A randomised controlled trial evaluating the effect of an individual auditory cueing device on freezing and gait speed in people with Parkinson's disease

    OpenAIRE

    Lynch Deirdre; Galvin Rose; Ledger Sean; Stokes Emma K

    2008-01-01

    Abstract Background Parkinson's disease is a progressive neurological disorder resulting from a degeneration of dopamine-producing cells in the substantia nigra. Clinical symptoms typically affect gait pattern and motor performance. Evidence suggests that individual auditory cueing devices may be used effectively for the management of gait and freezing in people with Parkinson's disease. The primary aim of the randomised controlled trial is to evaluate the effect of an individual a...

  1. Training-induced plasticity of auditory localization in adult mammals.

    Directory of Open Access Journals (Sweden)

    Oliver Kacelnik

    2006-04-01

    Full Text Available Accurate auditory localization relies on neural computations based on spatial cues present in the sound waves at each ear. The values of these cues depend on the size, shape, and separation of the two ears and can therefore vary from one individual to another. As with other perceptual skills, the neural circuits involved in spatial hearing are shaped by experience during development and retain some capacity for plasticity in later life. However, the factors that enable and promote plasticity of auditory localization in the adult brain are unknown. Here we show that mature ferrets can rapidly relearn to localize sounds after having their spatial cues altered by reversibly occluding one ear, but only if they are trained to use these cues in a behaviorally relevant task, with greater and more rapid improvement occurring with more frequent training. We also found that auditory adaptation is possible in the absence of vision or error feedback. Finally, we show that this process involves a shift in sensitivity away from the abnormal auditory spatial cues to other cues that are less affected by the earplug. The mature auditory system is therefore capable of adapting to abnormal spatial information by reweighting different localization cues. These results suggest that training should facilitate acclimatization to hearing aids in the hearing impaired.
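    The "reweighting" of localization cues described here can be illustrated with a standard reliability-weighted cue-combination scheme: each cue's weight is proportional to its inverse variance, so when an earplug corrupts the binaural cues their reliability drops and weight shifts toward cues less affected by the plug (e.g., monaural spectral cues). All variance values below are assumed for illustration:

    ```python
    def reliability_weights(variances):
        """Inverse-variance cue weights, normalised to sum to 1."""
        inv = [1.0 / v for v in variances]
        total = sum(inv)
        return [i / total for i in inv]

    # Variances (deg^2, illustrative) for [binaural, spectral] cues.
    normal = reliability_weights([1.0, 4.0])   # binaural cues dominate
    plugged = reliability_weights([9.0, 4.0])  # earplug degrades binaural cues
    ```

    With these assumed numbers the binaural weight falls from 0.8 to about 0.31 after plugging, qualitatively matching the shift in sensitivity away from the abnormal spatial cues that the training produced.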

  2. [Development of auditory-visual spatial integration using saccadic response time as the index].

    Science.gov (United States)

    Kato, Masaharu; Konishi, Kaoru; Kurosawa, Makiko; Konishi, Yukuo

    2006-05-01

    We measured saccadic response time (SRT) to investigate developmental changes related to spatially aligned or misaligned auditory and visual stimuli responses. We exposed 4-, 5-, and 11-month-old infants to ipsilateral or contralateral auditory-visual stimuli and monitored their eye movements using an electro-oculographic (EOG) system. The SRT analyses revealed four main results. First, saccades were triggered by visual stimuli but not always triggered by auditory stimuli. Second, SRTs became shorter as the children grew older. Third, SRTs for the ipsilateral and visual-only conditions were the same in all infants. Fourth, SRTs for the contralateral condition were longer than for the ipsilateral and visual-only conditions in 11-month-old infants but were the same for all three conditions in 4- and 5-month-old infants. These findings suggest that infants acquire the function of auditory-visual spatial integration underlying saccadic eye movement between the ages of 5 and 11 months. The dependency of SRTs on the spatial configuration of auditory and visual stimuli can be explained by cortical control of the superior colliculus. Our finding of no differences in SRTs between the ipsilateral and visual-only conditions suggests that there are multiple pathways for controlling the superior colliculus and that these pathways have different developmental time courses.

  3. Cross-modal training induces changes in spatial representations early in the auditory processing pathway.

    Science.gov (United States)

    Bruns, Patrick; Liebnau, Ronja; Röder, Brigitte

    2011-09-01

    In the ventriloquism aftereffect, brief exposure to a consistent spatial disparity between auditory and visual stimuli leads to a subsequent shift in subjective sound localization toward the positions of the visual stimuli. Such rapid adaptive changes probably play an important role in maintaining the coherence of spatial representations across the various sensory systems. In the research reported here, we used event-related potentials (ERPs) to identify the stage in the auditory processing stream that is modulated by audiovisual discrepancy training. Both before and after exposure to synchronous audiovisual stimuli that had a constant spatial disparity of 15°, participants reported the perceived location of brief auditory stimuli that were presented from central and lateral locations. In conjunction with a sound localization shift in the direction of the visual stimuli (the behavioral ventriloquism aftereffect), auditory ERPs as early as 100 ms poststimulus (N100) were systematically modulated by the disparity training. These results suggest that cross-modal learning was mediated by a relatively early stage in the auditory cortical processing stream.

  4. Effects of Visual, Auditory, and Tactile Navigation Cues on Navigation Performance, Situation Awareness, and Mental Workload

    Science.gov (United States)

    2007-02-01

    multiple resource theory (MRT) (Wickens & Hollands, 2000), which was also the basis for the IMPRINT workload analysis. Multiple resource theory proposes that people have...are supported by MRT, several display modalities for presenting navigation waypoint information were designed, including visual, monaural and spatial...all other experimental conditions. A Cyber Acoustics AC-200 supra-aural stereo headset was connected to the PC via the Creative Labs Sound Blaster

  5. Are Shadows Only Coarsely Processed? Exploring Depth Discrimination with Cast Shadow Cue Conflicts Across Spatial Frequency

    Directory of Open Access Journals (Sweden)

    P. G. Lovell

    2014-08-01

    Full Text Available Shape-from-shading is a ubiquitous cue informing object identification and depth judgements. Cast shadows contribute towards these judgements (see Mamassian, Knill and Kersten, 1998). A number of studies have reported that search times for inconsistent shadows vary according to whether the scene is presented as if illuminated from above or below, though the direction of these inhomogeneities is sometimes contested (see Rensink and Cavanagh, 2004, and Lovell et al., 2009). Lovell et al. posit that the processing of shadows is handled by coarse-scale processes, but only in light-from-above presentations. The current study explores depth discrimination judgements informed by cast shadows. We created stimuli featuring a pair of floating discs casting shadows onto a fronto-parallel surface. Participants were asked to identify the disc protruding the most towards them. One disc featured a cast shadow with a cue conflict, in which the low and high spatial-frequency components conveyed different depth information. This allowed us to estimate the weight assigned to the different cues when depth discrimination judgements were made. Firstly, we find that depth judgements consistently reflected the coarse-scale cues; fine-scale cues were largely ignored. Secondly, we found only small differences in the cue weightings for stimuli presented as if lit from above or below. The latter result is perplexing, as previous studies have shown a difference between light-from-above and light-from-below conditions. We speculate that this difference reflects the task undertaken, i.e., discriminating depths rather than searching for odd shadows.
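    Cue weights in a conflict paradigm like this one are commonly estimated from a linear cue-combination model, in which perceived depth is a weighted average of the depths signalled by the coarse (low spatial-frequency) and fine (high spatial-frequency) shadow components. The depth values and weight below are assumed for illustration, not the study's estimates:

    ```python
    def combined_depth(coarse_depth, fine_depth, w_coarse):
        """Linear cue combination: perceived depth as a weighted average of
        the depths signalled by coarse and fine shadow cues."""
        return w_coarse * coarse_depth + (1.0 - w_coarse) * fine_depth

    def estimate_w_coarse(perceived, coarse_depth, fine_depth):
        """Recover the coarse-cue weight from a perceived-depth judgement."""
        return (perceived - fine_depth) / (coarse_depth - fine_depth)

    # Conflict stimulus (assumed): coarse cue signals 4 cm of protrusion,
    # fine cue signals 1 cm; a coarse-dominated observer weights it ~0.9.
    perceived = combined_depth(4.0, 1.0, w_coarse=0.9)
    w = estimate_w_coarse(perceived, 4.0, 1.0)
    ```

    Inverting the same linear model on observers' judgements is what yields the finding that coarse-scale cues carry nearly all the weight while fine-scale cues are largely ignored.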

  6. Effect of Exogenous Cues on Covert Spatial Orienting in Deaf and Normal Hearing Individuals.

    Science.gov (United States)

    Prasad, Seema Gorur; Patil, Gouri Shanker; Mishra, Ramesh Kumar

    2015-01-01

    Deaf individuals are known to process visual stimuli better at the periphery compared to the normal-hearing population. However, very few studies have examined attention orienting in the oculomotor domain in the deaf, particularly when targets appear at variable eccentricity. In this study, we examined whether the visual perceptual processing advantage reported in deaf people also modulates spatial attentional orienting with eye movement responses. We used a spatial cueing task with cued and uncued targets that appeared at two different eccentricities and explored attentional facilitation and inhibition. We elicited both a saccadic and a manual response. The deaf showed a higher cueing effect for ocular responses than the normal-hearing participants. However, there was no group difference for the manual responses. There was also higher facilitation at the periphery for both saccadic and manual responses, irrespective of group. These results suggest that, owing to their superior visual processing ability, the deaf may orient attention faster to targets. We discuss the results in terms of previous studies on cueing and attentional orienting in the deaf.

  7. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness.

    Science.gov (United States)

    Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui

    2015-09-01

    Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and that activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness ...). Deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects. These areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and were negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may

  8. Male Music Frogs Compete Vocally on the Basis of Temporal Sequence Rather Than Spatial Cues of Rival Calls

    Institute of Scientific and Technical Information of China (English)

    Fan JIANG; Guangzhan FANG; Fei XUE; Jianguo CUI; Steven E BRAUTH; Yezhong TANG

    2015-01-01

    Male-male vocal competition in anuran species may be influenced by cues related to the temporal sequence of male calls as well as by internal temporal, spectral and spatial ones. Nevertheless, the conditions under which each type of cue is important remain unclear. Since the salience of different cues could be reflected by the dynamic properties of male-male competition under experimental manipulation, we investigated the effects of repeated playbacks of conspecific calls on male call production in the Emei music frog (Babina daunchina). In Babina, most males produce calls from nest burrows, which modify the spectral features of the calls. Females prefer calls produced from inside burrows, which are defined as having high sexual attractiveness (HSA), over those produced outside burrows, defined as having low sexual attractiveness (LSA). In this study, HSA and LSA calls were broadcast either antiphonally or stereophonically through spatially separated speakers, in which the temporal sequence and/or spatial position of the playbacks was either predictable or random. Results showed that most males consistently avoided producing advertisement calls that overlapped the playback stimuli and generally produced calls competitively in advance of the playbacks. Furthermore, males preferentially competed with the HSA calls when the sequence was predictable but competed equally with HSA and LSA calls if the sequence was random, regardless of the availability of spatial cues, implying that males relied more on available sequence cues than on spatial ones to remain competitive.

  9. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2016-06-17

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy for audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context.

  10. Cues, context, and long-term memory: the role of the retrosplenial cortex in spatial cognition

    Directory of Open Access Journals (Sweden)

    Adam M P Miller

    2014-08-01

    Spatial navigation requires representations of landmarks and other navigation cues. The retrosplenial cortex (RSC) is anatomically positioned between limbic areas important for memory formation, such as the hippocampus and the anterior thalamus, and cortical regions along the dorsal stream that contribute importantly to long-term spatial representation, such as the posterior parietal cortex. Damage to the RSC severely impairs allocentric representations of the environment, including the ability to derive navigational information from landmarks. The specific deficits seen in tests of human and rodent navigation suggest that the RSC supports allocentric representation by processing the stable features of the environment and the spatial relationships among them. In addition to spatial cognition, the RSC plays a key role in contextual and episodic memory, and contributes to the acquisition and consolidation of long-term spatial and contextual memory through its interactions with the hippocampus. Within this framework, the RSC plays a dual role as part of the feedforward network providing sensory and mnemonic input to the hippocampus and as a target of the hippocampal-dependent systems consolidation of long-term memory.

  11. Detection of auditory signals in quiet and noisy backgrounds while performing a visuo-spatial task

    Directory of Open Access Journals (Sweden)

    Vishakha W Rawool

    2016-01-01

    Context: The ability to detect important auditory signals while performing visual tasks may be further compounded by background chatter. Thus, it is important to know how task performance may interact with background chatter to hinder signal detection. Aim: To examine any interactive effects of speech-spectrum noise and task performance on the ability to detect signals. Settings and Design: The setting was a sound-treated booth. A repeated measures design was used. Materials and Methods: Auditory thresholds of 20 normal adults were determined at 0.5, 1, 2 and 4 kHz in the following conditions, presented in a random order: (1) quiet with attention; (2) quiet with a visuo-spatial task or puzzle (distraction); (3) noise with attention; and (4) noise with task. Statistical Analysis: Multivariate analyses of variance (MANOVA) with three repeated factors (quiet versus noise, visuo-spatial task versus no task, signal frequency). Results: MANOVA revealed significant main effects for noise and signal frequency and significant noise–frequency and task–frequency interactions. Distraction caused by performing the task worsened the thresholds for tones presented at the beginning of the experiment and had no effect on tones presented in the middle. At the end of the experiment, thresholds (4 kHz) were better while performing the task than those obtained without performing the task. These effects were similar across the quiet and noise conditions. Conclusion: Detection of auditory signals is difficult at the beginning of a distracting visuo-spatial task, but over time, task learning and auditory training effects can nullify the effect of distraction and may improve detection of high-frequency sounds.

  12. Pip and pop : Non-spatial auditory signals improve spatial visual search

    NARCIS (Netherlands)

    Burg, E. van der; Olivers, C.N.L.; Bronkhorst, A.W.; Theeuwes, J.

    2008-01-01

    Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the auditory signal carries no information about the location or identity of the visual target.

  13. A reinforcement learning approach to model interactions between landmarks and geometric cues during spatial learning.

    Science.gov (United States)

    Sheynikhovich, Denis; Arleo, Angelo

    2010-12-13

    In contrast to predictions derived from the associative learning theory, a number of behavioral studies suggested the absence of competition between geometric cues and landmarks in some experimental paradigms. In parallel to these studies, neurobiological experiments suggested the existence of separate independent memory systems which may not always interact according to classic associative principles. In this paper we attempt to combine these two lines of research by proposing a model of spatial learning that is based on the theory of multiple memory systems. In our model, a place-based locale strategy uses activities of modeled hippocampal place cells to drive navigation to a hidden goal, while a stimulus-response taxon strategy, presumably mediated by the dorso-lateral striatum, learns landmark-approaching behavior. A strategy selection network, proposed to reside in the prefrontal cortex, implements a simple reinforcement learning rule to switch behavioral strategies. The model is used to reproduce the results of a behavioral experiment in which an interaction between a landmark and geometric cues was studied. We show that this model, built on the basis of neurobiological data, can explain the lack of competition between the landmark and geometry, potentiation of geometry learning by the landmark, and blocking. Namely, we propose that the geometry potentiation is a consequence of cooperation between memory systems during learning, while blocking is due to competition between the memory systems during action selection.
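    The strategy-selection rule described in this abstract can be illustrated with a minimal reinforcement-learning sketch. This is not the authors' implementation: the softmax selection rule, learning rate, and success probabilities below are illustrative assumptions, standing in for the competition between a hippocampal place-based "locale" strategy and a striatal stimulus-response "taxon" strategy.

    ```python
    import math
    import random

    def select_strategy(q_values, temperature=0.5):
        """Softmax (Boltzmann) choice over competing strategies."""
        exps = [math.exp(q / temperature) for q in q_values]
        r = random.random() * sum(exps)
        cumulative = 0.0
        for i, e in enumerate(exps):
            cumulative += e
            if r < cumulative:
                return i
        return len(exps) - 1

    def train(strategy_success, n_trials=500, alpha=0.1, seed=0):
        """Learn the value of each navigation strategy by trial and error.

        strategy_success[i] is the probability that strategy i reaches the
        goal on a trial, e.g. 0 = place-based 'locale' strategy,
        1 = stimulus-response 'taxon' strategy.
        """
        random.seed(seed)
        q = [0.0] * len(strategy_success)
        for _ in range(n_trials):
            i = select_strategy(q)
            reward = 1.0 if random.random() < strategy_success[i] else 0.0
            q[i] += alpha * (reward - q[i])  # delta-rule value update
        return q
    ```

    With `train([0.8, 0.3])`, the more reliable strategy accumulates the higher value and is chosen increasingly often, analogous to the prefrontal strategy-selection network proposed in the model.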

  14. The relationship between visual-spatial and auditory-verbal working memory span in Senegalese and Ugandan children.

    Directory of Open Access Journals (Sweden)

    Michael J Boivin

    BACKGROUND: Using the Kaufman Assessment Battery for Children (K-ABC), Conant et al. (1999) observed that visual and auditory working memory (WM) span were independent in both younger and older children from DR Congo, but related in older American children and in Lao children. The present study evaluated whether visual and auditory WM span were independent in Ugandan and Senegalese children. METHOD: In a linear regression analysis we used visual (Spatial Memory, Hand Movements) and auditory (Number Recall) WM along with education and physical development (weight/height) as predictors. The predicted variable in this analysis was Word Order, a verbal memory task that has both visual and auditory memory components. RESULTS: Both the younger (<8.5 yrs) and older (>8.5 yrs) Ugandan children had auditory memory span (Number Recall) that was strongly predictive of Word Order performance. For both the younger and older groups of Senegalese children, only visual WM span (Spatial Memory) was strongly predictive of Word Order. Number Recall was not significantly predictive of Word Order in either age group. CONCLUSIONS: It is possible that greater literacy from more schooling for the Ugandan age groups mediated their greater degree of interdependence between auditory and verbal WM. Our findings support those of Conant et al., who observed in their cross-cultural comparisons that stronger education seemed to enhance the dominance of the phonological-auditory processing loop for WM.

  15. Dissociable Memory- and Response-Related Activity in Parietal Cortex during Auditory Spatial Working Memory

    Directory of Open Access Journals (Sweden)

    Claude Alain

    2010-12-01

    Attending and responding to sound location generates increased activity in parietal cortex, which may index auditory spatial working memory and/or goal-directed action. Here, we used an n-back task (Experiment 1) and an adaptation paradigm (Experiment 2) to distinguish memory-related activity from that associated with goal-directed action. In Experiment 1, participants indicated, in separate blocks of trials, whether the incoming stimulus was presented at the same location as in the previous trial (1-back) or two trials ago (2-back). Prior to a block of trials, participants were told to use their left or right index finger. Accuracy and reaction times were worse for the 2-back than for the 1-back condition. The analysis of fMRI data revealed greater sustained task-related activity in the inferior parietal lobule (IPL) and superior frontal sulcus during 2-back than 1-back after accounting for response-related activity elicited by the targets. Target detection and response execution were also associated with enhanced activity in the IPL bilaterally, though the activation was anterior to that associated with sustained task-related activity. In Experiment 2, we used an event-related design in which participants listened (no response required) to trials that comprised four sounds presented either at the same location or at four different locations. We found larger IPL activation for changes in sound location than for sounds presented at the same location. The IPL activation overlapped with that observed during the auditory spatial working memory task. Together, these results provide converging evidence supporting a role of parietal cortex in auditory spatial working memory that can be dissociated from response selection and execution.

  16. How does experience modulate auditory spatial processing in individuals with blindness?

    Science.gov (United States)

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-jia; Li, Jian-jun; Ting, Kin-hung; Wang, Jun; Lee, Tatia M C

    2015-05-01

    Comparing early- and late-onset blindness in individuals offers a unique model for studying the influence of visual experience on neural processing. This study investigated how prior visual experience would modulate auditory spatial processing among blind individuals. BOLD responses of early- and late-onset blind participants were captured while performing a sound localization task. The task required participants to listen to novel "Bat-ears" sounds, analyze the spatial information embedded in the sounds, and specify out of 15 locations where the sound would have been emitted. In addition to sound localization, participants were assessed on visuospatial working memory and general intellectual abilities. The results revealed common increases in BOLD responses in the middle occipital gyrus, superior frontal gyrus, precuneus, and precentral gyrus during sound localization for both groups. Between-group dissociations, however, were found in the right middle occipital gyrus and left superior frontal gyrus. The BOLD responses in the left superior frontal gyrus were significantly correlated with accuracy on sound localization and visuospatial working memory abilities among the late-onset blind participants. In contrast, the accuracy on sound localization only correlated with BOLD responses in the right middle occipital gyrus among the early-onset counterpart. The findings support the notion that early-onset blind individuals rely more on the occipital areas as a result of cross-modal plasticity for auditory spatial processing, while late-onset blind individuals rely more on the prefrontal areas which subserve visuospatial working memory.

  17. Auditory spatial acuity approximates the resolving power of space-specific neurons.

    Directory of Open Access Journals (Sweden)

    Avinash D S Bala

    The relationship between neuronal acuity and behavioral performance was assessed in the barn owl (Tyto alba), a nocturnal raptor renowned for its ability to localize sounds and for the topographic representation of auditory space found in the midbrain. We measured discrimination of sound-source separation using a newly developed procedure involving the habituation and recovery of the pupillary dilation response (PDR). The smallest discriminable change of source location was found to be about two times finer in azimuth than in elevation. Recordings from neurons in the midbrain space map revealed that their spatial tuning, like the spatial discrimination behavior, was also better in azimuth than in elevation by a factor of about two. Because the PDR behavioral assay is mediated by the same circuitry whether discrimination is assessed in azimuth or in elevation, this difference in vertical and horizontal acuity is likely to reflect a true difference in sensory resolution, without additional confounding effects of differences in motor performance in the two dimensions. Our results, therefore, are consistent with the hypothesis that the acuity of the midbrain space map determines auditory spatial discrimination.

  18. Developmental Changes in the Effect of Verbal, Non-verbal, and Spatial-Positional Cues for Memory

    Science.gov (United States)

    Derevensky, Jeffrey

    1976-01-01

    Sixty kindergarten, sixty second grade, and sixty fourth grade students performed several memory tasks under one of six conditions. The conditions differed as to the method of presentation of information. The study focused on developmental changes in children's use of verbal, nonverbal, and spatial-positional cues for memory. (Editor)

  19. Developmental Changes in the Effect of Verbal, Non-Verbal and Spatial-Positional Cues on Retention.

    Science.gov (United States)

    Derevensky, Jeffrey

    Sixty kindergarten, 60 second-grade, and 60 fourth-grade students performed several memory tasks under one of six conditions. The conditions differed as to the method of presentation of information. The study was focused on developmental changes in children's use of verbal, nonverbal, and spatial-positional cues for memory. The results, in…

  20. Learning of spatial statistics in nonhuman primates: contextual cueing in baboons (Papio papio).

    Science.gov (United States)

    Goujon, Annabelle; Fagot, Joel

    2013-06-15

    A growing number of theories of cognition suggest that many of our behaviors result from the ability to implicitly extract and use statistical redundancies present in complex environments. In an attempt to develop an animal model of statistical learning mechanisms in humans, the current study investigated spatial contextual cueing (CC) in nonhuman primates. Twenty-five baboons (Papio papio) were trained to search for a target (T) embedded within configurations of distractors (L) that were either predictive or non-predictive of the target location. Baboons exhibited an early CC effect, which remained intact after a 6-week delay and stable across extensive training of 20,000 trials. These results demonstrate the baboons' ability to learn spatial contingencies, as well as the robustness of CC as a cognitive phenomenon across species. Nevertheless, in both the youngest and oldest baboons, CC required many more trials to emerge than in baboons of intermediate age. As a whole, these results reveal strong similarities between CC in humans and baboons, suggesting similar statistical learning mechanisms in these two species. Therefore, baboons provide a valid model to investigate how statistical learning mechanisms develop and/or age during the life span, as well as how these mechanisms are implemented in neural networks, and how they have evolved throughout the phylogeny.

  1. Using Spatial Manipulation to Examine Interactions between Visual and Auditory Encoding of Pitch and Time.

    Science.gov (United States)

    McLachlan, Neil M; Greco, Loretta J; Toner, Emily C; Wilson, Sarah J

    2010-01-01

    Music notations use both symbolic and spatial representation systems. Novice musicians do not have the training to associate symbolic information with musical identities, such as chords or rhythmic and melodic patterns. They provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies of feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more for graphic notation. This points to greater compatibility of auditory and visual neural codes for novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians.

  2. Using spatial manipulation to examine interactions between visual and auditory encoding of pitch and time

    Directory of Open Access Journals (Sweden)

    Neil M McLachlan

    2010-12-01

    Music notations use both symbolic and spatial representation systems. Novice musicians do not have the training to associate symbolic information with musical identities, such as chords or rhythmic and melodic patterns. They provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies of feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more for graphic notation. This points to greater compatibility of auditory and visual neural codes for novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians.

  3. Spatial profile and differential recruitment of GABAB modulate oscillatory activity in auditory cortex.

    Science.gov (United States)

    Oswald, Anne-Marie M; Doiron, Brent; Rinzel, John; Reyes, Alex D

    2009-08-19

    The interplay between inhibition and excitation is at the core of cortical network activity. In many cortices, including auditory cortex (ACx), interactions between excitatory and inhibitory neurons generate synchronous network gamma oscillations (30-70 Hz). Here, we show that differences in the connection patterns and synaptic properties of excitatory-inhibitory microcircuits permit the spatial extent of network inputs to modulate the magnitude of gamma oscillations. Simultaneous multiple whole-cell recordings from connected fast-spiking interneurons and pyramidal cells in L2/3 of mouse ACx slices revealed that for intersomatic distances <50 microm, most inhibitory connections occurred in reciprocally connected (RC) pairs; at greater distances, inhibitory connections were equally likely in RC and nonreciprocally connected (nRC) pairs. Furthermore, the GABA(B)-mediated inhibition in RC pairs was weaker than in nRC pairs. Simulations with a network model that incorporated these features showed strong, gamma band oscillations only when the network inputs were confined to a small area. These findings suggest a novel mechanism by which oscillatory activity can be modulated by adjusting the spatial distribution of afferent input.

  4. Effects of spatial response coding on distractor processing: evidence from auditory spatial negative priming tasks with keypress, joystick, and head movement responses.

    Science.gov (United States)

    Möller, Malte; Mayr, Susanne; Buchner, Axel

    2015-01-01

    Prior studies of spatial negative priming indicate that distractor-assigned keypress responses are inhibited as part of visual, but not auditory, processing. However, recent evidence suggests that static keypress responses are not directly activated by spatially presented sounds and, therefore, might not call for an inhibitory process. In order to investigate the role of response inhibition in auditory processing, we used spatially directed responses that have been shown to result in direct response activation to irrelevant sounds. Participants localized a target sound by performing manual joystick responses (Experiment 1) or head movements (Experiment 2B) while ignoring a concurrent distractor sound. Relations between prime distractor and probe target were systematically manipulated (repeated vs. changed) with respect to identity and location. Experiment 2A investigated the influence of distractor sounds on spatial parameters of head movements toward target locations and showed that distractor-assigned responses are immediately inhibited to prevent false responding in the ongoing trial. Interestingly, performance in Experiments 1 and 2B was not generally impaired when the probe target appeared at the location of the former prime distractor and required a previously withheld and presumably inhibited response. Instead, performance was impaired only when prime distractor and probe target mismatched in terms of location or identity, which fully conforms to the feature-mismatching hypothesis. Together, the results suggest that response inhibition operates in auditory processing when response activation is provided but is presumably too short-lived to affect responding on the subsequent trial.

  5. Express attentional re-engagement but delayed entry into consciousness following invalid spatial cues in visual search.

    Directory of Open Access Journals (Sweden)

    Benoit Brisson

    BACKGROUND: In predictive spatial cueing studies, reaction times (RTs) are shorter for targets appearing at cued locations (valid trials) than at other locations (invalid trials). An increase in the amplitude of early P1 and/or N1 event-related potential (ERP) components is also present for items appearing at cued locations, reflecting early attentional sensory gain control mechanisms. However, it is still unknown at which stage in the processing stream these early amplitude effects are translated into latency effects. METHODOLOGY/PRINCIPAL FINDINGS: Here, we measured the latency of two ERP components, the N2pc and the sustained posterior contralateral negativity (SPCN), to evaluate whether visual selection (as indexed by the N2pc) and visual short-term memory processes (as indexed by the SPCN) are delayed in invalid trials compared to valid trials. The P1 was larger contralateral to the cued side, indicating that attention was deployed to the cued location prior to the target onset. Despite these early amplitude effects, the N2pc onset latency was unaffected by cue validity, indicating an express, quasi-instantaneous re-engagement of attention in invalid trials. In contrast, latency effects were observed for the SPCN, and these were correlated with the RT effect. CONCLUSIONS/SIGNIFICANCE: Results show that latency differences that could explain the RT cueing effects must occur after the visual selection processes giving rise to the N2pc, but at or before transfer into visual short-term memory, as reflected by the SPCN, at least in discrimination tasks in which the target is presented concurrently with at least one distractor. Given that the SPCN has previously been associated with conscious report, these results further show that entry into consciousness is delayed following invalid cues.

  6. Goal orientation by geometric and feature cues: spatial learning in the terrestrial toad Rhinella arenarum.

    Science.gov (United States)

    Sotelo, María Inés; Bingman, Verner Peter; Muzio, Rubén N

    2015-01-01

    Although of crucial importance in vertebrate evolution, amphibians are rarely considered in studies of comparative cognition. Using water as reward, we studied whether the terrestrial toad, Rhinella arenarum, is also capable of encoding geometric and feature information to navigate to a goal location. Experimental toads, partially dehydrated, were trained in either a white rectangular box (Geometry-only, Experiment 1) or in the same box with a removable colored panel (Geometry-Feature, Experiment 2) covering one wall. Four water containers were used, but only one (Geometry-Feature), or two in geometrically equivalent corners (Geometry-only), had water accessible to the trained animals. After learning to successfully locate the water reward, probe trials were carried out by changing the shape of the arena or the location of the feature cue. Probe tests revealed that, under the experimental conditions used, toads can use both geometry and feature to locate a goal location, but geometry is more potent as a navigational cue. The results generally agree with findings from other vertebrates and support the idea that, at the behavioral level, geometric orientation is a conserved feature shared by all vertebrates.

  7. Colorful Success: Preschoolers' Use of Perceptual Color Cues to Solve a Spatial Reasoning Problem

    Science.gov (United States)

    Joh, Amy S.; Spivey, Leigh A.

    2012-01-01

    Spatial reasoning, a crucial skill for everyday actions, develops gradually during the first several years of childhood. Previous studies have shown that perceptual information and problem solving strategies are critical for successful spatial reasoning in young children. Here, we sought to link these two factors by examining children's use of…

  8. Neuronal representations of distance in human auditory cortex.

    Science.gov (United States)

    Kopčo, Norbert; Huang, Samantha; Belliveau, John W; Raij, Tommi; Tengshe, Chinmayi; Ahveninen, Jyrki

    2012-07-03

    Neuronal mechanisms of auditory distance perception are poorly understood, largely because contributions of intensity and distance processing are difficult to differentiate. Typically, the received intensity increases when sound sources approach us. However, we can also distinguish between soft-but-nearby and loud-but-distant sounds, indicating that distance processing can also be based on intensity-independent cues. Here, we combined behavioral experiments, fMRI measurements, and computational analyses to identify the neural representation of distance independent of intensity. In a virtual reverberant environment, we simulated sound sources at varying distances (15-100 cm) along the right-side interaural axis. Our acoustic analysis suggested that, of the individual intensity-independent depth cues available for these stimuli, direct-to-reverberant ratio (D/R) is more reliable and robust than interaural level difference (ILD). However, on the basis of our behavioral results, subjects' discrimination performance was more consistent with complex intensity-independent distance representations, combining both available cues, than with representations on the basis of either D/R or ILD individually. fMRI activations to sounds varying in distance (containing all cues, including intensity), compared with activations to sounds varying in intensity only, were significantly increased in the planum temporale and posterior superior temporal gyrus contralateral to the direction of stimulation. This fMRI result suggests that neurons in posterior nonprimary auditory cortices, in or near the areas processing other auditory spatial features, are sensitive to intensity-independent sound properties relevant for auditory distance perception.
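    The two intensity-independent distance cues compared in this abstract, direct-to-reverberant ratio (D/R) and interaural level difference (ILD), can both be estimated with a few lines of signal processing. The sketch below is illustrative only; the 2.5 ms direct-sound window and the function names are assumptions, not the authors' analysis code.

    ```python
    import numpy as np

    def interaural_level_difference(left, right):
        """ILD in dB: level of the left-ear signal relative to the right."""
        rms_l = np.sqrt(np.mean(np.square(left)))
        rms_r = np.sqrt(np.mean(np.square(right)))
        return 20.0 * np.log10(rms_l / rms_r)

    def direct_to_reverberant_ratio(impulse_response, fs, split_ms=2.5):
        """D/R in dB from a room impulse response.

        Energy up to `split_ms` after the direct-path peak counts as
        direct sound; the remainder counts as reverberant energy.
        """
        peak = int(np.argmax(np.abs(impulse_response)))
        split = peak + int(round(split_ms * 1e-3 * fs))
        direct = np.sum(np.square(impulse_response[:split]))
        reverb = np.sum(np.square(impulse_response[split:]))
        return 10.0 * np.log10(direct / reverb)
    ```

    As the abstract notes, the received intensity changes with distance too, which is why both of these measures are deliberately level-ratio based rather than absolute-level based.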

  9. Effects of Spatial and Non-Spatial Multi-Modal Cues on Orienting of Visual-Spatial Attention in an Augmented Environment

    Science.gov (United States)

    2007-11-01

    Figure 1 of the report depicts Broadbent's (1958) model of attention (after Broadbent, 1958, p. 299). A salient stimulus draws our attention to it by virtue of its characteristics (Treisman & Gormican, 1988; Broadbent, 1958; Wickens, 1984). Stimuli are categorized into two... Although the multiple resource model includes only visual and auditory modalities, this model still has applicability to multi-tasks.

  10. Spatially valid proprioceptive cues improve the detection of a visual stimulus

    DEFF Research Database (Denmark)

    Jackson, Carl P T; Miall, R Chris; Balslev, Daniela

    2010-01-01

    Vision and proprioception are the main sensory modalities that convey hand location and direction of movement. Fusion of these sensory signals into a single robust percept is now well documented. However, it is not known whether these modalities also interact in the spatial allocation of attention...... it was incompatible. These results suggest that proprioception influences the allocation of attention in visual space....

  11. Contributions of sensory coding and attentional control to individual differences in performance in spatial auditory selective attention tasks

    Directory of Open Access Journals (Sweden)

    Lengshi Dai

    2016-10-01

    Full Text Available Listeners with normal hearing thresholds differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials from the scalp (ERPs, reflecting cortical responses to sound), and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited by fine stimulus details vs. by control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs.
These results support the hypothesis that behavioral abilities amongst listeners with normal hearing thresholds can arise due to both subcortical coding differences and differences in attentional control, depending on

  12. From repulsion to attraction: species- and spatial context-dependent threat sensitive response of the spider mite Tetranychus urticae to predatory mite cues

    Science.gov (United States)

    Fernández Ferrari, M. Celeste; Schausberger, Peter

    2013-06-01

    Prey perceiving predation risk commonly change their behavior to avoid predation. However, antipredator strategies are costly. Therefore, according to the threat-sensitive predator avoidance hypothesis, prey should match the intensity of their antipredator behaviors to the degree of threat, which may depend on the predator species and the spatial context. We assessed threat sensitivity of the two-spotted spider mite, Tetranychus urticae, to the cues of three predatory mites, Phytoseiulus persimilis, Neoseiulus californicus, and Amblyseius andersoni, posing different degrees of risk in two spatial contexts. We first conducted a no-choice test measuring oviposition and activity of T. urticae exposed to chemical traces of predators or traces plus predator eggs. Then, we tested the site preference of T. urticae in choice tests, using artificial cages and leaves. In the no-choice test, T. urticae deposited their first egg later in the presence of cues of P. persimilis than with cues of the other two predators or no cues, indicating interspecific threat sensitivity. T. urticae also laid fewer eggs in the presence of cues of P. persimilis and A. andersoni than with cues of N. californicus or no cues. In the artificial cage test, the spider mites preferred the site with predator traces, whereas in the leaf test, they preferentially resided on leaves without traces. We argue that in a nonplant environment, chemical predator traces do not indicate a risk for T. urticae; instead, these traces function as indirect habitat cues. The spider mites were attracted to these cues because they associated them with the existence of a nearby host plant.

  13. Effects of temporal and spatial cueing on anticipatory postural control in a rapid interceptive task.

    Science.gov (United States)

    Huntley, Andrew H; Zettel, John L

    2015-04-10

    Balance disruptions induced by voluntary focal arm actions are accommodated via anticipatory postural adjustments, but how this coordinated control is organized by the central nervous system remains unclear: either as combined or as separate streams of postural-focal motor commands. For example, a focal arm task that dictates extremely tight temporal constraints may induce a focal response in the absence of an anticipatory postural adjustment, providing evidence for separate focal-postural control streams. This study sought to probe the organization of focal-postural control via an interceptive task with very little available response time, and to determine whether focal-postural coordination depends on temporal and/or spatial foreknowledge of the task. Ten healthy young adults (5 males and 5 females; 20-29 years) reacted to catch a ball while standing under four conditions of temporal and spatial foreknowledge. Response onset was characterized by muscle activity from both postural and focal arm muscles. The catching task resulted in rapid muscle responses, but there was no difference between the fastest focal and postural muscle onsets. As expected, temporal cueing resulted in faster focal and postural onsets compared to spatial and control cueing trials. The accompaniment and time-locking of focal and postural muscle onsets suggest that postural-focal coupling remains intact even under external time constraints and provide evidence for a single combined command stream of postural and focal control under such circumstances.

  14. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories.

    Science.gov (United States)

    Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J

    2015-06-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages.

  15. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories

    Directory of Open Access Journals (Sweden)

    Christina M. Karns

    2015-06-01

    Full Text Available Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages.

  16. The effects of distraction and a brief intervention on auditory and visual-spatial working memory in college students with attention deficit hyperactivity disorder.

    Science.gov (United States)

    Lineweaver, Tara T; Kercood, Suneeta; O'Keeffe, Nicole B; O'Brien, Kathleen M; Massey, Eric J; Campbell, Samantha J; Pierce, Jenna N

    2012-01-01

    Two studies addressed how young adult college students with attention deficit hyperactivity disorder (ADHD) (n = 44) compare with their nonaffected peers (n = 42) on tests of auditory and visual-spatial working memory (WM), how vulnerable they are to auditory and visual distractions, and how they are affected by a simple intervention. Students with ADHD demonstrated worse auditory WM than did controls. A near-significant trend indicated that auditory distractions interfered with the visual WM of both groups and that, whereas controls were also vulnerable to visual distractions, visual distractions improved visual WM in the ADHD group. The intervention was ineffective. Limited correlations emerged between self-reported ADHD symptoms and objective test performances; students with ADHD who perceived themselves as more symptomatic often had better WM and were less vulnerable to distractions than their ADHD peers.

  17. Auditory Spatial Perception: Auditory Localization

    Science.gov (United States)

    2012-05-01

    ...for normal directional hearing. A number of animal studies have demonstrated that the interruption of the neural pathways passing through the TB... view is supported by results from animal studies, indicating that some types of lesions in the brain affect the precision of absolute localization...

  18. Familiar Real-World Spatial Cues Provide Memory Benefits in Older and Younger Adults.

    Science.gov (United States)

    Robin, Jessica; Moscovitch, Morris

    2017-02-23

    Episodic memory, future thinking, and memory for scenes have all been proposed to rely on the hippocampus, and evidence suggests that these all decline in healthy aging. Despite this age-related memory decline, studies examining the effects of context reinstatement on episodic memory have demonstrated that reinstating elements of the encoding context of an event leads to better memory retrieval in both younger and older adults. The current study was designed to test whether more familiar, real-world contexts, such as locations that participants visited often, would improve the detail richness and vividness of memory for scenes, autobiographical events, and imagination of future events in young and older adults. The predicted age-related decline in internal details across all 3 conditions was accompanied by persistent effects of contextual familiarity, in which a more familiar spatial context led to increased detail and vividness of remembered scenes, autobiographical events, and, to some extent, imagined future events. This study demonstrates that autobiographical memory, imagination of the future, and scene memory are similarly affected by aging, and all benefit from being associated with more familiar (real-world) contexts, illustrating the stability of contextual reinstatement effects on memory throughout the life span.

  19. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Directory of Open Access Journals (Sweden)

    Eric Olivier Boyer

    2013-04-01

    Full Text Available Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed towards unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short durations of target presentation but not modified by auditory feedback of hand position. Long durations of target presentation gave rise to a higher level of accuracy and were accompanied by early automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in the acoustic cues due to changes in head orientation for online motor control. The design of informative acoustic feedback needs to be studied carefully to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space.

  20. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Science.gov (United States)

    Boyer, Eric O.; Babayan, Bénédicte M.; Bevilacqua, Frédéric; Noisternig, Markus; Warusfel, Olivier; Roby-Brami, Agnes; Hanneton, Sylvain; Viaud-Delmon, Isabelle

    2013-01-01

    Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed toward unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short durations of target presentation but not modified by auditory feedback of hand position. Long durations of target presentation gave rise to a higher level of accuracy and were accompanied by early automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in the acoustic cues due to changes in head orientation for online motor control. The design of informative acoustic feedback needs to be studied carefully to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space. PMID:23626532

  1. Comparing perceived auditory width to the visual image of a performing ensemble in contrasting bi-modal environments.

    Science.gov (United States)

    Valente, Daniel L; Braasch, Jonas; Myrbeck, Shane A

    2012-01-01

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audiovisual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volumes and varying reverberation times. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to a varying set of audio and visual cues in rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities coincide in distinct interactions. The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene.

  2. Neural dynamics of object-based multifocal visual spatial attention and priming: object cueing, useful-field-of-view, and crowding.

    Science.gov (United States)

    Foley, Nicholas C; Grossberg, Stephen; Mingolla, Ennio

    2012-08-01

    How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how "attentional shrouds" are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, though learning or momentary changes in volition, by the basal ganglia. A new explanation of

  3. Effects of dynamic range compression on spatial selective auditory attention in normal-hearing listeners

    Science.gov (United States)

    Schwartz, Andrew H.; Shinn-Cunningham, Barbara G.

    2013-01-01

    Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) undid much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound. PMID:23556599

  4. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms

    Science.gov (United States)

    Rutkowski, Tomasz M.

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain–computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed as applications of direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the feasibility of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms. PMID:27999538

  5. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms

    Directory of Open Access Journals (Sweden)

    Tomasz Maciej Rutkowski

    2016-12-01

    Full Text Available The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed as applications of direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the feasibility of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms.

  6. Chronic exposure to broadband noise at moderate sound pressure levels spatially shifts tone-evoked responses in the rat auditory midbrain.

    Science.gov (United States)

    Lau, Condon; Pienkowski, Martin; Zhang, Jevin W; McPherson, Bradley; Wu, Ed X

    2015-11-15

    Noise-induced hearing disorders are a significant public health concern. One cause of such disorders is exposure to high sound pressure levels (SPLs) above 85 dBA for eight hours/day. High-SPL exposures occur in occupational and recreational settings and affect a substantial proportion of the population. However, an even larger proportion is exposed to more moderate SPLs for longer durations. Therefore, there is a significant need to better understand the impact of chronic, moderate-SPL exposures on auditory processing, especially in the absence of hearing loss. In this study, we applied functional magnetic resonance imaging (fMRI) with tonal acoustic stimulation to an established broadband rat exposure model (65 dB SPL, 30 kHz low-pass, 60 days). The auditory midbrain response of exposed subjects to 7 kHz stimulation (within the exposure bandwidth) shifts dorsolaterally to regions that typically respond to lower stimulation frequencies. This shift is quantified by a region-of-interest analysis showing that fMRI signals are higher in the dorsolateral midbrain of exposed subjects and in the ventromedial midbrain of control subjects. Midbrain regions above the exposure bandwidth spatially expand due to exposure. This expansion shifts lower-frequency regions dorsolaterally. Similar observations have previously been made in the rat auditory cortex. Therefore, moderate-SPL exposures affect auditory processing at multiple levels, from the auditory cortex to the midbrain.

  7. The role of spatial abilities and age in performance in an auditory computer navigation task.

    Science.gov (United States)

    Pak, Richard; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D

    2006-01-01

    Age-related differences in spatial ability have been suggested as a mediator of age-related differences in computer-based task performance. However, the vast majority of tasks studied have primarily used a visual display (e.g., graphical user interfaces). In the current study, the relationship between spatial ability and performance in a non-visual computer-based navigation task was examined in a sample of 196 participants ranging in age from 18 to 91. Participants called into a simulated interactive voice response system and carried out a variety of transactions. They also completed measures of attention, working memory, and spatial abilities. The results showed that age-related differences in spatial ability predicted a significant amount of variance in performance in the non-visual computer task, even after controlling for other abilities. Understanding the abilities that influence performance with technology may provide insight into the source of age-related performance differences in the successful use of technology.

  8. Hippocampal-dependent memory in the plus-maze discriminative avoidance task: The role of spatial cues and CA1 activity.

    Science.gov (United States)

    Leão, Anderson H F F; Medeiros, André M; Apolinário, Gênedy K S; Cabral, Alícia; Ribeiro, Alessandra M; Barbosa, Flávio F; Silva, Regina H

    2016-05-01

    The plus-maze discriminative avoidance task (PMDAT) has been used to investigate interactions between aversive memory and an anxiety-like response in rodents. Suitable performance in this task depends on the activity of the basolateral amygdala, similar to other aversive-based memory tasks. However, the role of spatial cues and hippocampal-dependent learning in the performance of PMDAT remains unknown. Here, we investigated the role of proximal and distal cues in the retrieval of this task. Animals tested under misplaced proximal cues had diminished performance, and animals tested under both misplaced proximal cues and absent distal cues could not discriminate the aversive arm. We also assessed the role of the dorsal hippocampus (CA1) in this aversive memory task. Temporary bilateral inactivation of dorsal CA1 was conducted with muscimol (0.05 μg, 0.1 μg, and 0.2 μg) prior to the training session. While the acquisition of the task was not altered, muscimol impaired the performance in the test session and reduced the anxiety-like response in the training session. We also performed a spreading analysis of a fluorophore-conjugated muscimol to confirm selective inhibition of CA1. In conclusion, both distal and proximal cues are required to retrieve the task, with the latter being more relevant to spatial orientation. Dorsal CA1 activity is also required for aversive memory formation in this task, and interfered with the anxiety-like response as well. Importantly, both effects were detected by different parameters in the same paradigm, endorsing the previous findings of independent assessment of aversive memory and anxiety-like behavior in the PMDAT. Taken together, these findings suggest that the PMDAT probably requires an integration of multiple systems for memory formation, resembling an episodic-like memory rather than a pure conditioning behavior. Furthermore, the concomitant and independent assessment of emotionality and memory in rodents is relevant to

  9. Signaled two-way avoidance learning using electrical stimulation of the inferior colliculus as negative reinforcement: effects of visual and auditory cues as warning stimuli

    Directory of Open Access Journals (Sweden)

    A.C. Troncoso

    1998-03-01

    Full Text Available The inferior colliculus is a primary relay for the processing of auditory information in the brainstem. The inferior colliculus is also part of the so-called brain aversion system, as animals learn to switch off the electrical stimulation of this structure. The purpose of the present study was to determine whether associative learning occurs between aversion induced by electrical stimulation of the inferior colliculus and visual and auditory warning stimuli. Rats implanted with electrodes in the central nucleus of the inferior colliculus were placed inside an open field and thresholds for the escape response to electrical stimulation of the inferior colliculus were determined. The rats were then placed inside a shuttle-box and submitted to a two-way avoidance paradigm. Electrical stimulation of the inferior colliculus at the escape threshold (98.12 ± 6.15 μA, peak-to-peak) was used as negative reinforcement and light or tone as the warning stimulus. Each session consisted of 50 trials and was divided into two segments of 25 trials in order to determine the learning rate of the animals during the sessions. The rats learned to avoid the inferior colliculus stimulation when light was used as the warning stimulus (13.25 ± 0.60 s and 8.63 ± 0.93 s for latencies and 12.5 ± 2.04 and 19.62 ± 1.65 for frequencies in the first and second halves of the sessions, respectively; P < 0.05), but not when tone was used (P > 0.05 in both cases). Taken together, the present results suggest that rats learn to avoid the inferior colliculus stimulation when light is used as the warning stimulus. However, this learning process does not occur when the neutral stimulus used is an acoustic one. Electrical stimulation of the inferior colliculus may disturb the signal transmission of the stimulus to be conditioned from the inferior colliculus to higher brain structures such as the amygdala.

  10. Development of visuo-auditory integration in space and time

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2012-09-01

Full Text Available Adults integrate multisensory information optimally (e.g. Ernst & Banks, 2002), while children are not able to integrate multisensory visual-haptic cues until 8-10 years of age (e.g. Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflictual or not conflictual) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that both in children and adults, audition dominates the bimodal visuo-auditory task, both for perceived time and for precision thresholds. In contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory calibration of audition by vision in the spatial visuo-audio task and of vision by audition in the temporal visuo-audio task.
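The Bayesian (maximum-likelihood) prediction against which the bimodal thresholds were compared can be sketched in a few lines: each cue is weighted by its reliability (inverse variance), and the predicted bimodal threshold is lower than either unimodal threshold. The numbers below are purely illustrative, not values from the study.

```python
import math

def optimal_integration(est_a, sigma_a, est_v, sigma_v):
    """Maximum-likelihood combination of two unimodal cues.

    Each cue contributes in proportion to its reliability (inverse
    variance); the bimodal estimate is the reliability-weighted mean,
    and its predicted threshold is below both unimodal thresholds.
    """
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
    w_v = 1 - w_a
    est_bimodal = w_a * est_a + w_v * est_v
    sigma_bimodal = math.sqrt((sigma_a**2 * sigma_v**2) /
                              (sigma_a**2 + sigma_v**2))
    return est_bimodal, sigma_bimodal

# Hypothetical spatial task: vision (sigma = 1) is more precise than
# audition (sigma = 4), so the combined estimate is pulled towards
# the visual cue and the combined threshold beats both unimodal ones.
est, sigma = optimal_integration(est_a=5.0, sigma_a=4.0, est_v=0.0, sigma_v=1.0)
print(round(est, 2), round(sigma, 2))  # → 0.29 0.97
```

Visual dominance with thresholds above this prediction, as reported for the children, is exactly a failure of this weighting rule.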

  11. The Influence of Secondary Depth Cues on the Understanding by Nigerian Schoolboys of Spatial Relationships in Pictures

    Science.gov (United States)

    Nicholson, J. R.; Seddon, G. M.

    1977-01-01

Attempts to determine how the ability of African secondary students to understand pictures three-dimensionally changes as the number of different types of depth cue increases in carefully graded stages. Also investigates the existence of interactions involving the different types of picture and differences in the amount of formal training which people have…

  12. Influence of auditory spatial attention on cross-modal semantic priming effect: evidence from N400 effect.

    Science.gov (United States)

    Wang, Hongyan; Zhang, Gaoyan; Liu, Baolin

    2017-01-01

Semantic priming is an important research topic in the field of cognitive neuroscience. Previous studies have shown that the uni-modal semantic priming effect can be modulated by attention. However, the influence of attention on cross-modal semantic priming is unclear. To investigate this issue, the present study combined a cross-modal semantic priming paradigm with an auditory spatial attention paradigm, presenting visual pictures as the prime stimuli and semantically related or unrelated sounds as the target stimuli. Event-related potential results showed that when the target sound was attended to, the N400 effect was evoked. The N400 effect was also observed when the target sound was not attended to, demonstrating that the cross-modal semantic priming effect persists even when the target stimulus is not in the focus of attention. Further analyses revealed that the N400 effect evoked by the unattended sound was significantly smaller than the effect evoked by the attended sound. This contrast provides new evidence that the cross-modal semantic priming effect can be modulated by attention.

  13. Real color captures attention and overrides spatial cues in grapheme-color synesthetes but not in controls.

    Science.gov (United States)

    van Leeuwen, Tessa M; Hagoort, Peter; Händel, Barbara F

    2013-08-01

Grapheme-color synesthetes perceive color when reading letters or digits. We investigated oscillatory brain signals of synesthetes vs. controls using magnetoencephalography. Brain oscillations specifically in the alpha band (∼10 Hz) have two interesting features: alpha has been linked to inhibitory processes and can act as a marker for attention. The possible role of reduced inhibition as an underlying cause of synesthesia, as well as the precise role of attention in synesthesia, is widely discussed. To assess alpha power effects due to synesthesia, synesthetes as well as matched controls viewed synesthesia-inducing graphemes, colored control graphemes, and non-colored control graphemes while brain activity was recorded. Subjects had to report a color change at the end of each trial, which allowed us to assess the strength of synesthesia in each synesthete. Since color (synesthetic or real) might attract attention, we also included an attentional cue in our paradigm which could direct covert attention. In controls, the attentional cue always caused a lateralization of alpha power, with a contralateral decrease and an ipsilateral increase over occipital sensors. In synesthetes, however, the influence of the cue was overruled by color: independent of the attentional cue, alpha power decreased contralateral to the color (synesthetic or real). This indicates that in synesthetes color guides attention. This was confirmed by reaction time effects due to color, i.e. faster RTs for the color side independent of the cue. Finally, the stronger the observed color-dependent alpha lateralization, the stronger was the manifestation of synesthesia as measured by congruency effects of synesthetic colors on RTs. Behavioral and imaging results indicate that color induces a location-specific, automatic shift of attention towards color in synesthetes but not in controls. We hypothesize that this mechanism can facilitate coupling of grapheme and color during the development of

  14. Divided multimodal attention: sensory trace and context coding strategies in spatially congruent auditory and visual presentation.

    Science.gov (United States)

    Kristjánsson, Tómas; Thorvaldsson, Tómas Páll; Kristjánsson, Arni

    2014-01-01

    Previous research involving both unimodal and multimodal studies suggests that single-response change detection is a capacity-free process while a discriminatory up or down identification is capacity-limited. The trace/context model assumes that this reflects different memory strategies rather than inherent differences between identification and detection. To perform such tasks, one of two strategies is used, a sensory trace or a context coding strategy, and if one is blocked, people will automatically use the other. A drawback to most preceding studies is that stimuli are presented at separate locations, creating the possibility of a spatial confound, which invites alternative interpretations of the results. We describe a series of experiments, investigating divided multimodal attention, without the spatial confound. The results challenge the trace/context model. Our critical experiment involved a gap before a change in volume and brightness, which according to the trace/context model blocks the sensory trace strategy, simultaneously with a roaming pedestal, which should block the context coding strategy. The results clearly show that people can use strategies other than sensory trace and context coding in the tasks and conditions of these experiments, necessitating changes to the trace/context model.

  15. Reorganisation of the right occipito-parietal stream for auditory spatial processing in early blind humans. A transcranial magnetic stimulation study.

    Science.gov (United States)

    Collignon, O; Davare, M; Olivier, E; De Volder, A G

    2009-05-01

It is well known that, following early visual deprivation, the neural network involved in processing auditory spatial information undergoes a profound reorganization. In particular, several studies have demonstrated an extensive activation of occipital brain areas, usually regarded as essentially "visual", when early blind subjects (EB) performed a task that requires spatial processing of sounds. However, little is known about the possible consequences of this occipital activation for the function of the large cortical network known, in sighted subjects, to be involved in the processing of auditory spatial information. To address this issue, we used event-related transcranial magnetic stimulation (TMS) to induce virtual lesions of either the right intra-parietal sulcus (rIPS) or the right dorsal extrastriate occipital cortex (rOC) at different delays in EB subjects performing a sound lateralization task. Surprisingly, TMS applied over rIPS, a region critically involved in the spatial processing of sound in sighted subjects, had no influence on task performance in EB. In contrast, TMS applied over rOC 50 ms after sound onset disrupted the spatial processing of sounds originating from the contralateral hemifield. The present study sheds new light on the reorganisation of the cortical network dedicated to the spatial processing of sounds in EB by showing an early contribution of rOC and a lesser involvement of rIPS.

  16. Spectral and spatial tuning of onset and offset response functions in auditory cortical fields A1 and CL of rhesus macaques.

    Science.gov (United States)

    Ramamurthy, Deepa L; Recanzone, Gregg H

    2016-12-07

    The mammalian auditory cortex is necessary for spectral and spatial processing of acoustic stimuli. Most physiological studies of single neurons in the auditory cortex have focused on the onset and sustained portions of evoked responses, but there have been far fewer studies on the relationship between onset and offset responses. In the current study, we compared spectral and spatial tuning of onset and offset responses of neurons in primary auditory cortex (A1) and the caudolateral (CL) belt area of awake macaque monkeys. Several different metrics were used to determine the relationship between onset and offset response profiles in both frequency and space domains. In the frequency domain, a substantial proportion of neurons in A1 and CL displayed highly dissimilar best stimuli for onset- and offset-evoked responses, though even for these neurons, there was usually a large overlap in the range of frequencies that elicited onset and offset responses and distributions of tuning overlap metrics were mostly unimodal. In the spatial domain, the vast majority of neurons displayed very similar best locations for onset- and offset-evoked responses, along with unimodal distributions of all tuning overlap metrics considered. Finally, for both spectral and spatial tuning, a slightly larger fraction of neurons in A1 displayed non-overlapping onset and offset response profiles, relative to CL, which supports hierarchical differences in the processing of sounds in the two areas. However, these differences are small compared to differences in proportions of simple cells (low overlap) and complex cells (high overlap) in primary and secondary visual areas.

  17. Intestinal GPS: bile and bicarbonate control cyclic di-GMP to provide Vibrio cholerae spatial cues within the small intestine.

    Science.gov (United States)

    Koestler, Benjamin J; Waters, Christopher M

    2014-01-01

    The second messenger cyclic di-GMP (c-di-GMP) regulates numerous phenotypes in response to environmental stimuli to enable bacteria to transition between different lifestyles. Here we discuss our recent findings that the human pathogen Vibrio cholerae recognizes 2 host-specific signals, bile and bicarbonate, to regulate intracellular c-di-GMP. We have demonstrated that bile acids increase intracellular c-di-GMP to promote biofilm formation. We have also shown that this bile-mediated increase of intracellular c-di-GMP is negated by bicarbonate, and that this interaction is dependent on pH, suggesting that V. cholerae uses these 2 environmental cues to sense and adapt to its relative location in the small intestine. Increased intracellular c-di-GMP by bile is attributed to increased c-di-GMP synthesis by 3 diguanylate cyclases (DGCs) and decreased expression of one phosphodiesterase (PDE) in the presence of bile. The molecular mechanisms by which bile controls the activity of the 3 DGCs and the regulators of bile-mediated transcriptional repression of the PDE are not yet known. Moreover, the impact of varying concentrations of bile and bicarbonate at different locations within the small intestine and the response of V. cholerae to these cues remains unclear. The native microbiome and pharmaceuticals, such as omeprazole, can impact bile and pH within the small intestine, suggesting these are potential unappreciated factors that may alter V. cholerae pathogenesis.

  18. Listeners' expectation of room acoustical parameters based on visual cues

    Science.gov (United States)

    Valente, Daniel L.

Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters---early-to-late reverberant energy ratio and reverberation time---of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that many visual cues altered the perceived character of the acoustic environment. These included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer

  19. Oscillations Go the Distance: Low-Frequency Human Hippocampal Oscillations Code Spatial Distance in the Absence of Sensory Cues during Teleportation.

    Science.gov (United States)

    Vass, Lindsay K; Copara, Milagros S; Seyal, Masud; Shahlaie, Kiarash; Farias, Sarah Tomaszewski; Shen, Peter Y; Ekstrom, Arne D

    2016-03-16

    Low-frequency (delta/theta band) hippocampal neural oscillations play prominent roles in computational models of spatial navigation, but their exact function remains unknown. Some theories propose they are primarily generated in response to sensorimotor processing, while others suggest a role in memory-related processing. We directly recorded hippocampal EEG activity in patients undergoing seizure monitoring while they explored a virtual environment containing teleporters. Critically, this manipulation allowed patients to experience movement through space in the absence of visual and self-motion cues. The prevalence and duration of low-frequency hippocampal oscillations were unchanged by this manipulation, indicating that sensorimotor processing was not required to elicit them during navigation. Furthermore, the frequency-wise pattern of oscillation prevalence during teleportation contained spatial information capable of classifying the distance teleported. These results demonstrate that movement-related sensory information is not required to drive spatially informative low-frequency hippocampal oscillations during navigation and suggest a specific function in memory-related spatial updating.

  20. Polarizing cues.

    Science.gov (United States)

    Nicholson, Stephen P

    2012-01-01

    People categorize themselves and others, creating ingroup and outgroup distinctions. In American politics, parties constitute the in- and outgroups, and party leaders hold sway in articulating party positions. A party leader's endorsement of a policy can be persuasive, inducing co-partisans to take the same position. In contrast, a party leader's endorsement may polarize opinion, inducing out-party identifiers to take a contrary position. Using survey experiments from the 2008 presidential election, I examine whether in- and out-party candidate cues—John McCain and Barack Obama—affected partisan opinion. The results indicate that in-party leader cues do not persuade but that out-party leader cues polarize. This finding holds in an experiment featuring President Bush in which his endorsement did not persuade Republicans but it polarized Democrats. Lastly, I compare the effect of party leader cues to party label cues. The results suggest that politicians, not parties, function as polarizing cues.

  1. Involvement of the superior temporal cortex and the occipital cortex in spatial hearing: evidence from repetitive transcranial magnetic stimulation.

    Science.gov (United States)

    Lewald, Jörg; Meister, Ingo G; Weidemann, Jürgen; Töpper, Rudolf

    2004-06-01

    The processing of auditory spatial information in cortical areas of the human brain outside of the primary auditory cortex remains poorly understood. Here we investigated the role of the superior temporal gyrus (STG) and the occipital cortex (OC) in spatial hearing using repetitive transcranial magnetic stimulation (rTMS). The right STG is known to be of crucial importance for visual spatial awareness, and has been suggested to be involved in auditory spatial perception. We found that rTMS of the right STG induced a systematic error in the perception of interaural time differences (a primary cue for sound localization in the azimuthal plane). This is in accordance with the recent view, based on both neurophysiological data obtained in monkeys and human neuroimaging studies, that information on sound location is processed within a dorsolateral "where" stream including the caudal STG. A similar, but opposite, auditory shift was obtained after rTMS of secondary visual areas of the right OC. Processing of auditory information in the OC has previously been shown to exist only in blind persons. Thus, the latter finding provides the first evidence of an involvement of the visual cortex in spatial hearing in sighted human subjects, and suggests a close interconnection of the neural representation of auditory and visual space. Because rTMS induced systematic shifts in auditory lateralization, but not a general deterioration, we propose that rTMS of STG or OC specifically affected neuronal circuits transforming auditory spatial coordinates in order to maintain alignment with vision.
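The interaural time difference named above as the primary azimuthal cue can be approximated with the classic Woodworth spherical-head formula; the head radius and speed of sound below are conventional textbook values, not parameters from this study.

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average adult head radius (m)
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 °C

def itd_woodworth(azimuth_deg):
    """Interaural time difference (s) for a far-field source, using
    the Woodworth spherical-head approximation:
        ITD = (r / c) * (theta + sin(theta))
    where theta is the azimuth in radians."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS_M / SPEED_OF_SOUND * (theta + math.sin(theta))

# ITD grows monotonically from 0 at the midline to roughly 0.66 ms
# at 90 degrees azimuth, the range the rTMS-induced shifts act on.
for az in (0, 30, 60, 90):
    print(az, round(itd_woodworth(az) * 1e6), "microseconds")
```

A systematic lateralization shift, as induced by rTMS here, corresponds to a constant offset applied to this mapping rather than added noise.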

  2. Listen, you are writing! Speeding up online spelling with a dynamic auditory BCI

    Directory of Open Access Journals (Sweden)

    Martijn eSchreuder

    2011-10-01

Full Text Available Representing an intuitive spelling interface for Brain-Computer Interfaces (BCI) in the auditory domain is not straightforward. In consequence, all existing approaches based on event-related potentials (ERP) rely at least partially on a visual representation of the interface. This online study introduces an auditory spelling interface that eliminates the necessity for such a visualization. In up to two sessions, a group of healthy subjects (N=21) was asked to use a text entry application, utilizing the spatial cues of the AMUSE paradigm (Auditory Multiclass Spatial ERP). The speller relies on the auditory sense both for stimulation and the core feedback. Without prior BCI experience, 76% of the participants were able to write a full sentence during the first session. By exploiting the advantages of a newly introduced dynamic stopping method, a maximum writing speed of 1.41 characters/minute (7.55 bits/minute) could be reached during the second session (average: 0.94 char/min, 5.26 bits/min). For the first time, the presented work shows that an auditory BCI can reach performances similar to state-of-the-art visual BCIs based on covert attention. These results represent an important step towards a purely auditory BCI.

  3. The impact of anterior thalamic lesions on active and passive spatial learning in stimulus controlled environments: geometric cues and pattern arrangement.

    Science.gov (United States)

    Dumont, Julie R; Wright, Nicholas F; Pearce, John M; Aggleton, John P

    2014-04-01

    The anterior thalamic nuclei are vital for many spatial tasks. To determine more precisely their role, the present study modified the conventional Morris watermaze task. In each of 3 experiments, rats were repeatedly placed on a submerged platform in 1 corner (the 'correct' corner) of either a rectangular pool (Experiment 1) or a square pool with walls of different appearances (Experiments 2 and 3). The rats were then released into the pool for a first test trial in the absence of the platform. In Experiment 1, normal rats distinguished the 2 sets of corners in the rectangular pool by their geometric properties, preferring the correct corner and its diagonally opposite partner. Anterior thalamic lesions severely impaired this discrimination. In Experiments 2 and 3, normal rats typically swam directly to the correct corner of the square pool on the first test trial. Rats with anterior thalamic lesions, however, often failed to initially select the correct corner, taking more time to reach that location. Nevertheless, the lesioned rats still showed a subsequent preference for the correct corner. The same lesioned rats also showed no deficits in Experiments 2 and 3 when subsequently trained to swim to the correct corner over repeated trials. The findings show how the anterior thalamic nuclei contribute to multiple aspects of spatial processing. These thalamic nuclei may be required to distinguish relative dimensions (Experiment 1) as well as translate the appearance of spatial cues when viewed for the first time from different perspectives (Experiments 2, 3).

  4. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent...

  5. Cue validity probability influences neural processing of targets.

    Science.gov (United States)

    Arjona, Antonio; Escudero, Miguel; Gómez, Carlos M

    2016-09-01

The neural bases of the so-called Spatial Cueing Effect in a visuo-auditory version of the Central Cue Posner's Paradigm (CCPP) are analyzed by means of behavioral patterns (Reaction Times and Errors) and Event-Related Potentials (ERPs), namely the Contingent Negative Variation (CNV), N1, P2a, P2p, P3a, P3b and Negative Slow Wave (NSW). The present version consisted of three types of trial blocks with different validity/invalidity proportions: 50% valid - 50% invalid trials, 68% valid - 32% invalid trials and 86% valid - 14% invalid trials. Thus, ERPs can be analyzed as the proportion of valid trials per block increases. Behavioral (Reaction Times and Incorrect responses) and ERP (lateralized component of CNV, P2a, P3b and NSW) results showed a spatial cueing effect as the proportion of valid trials per block increased. Results suggest a brain activity modulation related to sensory-motor attention and working memory updating, in order to adapt to external unpredictable contingencies.

  6. The Difference in the Profile of Working Memory, Auditory Working Memory, and Spatial Working Memory between Drug, Stimulant, and Methadone Abusers and Normal People

    Directory of Open Access Journals (Sweden)

    Ahmad Alipour

    2015-06-01

Full Text Available Objective: The present study was an attempt to examine the difference in the profile of working memory, auditory working memory, and spatial working memory between drug, stimulant, and methadone abusers and normal people. Method: This study was a causal-comparative one with between-group comparison methodology. All the individuals addicted to opiates, stimulants, and methadone who had referred to Khomeini treatment centers of the city from September 2013 to February 2014 constituted the statistical population of the study. A total of 154 abusers (54 drug abusers, 50 stimulant abusers, and 50 methadone abusers) and 50 normal participants were chosen as the sample of the study by purposive sampling. The participants responded to the Wechsler Memory Scale—third edition (WMS-III). Results: There was a significant difference between the normal group and drug, stimulant, and methadone abusers in terms of working memory, auditory working memory, and spatial working memory. Conclusion: Drug and stimulant use leads to sustained damage in cognitive processes such as working memory. However, research indicates that these cognitive processes will improve with the passage of time.

  7. Integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2013-01-01

    Musicians often say that they not only hear but also "feel" music. To explore the contribution of tactile information to "feeling" music, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter-recognition task. Subjects discriminated between two types of sequences, "duple" (march-like rhythms) and "triple" (waltz-like rhythms), presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70-85 %) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70-90 %) when all of the metrically important notes are assigned to one channel and is reduced to 60 % when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90 %). Performance dropped dramatically when subjects were presented with incongruent auditory cues (10 %), as opposed to incongruent tactile cues (60 %), demonstrating that auditory input dominates meter perception. These observations support the notion that meter perception is a cross-modal percept with tactile inputs underlying the perception of "feeling" music.

  8. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
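The interaural cross-correlation function central to Ando's model of spatial sensations can be sketched as a normalized cross-correlation of the two ear signals over lags within the physiological ITD range (about ±1 ms). This toy implementation is only a schematic of that one component, with made-up signals, not the full model.

```python
import math

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: the maximum of the
    normalized cross-correlation of the two ear signals over lags
    within +/- max_lag_ms (the physiologically relevant ITD range).
    Values near 1 correspond to a compact, well-localized image;
    low values to a diffuse, enveloping one."""
    max_lag = int(fs * max_lag_ms / 1000)
    norm = math.sqrt(sum(x * x for x in left) * sum(x * x for x in right))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        s = sum(left[n] * right[n + lag]
                for n in range(len(left))
                if 0 <= n + lag < len(right))
        best = max(best, abs(s) / norm)
    return best

# Identical signals at both ears (a point source on the midline)
# give IACC = 1; decorrelated signals would give a value near 0.
fs = 8000
sig = [math.sin(2 * math.pi * 500 * n / fs) for n in range(800)]
print(round(iacc(sig, sig, fs), 2))  # → 1.0
```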

  9. The role of social cues in the deployment of spatial attention: Head-body relationships automatically activate directional spatial codes in a Simon task

    Directory of Open Access Journals (Sweden)

    Iwona ePomianowska

    2012-02-01

Full Text Available The role of body orientation in the orienting and allocation of social attention was examined using an adapted Simon paradigm. Participants categorized the facial expression of forward-facing, computer-generated human figures by pressing one of two response keys, each located left or right of the observers’ body midline, while the orientation of the stimulus figure’s body (trunk, arms, and legs), which was the task-irrelevant feature of interest, was manipulated (oriented towards the left or right visual hemifield) with respect to the spatial location of the required response. We found that when the orientation of the body was compatible with the required response location, responses were slower relative to when body orientation was incompatible with the response location. This reverse compatibility effect suggests that body orientation is automatically processed into a directional spatial code, but that this code is based on an integration of head and body orientation within an allocentric-based frame of reference. Moreover, we argue that this code may be derived from the motion information implied in the image of a figure when head and body orientation are incongruent. Our results have implications for understanding the nature of the information that affects the allocation of attention for social orienting.

  10. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  11. Mixed Messages: Illusory Durations Induced by Cue Combination

    Directory of Open Access Journals (Sweden)

    Craig Aaen-Stockdale

    2012-05-01

    Full Text Available Pairing a visual stimulus with a concurrent auditory stimulus of subtly longer or shorter duration expands or contracts the perceived duration of that visual stimulus, even when the observer is asked to ignore the irrelevant auditory component. Here we map out this interaction and find a roughly linear relationship between the perceived duration of the visual component and the duration of the irrelevant auditory component. Beyond this ‘window of integration’ the obligatory combination of cues breaks down rather suddenly, at durations 0.2 log units longer or shorter than baseline. Conversely, a visual duration has virtually no effect on the perceived duration of a concurrently presented auditory duration. A model is presented based on obligatory combination of visual and auditory cues within a window defined by the respective JNDs of vision and audition.

  12. Keeping eyes peeled: guppies exposed to chemical alarm cue are more responsive to ambiguous visual cues

    OpenAIRE

    Stephenson, Jessica Frances

    2016-01-01

    Abstract Information received from the visual and chemical senses is qualitatively different. For prey species in aquatic environments, visual cues are spatially and temporally reliable but risky as the prey and predator must often be in close proximity. Chemical cues, by contrast, can be distorted by currents or linger and thus provide less reliable spatial and temporal information, but can be detected from a safe distance. Chemical cues are therefore often the first detected and may provide...

  13. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    Science.gov (United States)

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
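
    The distributional learning idea in this record, acquiring categories from the joint statistics of auditory and visual cues, can be illustrated with a toy simulation: a two-component Gaussian mixture fitted by expectation-maximization to unlabeled cue pairs. All cue distributions below are invented for illustration and are not taken from the study's simulations.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Two hypothetical phonological categories, each emitting a noisy
    # auditory cue (e.g. voice-onset time) and visual cue (e.g. lip aperture).
    cat1 = rng.normal([0.0, 0.0], 0.5, (500, 2))
    cat2 = rng.normal([2.0, 2.0], 0.5, (500, 2))
    x = np.vstack([cat1, cat2])          # the learner sees only unlabeled pairs

    # EM for a two-component isotropic Gaussian mixture over the joint cues.
    mu = np.array([[-1.0, -1.0], [3.0, 3.0]])   # crude initial guesses
    var = np.array([1.0, 1.0])                  # per-component variance
    w = np.array([0.5, 0.5])                    # mixing weights
    for _ in range(50):
        # E-step: responsibility of each component for each cue pair
        d2 = ((x[:, None, :] - mu[None]) ** 2).sum(-1)
        logp = np.log(w) - d2 / (2 * var) - np.log(var)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r.T @ x) / n[:, None]
        var = np.array([(r[:, k] * ((x - mu[k]) ** 2).sum(1)).sum() / (2 * n[k])
                        for k in range(2)])

    # The learned components recover the two categories from the cue
    # distribution alone, with no category labels ever provided.
    labels = r.argmax(axis=1)
    purity = max((labels[:500] == 0).mean(), (labels[:500] == 1).mean())
    print(round(float(purity), 2))
    ```

    With well-separated categories the fitted components align almost perfectly with the true ones; the interesting developmental questions arise when the per-modality distributions overlap and the model must lean on the joint audiovisual statistics.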

  14. Dissociating temporal attention from spatial attention and motor response preparation: A high-density EEG study.

    Science.gov (United States)

    Faugeras, Frédéric; Naccache, Lionel

    2016-01-01

    Engagement of various forms of attention and response preparation determines behavioral performance during stimulus-response tasks. Many studies have explored the respective properties and neural signatures of each of these processes, but very few experiments were designed to explore their interaction. In the present work we used an auditory target detection task during which both temporal attention on the one hand, and spatial attention and motor response preparation on the other, could be explicitly cued. Both cueing effects speeded response times and showed strictly additive effects. Target ERP analysis revealed modulations of N1 and P3 responses by these two forms of cueing. Cue-target interval analysis revealed two main effects paralleling behavior. First, a typical contingent negative variation (CNV), induced by the cue and resolved immediately after target onset, was larger for temporal attention cueing than for spatial and motor response cueing. Second, a posterior and late cue-P3 complex showed the reverse profile. Analyses of lateralized readiness potentials (LRP) revealed both patterns of motor response inhibition and activation. Taken together, these results help to clarify and disentangle the respective effects on brain activity and behavior of temporal attention on the one hand, and of combined spatial attention and motor response preparation on the other.

  15. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments.

  16. [Visual cues as a therapeutic tool in Parkinson's disease. A systematic review].

    Science.gov (United States)

    Muñoz-Hellín, Elena; Cano-de-la-Cuerda, Roberto; Miangolarra-Page, Juan Carlos

    2013-01-01

    Sensory stimuli or sensory cues are used as a therapeutic tool for improving gait disorders in patients with Parkinson's disease, but most studies have focused on auditory stimuli. The aim of this study was to conduct a systematic review of the use of visual cues for gait disorders, dual tasks during gait, freezing, and the incidence of falls in patients with Parkinson's disease, in order to derive therapeutic implications. We searched the main databases (Cochrane Database of Systematic Reviews, TripDataBase, PubMed, Ovid MEDLINE, Ovid EMBASE and Physiotherapy Evidence Database) from 2005 to 2012, following the recommendations of the Consolidated Standards of Reporting Trials, and assessed the quality of the included papers with the Downs and Black Quality Index. Twenty-one articles were finally included in this systematic review (with a total of 892 participants), with variable methodological quality, averaging 17.27 points on the Downs and Black Quality Index (range: 11-21). Visual cues produce improvements in spatio-temporal gait parameters and turning execution, and reduce the occurrence of freezing and falls in patients with Parkinson's disease. Visual cues also appear to benefit dual tasks during gait, reducing the interference of the second task. Further studies are needed to determine the preferred type of stimulus for each stage of the disease.

  17. Influence of auditory and audiovisual stimuli on the right-left prevalence effect

    DEFF Research Database (Denmark)

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did...... not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension...... was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension...

  18. Components representation of negative numbers: evidence from auditory stimuli detection and number classification tasks.

    Science.gov (United States)

    Kong, Feng; Zhao, Jingjing; You, Xuqun

    2012-01-01

    Past research suggested that negative numbers could be represented in terms of their components in the visual modality. The present study examined the processing of negative numbers in the auditory modality and whether it is affected by context. Experiment 1 employed a stimuli detection task where only negative numbers were presented binaurally. Experiment 2 employed the same task, but both positive and negative numbers were mixed as cues. A reverse attentional spatial-numerical association of response codes (SNARC) effect for negative numbers was obtained in these two experiments. Experiment 3 employed a number classification task where only negative numbers were presented binaurally. Experiment 4 employed the same task, but both positive and negative numbers were mixed. A reverse SNARC effect for negative numbers was obtained in these two experiments. These findings suggest that negative numbers in the auditory modality are generated from the set of positive numbers, thus supporting a components representation.

  19. The Influence of Tactile Cognitive Maps on Auditory Space Perception in Sighted Persons.

    Science.gov (United States)

    Tonelli, Alessia; Gori, Monica; Brayda, Luca

    2016-01-01

    We have recently shown that vision is important to improve spatial auditory cognition. In this study, we investigate whether touch is as effective as vision to create a cognitive map of a soundscape. In particular, we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people - one experimental and one control group - in an auditory space bisection task. In the first group, the bisection task was performed three times: specifically, the participants explored with their hands the 3D tactile model of the room and were led along the perimeter of the room between the first and the second execution of the space bisection. Then, they were allowed to remove the blindfold for a few minutes and look at the room between the second and third execution of the space bisection. Instead, the control group repeated for two consecutive times the space bisection task without performing any environmental exploration in between. Considering the first execution as a baseline, we found an improvement in the precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that no additional gain was obtained by vision cues after spatial tactile cues were internalized. No improvement was found between the first and the second execution of the space bisection without environmental exploration in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing space auditory task as well as visual information. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound propagation.

  20. The influence of tactile cognitive maps on auditory space perception in sighted persons.

    Directory of Open Access Journals (Sweden)

    Alessia Tonelli

    2016-11-01

    Full Text Available We have recently shown that vision is important to improve spatial auditory cognition. In this study we investigate whether touch is as effective as vision to create a cognitive map of a soundscape. In particular we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people – one experimental and one control group – in an auditory space bisection task. In the first group the bisection task was performed three times: specifically, the participants explored with their hands the 3D tactile model of the room and were led along the perimeter of the room between the first and the second execution of the space bisection. Then, they were allowed to remove the blindfold for a few minutes and look at the room between the second and third execution of the space bisection. Instead, the control group repeated for two consecutive times the space bisection task without performing any environmental exploration in between. Considering the first execution as a baseline, we found an improvement in the precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that no additional gain was obtained by vision cues after spatial tactile cues were internalized. No improvement was found between the first and the second execution of the space bisection without environmental exploration in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing space auditory task as well as visual information. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound

  1. Representation of spatial and spectro-temporal cues in the midbrain and forebrain of North American barn owls (Tyto furcata pratincola)

    OpenAIRE

    2015-01-01

    The barn owl is a crepuscular and nocturnal bird of prey that relies mainly on its acoustic system for the identification and localization of potential prey. The barn owl is able to localize even faint sounds in a natural environment precisely. Like mammals, barn owls use the interaural time difference (ITD) for the localization of the azimuthal sound source position. In the barn owl’s auditory system, ITD is processed in two separate pathways, the midbrain and forebrain pathways, which are b...

  2. 邻近效应对多媒体学习中图文整合的影响:线索的作用%The Spatial Contiguity Effect in Multimedia Learning:The Role of Cueing

    Institute of Scientific and Technical Information of China (English)

    王福兴; 段朝辉; 周宗奎; 陈珺

    2015-01-01

    Text and illustrations that are spatially integrated can improve learners' performance during multimedia learning. In addition, recent studies have shown that cues, e.g. highlighting with color, arrows, or bold typeface, can guide learners' attention and improve their learning outcomes. Researchers speculate that placing picture and text close to each other shortens visual search time and reduces cognitive load, thereby enhancing learning outcomes. Previous studies also showed that adding cues to learning materials guides learners' attention and promotes the organization and integration of new knowledge. But what are the specific processes underlying the contiguity effect? Do changes in picture-text location and the addition of cues affect the allocation of attention? In this study, we expected that contiguity and cueing would affect learners' attention allocation and, in turn, their performance on memory tests. Consequently, integrated text and pictures with cues should attract more fixations and longer dwell time on the task-related area, and yield higher scores on the retention and transfer tests. Fifty-one college students from Central China Normal University were recruited as participants, and a computer-generated animation depicting the process of lightning formation was used as the experimental material; highlighting text and pictures in red was manipulated as the cue. First, a demographic questionnaire including a prior knowledge questionnaire was administered to all prospective participants, to ensure they knew little about the formation of lightning. They were then randomized into four groups, as follows: the integrated text picture with cues, the integrated text

  3. Spatial attention alleviates temporal crowding, but neither temporal nor spatial uncertainty are necessary for the emergence of temporal crowding.

    Science.gov (United States)

    Tkacz-Domb, Shira; Yeshurun, Yaffa

    2017-03-01

    Recently, we demonstrated temporal crowding with normal observers: Target identification was impaired when it was surrounded by other stimuli in time, even when the interstimuli intervals (ISIs) were relatively long. Here, we examined whether temporal and spatial uncertainties play a critical role in the emergence of temporal crowding. We presented a sequence of three letters to the same peripheral location, right or left of fixation, separated by varying ISI (106-459 ms). One of these letters was the target, and the observers indicated its orientation. To eliminate temporal uncertainty, the position of the target within the sequence was fixed for an entire block (Experiment 1). To eliminate spatial uncertainty, we employed spatial attentional precues that indicated the letters' location. The precue was either auditory (Experiment 2) or visual (Experiment 3). We found temporal crowding to result in worse performance with shorter ISIs, even when there was no temporal or spatial uncertainty. Unlike the auditory cue, the visual cue affected performance. Specifically, when there was uncertainty regarding the target location (i.e., when the target appeared in the first display), precueing the target location improved overall performance and reduced the ISI effect, although it was not completely eliminated. These results suggest that temporal and spatial uncertainties are not necessary for the emergence of temporal crowding and that spatial attention can reduce temporal crowding.

  4. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2012-01-01

    Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information in "feeling" musical rhythm, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms) presented in three conditions: 1) Unimodal inputs (auditory or tactile alone), 2) Various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts, and 3) Simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects were presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.

  5. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Directory of Open Access Journals (Sweden)

    Juan Huang

    Full Text Available Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information in "feeling" musical rhythm, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms) presented in three conditions: 1) Unimodal inputs (auditory or tactile alone), 2) Various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts, and 3) Simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects were presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.

  6. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia.

  7. Neural Dynamics of Object-Based Multifocal Visual Spatial Attention and Priming: Object Cueing, Useful-Field-of-View, and Crowding

    Science.gov (United States)

    Foley, Nicholas C.; Grossberg, Stephen; Mingolla, Ennio

    2012-01-01

    How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued…

  8. The many facets of auditory display

    Science.gov (United States)

    Blattner, Meera M.

    1995-01-01

    In this presentation we will examine some of the ways sound can be used in a virtual world. We make the case that many different types of audio experience are available to us. A full range of audio experiences include: music, speech, real-world sounds, auditory displays, and auditory cues or messages. The technology of recreating real-world sounds through physical modeling has advanced in the past few years allowing better simulation of virtual worlds. Three-dimensional audio has further enriched our sensory experiences.

  9. Estimating the intended sound direction of the user: toward an auditory brain-computer interface using out-of-head sound localization.

    Directory of Open Access Journals (Sweden)

    Isao Nambu

    Full Text Available The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As cues, auditory BCIs can exploit many characteristics of stimuli, such as tone, pitch, and voice. Spatial information about auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify from EEG signals whether the subject attended the direction of a presented stimulus. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system.
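
    The gain from trial averaging reported in this record (70.0% single-trial vs. 89.5% with 10-trial averaging) follows from the way averaging suppresses non-phase-locked EEG noise while preserving the event-related response. A minimal numerical sketch, using an invented P300 amplitude and noise level and a simple larger-amplitude-wins decision rule rather than the study's SVM:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p300 = 1.0       # hypothetical evoked amplitude when the direction is attended
    noise_sd = 4.0   # hypothetical single-trial EEG noise (much larger than p300)

    def accuracy(n_avg, n_blocks=4000):
        # Averaging n_avg trials shrinks the noise standard deviation by
        # sqrt(n_avg) while the evoked p300 component survives unchanged.
        att = rng.normal(p300, noise_sd / np.sqrt(n_avg), n_blocks)
        una = rng.normal(0.0, noise_sd / np.sqrt(n_avg), n_blocks)
        # Classify "attended" as the direction with the larger averaged amplitude.
        return float((att > una).mean())

    acc1, acc10 = accuracy(1), accuracy(10)
    print(acc1, acc10)
    ```

    Even this crude model reproduces the qualitative pattern: near-chance single-trial discrimination becomes clearly above chance once ten trials are averaged, which is why P300-based BCIs routinely trade throughput for averaged trials.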

  10. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

    Full Text Available Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is, in general, the perception of a wrong stimulus or, better put, perception in the absence of a stimulus. Here we will discuss four definitions of hallucinations: 1. perceiving a stimulus without the presence of any subject; 2. hallucination proper, i.e., wrong perceptions that are not falsifications of a real perception, although they manifest as a new subject and occur along with, and synchronously with, a real perception; 3. hallucination as an out-of-body perception which has no accordance with a real subject. In a stricter sense, hallucinations are defined as perceptions in a conscious and awake state, in the absence of external stimuli, which have qualities of real perception, in that they are vivid, substantial, and located in external objective space. We are going to discuss this in detail here.

  11. Perceptual Load Influences Auditory Space Perception in the Ventriloquist Aftereffect

    Science.gov (United States)

    Eramudugolla, Ranmalee; Kamke, Marc. R.; Soto-Faraco, Salvador; Mattingley, Jason B.

    2011-01-01

    A period of exposure to trains of simultaneous but spatially offset auditory and visual stimuli can induce a temporary shift in the perception of sound location. This phenomenon, known as the "ventriloquist aftereffect", reflects a realignment of auditory and visual spatial representations such that they approach perceptual alignment despite their…

  12. Reduced Sensitivity to Slow-Rate Dynamic Auditory Information in Children with Dyslexia

    Science.gov (United States)

    Poelmans, Hanne; Luts, Heleen; Vandermosten, Maaike; Boets, Bart; Ghesquiere, Pol; Wouters, Jan

    2011-01-01

    The etiology of developmental dyslexia remains widely debated. An appealing theory postulates that the reading and spelling problems in individuals with dyslexia originate from reduced sensitivity to slow-rate dynamic auditory cues. This low-level auditory deficit is thought to provoke a cascade of effects, including inaccurate speech perception…

  13. Action Enhances Acoustic Cues for 3-D Target Localization by Echolocating Bats

    Science.gov (United States)

    Wohlgemuth, Melville J.

    2016-01-01

    Under natural conditions, animals encounter a barrage of sensory information from which they must select and interpret biologically relevant signals. Active sensing can facilitate this process by engaging motor systems in the sampling of sensory information. The echolocating bat serves as an excellent model to investigate the coupling between action and sensing because it adaptively controls both the acoustic signals used to probe the environment and movements to receive echoes at the auditory periphery. We report here that the echolocating bat controls the features of its sonar vocalizations in tandem with the positioning of the outer ears to maximize acoustic cues for target detection and localization. The bat’s adaptive control of sonar vocalizations and ear positioning occurs on a millisecond timescale to capture spatial information from arriving echoes, as well as on a longer timescale to track target movement. Our results demonstrate that purposeful control over sonar sound production and reception can serve to improve acoustic cues for localization tasks. This finding also highlights the general importance of movement to sensory processing across animal species. Finally, our discoveries point to important parallels between spatial perception by echolocation and vision. PMID:27608186

  14. Multiple cue use and integration in pigeons (Columba livia).

    Science.gov (United States)

    Legge, Eric L G; Madan, Christopher R; Spetch, Marcia L; Ludvig, Elliot A

    2016-05-01

    Encoding multiple cues can improve the accuracy and reliability of navigation and goal localization. Problems may arise, however, if one cue is displaced and provides information which conflicts with other cues. Here we investigated how pigeons cope with cue conflict by training them to locate a goal relative to two landmarks and then varying the amount of conflict between the landmarks. When the amount of conflict was small, pigeons tended to integrate both cues in their search patterns. When the amount of conflict was large, however, pigeons used information from both cues independently. This context-dependent strategy for resolving spatial cue conflict agrees with Bayes optimal calculations for using information from multiple sources.
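
The context-dependent strategy described in this record has a standard quantitative reading: for small conflicts, a Bayes-optimal observer fuses two Gaussian location cues by inverse-variance weighting, while for large conflicts fusion breaks down and the cues are used separately. A minimal Python sketch of that idea (the function name, threshold, and Gaussian assumptions are ours for illustration, not parameters from the study):

```python
import numpy as np

def combine_cues(mu1, sigma1, mu2, sigma2, conflict_threshold=3.0):
    """Bayes-optimal fusion of two Gaussian location cues, with a
    fallback to independent use of the cues under large conflict.
    Illustrative sketch; the threshold is an assumption, not a value
    estimated from the pigeon data."""
    # Conflict measured in units of the cues' joint standard deviation.
    conflict = abs(mu1 - mu2) / np.sqrt(sigma1**2 + sigma2**2)
    if conflict <= conflict_threshold:
        # Inverse-variance weighting: the more reliable cue gets more weight.
        w1 = sigma2**2 / (sigma1**2 + sigma2**2)
        w2 = sigma1**2 / (sigma1**2 + sigma2**2)
        mu = w1 * mu1 + w2 * mu2
        sigma = np.sqrt((sigma1**2 * sigma2**2) / (sigma1**2 + sigma2**2))
        return [(mu, sigma)]            # a single fused estimate
    # Large conflict: treat the landmarks as separate candidate locations.
    return [(mu1, sigma1), (mu2, sigma2)]
```

With equally reliable cues and a small displacement, the fused estimate lies midway between the two landmark predictions and is more precise than either cue alone; with a large displacement, two candidate locations are returned, matching the independent-use pattern reported for large conflicts.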

  15. Female hummingbirds do not relocate rewards using colour cues

    OpenAIRE

    Tello Ramos, Maria Cristina; Hurly, T. Andrew; Healy, Susan D.

    2014-01-01

    This research was supported by CONACYT (The Mexican National Council for Science and Technology) grant number: 310717, the University of Lethbridge and the Natural Sciences and Engineering Research Council of Canada (grant number: RGPIN 121496-2003) and the University of St Andrews Russell Trust Award. Males generally outperform females in spatial tasks. This difference in spatial performance may reflect differences in cue preference, because males often use both spatial cues (distance and...

  16. Acoustic tone or medial geniculate stimulation cue training in the rat is associated with neocortical neuroplasticity and reduced akinesia under haloperidol challenge.

    Science.gov (United States)

    Brown, Andrew R; Hu, Bin; Kolb, Bryan; Teskey, G Campbell

    2010-12-06

    Sensory cues can improve movement deficits in Parkinson's disease, but little is known about the mechanisms involved. To investigate neuroplastic changes following sensorimotor cue training, rats were shaped to respond to acoustic tone or medial geniculate stimulation cues by retrieving a food reward. Neuroplasticity associated with training was assessed by changes in auditory neocortical evoked field potentials and dendritic morphology. Stimulation cue training was associated with changes in dendritic arbour length and complexity in auditory and motor neocortices, but was without effect on evoked electrophysiological responses. Tone cue training was associated with a significant increase in peak height of the evoked auditory response and then under haloperidol challenge, demonstrated reduced akinesia. Results indicate that cue-training induces neuroplastic changes that may be related to improved sensorimotor function under dopaminergic antagonism.

  17. Persistent fluctuations in stride intervals under fractal auditory stimulation

    NARCIS (Netherlands)

    Marmelat, V.C.M.; Torre, K.; Beek, P.J.; Daffertshofer, A.

    2014-01-01

    Stride sequences of healthy gait are characterized by persistent long-range correlations, which become anti-persistent in the presence of an isochronous metronome. The latter phenomenon is of particular interest because auditory cueing is generally considered to reduce stride variability and may hence be beneficial for stabilizing gait.

  18. Thalamic and parietal brain morphology predicts auditory category learning.

    Science.gov (United States)

    Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas

    2014-01-01

    Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties.

  19. Discovering Structure in Auditory Input: Evidence from Williams Syndrome

    Science.gov (United States)

    Elsabbagh, Mayada; Cohen, Henri; Karmiloff-Smith, Annette

    2010-01-01

    We examined auditory perception in Williams syndrome by investigating strategies used in organizing sound patterns into coherent units. In Experiment 1, we investigated the streaming of sound sequences into perceptual units, on the basis of pitch cues, in a group of children and adults with Williams syndrome compared to typical controls. We showed…

  20. Speech identification in noise: Contribution of temporal, spectral, and visual speech cues.

    Science.gov (United States)

    Kim, Jeesun; Davis, Chris; Groot, Christopher

    2009-12-01

    This study investigated the degree to which two types of reduced auditory signals (cochlear implant simulations) and visual speech cues combined for speech identification. The auditory speech stimuli were filtered to have only amplitude envelope cues or both amplitude envelope and spectral cues and were presented with/without visual speech. In Experiment 1, IEEE sentences were presented in quiet and noise. For in-quiet presentation, speech identification was enhanced by the addition of both spectral and visual speech cues. Due to a ceiling effect, the degree to which these effects combined could not be determined. In noise, these facilitation effects were more marked and were additive. Experiment 2 examined consonant and vowel identification in the context of CVC or VCV syllables presented in noise. For consonants, both spectral and visual speech cues facilitated identification and these effects were additive. For vowels, the effect of combined cues was underadditive, with the effect of spectral cues reduced when presented with visual speech cues. Analysis indicated that without visual speech, spectral cues facilitated the transmission of place information and vowel height, whereas with visual speech, they facilitated lip rounding, with little impact on the transmission of place information.
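
The additive vs. underadditive comparisons made in this record can be formalized generically by comparing the accuracy gain from combined cues against the sum of the single-cue gains. A sketch of that logic only, not the authors' statistical analysis (the function name and tolerance are illustrative assumptions):

```python
def cue_combination_pattern(acc_base, acc_cue1, acc_cue2, acc_both, tol=0.02):
    """Classify how two cues combine, given identification accuracies
    (proportions correct). Generic illustration, not the authors'
    analysis; the tolerance `tol` is an arbitrary assumption."""
    gain1 = acc_cue1 - acc_base          # benefit of cue 1 alone
    gain2 = acc_cue2 - acc_base          # benefit of cue 2 alone
    gain_both = acc_both - acc_base      # benefit of both cues together
    predicted = gain1 + gain2            # additive prediction
    if abs(gain_both - predicted) <= tol:
        return "additive"
    return "underadditive" if gain_both < predicted else "overadditive"
```

For example, if spectral cues and visual speech each raise accuracy by 20 points over baseline but together raise it by only 25, the pattern is underadditive, the pattern reported here for vowels.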

  1. Arrow-Elicited Cueing Effects at Short Intervals: Rapid Attentional Orienting or Cue-Target Stimulus Conflict?

    Science.gov (United States)

    Green, Jessica J.; Woldorff, Marty G.

    2012-01-01

    The observation of cueing effects (faster responses for cued than uncued targets) rapidly following centrally-presented arrows has led to the suggestion that arrows trigger rapid automatic shifts of spatial attention. However, these effects have primarily been observed during easy target-detection tasks when both cue and target remain on the…

  2. Nogo stimuli do not receive more attentional suppression or response inhibition than neutral stimuli: evidence from the N2pc, PD and N2 components in a spatial cueing paradigm

    Directory of Open Access Journals (Sweden)

    Caroline eBarras

    2016-05-01

    It has been claimed that stimuli sharing the color of a nogo-target are suppressed because of the strong incentive not to process the nogo-target, but we failed to replicate this finding. Participants searched for a color singleton in the target display and indicated its shape when it was in the go color. If the color singleton in the target display was in the nogo color, they had to withhold the response. The target display was preceded by a cue display that also contained a color singleton (the cue). The cue was either in the color of the go or nogo target, or it was in an unrelated, neutral color. With cues in the go color, reaction times (RTs) were shorter when the cue appeared at the same location as the target than when it appeared at a different location. Also, electrophysiological recordings showed that an index of attentional selection, the N2pc, was elicited by go cues. Surprisingly, we failed to replicate the cueing costs for cues in the nogo color that were originally reported by Anderson and Folk (2012). Consistently, we also failed to find an electrophysiological index of attentional suppression (the PD) for cues in the nogo color. Further, fronto-central ERPs to the cue display showed the same negativity for nogo and neutral stimuli relative to go stimuli, which is at odds with response inhibition and conflict monitoring accounts of the Nogo-N2. Thus, the modified cueing paradigm employed here provides little evidence that features associated with nogo-targets are suppressed at the level of attention or response selection. Rather, nogo stimuli are efficiently ignored and attention is focused on features that require a response.

  3. Efficacy of the LiSN & Learn Auditory Training Software: randomized blinded controlled study

    Directory of Open Access Journals (Sweden)

    Sharon Cameron

    2012-01-01

    Background: Children with a spatial processing disorder (SPD) require a more favorable signal-to-noise ratio in the classroom because they have difficulty perceiving sound source location cues. Previous research has shown that a novel training program - LiSN & Learn - employing spatialized sound overcomes this deficit. Here we investigate whether improvements in spatial processing ability are specific to the LiSN & Learn training program. Materials and methods: Participants were ten children (aged between 6;0 [years;months] and 9;9) with normal peripheral hearing who were diagnosed as having SPD using the Listening in Spatialized Noise – Sentences test (LiSN-S). In a blinded controlled study, the participants were randomly allocated to train with either the LiSN & Learn or another auditory training program - Earobics - for approximately 15 minutes per day for twelve weeks. Results: There was a significant improvement post-training on the conditions of the LiSN-S that evaluate spatial processing ability for the LiSN & Learn group (p=0.03 to 0.0008, η2=0.75 to 0.95, n=5), but not for the Earobics group (p=0.5 to 0.7, η2=0.1 to 0.04, n=5). Results from questionnaires completed by the participants and their parents and teachers revealed that improvements in real-world listening performance post-training were greater in the LiSN & Learn group than in the Earobics group. Conclusions: LiSN & Learn training improved binaural processing ability in children with SPD, enhancing their ability to understand speech in noise. Exposure to non-spatialized auditory training does not produce similar outcomes, emphasizing the importance of deficit-specific remediation.

  4. Efficacy of the LiSN & Learn auditory training software: randomized blinded controlled study

    Directory of Open Access Journals (Sweden)

    Sharon Cameron

    2012-09-01

    Children with a spatial processing disorder (SPD) require a more favorable signal-to-noise ratio in the classroom because they have difficulty perceiving sound source location cues. Previous research has shown that a novel training program - LiSN & Learn - employing spatialized sound overcomes this deficit. Here we investigate whether improvements in spatial processing ability are specific to the LiSN & Learn training program. Participants were ten children (aged between 6;0 [years;months] and 9;9) with normal peripheral hearing who were diagnosed as having SPD using the Listening in Spatialized Noise - Sentences test (LiSN-S). In a blinded controlled study, the participants were randomly allocated to train with either the LiSN & Learn or another auditory training program - Earobics - for approximately 15 min per day for twelve weeks. There was a significant improvement post-training on the conditions of the LiSN-S that evaluate spatial processing ability for the LiSN & Learn group (P=0.03 to 0.0008, η2=0.75 to 0.95, n=5), but not for the Earobics group (P=0.5 to 0.7, η2=0.1 to 0.04, n=5). Results from questionnaires completed by the participants and their parents and teachers revealed that improvements in real-world listening performance post-training were greater in the LiSN & Learn group than in the Earobics group. LiSN & Learn training improved binaural processing ability in children with SPD, enhancing their ability to understand speech in noise. Exposure to non-spatialized auditory training does not produce similar outcomes, emphasizing the importance of deficit-specific remediation.

  5. Speaker's voice as a memory cue.

    Science.gov (United States)

    Campeanu, Sandra; Craik, Fergus I M; Alain, Claude

    2015-02-01

    Speaker's voice occupies a central role as the cornerstone of auditory social interaction. Here, we review the evidence suggesting that speaker's voice constitutes an integral context cue in auditory memory. Investigation into the nature of voice representation as a memory cue is essential to understanding auditory memory and the neural correlates which underlie it. Evidence from behavioral and electrophysiological studies suggests that while specific voice reinstatement (i.e., same speaker) often appears to facilitate word memory even without attention to voice at study, the presence of a partial benefit of similar voices between study and test is less clear. In terms of explicit memory experiments utilizing unfamiliar voices, encoding methods appear to play a pivotal role. Voice congruency effects have been found when voice is specifically attended at study (i.e., when relatively shallow, perceptual encoding takes place). These behavioral findings coincide with neural indices of memory performance such as the parietal old/new recollection effect and the late right frontal effect. The former distinguishes between correctly identified old words and correctly identified new words, and reflects voice congruency only when voice is attended at study. Characterization of the latter likely depends upon voice memory, rather than word memory. There is also evidence to suggest that voice effects can be found in implicit memory paradigms. However, the presence of voice effects appears to depend greatly on the task employed. Using a word identification task, perceptual similarity between study and test conditions is, as for explicit memory tests, crucial. In addition, the type of noise employed appears to have a differential effect. While voice effects have been observed when white noise is used at both study and test, using multi-talker babble does not confer the same results. In terms of neuroimaging research modulations, characterization of an implicit memory effect…

  6. Auditory Imagery: Empirical Findings

    Science.gov (United States)

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  7. Development of kinesthetic-motor and auditory-motor representations in school-aged children.

    Science.gov (United States)

    Kagerer, Florian A; Clark, Jane E

    2015-07-01

    In two experiments using a center-out task, we investigated kinesthetic-motor and auditory-motor integrations in 5- to 12-year-old children and young adults. In experiment 1, participants moved a pen on a digitizing tablet from a starting position to one of three targets (visuo-motor condition), and then to one of four targets without visual feedback of the movement. In both conditions, we found that with increasing age, the children moved faster and straighter, and became less variable in their feedforward control. Higher control demands for movements toward the contralateral side were reflected in longer movement times and decreased spatial accuracy across all age groups. When feedforward control relies predominantly on kinesthesia, 7- to 10-year-old children were more variable, indicating difficulties in switching between feedforward and feedback control efficiently during that age. An inverse age progression was found for directional endpoint error; larger errors increasing with age likely reflect stronger functional lateralization for the dominant hand. In experiment 2, the same visuo-motor condition was followed by an auditory-motor condition in which participants had to move to acoustic targets (either white band or one-third octave noise). Since in the latter directional cues come exclusively from transcallosally mediated interaural time differences, we hypothesized that auditory-motor representations would show age effects. The results did not show a clear age effect, suggesting that corpus callosum functionality is sufficient in children to allow them to form accurate auditory-motor maps already at a young age.

  8. Sensory cueing in the treatment of unilateral spatial neglect

    Institute of Scientific and Technical Information of China (English)

    杨永红; 王凤怡; 黄秋月; 左京京; 方乃权

    2015-01-01

    Objective: To investigate the effects of sensory cueing (SC) on unilateral spatial neglect after stroke. Methods: Five stroke survivors with unilateral spatial neglect underwent a tailored sensory cueing treatment (wearing a sensory cueing device 3 hours a day, 5 days a week for 2 weeks) in addition to their conventional rehabilitation. Two weeks before and one day before the treatment, and then one day, two weeks, and four weeks after the treatment, all five patients were assessed using the Hong Kong edition of the behavioral inattention test (BIT-C). Results: No significant changes were identified in the average BIT-C ratings at the two time points before the intervention. However, the average score had increased significantly only one day after the start of the intervention, with further significant improvement at each of the succeeding two-week intervals. The greatest improvement was in finishing cancellation tasks, and the most severely affected patient showed the greatest improvement. Conclusion: Sensory cueing treatment may be useful and feasible in reducing unilateral spatial neglect in stroke survivors. However, randomized controlled trials with larger samples are needed to further verify its effects.

  9. Cue conflicts in context

    DEFF Research Database (Denmark)

    Boeg Thomsen, Ditte; Poulsen, Mads

    2015-01-01

    When learning their first language, children develop strategies for assigning semantic roles to sentence structures, depending on morphosyntactic cues such as case and word order. Traditionally, comprehension experiments have presented transitive clauses in isolation, and crosslinguistically… in discourse-pragmatically felicitous contexts. Our results extend previous findings of preschoolers' sensitivity to discourse-contextual cues in sentence comprehension (Hurewitz, 2001; Song & Fisher, 2005) to the basic task of assigning agent and patient roles…

  10. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise them as those of a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity with three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  11. Auditory and Visual Cues for Spatiotemporal Rhythm Reproduction

    DEFF Research Database (Denmark)

    Maculewicz, Justyna; Serafin, Stefania; Kofoed, Lise B.

    2013-01-01

    The goal of this experiment is to investigate the role of auditory and visual feedback in a rhythmic tapping task. Subjects had to tap with a finger following the presented rhythms, which were divided into easy and difficult patterns. Specificity of the task was that participants had to take…

  12. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas, some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech.

  13. Effects of auditory and tactile warning on response to visual hazards under a noisy environment.

    Science.gov (United States)

    Murata, Atsuo; Kuroda, Takashi; Karwowski, Waldemar

    2017-04-01

    A warning signal presented via a visual or an auditory cue might interfere with auditory or visual information inside and outside a vehicle. Such interference would be greatly reduced if a tactile cue were used instead, so tactile cues are expected to be promising warning signals, especially in a noisy environment. In order to determine the most suitable modality of cue (warning) to a visual hazard in noisy environments, auditory and tactile cues were examined in this study. The stimulus onset asynchrony (SOA) was set to 0 ms, 500 ms, and 1000 ms. Two types of noise were used: white noise and noise outside a vehicle recorded in a real-world driving environment. The noise level LAeq (equivalent continuous A-weighted sound pressure level) inside the experimental chamber was adjusted to approximately 60 dB(A), 70 dB(A), and 80 dB(A) for each type of noise. As a result, it was verified that the tactile warning was more effective than the auditory warning. When the noise outside a vehicle from a real driving environment was used as the noise inside the experimental chamber, the reaction time to the auditory warning was not affected by the noise level.

  14. Age differences in visual-auditory self-motion perception during a simulated driving task

    Directory of Open Access Journals (Sweden)

    Robert eRamkhalawansingh

    2016-04-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.

  15. Reactivity to nicotine cues over repeated cue reactivity sessions.

    Science.gov (United States)

    LaRowe, Steven D; Saladin, Michael E; Carpenter, Matthew J; Upadhyaya, Himanshu P

    2007-12-01

    The present study investigated whether reactivity to nicotine-related cues would attenuate across four experimental sessions held 1 week apart. Participants were nineteen non-treatment seeking, nicotine-dependent males. Cue reactivity sessions were performed in an outpatient research center using in vivo cues consisting of standardized smoking-related paraphernalia (e.g., cigarettes) and neutral comparison paraphernalia (e.g., pencils). Craving ratings were collected before and after both cue presentations while physiological measures (heart rate, skin conductance) were collected before and during the cue presentations. Although craving levels decreased across sessions, smoking-related cues consistently evoked significantly greater increases in craving relative to neutral cues over all four experimental sessions. Skin conductance was higher in response to smoking cues, though this effect was not as robust as that observed for craving. Results suggest that, under the described experimental parameters, craving can be reliably elicited over repeated cue reactivity sessions.

  16. Cues and expressions

    Directory of Open Access Journals (Sweden)

    Thorbjörg Hróarsdóttir

    2005-02-01

    A number of European languages have undergone a change from object-verb to verb-object order. We focus on the change in English and Icelandic, showing that while the structural change was the same, it took place at different times and in different ways in the two languages, triggered by different E-language changes. As seen from the English viewpoint, low-level facts of inflectional morphology may express the relevant cue for parameters, and so the loss of inflection may lead to a grammar change. This analysis does not carry over to Icelandic, as the loss of OV there took place despite rich case morphology. We aim to show how this can be explained within a cue-style approach, arguing for a universal set of cues. However, the relevant cue may be expressed differently among languages: while it may have been expressed through morphology in English, it was expressed through information structure in Icelandic. In both cases, external effects led to fewer expressions of the relevant (universal) cue and a grammar change took place.

  17. Active stream segregation specifically involves the left human auditory cortex.

    Science.gov (United States)

    Deike, Susann; Scheich, Henning; Brechmann, André

    2010-06-14

    An important aspect of auditory scene analysis is the sequential grouping of similar sounds into one "auditory stream" while keeping competing streams separate. In the present low-noise fMRI study we presented sequences of alternating high-pitch (A) and low-pitch (B) complex harmonic tones using acoustic parameters that allow the perception of either two separate streams or one alternating stream. However, the subjects were instructed to actively and continuously segregate the A from the B stream. This was controlled by the additional instruction to listen for rare level deviants only in the low-pitch stream. Compared to the control condition in which only one non-separable stream was presented the active segregation of the A from the B stream led to a selective increase of activation in the left auditory cortex (AC). Together with a similar finding from a previous study using a different acoustic cue for streaming, namely timbre, this suggests that the left auditory cortex plays a dominant role in active sequential stream segregation. However, we found cue differences within the left AC: Whereas in the posterior areas, including the planum temporale, activation increased for both acoustic cues, the anterior areas, including Heschl's gyrus, are only involved in stream segregation based on pitch.

  18. Persistent fluctuations in stride intervals under fractal auditory stimulation.

    Directory of Open Access Journals (Sweden)

    Vivien Marmelat

    Stride sequences of healthy gait are characterized by persistent long-range correlations, which become anti-persistent in the presence of an isochronous metronome. The latter phenomenon is of particular interest because auditory cueing is generally considered to reduce stride variability and may hence be beneficial for stabilizing gait. Complex systems tend to match their correlation structure when synchronizing. In gait training, can one capitalize on this tendency by using a fractal metronome rather than an isochronous one? We examined whether auditory cues with fractal variations in inter-beat intervals yield similar fractal inter-stride interval variability as isochronous auditory cueing in two complementary experiments. In Experiment 1, participants walked on a treadmill while being paced by either an isochronous or a fractal metronome with different variation strengths between beats in order to test whether participants managed to synchronize with a fractal metronome and to determine the necessary amount of variability for participants to switch from anti-persistent to persistent inter-stride intervals. Participants did synchronize with the metronome despite its fractal randomness. The corresponding coefficient of variation of inter-beat intervals was fixed in Experiment 2, in which participants walked on a treadmill while being paced by non-isochronous metronomes with different scaling exponents. As expected, inter-stride intervals showed persistent correlations similar to self-paced walking only when cueing contained persistent correlations. Our results open up a new window to optimize rhythmic auditory cueing for gait stabilization by integrating fractal fluctuations in the inter-beat intervals.

  19. Persistent fluctuations in stride intervals under fractal auditory stimulation.

    Science.gov (United States)

    Marmelat, Vivien; Torre, Kjerstin; Beek, Peter J; Daffertshofer, Andreas

    2014-01-01

    Stride sequences of healthy gait are characterized by persistent long-range correlations, which become anti-persistent in the presence of an isochronous metronome. The latter phenomenon is of particular interest because auditory cueing is generally considered to reduce stride variability and may hence be beneficial for stabilizing gait. Complex systems tend to match their correlation structure when synchronizing. In gait training, can one capitalize on this tendency by using a fractal metronome rather than an isochronous one? We examined whether auditory cues with fractal variations in inter-beat intervals yield similar fractal inter-stride interval variability as isochronous auditory cueing in two complementary experiments. In Experiment 1, participants walked on a treadmill while being paced by either an isochronous or a fractal metronome with different variation strengths between beats in order to test whether participants managed to synchronize with a fractal metronome and to determine the necessary amount of variability for participants to switch from anti-persistent to persistent inter-stride intervals. Participants did synchronize with the metronome despite its fractal randomness. The corresponding coefficient of variation of inter-beat intervals was fixed in Experiment 2, in which participants walked on a treadmill while being paced by non-isochronous metronomes with different scaling exponents. As expected, inter-stride intervals showed persistent correlations similar to self-paced walking only when cueing contained persistent correlations. Our results open up a new window to optimize rhythmic auditory cueing for gait stabilization by integrating fractal fluctuations in the inter-beat intervals.
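
    Whether inter-stride intervals are persistent or anti-persistent is conventionally quantified with detrended fluctuation analysis (DFA). The following is a minimal DFA sketch, not the authors' implementation; the scale choices are assumptions.

```python
import numpy as np

def dfa(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis; returns the scaling exponent alpha.
    alpha > 0.5: persistent long-range correlations (healthy self-paced gait);
    alpha < 0.5: anti-persistence (gait paced by an isochronous metronome);
    alpha ~ 0.5: uncorrelated noise."""
    y = np.cumsum(np.asarray(x) - np.mean(x))    # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # Residuals after local linear detrending of each segment
        resid = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        flucts.append(np.sqrt(np.mean(np.square(resid))))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(1)
alpha_white = dfa(rng.standard_normal(1024))             # expect ~0.5
alpha_brown = dfa(np.cumsum(rng.standard_normal(1024)))  # expect ~1.5
```

Applied to inter-stride interval series, alpha above or below 0.5 distinguishes the persistent and anti-persistent regimes contrasted in this study.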

  20. Viewpoint-independent contextual cueing effect

    Directory of Open Access Journals (Sweden)

    Taiga Etsuchiai

    2012-06-01

    Full Text Available We usually perceive things in our surroundings as unchanged despite viewpoint changes caused by self-motion. The visual system therefore must have a function to process objects independently of viewpoint. In this study, we examined whether viewpoint-independent spatial layout can be obtained implicitly. For this purpose, we used a contextual cueing effect, a learning effect of spatial layout in visual search displays known to be an implicit effect. We compared the transfer of the contextual cueing effect between cases with and without self-motion by using visual search displays for 3D objects, which changed according to the participant’s assumed location for viewing the stimuli. The contextual cueing effect was obtained with self-motion but disappeared when the display changed without self-motion. This indicates that there is an implicit learning effect in spatial coordinates and suggests that the spatial representation of object layouts or scenes can be obtained and updated implicitly. We also showed that binocular disparity plays an important role in the layout representations.

  1. Development of cue integration in human navigation.

    Science.gov (United States)

    Nardini, Marko; Jones, Peter; Bedford, Rachael; Braddick, Oliver

    2008-05-06

    Mammalian navigation depends both on visual landmarks and on self-generated (e.g., vestibular and proprioceptive) cues that signal the organism's own movement [1-5]. When these conflict, landmarks can either reset estimates of self-motion or be integrated with them [6-9]. We asked how humans combine these information sources and whether children, who use both from a young age [10-12], combine them as adults do. Participants attempted to return an object to its original place in an arena when given either visual landmarks only, nonvisual self-motion information only, or both. Adults, but not 4- to 5-year-olds or 7- to 8-year-olds, reduced their response variance when both information sources were available. In an additional "conflict" condition that measured relative reliance on landmarks and self-motion, we predicted behavior under two models: integration (weighted averaging) of the cues and alternation between them. Adults' behavior was predicted by integration, in which the cues were weighted nearly optimally to reduce variance, whereas children's behavior was predicted by alternation. These results suggest that development of individual spatial-representational systems precedes development of the capacity to combine these within a common reference frame. Humans can integrate spatial cues nearly optimally to navigate, but this ability depends on an extended developmental process.
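
    The integration model tested here is reliability-weighted averaging: each cue is weighted by its inverse variance, and the combined estimate has lower variance than either cue alone, which is the adult pattern in the data. A minimal sketch with invented example numbers:

```python
def integrate_cues(mu_l, var_l, mu_s, var_s):
    """Reliability-weighted (near-optimal) combination of a landmark
    estimate and a self-motion estimate of location. Alternation between
    cues, by contrast, cannot reduce variance below the better cue's."""
    w_l = (1.0 / var_l) / (1.0 / var_l + 1.0 / var_s)   # landmark weight
    mu = w_l * mu_l + (1.0 - w_l) * mu_s                # combined estimate
    var = 1.0 / (1.0 / var_l + 1.0 / var_s)             # < min(var_l, var_s)
    return mu, var

# Hypothetical numbers: noisy landmark cue, more reliable self-motion cue
mu, var = integrate_cues(mu_l=0.0, var_l=4.0, mu_s=2.0, var_s=1.0)
```

With these numbers the combined estimate lies closer to the more reliable self-motion cue, and its variance (0.8) falls below that of either single cue.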

  2. Composition: Cue Wheel

    DEFF Research Database (Denmark)

    Bergstrøm-Nielsen, Carl

    2014-01-01

    Cue Rondo is an open composition to be realised by improvising musicians. See more about my composition practice in the entry "Composition - General Introduction". This work is licensed under a Creative Commons "by-nc" License. You may for non-commercial purposes use and distribute it, performance...

  3. The effects of rhythmic sensory cues on the temporal dynamics of human gait.

    Science.gov (United States)

    Sejdić, Ervin; Fu, Yingying; Pak, Alison; Fairley, Jillian A; Chau, Tom

    2012-01-01

    Walking is a complex, rhythmic task performed by the locomotor system. However, natural gait rhythms can be influenced by metronomic auditory stimuli, a phenomenon of particular interest in neurological rehabilitation. In this paper, we examined the effects of aural, visual and tactile rhythmic cues on the temporal dynamics associated with human gait. Data were collected from fifteen healthy adults in two sessions. Each session consisted of five 15-minute trials. In the first trial of each session, participants walked at their preferred walking speed. In subsequent trials, participants were asked to walk to a metronomic beat, provided through visual, aural, tactile or all three cues (simultaneously and in sync), the pace of which was set to the preferred walking speed of the first trial. Using the collected data, we extracted several parameters, including gait speed, mean stride interval, stride interval variability, scaling exponent and maximum Lyapunov exponent. The extracted parameters showed that rhythmic sensory cues affect the temporal dynamics of human gait. The auditory rhythmic cue had the greatest influence on the gait parameters, while the visual cue had no statistically significant effect on the scaling exponent. These results demonstrate that visual rhythmic cues could be considered as an alternative cueing modality in rehabilitation without concern of adversely altering the statistical persistence of walking.
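
    Two of the extracted parameters, mean stride interval and stride interval variability, follow directly from a series of heel-strike times. A minimal sketch on synthetic data (the stride duration and jitter are illustrative values, not the study's):

```python
import numpy as np

def stride_parameters(heel_strikes):
    """Mean stride interval and stride interval variability (coefficient
    of variation) from a series of heel-strike times in seconds."""
    intervals = np.diff(heel_strikes)            # successive stride intervals
    mean_si = float(intervals.mean())
    cv = float(intervals.std() / mean_si)        # variability relative to mean
    return mean_si, cv

# Synthetic walk: ~1.1-s strides with small random jitter (values invented)
rng = np.random.default_rng(0)
strikes = np.cumsum(1.1 + 0.02 * rng.standard_normal(200))
mean_si, cv = stride_parameters(strikes)
```

The scaling exponent and maximum Lyapunov exponent require longer procedures (detrended fluctuation analysis and, e.g., Rosenstein's method) and are not shown.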

  4. The effects of rhythmic sensory cues on the temporal dynamics of human gait.

    Directory of Open Access Journals (Sweden)

    Ervin Sejdić

    Full Text Available Walking is a complex, rhythmic task performed by the locomotor system. However, natural gait rhythms can be influenced by metronomic auditory stimuli, a phenomenon of particular interest in neurological rehabilitation. In this paper, we examined the effects of aural, visual and tactile rhythmic cues on the temporal dynamics associated with human gait. Data were collected from fifteen healthy adults in two sessions. Each session consisted of five 15-minute trials. In the first trial of each session, participants walked at their preferred walking speed. In subsequent trials, participants were asked to walk to a metronomic beat, provided through visual, aural, tactile or all three cues (simultaneously and in sync), the pace of which was set to the preferred walking speed of the first trial. Using the collected data, we extracted several parameters, including gait speed, mean stride interval, stride interval variability, scaling exponent and maximum Lyapunov exponent. The extracted parameters showed that rhythmic sensory cues affect the temporal dynamics of human gait. The auditory rhythmic cue had the greatest influence on the gait parameters, while the visual cue had no statistically significant effect on the scaling exponent. These results demonstrate that visual rhythmic cues could be considered as an alternative cueing modality in rehabilitation without concern of adversely altering the statistical persistence of walking.

  5. On the relative contributions of multisensory integration and crossmodal exogenous spatial attention to multisensory response enhancement.

    Science.gov (United States)

    Van der Stoep, N; Spence, C; Nijboer, T C W; Van der Stigchel, S

    2015-11-01

    Two processes that can give rise to multisensory response enhancement (MRE) are multisensory integration (MSI) and crossmodal exogenous spatial attention. It is, however, currently unclear what the relative contribution of each of these is to MRE. We investigated this issue using two tasks that are generally assumed to measure MSI (a redundant target effect task) and crossmodal exogenous spatial attention (a spatial cueing task). One block of trials consisted of unimodal auditory and visual targets designed to provide a unimodal baseline. In two other blocks of trials, the participants were presented with spatially and temporally aligned and misaligned audiovisual (AV) targets (0, 50, 100, and 200ms SOA). In the integration block, the participants were instructed to respond to the onset of the first target stimulus that they detected (A or V). The instruction for the cueing block was to respond only to the onset of the visual targets. The targets could appear at one of three locations: left, center, and right. The participants were instructed to respond only to lateral targets. The results indicated that MRE was caused by MSI at 0ms SOA. At 50ms SOA, both crossmodal exogenous spatial attention and MSI contributed to the observed MRE, whereas the MRE observed at the 100 and 200ms SOAs was attributable to crossmodal exogenous spatial attention, alerting, and temporal preparation. These results therefore suggest that there may be a temporal window in which both MSI and exogenous crossmodal spatial attention can contribute to multisensory response enhancement.

  6. Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues

    Science.gov (United States)

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2009-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…

  7. Analysis of Parallel and Transverse Visual Cues on the Gait of Individuals with Idiopathic Parkinson's Disease

    Science.gov (United States)

    de Melo Roiz, Roberta; Azevedo Cacho, Enio Walker; Cliquet, Alberto, Jr.; Barasnevicius Quagliato, Elizabeth Maria Aparecida

    2011-01-01

    Idiopathic Parkinson's disease (IPD) has been defined as a chronic progressive neurological disorder with characteristics that generate changes in gait pattern. Several studies have reported that appropriate external influences, such as visual or auditory cues may improve the gait pattern of patients with IPD. Therefore, the objective of this…

  8. Sensory information and associative cues used in food detection by wild vervet monkeys.

    Science.gov (United States)

    Teichroeb, Julie A; Chapman, Colin A

    2014-05-01

    Understanding animals' spatial perception is a critical step toward discerning their cognitive processes. The spatial sense is multimodal and based on both the external world and mental representations of that world. Navigation in each species depends upon its evolutionary history, physiology, and ecological niche. We carried out foraging experiments on wild vervet monkeys (Chlorocebus pygerythrus) at Lake Nabugabo, Uganda, to determine the types of cues used to detect food and whether associative cues could be used to find hidden food. Our first and second set of experiments differentiated between vervets' use of global spatial cues (including the arrangement of feeding platforms within the surrounding vegetation) and/or local layout cues (the position of platforms relative to one another), relative to the use of goal-object cues on each platform. Our third experiment provided an associative cue to the presence of food with global spatial, local layout, and goal-object cues disguised. Vervets located food above chance levels when goal-object cues and associative cues were present, and visual signals were the predominant goal-object cues that they attended to. With similar sample sizes and methods as previous studies on New World monkeys, vervets were not able to locate food using only global spatial cues and local layout cues, unlike all five species of platyrrhines thus far tested. Relative to these platyrrhines, the spatial location of food may need to stay the same for a longer time period before vervets encode this information, and goal-object cues may be more salient for them in small-scale space.

  9. Different Neural Networks are Involved in Cross-Modal Non-Spatial Inhibition of Return (IOR): The Effect of the Sensory Modality of Behavioral Targets

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2011-10-01

    Full Text Available We employed a novel cross-modal non-spatial inhibition of return (IOR) paradigm with fMRI to investigate whether object concept is organized by supramodal or modality-specific systems. A precue-neutral cue-target sequence was presented and participants were asked to discriminate whether the target was a dog or a cat. The precue and the target could be either a picture or vocalization of a dog or a cat. The neutral cue (bird) was always from the same modality as the precue. Behaviorally, for both visual and auditory targets, the main effect of cue validity was the only significant effect, p<0.01, with equivalent effects for within- and cross-modal IOR. Neurally, for visual targets, the left inferior frontal gyrus and left medial temporal gyrus showed significantly higher neural activity in the cued than in the uncued condition, irrespective of the precue-target relationship, indicating that the two areas are involved in inhibiting a supramodal representation of a previously attended object concept. For auditory targets, the left lateral occipital gyrus and right postcentral gyrus showed significantly higher neural activity in the uncued than in the cued condition, irrespective of the cue-target relationship, indicating that the two areas are involved in creating a new supramodal representation when a novel object concept appears.

  10. Functional dissociation of transient and sustained fMRI BOLD components in human auditory cortex revealed with a streaming paradigm based on interaural time differences.

    Science.gov (United States)

    Schadwinkel, Stefan; Gutschalk, Alexander

    2010-12-01

    A number of physiological studies suggest that feature-selective adaptation is relevant to the pre-processing for auditory streaming, the perceptual separation of overlapping sound sources. Most of these studies are focused on spectral differences between streams, which are considered most important for streaming. However, spatial cues also support streaming, alone or in combination with spectral cues, but physiological studies of spatial cues for streaming remain scarce. Here, we investigate whether the tuning of selective adaptation for interaural time differences (ITD) coincides with the range where streaming perception is observed. fMRI activation that has been shown to adapt depending on the repetition rate was studied with a streaming paradigm where two tones were differently lateralized by ITD. Listeners were presented with five different ΔITD conditions (62.5, 125, 187.5, 343.75, or 687.5 μs) out of an active baseline with no ΔITD during fMRI. The results showed reduced adaptation for conditions with ΔITD ≥ 125 μs, reflected by enhanced sustained BOLD activity. The percentage of streaming perception for these stimuli increased from approximately 20% for ΔITD = 62.5 μs to > 60% for ΔITD = 125 μs. No further sustained BOLD enhancement was observed when the ΔITD was increased beyond ΔITD = 125 μs, whereas the streaming probability continued to increase up to 90% for ΔITD = 687.5 μs. Conversely, the transient BOLD response, at the transition from baseline to ΔITD blocks, increased most prominently as ΔITD was increased from 187.5 to 343.75 μs. These results demonstrate a clear dissociation of transient and sustained components of the BOLD activity in auditory cortex.
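
    An ITD-lateralized stimulus of the kind used in this paradigm can be built by delaying one channel relative to the other. A minimal sketch (tone frequency, duration, and sample rate are assumed values; 125 μs corresponds to one of the ΔITD conditions above):

```python
import numpy as np

FS = 48000  # sample rate (Hz); frequency and duration are illustrative

def itd_stereo(freq=500.0, itd_us=125.0, dur=0.2, fs=FS):
    """Stereo tone lateralized by an interaural time difference: the right
    channel is a delayed copy of the left, so the tone is heard toward the
    leading (left) ear; swap the channels to lateralize to the other side."""
    delay = int(round(itd_us * 1e-6 * fs))       # ITD in whole samples
    t = np.arange(int(dur * fs)) / fs
    left = np.sin(2.0 * np.pi * freq * t)
    right = np.concatenate([np.zeros(delay), left[:len(left) - delay]])
    return np.column_stack([left, right])        # (samples, 2) stereo array

sig = itd_stereo()  # 125-us ITD: 6 samples of interaural delay at 48 kHz
```

At 48 kHz, each 125 μs of ITD corresponds to six samples of interaural delay, so the condition set above maps onto integer sample shifts.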

  11. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues...... that are necessary for the auditory system to retrieve the pitch of complex sounds. The existence of different pitch-coding mechanisms for low-numbered (spectrally resolved) and high-numbered (unresolved) harmonics was investigated by comparing pitch-discrimination performance across different cohorts of listeners......) listeners and the effect of musical training for pitch discrimination of complex tones with resolved and unresolved harmonics. Concerning the first topic, behavioral and modeling results in listeners with sensorineural hearing loss (SNHL) indicated that temporal envelope cues of complex tones...

  12. Auditory function in individuals within Leber's hereditary optic neuropathy pedigrees.

    Science.gov (United States)

    Rance, Gary; Kearns, Lisa S; Tan, Johanna; Gravina, Anthony; Rosenfeld, Lisa; Henley, Lauren; Carew, Peter; Graydon, Kelley; O'Hare, Fleur; Mackey, David A

    2012-03-01

    The aims of this study are to investigate whether auditory dysfunction is part of the spectrum of neurological abnormalities associated with Leber's hereditary optic neuropathy (LHON) and to determine the perceptual consequences of auditory neuropathy (AN) in affected listeners. Forty-eight subjects confirmed by genetic testing as having one of four mitochondrial mutations associated with LHON (mt11778, mtDNA14484, mtDNA14482 and mtDNA3460) participated. Thirty-two of these had lost vision, and 16 were asymptomatic at the point of data collection. While the majority of individuals showed normal sound detection, >25% (of both symptomatic and asymptomatic participants) showed electrophysiological evidence of AN with either absent or severely delayed auditory brainstem potentials. Abnormalities were observed for each of the mutations, but subjects with the mtDNA11778 type were the most affected. Auditory perception was also abnormal in both symptomatic and asymptomatic subjects, with >20% of cases showing impaired detection of auditory temporal (timing) cues and >30% showing abnormal speech perception both in quiet and in the presence of background noise. The findings of this study indicate that a relatively high proportion of individuals with the LHON genetic profile may suffer functional hearing difficulties due to neural abnormality in the central auditory pathways.

  13. Measuring Auditory Selective Attention using Frequency Tagging

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Full Text Available Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory
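
    Frequency tagging rests on reading out response power at each stream's modulation frequency. A minimal sketch on a simulated response; the tagging frequencies, amplitudes, and noise level are illustrative assumptions, not values from the study.

```python
import numpy as np

FS = 1000  # sampling rate (Hz); all signal parameters are illustrative

def tagged_power(signal, tag_hz, fs=FS):
    """Response power at a tagging (modulation) frequency."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return float(spec[np.argmin(np.abs(freqs - tag_hz))])  # nearest FFT bin

# Simulated response: attended stream tagged at 40 Hz (enhanced),
# ignored stream tagged at 35 Hz (suppressed), plus broadband noise
rng = np.random.default_rng(3)
t = np.arange(2 * FS) / FS
resp = (1.0 * np.sin(2 * np.pi * 40 * t)
        + 0.3 * np.sin(2 * np.pi * 35 * t)
        + 0.5 * rng.standard_normal(len(t)))
attended = tagged_power(resp, 40.0)
ignored = tagged_power(resp, 35.0)
```

Comparing power at the two tags indexes which stream the (simulated) response is phase locked to, which is the logic behind the ASSR analysis above.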

  14. Effects of sequential streaming on auditory masking using psychoacoustics and auditory evoked potentials.

    Science.gov (United States)

    Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat

    2012-03-01

    The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed that adding precursors comprising only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects, preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands may also contribute to the reduction of CMR in the streaming conditions; however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in the EEG data using the MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in the CMR experiments. Dipole source analysis indicated different cortical regions involved in processing auditory streaming and modulation detection. In particular, neural sources for processing auditory streaming include cortical regions involved in decision-making.

  15. Spatial Olfactory Learning Contributes to Place Field Formation in the Hippocampus

    Science.gov (United States)

    Zhang, Sijie; Manahan-Vaughan, Denise

    2015-01-01

    Spatial encoding in the hippocampus is multifactorial, and it is well established that metric information about space is conferred by place cells that fire when an animal finds itself in a specific environmental location. Visuospatial contexts comprise a key element in the formation of place fields. Nevertheless, the hippocampus does not only use visual cues to generate spatial representations. In the absence of visual input, both humans and other vertebrates studied in this context are capable of generating very effective spatial representations. However, little is known about the relationship between nonvisual sensory modalities and the establishment of place fields. Substantial evidence exists that olfactory information can be used to learn spatial contexts. Here, we report that learning about a distinct odor constellation in an environment, where visual and auditory cues are suppressed, results in stable place fields that rotate when the odor constellations are rotated and remap when the odor constellations are shuffled. These data indicate that the hippocampus can use nonvisuospatial resources, and specifically spatial olfactory information, to generate spatial representations. Despite the less precise nature of olfactory stimuli compared with visual stimuli, these can substitute for visual inputs to enable the acquisition of metric information about space. PMID:24008582

  16. Listeners use speaker identity to access representations of spatial perspective during online language comprehension.

    Science.gov (United States)

    Ryskin, Rachel A; Wang, Ranxiao Frances; Brown-Schmidt, Sarah

    2016-02-01

    Little is known about how listeners represent another person's spatial perspective during language processing (e.g., two people looking at a map from different angles). Can listeners use contextual cues such as speaker identity to access a representation of the interlocutor's spatial perspective? In two eye-tracking experiments, participants received auditory instructions to move objects around a screen from two randomly alternating spatial perspectives (45° vs. 315° or 135° vs. 225° rotations from the participant's viewpoint). Instructions were spoken either by one voice, where the speaker's perspective switched at random, or by two voices, where each speaker maintained one perspective. Analysis of participant eye-gaze showed that interpretation of the instructions improved when each viewpoint was associated with a different voice. These findings demonstrate that listeners can learn mappings between individual talkers and viewpoints, and use these mappings to guide online language processing.

  17. Sensitivity of cochlear nucleus neurons to spatio-temporal changes in auditory nerve activity.

    Science.gov (United States)

    Wang, Grace I; Delgutte, Bertrand

    2012-12-01

    The spatio-temporal pattern of auditory nerve (AN) activity, representing the relative timing of spikes across the tonotopic axis, contains cues to perceptual features of sounds such as pitch, loudness, timbre, and spatial location. These spatio-temporal cues may be extracted by neurons in the cochlear nucleus (CN) that are sensitive to relative timing of inputs from AN fibers innervating different cochlear regions. One possible mechanism for this extraction is "cross-frequency" coincidence detection (CD), in which a central neuron converts the degree of coincidence across the tonotopic axis into a rate code by preferentially firing when its AN inputs discharge in synchrony. We used Huffman stimuli (Carney LH. J Neurophysiol 64: 437-456, 1990), which have a flat power spectrum but differ in their phase spectra, to systematically manipulate relative timing of spikes across tonotopically neighboring AN fibers without changing overall firing rates. We compared responses of CN units to Huffman stimuli with responses of model CD cells operating on spatio-temporal patterns of AN activity derived from measured responses of AN fibers with the principle of cochlear scaling invariance. We used the maximum likelihood method to determine the CD model cell parameters most likely to produce the measured CN unit responses, and thereby could distinguish units behaving like cross-frequency CD cells from those consistent with same-frequency CD (in which all inputs would originate from the same tonotopic location). We find that certain CN unit types, especially those associated with globular bushy cells, have responses consistent with cross-frequency CD cells. A possible functional role of a cross-frequency CD mechanism in these CN units is to increase the dynamic range of binaural neurons that process cues for sound localization.
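
    A cross-frequency coincidence-detector cell of the kind modeled here can be caricatured as a unit that fires whenever enough tonotopic input channels spike within a short window. This toy sketch is not the authors' model (their CD cells were fit to measured AN responses via maximum likelihood); the threshold, window, and spike patterns are arbitrary.

```python
import numpy as np

def coincidence_output(spike_trains, threshold=3, window=1):
    """Toy cross-frequency coincidence detector: the cell 'fires' in a time
    bin when at least `threshold` input channels (rows: tonotopic AN inputs,
    columns: time bins) spike within `window` bins of that bin."""
    trains = np.asarray(spike_trains)
    padded = np.pad(trains, ((0, 0), (window, window)))
    active = np.zeros(trains.shape, dtype=int)
    for k in range(2 * window + 1):              # slide over +/- window bins
        active += padded[:, k:k + trains.shape[1]]
    counts = (active > 0).sum(axis=0)            # channels active near each bin
    return (counts >= threshold).astype(int)

# Three AN channels: synchronous spikes in bin 5, dispersed spikes elsewhere
an = np.array([[0, 1, 0, 0, 0, 1, 0, 0],
               [0, 0, 0, 1, 0, 1, 0, 0],
               [0, 0, 0, 0, 1, 1, 0, 0]])
out = coincidence_output(an, threshold=3, window=0)
```

The output rate thus rises only where spikes coincide across channels, converting the degree of cross-channel synchrony into a rate code.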

  18. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    Science.gov (United States)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  19. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  20. The Relationship between the Field-Shifting Phenomenon and Representational Coherence of Place Cells in CA1 and CA3 in a Cue-Altered Environment

    Science.gov (United States)

    Lee, Inah; Knierim, James J.

    2007-01-01

    Subfields of the hippocampus display differential dynamics in processing a spatial environment, especially when changes are introduced to the environment. Specifically, when familiar cues in the environment are spatially rearranged, place cells in the CA3 subfield tend to rotate with a particular set of cues (e.g., proximal cues), maintaining a…

  1. Auditory short-term memory activation during score reading.

    Directory of Open Access Journals (Sweden)

    Veerle L Simoens

    Full Text Available Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion: during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.

  2. Visuospatial information processing load and the ratio between parietal cue and target P3 amplitudes in the Attentional Network Test.

    Science.gov (United States)

    Abramov, Dimitri M; Pontes, Monique; Pontes, Adailton T; Mourao-Junior, Carlos A; Vieira, Juliana; Quero Cunha, Carla; Tamborino, Tiago; Galhanone, Paulo R; deAzevedo, Leonardo C; Lazarev, Vladimir V

    2017-03-20

In ERP studies of cognitive processes during attentional tasks, cue signals containing information about the target can increase the amplitude of the parietal cue P3 relative to a 'neutral' temporal cue, and reduce the subsequent target P3 when this information is valid, i.e., corresponds to the target's attributes. The present study compared the cue-to-target P3 ratios in neutral and visuospatial cueing, in order to estimate the contribution of valid visuospatial information from the cue to the target stages of task performance, in terms of cognitive load. The P3 characteristics were also correlated with individuals' performance on the visuospatial tasks, in order to estimate the relationship of the observed ERPs with spatial reasoning. In 20 typically developing boys, aged 10-13 years (11.3±0.86), the intelligence quotient (IQ) was estimated by the Block Design and Vocabulary subtests from the WISC-III. The subjects performed the Attentional Network Test (ANT) accompanied by EEG recording. The cued two-choice task had three equiprobable cue conditions: No cue, with no information about the target; Neutral (temporal) cue, with an asterisk in the center of the visual field, predicting the target onset; and Spatial cue, with an asterisk in the upper or lower hemifield, predicting both the onset and the corresponding location of the target. The ERPs were estimated for the mid-frontal (Fz) and mid-parietal (Pz) scalp derivations. At Pz, the Neutral cue P3 had a lower amplitude than the Spatial cue P3, whereas for the target ERPs, the P3 of the Neutral cue condition was larger than that of the Spatial cue condition. However, the sums of the magnitudes of the cue and target P3 were equal in the spatial and neutral cueing, probably indicating that in both cases an equivalent information processing load is included in either the cue or the target reaction, respectively. Meanwhile, at Fz, the analogous ERP components for both the cue and target

  3. Theta oscillations accompanying concurrent auditory stream segregation.

    Science.gov (United States)

    Tóth, Brigitta; Kocsis, Zsuzsanna; Urbán, Gábor; Winkler, István

    2016-08-01

The ability to isolate a single sound source among concurrent sources is crucial for veridical auditory perception. The present study investigated the event-related oscillations evoked by complex tones that could be perceived as a single sound, and by tonal complexes with cues promoting the perception of two concurrent sounds through inharmonicity, onset asynchrony, and/or perceived source location differences of the component tones. In separate task conditions, participants performed a visual change detection task (visual control), watched a silent movie (passive listening), or reported for each tone whether they perceived one or two concurrent sounds (active listening). The amplitude of theta oscillation was modulated by the presence vs. absence of the cues in two time windows: 60-350 ms / 6-8 Hz (early) and 350-450 ms / 4-8 Hz (late). The early response appeared in both the passive and the active listening conditions, did not closely match task performance, and had a fronto-central scalp distribution. The late response was elicited only in the active listening condition, closely matched task performance, and had a centro-parietal scalp distribution. The neural processes reflected by these responses are probably involved in the processing of concurrent sound segregation cues, in sound categorization, and in response preparation and monitoring. The current results are compatible with the notion that theta oscillations mediate some of the processes involved in concurrent sound segregation.

  4. Spatial cognition

    Science.gov (United States)

    Kaiser, Mary Kister; Remington, Roger

    1988-01-01

    Spatial cognition is the ability to reason about geometric relationships in the real (or a metaphorical) world based on one or more internal representations of those relationships. The study of spatial cognition is concerned with the representation of spatial knowledge, and our ability to manipulate these representations to solve spatial problems. Spatial cognition is utilized most critically when direct perceptual cues are absent or impoverished. Examples are provided of how human spatial cognitive abilities impact on three areas of space station operator performance: orientation, path planning, and data base management. A videotape provides demonstrations of relevant phenomena (e.g., the importance of orientation for recognition of complex, configural forms). The presentation is represented by abstract and overhead visuals only.

  5. Auditory Responses of Infants

    Science.gov (United States)

    Watrous, Betty Springer; And Others

    1975-01-01

    Forty infants, 3- to 12-months-old, participated in a study designed to differentiate the auditory response characteristics of normally developing infants in the age ranges 3 - 5 months, 6 - 8 months, and 9 - 12 months. (Author)

  6. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences

    Directory of Open Access Journals (Sweden)

Stephan Getzmann

    2014-12-01

Full Text Available Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called cocktail-party problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.

  7. The Influence of Visual Cues on Sound Externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens;

    Background: The externalization of virtual sounds reproduced via binaural headphone-based auralization systems has been reported to be less robust when the listening environment differs from the room in which binaural room impulse responses (BRIRs) were recorded. It has been debated whether...... this is due to incongruent auditory cues between the recording and playback room during sound reproduction or to an expectation effect from the visual impression of the room. This study investigated the influence of a priori acoustic and visual knowledge of the playback room on sound externalization...... the more reverberant the listening environment was. While the visual impression of the playback room did not affect perceived distance, visual cues helped resolve localization ambiguities and improved compactness perception....

  8. Auditory localisation of conventional and electric cars : laboratory results and implications for cycling safety

    NARCIS (Netherlands)

Stelling-Konczak, A.; Hagenzieker, M.P.; Commandeur, J.J.F.; Agterberg, M.J.H. & Wee, B. van

    2016-01-01

When driven at low speeds, cars operating in electric mode have been found to be quieter than conventional cars. As a result, the auditory cues which pedestrians and cyclists use to assess the presence, proximity, and location of oncoming traffic may be reduced, posing a safety hazard. This laboratory s

  10. The Power Cues (权力线索)

    Institute of Scientific and Technical Information of China (English)

    魏秋江

    2012-01-01

Power cues are the various kinds of information on which people rely to judge power, and they can predict people's thinking and behavior. Besides directly influencing power perception in the form of visual and auditory stimuli, power cues can also influence power judgments indirectly through people's mental representations of power in space and number. The specific effects of the various power cues remain disputed. Scholars have begun to address the verification, classification, and standardization of existing cues, to validate them from a physiological perspective, and to search for new power cues. Power cues are the internal and external stimuli that people utilize to judge the power of others and themselves. Recognizing people's power is a basic element of social and organizational life: it reduces the likelihood of conflicts within and between groups and supports the effective allocation of resources. Recognizing power is also important for self-reinforcement and self-definition. Power cues are not only statements of a target's power but can also be used to predict people's minds and behaviors. Generally speaking, there are two kinds of encoding, visual and auditory, for the input information. Visual encoding includes appearance, such as the structure of the face, and behaviors, especially non-verbal behaviors, which often occur without awareness but indicate people's power more accurately. Auditory encoding includes several parameters of sound, such as formant dispersion (Df), fundamental frequency (F0), variation in F0, intensity, and utterance duration. Some kinds of messages, such as semantic content, differ across both channels and connect with power at a higher level of cognition. From these three viewpoints, more cues remain to be explored. Surprisingly, there is another, odd factor: gender. Research related to it reveals a diversity of results, so gender is more of a moderator than a definite power cue, which calls for more attention to interaction effects. Besides, the mental representation of power, which involves mental simulation of space

  11. Effects of localized auditory information on visual target detection performance using a helmet-mounted display.

    Science.gov (United States)

    Nelson, W T; Hettinger, L J; Cunningham, J A; Brickman, B J; Haas, M W; McKinley, R L

    1998-09-01

    An experiment was conducted to evaluate the effects of localized auditory information on visual target detection performance. Visual targets were presented on either a wide field-of-view dome display or a helmet-mounted display and were accompanied by either localized, nonlocalized, or no auditory information. The addition of localized auditory information resulted in significant increases in target detection performance and significant reductions in workload ratings as compared with conditions in which auditory information was either nonlocalized or absent. Qualitative and quantitative analyses of participants' head motions revealed that the addition of localized auditory information resulted in extremely efficient and consistent search strategies. Implications for the development and design of multisensory virtual environments are discussed. Actual or potential applications of this research include the use of spatial auditory displays to augment visual information presented in helmet-mounted displays, thereby leading to increases in performance efficiency, reductions in physical and mental workload, and enhanced spatial awareness of objects in the environment.

  12. Perception of aircraft Deviation Cues

    Science.gov (United States)

    Martin, Lynne; Azuma, Ronald; Fox, Jason; Verma, Savita; Lozito, Sandra

    2005-01-01

To begin to address the need for new displays, required by a future airspace concept to support new roles that will be assigned to flight crews, a study of potentially informative display cues was undertaken. Two cues were tested on a simple plan display - aircraft trajectory and flight corridor. Of particular interest was the speed and accuracy with which participants could detect an aircraft deviating outside its flight corridor. Presence of the trajectory cue significantly reduced participant reaction time to a deviation while the flight corridor cue did not. Although non-significant, the flight corridor cue seemed to have a relationship with the accuracy of participants' judgments rather than their speed. As this is the second of a series of studies, these issues will be addressed further in future studies.

  13. Responses of mink to auditory stimuli: Prerequisites for applying the ‘cognitive bias’ approach

    DEFF Research Database (Denmark)

    Svendsen, Pernille Maj; Malmkvist, Jens; Halekoh, Ulrich

    2012-01-01

    The aim of the study was to determine and validate prerequisites for applying a cognitive (judgement) bias approach to assessing welfare in farmed mink (Neovison vison). We investigated discrimination ability and associative learning ability using auditory cues. The mink (n = 15 females) were...... mink only showed habituation in experiment 2. Regardless of the frequency used (2 and 18 kHz), cues predicting the danger situation initially elicited slower responses compared to those predicting the safe situation but quickly became faster. Using auditory cues as discrimination stimuli for female...... farmed mink in a judgement bias approach would thus appear to be feasible. However several specific issues are to be considered in order to successfully adapt a cognitive bias approach to mink, and these are discussed....

  14. When they listen and when they watch: Pianists’ use of nonverbal audio and visual cues during duet performance

    Science.gov (United States)

    Goebl, Werner

    2015-01-01

    Nonverbal auditory and visual communication helps ensemble musicians predict each other’s intentions and coordinate their actions. When structural characteristics of the music make predicting co-performers’ intentions difficult (e.g., following long pauses or during ritardandi), reliance on incoming auditory and visual signals may change. This study tested whether attention to visual cues during piano–piano and piano–violin duet performance increases in such situations. Pianists performed the secondo part to three duets, synchronizing with recordings of violinists or pianists playing the primo parts. Secondos’ access to incoming audio and visual signals and to their own auditory feedback was manipulated. Synchronization was most successful when primo audio was available, deteriorating when primo audio was removed and only cues from primo visual signals were available. Visual cues were used effectively following long pauses in the music, however, even in the absence of primo audio. Synchronization was unaffected by the removal of secondos’ own auditory feedback. Differences were observed in how successfully piano–piano and piano–violin duos synchronized, but these effects of instrument pairing were not consistent across pieces. Pianists’ success at synchronizing with violinists and other pianists is likely moderated by piece characteristics and individual differences in the clarity of cueing gestures used. PMID:26279610

  15. Show me your opinion : Perceptual cues in creating and reading argument diagrams

    NARCIS (Netherlands)

    van Amelsvoort, Marije; Maes, Alfons

    2016-01-01

In argument diagrams, perceptual cues are important to aid understanding. However, we do not know what perceptual cues are used and produced to aid understanding. We present two studies in which we investigate (1) which spatial, graphical and textual elements people spontaneously use in creating f

  16. Auditory and visual scene analysis: an overview

    Science.gov (United States)

    2017-01-01

    We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044011

  17. Missing a trick: Auditory load modulates conscious awareness in audition.

    Science.gov (United States)

    Fairnie, Jake; Moore, Brian C J; Remington, Anna

    2016-07-01

    In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing. (PsycINFO Database Record

  18. Proportional spike-timing precision and firing reliability underlie efficient temporal processing of periodicity and envelope shape cues.

    Science.gov (United States)

    Zheng, Y; Escabí, M A

    2013-08-01

Temporal sound cues are essential for sound recognition and for pitch, rhythm, and timbre perception, yet how auditory neurons encode such cues is the subject of ongoing debate. Rate coding theories propose that temporal sound features are represented by rate-tuned modulation filters. However, overwhelming evidence also suggests that precise spike timing is an essential attribute of the neural code. Here we demonstrate that single neurons in the auditory midbrain employ a proportional code in which spike-timing precision and firing reliability covary with the sound envelope cues to provide an efficient representation of the stimulus. Spike-timing precision varied systematically with the timescale and shape of the sound envelope and yet was largely independent of the sound modulation frequency, a prominent cue for pitch. In contrast, spike-count reliability was strongly affected by the modulation frequency. Spike-timing precision extends from sub-millisecond for brief transient sounds up to tens of milliseconds for sounds with slowly varying envelopes. Information theoretic analysis further confirms that spike-timing precision depends strongly on the sound envelope shape, while firing reliability is strongly affected by the sound modulation frequency. Both the information efficiency and the total information were limited by the firing reliability and spike-timing precision in a manner that reflected the sound structure. This result supports a temporal coding strategy in the auditory midbrain whereby proportional changes in spike-timing precision and firing reliability can efficiently signal shape and periodicity temporal cues.
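The two quantities at the heart of the abstract above can be illustrated with a minimal sketch. This is not the authors' analysis: it simply computes spike-timing precision as the across-trial jitter (standard deviation) of first-spike latencies, and firing reliability as the fraction of trials containing at least one spike, on made-up data.

```python
# Illustrative sketch (assumed, not the paper's method): summary statistics
# in the spirit of "spike-timing precision" and "firing reliability".
import statistics

def timing_jitter_ms(first_spike_times_ms):
    """Across-trial SD of first-spike latency; lower jitter = higher precision."""
    return statistics.stdev(first_spike_times_ms)

def firing_reliability(trials):
    """Fraction of trials in which at least one spike occurred."""
    return sum(1 for t in trials if t) / len(trials)

# Hypothetical spike times (ms) for five repeated stimulus presentations.
trials = [[12.1, 30.5], [11.9], [], [12.4, 29.8], [12.0]]
first_spikes = [t[0] for t in trials if t]

print(firing_reliability(trials))                 # 0.8
print(round(timing_jitter_ms(first_spikes), 3))   # sub-millisecond jitter
```

A proportional code in the abstract's sense would show this jitter growing with the envelope timescale while the reliability tracks modulation frequency.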

  19. Feasibility of external rhythmic cueing with the Google Glass for improving gait in people with Parkinson's disease.

    Science.gov (United States)

    Zhao, Yan; Nonnekes, Jorik; Storcken, Erik J M; Janssen, Sabine; van Wegen, Erwin E H; Bloem, Bastiaan R; Dorresteijn, Lucille D A; van Vugt, Jeroen P P; Heida, Tjitske; van Wezel, Richard J A

    2016-06-01

    New mobile technologies like smartglasses can deliver external cues that may improve gait in people with Parkinson's disease in their natural environment. However, the potential of these devices must first be assessed in controlled experiments. Therefore, we evaluated rhythmic visual and auditory cueing in a laboratory setting with a custom-made application for the Google Glass. Twelve participants (mean age = 66.8; mean disease duration = 13.6 years) were tested at end of dose. We compared several key gait parameters (walking speed, cadence, stride length, and stride length variability) and freezing of gait for three types of external cues (metronome, flashing light, and optic flow) and a control condition (no-cue). For all cueing conditions, the subjects completed several walking tasks of varying complexity. Seven inertial sensors attached to the feet, legs and pelvis captured motion data for gait analysis. Two experienced raters scored the presence and severity of freezing of gait using video recordings. User experience was evaluated through a semi-open interview. During cueing, a more stable gait pattern emerged, particularly on complicated walking courses; however, freezing of gait did not significantly decrease. The metronome was more effective than rhythmic visual cues and most preferred by the participants. Participants were overall positive about the usability of the Google Glass and willing to use it at home. Thus, smartglasses like the Google Glass could be used to provide personalized mobile cueing to support gait; however, in its current form, auditory cues seemed more effective than rhythmic visual cues.

  20. Owl monkeys (Aotus nigriceps and A. infulatus) follow routes instead of food-related cues during foraging in captivity.

    Science.gov (United States)

    da Costa, Renata Souza; Bicca-Marques, Júlio César

    2014-01-01

    Foraging at night imposes different challenges from those faced during daylight, including the reliability of sensory cues. Owl monkeys (Aotus spp.) are ideal models among anthropoids to study the information used during foraging at low light levels because they are unique by having a nocturnal lifestyle. Six Aotus nigriceps and four A. infulatus individuals distributed into five enclosures were studied for testing their ability to rely on olfactory, visual, auditory, or spatial and quantitative information for locating food rewards and for evaluating the use of routes to navigate among five visually similar artificial feeding boxes mounted in each enclosure. During most experiments only a single box was baited with a food reward in each session. The baited box changed randomly throughout the experiment. In the spatial and quantitative information experiment there were two baited boxes varying in the amount of food provided. These baited boxes remained the same throughout the experiment. A total of 45 sessions (three sessions per night during 15 consecutive nights) per enclosure was conducted in each experiment. Only one female showed a performance suggestive of learning of the usefulness of sight to locate the food reward in the visual information experiment. Subjects showed a chance performance in the remaining experiments. All owl monkeys showed a preference for one box or a subset of boxes to inspect upon the beginning of each experimental session and consistently followed individual routes among feeding boxes.

  1. Early visual deprivation severely compromises the auditory sense of space in congenitally blind children.

    Science.gov (United States)

    Vercillo, Tiziana; Burr, David; Gori, Monica

    2016-06-01

    A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind children (9 to 14 years old). Children performed 2 spatial tasks (minimum audible angle and space bisection) and 1 temporal task (temporal bisection). There was no impairment in the temporal task for blind children but, like adults, they showed severely compromised thresholds for spatial bisection. Interestingly, the blind children also showed lower precision in judging minimum audible angle. These results confirm the adult study and go on to suggest that even simpler auditory spatial tasks are compromised in children, and that this capacity recovers over time. (PsycINFO Database Record

  2. Representation of lateralization and tonotopy in primary versus secondary human auditory cortex

    NARCIS (Netherlands)

    Langers, Dave R. M.; Backes, Walter H.; van Dijk, Pim

    2007-01-01

Functional MRI was performed to investigate differences in the basic functional organization of the primary and secondary auditory cortex regarding preferred stimulus lateralization and frequency. A modified sparse acquisition scheme was used to spatially map the characteristics of the auditory cort

  3. Auditory processing in the brainstem and audiovisual integration in humans studied with fMRI

    NARCIS (Netherlands)

    Slabu, Lavinia Mihaela

    2008-01-01

    Functional magnetic resonance imaging (fMRI) is a powerful technique because of the high spatial resolution and the noninvasiveness. The applications of the fMRI to the auditory pathway remain a challenge due to the intense acoustic scanner noise of approximately 110 dB SPL. The auditory system cons

  4. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new si

  5. Virtual Auditory Displays

    Science.gov (United States)

    2000-01-01

Keywords: timbre, intensity, distance, room modeling, radio communication. Virtual Environments Handbook, Chapter 4: Virtual Auditory Displays, Russell D... If a violin plays the musical note "A" as a pure sinusoid, there will be 440 condensations and rarefactions per second. The distance between two adjacent condensations or... and complexity are pitch, loudness, and timbre, respectively. This distinction between physical and perceptual measures of sound properties is an
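The arithmetic behind the 440 Hz example can be sketched directly: the time between successive condensations is the period 1/f, and their spatial separation is the wavelength c/f. The speed of sound used here (343 m/s, air at roughly 20 degrees C) is an assumed value, not taken from the chapter.

```python
# Sketch of the physical quantities for a 440 Hz pure tone ("A").
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (assumed)

def period_s(frequency_hz: float) -> float:
    """Time between successive condensations, in seconds."""
    return 1.0 / frequency_hz

def wavelength_m(frequency_hz: float, c: float = SPEED_OF_SOUND) -> float:
    """Distance between two adjacent condensations, in metres."""
    return c / frequency_hz

print(round(period_s(440.0) * 1000, 3))  # period in milliseconds
print(round(wavelength_m(440.0), 3))     # wavelength in metres
```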

  6. The neglected neglect: auditory neglect.

    Science.gov (United States)

    Gokhale, Sankalp; Lahoti, Sourabh; Caplan, Louis R

    2013-08-01

    Whereas visual and somatosensory forms of neglect are commonly recognized by clinicians, auditory neglect is often not assessed and therefore neglected. The auditory cortical processing system can be functionally classified into 2 distinct pathways. These 2 distinct functional pathways deal with recognition of sound ("what" pathway) and the directional attributes of the sound ("where" pathway). Lesions of higher auditory pathways produce distinct clinical features. Clinical bedside evaluation of auditory neglect is often difficult because of coexisting neurological deficits and the binaural nature of auditory inputs. In addition, auditory neglect and auditory extinction may show varying degrees of overlap, which makes the assessment even harder. Shielding one ear from the other as well as separating the ear from space is therefore critical for accurate assessment of auditory neglect. This can be achieved by use of specialized auditory tests (dichotic tasks and sound localization tests) for accurate interpretation of deficits. Herein, we have reviewed auditory neglect with an emphasis on the functional anatomy, clinical evaluation, and basic principles of specialized auditory tests.

  7. Evaluation of multimodal ground cues

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Lecuyer, Anatole; Serafin, Stefania

    2012-01-01

This chapter presents an array of results on the perception of ground surfaces via multiple sensory modalities, with special attention to non-visual perceptual cues, notably those arising from audition and haptics, as well as interactions between them. It also reviews approaches to combining synthetic multimodal cues, from vision, haptics, and audition, in order to realize virtual experiences of walking on simulated ground surfaces or other features.

  8. Resolving conflicting views: Gaze and arrow cues do not trigger rapid reflexive shifts of attention.

    Science.gov (United States)

    Green, Jessica J; Gamble, Marissa L; Woldorff, Marty G

    2013-01-01

    It has become widely accepted that the direction of another individual's eye gaze induces rapid, automatic, attentional orienting, due to it being such a vital cue as to where in our environment we should attend. This automatic orienting has also been associated with the directional-arrow cues used in studies of spatial attention. Here, we present evidence that the response-time cueing effects reported for spatially non-predictive gaze and arrow cues are not the result of rapid, automatic shifts of attention. For both cue types, response-time effects were observed only for long-duration cue and target stimuli that overlapped temporally, were largest when the cues were presented simultaneously with the response-relevant target, and were driven by a slowing of responses for invalidly cued targets rather than speeding for validly cued ones. These results argue against automatic attention-orienting accounts and support a novel spatial-incongruency explanation for a whole class of rapid behavioral cueing effects.

  9. Human Perception of Ambiguous Inertial Motion Cues

    Science.gov (United States)

    Zhang, Guan-Lu

    2010-01-01

Human daily activities on Earth involve motions that elicit both tilt and translation components of the head (i.e., gazing and locomotion). With otolith cues alone, tilt and translation can be ambiguous, since both motions can potentially displace the otolithic membrane by the same magnitude and direction. Transitions between gravity environments (i.e., Earth, microgravity, and lunar) have been shown to alter the functions of the vestibular system and exacerbate the ambiguity between tilt and translational motion cues. Symptoms of motion sickness and spatial disorientation can impair human performance during critical mission phases. Specifically, Space Shuttle landing records show that particular cases of tilt-translation illusions have impaired the performance of seasoned commanders. This sensorimotor condition is one of many operational risks that may have dire implications for future human space exploration missions. The neural strategy with which the human central nervous system distinguishes ambiguous inertial motion cues remains the subject of intense research. A prevailing theory in the neuroscience field proposes that the human brain is able to formulate a neural internal model of ambiguous motion cues such that tilt and translation components can be perceptually decomposed in order to elicit the appropriate bodily response. The present work uses this theory, known as the GIF resolution hypothesis, as the framework for the experimental hypotheses. Specifically, two novel motion paradigms are employed to validate the neural capacity for ambiguous inertial motion decomposition in ground-based human subjects. The experimental setup involves the Tilt-Translation Sled at the Neuroscience Laboratory of NASA JSC. This two degree-of-freedom motion system is able to tilt subjects in the pitch plane and translate them along the fore-aft axis. Perception data will be gathered through subject verbal reports. Preliminary analysis of perceptual data does not indicate that

  10. The influence of imagery vividness on cognitive and perceptual cues in circular auditorily-induced vection

    Directory of Open Access Journals (Sweden)

    Aleksander Väljamäe

    2014-12-01

    Full Text Available In the absence of other congruent multisensory motion cues, sound contribution to illusions of self-motion (vection) is relatively weak and often attributed to purely cognitive, top-down processes. The present study addressed the influence of cognitive and perceptual factors in the experience of circular, yaw auditorily-induced vection (AIV), focusing on participants' imagery vividness scores. We used different rotating sound sources (acoustic landmark vs. movable types) and their filtered versions that provided different binaural cues (interaural time or level differences, ITD vs. ILD) when delivered via a loudspeaker array. The significant differences in circular vection intensity showed that (1) AIV was stronger for rotating sound fields containing auditory landmarks as compared to movable sound objects; (2) ITD-based acoustic cues were more instrumental than ILD-based ones for horizontal AIV; and (3) individual differences in imagery vividness significantly influenced the effects of contextual and perceptual cues. While participants with high scores of kinesthetic and visual imagery were helped by vection-"rich" cues, i.e., acoustic landmarks and ITD cues, the participants from the low-vivid imagery group did not benefit from these cues automatically. Only when specifically asked to use their imagination intentionally did these external cues start influencing vection sensation in a similar way to high-vivid imagers. These findings are in line with recent fMRI work which suggested that high-vivid imagers employ automatic, almost unconscious mechanisms in imagery generation, while low-vivid imagers rely on a more schematic and conscious framework. Consequently, our results provide additional insight into the interaction between perceptual and contextual cues when experiencing purely auditorily or multisensorily induced vection.
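The two binaural cues contrasted in this record, ITD and ILD, can be illustrated with a standard textbook approximation. The sketch below uses the classic Woodworth spherical-head formula for ITD; the head radius and speed of sound are generic assumed constants, not values from the study:

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average adult head radius (not from the paper)
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degrees C

def woodworth_itd(azimuth_deg):
    """Interaural time difference (s) for a distant source at the given
    azimuth, using the Woodworth spherical-head approximation:
    ITD = (a / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source straight ahead gives no ITD; a source at 90 degrees gives the
# maximum, roughly 0.66 ms for this head radius.
print(woodworth_itd(0.0))
print(woodworth_itd(90.0))
```

ILD, by contrast, arises from head shadowing and grows with frequency, which is why the two cues can be separated by filtering the stimuli, as the study did.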

  11. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex.

    Science.gov (United States)

    Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate "auditory objects" with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas.
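The rate-place argument in this abstract rests on the fact that two harmonic series with different F0s occupy mostly distinct frequencies in the resolved, low-harmonic region. A toy sketch of that point; the specific F0 values (100 and 126 Hz) are illustrative assumptions, not the stimulus parameters of the study:

```python
def harmonics(f0, fmax=3000.0):
    """Harmonic frequencies of a periodic sound with fundamental f0."""
    freqs = []
    k = 1
    while k * f0 <= fmax:
        freqs.append(k * f0)
        k += 1
    return freqs

# Two concurrent "vowels" whose assumed F0s differ by roughly four semitones.
va = harmonics(100.0)   # e.g. /a/ at 100 Hz: 100, 200, 300, ...
vi = harmonics(126.0)   # e.g. /i/ at 126 Hz: 126, 252, 378, ...

# Low-order harmonics of the two vowels fall at distinct frequencies,
# so a rate-place (tonotopic) code can resolve them separately.
low_a = [f for f in va if f < 1000]
low_i = [f for f in vi if f < 1000]
print(sorted(set(low_a) & set(low_i)))  # -> [] (no shared low harmonics)
```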

  12. Tuned with a tune: Talker normalization via general auditory processes

    Directory of Open Access Journals (Sweden)

    Erika J C Laing

    2012-06-01

    Full Text Available Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker’s speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS of a talker’s speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences’ LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by nonspeech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.
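The long-term average spectrum (LTAS) invoked here is simply a magnitude spectrum averaged over many short frames of a signal. A generic numpy sketch of the idea; the frame length, hop, and windowing are arbitrary choices, not the analysis settings of the study:

```python
import numpy as np

def ltas(signal, sr, frame_len=1024, hop=512):
    """Long-term average spectrum: magnitude spectra of Hann-windowed
    frames, averaged over the whole signal."""
    win = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * win
              for i in range(0, len(signal) - frame_len + 1, hop)]
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    return freqs, spectra.mean(axis=0)

# A talker-like signal with most energy near 500 Hz yields an LTAS
# peaking near that frequency.
sr = 16000
rng = np.random.default_rng(0)
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 500.0 * t) + 0.1 * rng.standard_normal(sr)
freqs, spectrum = ltas(sig, sr)
print(freqs[np.argmax(spectrum)])  # close to 500 Hz
```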

  13. Neural encoding of auditory discrimination in ventral premotor cortex

    Science.gov (United States)

    Lemus, Luis; Hernández, Adrián; Romo, Ranulfo

    2009-01-01

    Monkeys have the capacity to accurately discriminate the difference between two acoustic flutter stimuli. In this task, monkeys must compare information about the second stimulus to the memory trace of the first stimulus, and must postpone the decision report until a sensory cue triggers the beginning of the decision motor report. The neuronal processes associated with the different components of this task have been investigated in the primary auditory cortex (A1); but A1 seems exclusively associated with the sensory, and not with the working memory and decision, components of this task. Here, we show that ventral premotor cortex (VPC) neurons reflect in their activity the current and remembered acoustic stimuli, their comparison, and the result of the animal's decision report. These results provide evidence that the neural dynamics of VPC are involved in the processing steps that link sensation and decision-making during auditory discrimination. PMID:19667191

  14. Auditory pathways: anatomy and physiology.

    Science.gov (United States)

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external ear, middle ear, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described.

  15. Animal models for auditory streaming.

    Science.gov (United States)

    Itatani, Naoya; Klump, Georg M

    2017-02-19

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons' response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.

  16. Maintaining realism in auditory length-perception experiments

    DEFF Research Database (Denmark)

    Kirkwood, Brent Christopher

    2005-01-01

    Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length-estimation experiment. A comparison of the length-estimation accuracy for the three presentation methods indicates that the choice of presentation method is important for maintaining realism and for maintaining the acoustic cues utilized by listeners in perceiving length.

  17. The influence of presentation method on auditory length perception

    DEFF Research Database (Denmark)

    Kirkwood, Brent Christopher

    2005-01-01

    Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length-estimation experiment. A comparison of the length-estimation accuracy for the three presentation methods indicates that the choice of presentation method is important for maintaining realism and for maintaining the acoustic cues utilized by listeners in perceiving length.

  19. Enhancement from targets and suppression from cues in fast task-irrelevant perceptual learning.

    Science.gov (United States)

    Leclercq, Virginie; Seitz, Aaron R

    2012-09-01

    Task-irrelevant perceptual learning (TIPL) refers to the phenomenon whereby stimuli irrelevant to a subject's task are learned when they are consistently presented at times when behaviorally relevant events occur. In this article, we address two points concerning TIPL. First, we ask whether all behaviorally relevant events are equal in their impact on encoding processes. Second, we test the hypothesis that TIPL involves mechanisms of the alerting attentional system. Two experiments of fast-TIPL were conducted in which the attentional state of participants was manipulated by using an alerting cue (visual or auditory) that informed participants of the arrival of an upcoming target. Images were presented with task-related stimuli (cues, targets, and distractors), and subjects were tested on their memory of those images. Results indicate that memory for target-paired images was enhanced, and memory for cue-paired images was suppressed, relative to that for distractor-paired images. The alerting cue increased the ability to recall target-paired images presented after this cue, although this result depended on the proportion of cued trials in a session. These results demonstrate a complex interplay between task elements and the encoding of stimuli paired with them, in which both enhancement and suppression of task-paired stimuli can be found depending on whether those stimuli are paired with task targets or cues.

  20. The Effects of Age and Preoral Sensorimotor Cues on Anticipatory Mouth Movement During Swallowing

    Science.gov (United States)

    Moon, Jerald B.; Goodman, Shawn S.

    2016-01-01

    Purpose The aim of this study was to investigate the effects of preoral sensorimotor cues on anticipatory swallowing/eating-related mouth movements in older and younger adults. It was hypothesized that these cues are essential to timing anticipatory oral motor patterns, and these movements are delayed in older as compared with younger adults. Method Using a 2 × 2 repeated-measures design, eating-related lip, jaw, and hand movements were recorded from 24 healthy older (ages 70–85 years) and 24 healthy younger (ages 18–30 years) adults under 4 conditions: typical self-feeding, typical assisted feeding (proprioceptive loss), sensory-loss self-feeding (auditory and visual loss/degradation), and sensory-loss assisted feeding (loss/degradation of all cues). Results All participants demonstrated anticipatory mouth opening. The absence of proprioception delayed lip-lowering onset, and sensory loss more negatively affected offset. Given at least 1 preoral sensorimotor cue, older adults initiated movement earlier than younger adults. Conclusions Preoral sensorimotor information influences anticipatory swallowing/eating-related mouth movements, highlighting the importance of these cues. Earlier movement in older adults may be a compensation, facilitating safe swallowing given other age-related declines. Further research is needed to determine if the negative impact of cue removal may be further exacerbated in a nonhealthy system (e.g., presence of dysphagia or disease), potentially increasing swallowing- and eating-related risks. PMID:26540553

  1. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer's work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory parameters, highlighting harmonious and balanced qualities while criticizing the noisy and cacophonous qualities of modern urban settings. This paper presents a reaffirmation of Schafer's central methodological claim: that environments can be analyzed through their sound, but offers considerations on the role... musicalized through electroacoustic equipment installed in shops, shopping streets, transit areas, etc. Urban noise no longer acts only as disturbance, but also structures and shapes the places and spaces in which urban life unfolds. Based on research done in Japanese shopping streets and in Copenhagen, the paper

  2. Event-related potentials in response to 3-D auditory stimuli.

    Science.gov (United States)

    Fuchigami, Tatsuo; Okubo, Osami; Fujita, Yukihiko; Kohira, Ryutaro; Arakawa, Chikako; Endo, Ayumi; Haruyama, Wakako; Imai, Yuki; Mugishima, Hideo

    2009-09-01

    To evaluate auditory spatial cognitive function, age correlations for event-related potentials (ERPs) in response to auditory stimuli with a Doppler effect were studied in normal children. A sound with a Doppler effect is perceived as a moving audio image. A total of 99 normal subjects (age range, 4-21 years) were tested. In the task-relevant oddball paradigm, P300 and key-press reaction time were elicited using auditory stimuli (1000 Hz fixed and enlarged tones with a Doppler effect). From the age of 4 years, the P300 latency for the enlarged tone with a Doppler effect shortened more rapidly with age than did the P300 latency for tone-pips, and the latencies for the different conditions became similar towards the late teens. The P300 of auditory stimuli with a Doppler effect may be used to evaluate auditory spatial cognitive function in children.
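The moving audio image in these stimuli rests on the standard Doppler relation for a moving source and stationary listener, f' = f · c / (c − v). A small sketch of that formula; the speeds and frequency used below are illustrative, not the study's stimulus parameters:

```python
SPEED_OF_SOUND = 343.0  # m/s in air, at roughly room temperature

def doppler_shift(f_source, v_source, c=SPEED_OF_SOUND):
    """Frequency heard by a stationary listener when the source moves
    directly toward (v_source > 0) or away from (v_source < 0) them:
    f' = f * c / (c - v)."""
    return f_source * c / (c - v_source)

# A 1000 Hz source approaching at 10 m/s sounds slightly sharp; the
# same source receding sounds slightly flat.
print(round(doppler_shift(1000.0, 10.0), 2))   # 1030.03
print(round(doppler_shift(1000.0, -10.0), 2))  # 971.67
```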

  3. Informative Cues Facilitate Saccadic Localization in Blindsight Monkeys

    Science.gov (United States)

    Yoshida, Masatoshi; Hafed, Ziad M.; Isa, Tadashi

    2017-01-01

    Patients with damage to the primary visual cortex (V1) demonstrate residual visual performance during laboratory tasks despite denying having a conscious percept. The mechanisms behind such performance, often called blindsight, are not fully understood, but the use of surgically-induced unilateral V1 lesions in macaque monkeys provides a useful animal model for exploring such mechanisms. For example, V1-lesioned monkeys localize stimuli in a forced-choice condition while at the same time failing to report awareness of identical stimuli in a yes-no detection condition, similar to human patients. Moreover, residual cognitive processes, including saliency-guided eye movements, bottom-up attention with peripheral non-informative cues, and spatial short-term memory, have all been demonstrated in these animals. Here we examined whether post-lesion residual visuomotor processing can be modulated by top-down task knowledge. We tested two V1-lesioned monkeys with a visually guided saccade task in which we provided an informative foveal pre-cue about upcoming target location. Our monkeys fixated while we presented a leftward or rightward arrow (serving as a pre-cue) superimposed on the fixation point (FP). After various cue-target onset asynchronies (CTOAs), a saccadic target (of variable contrast across trials) was presented either in the affected (contra-lesional) or seeing (ipsi-lesional) hemifield. Critically, target location was in the same hemifield that the arrow pre-cue pointed towards in 80% of the trials (valid-cue trials), making the cue highly useful for task performance. In both monkeys, correct saccade reaction times were shorter during valid than invalid trials. Moreover, in one monkey, the ratio of correct saccades towards the affected hemifield was higher during valid than invalid trials. We replicated both reaction time and correct ratio effects in the same monkey using a symbolic color cue. 
These results suggest that V1-lesioned monkeys can use informative

  5. Apparent auditory source width insensitivity in older hearing-impaired individuals.

    Science.gov (United States)

    Whitmer, William M; Seeber, Bernhard U; Akeroyd, Michael A

    2012-07-01

    Previous studies have shown a loss in the precision of horizontal localization responses of older hearing-impaired (HI) individuals, along with potentially poorer neural representations of sound-source location. These deficits could be the result or corollary of greater difficulty in discriminating spatial images and of insensitivity to punctate sound sources. This hypothesis was tested in three headphone-presentation experiments varying interaural coherence (IC), the cue most associated with apparent auditory source width. First, thresholds for differences in IC were measured for a broad sampling of participants. Older HI participants were significantly worse at discriminating IC across reference values than younger normal-hearing participants. These results are consistent with senescent increases in temporal jitter. Performance decreased with age, a finding corroborated in a second discrimination experiment using a separate group of participants matched for hearing loss. This group also completed a third, visual experiment, with both a cross-mapping task in which they drew the size of the sound they heard and an identification task in which they chose the image that best corresponded to what they heard. The results from the visual tasks indicate that older HI individuals do not hear punctate images and are relatively insensitive to changes in width based on IC.
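Interaural coherence, the cue manipulated in this record, is commonly operationalized as the maximum of the normalized cross-correlation between the two ear signals over a range of interaural lags. A sketch under that assumption; the exact definition and lag range used in the study may differ:

```python
import numpy as np

def interaural_coherence(left, right, max_lag=32):
    """Maximum absolute normalized cross-correlation between the two
    ear signals across interaural lags (in samples)."""
    left = (left - left.mean()) / left.std()
    right = (right - right.mean()) / right.std()
    n = len(left)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = left[lag:], right[:n - lag]
        else:
            a, b = left[:n + lag], right[-lag:]
        best = max(best, abs(np.dot(a, b) / len(a)))
    return best

rng = np.random.default_rng(1)
s = rng.standard_normal(8000)
identical = interaural_coherence(s, s)                       # ~1.0: punctate image
independent = interaural_coherence(s, rng.standard_normal(8000))  # near 0: wide image
print(round(identical, 2), independent < 0.2)
```

Lowering IC from 1 toward 0 is what widens the apparent source, which is the dimension along which the older HI listeners showed reduced sensitivity.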

  6. Auditory temporal resolution and integration - stages of analyzing time-varying sounds

    DEFF Research Database (Denmark)

    Pedersen, Benjamin

    2007-01-01

    ..., much is still unknown about how temporal information is analyzed and represented in the auditory system. The PhD lecture concerns the topic of temporal processing in hearing, and the topic is approached via four different listening experiments designed to probe several aspects of temporal processing... scheme: Effects such as attention seem to play an important role in loudness integration, and further, it will be demonstrated that the auditory system can rely on temporal cues at a much finer level of detail than predicted by existing models (temporal details in the time range of 60 μs can

  7. Compression of auditory space during forward self-motion.

    Directory of Open Access Journals (Sweden)

    Wataru Teramoto

    Full Text Available BACKGROUND: Spatial inputs from the auditory periphery can be changed with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain, while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. METHODOLOGY/PRINCIPAL FINDINGS: Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated whether the sound was presented forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing with a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. CONCLUSIONS/SIGNIFICANCE: These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. 
The underlying mechanism might involve anticipatory spatial

  8. Behind the Scenes of Auditory Perception

    OpenAIRE

    Shamma, Shihab A.; Micheyl, Christophe

    2010-01-01

    "Auditory scenes" often contain contributions from multiple acoustic sources. These are usually heard as separate auditory "streams", which can be selectively followed over time. How and where these auditory streams are formed in the auditory system is one of the most fascinating questions facing auditory scientists today. Findings published within the last two years indicate that both cortical and sub-cortical processes contribute to the formation of auditory streams, and they raise important...

  9. Pigeons' (Columba livia) hierarchical organization of local and global cues in touch screen tasks.

    Science.gov (United States)

    Legge, Eric L G; Spetch, Marcia L; Batty, Emily R

    2009-02-01

    Redundant encoding of local and global spatial cues is a common occurrence in many species. However, preferential use of each type of cue seems to vary across species and tasks. In the current study, pigeons (Columba livia) were trained in three experiments on a touch screen task which included redundant local positional cues and global spatial cues. Specifically, pigeons were required to choose the middle out of three choice squares, such that the position within the array provided local information and the location on the screen provided global information. In Experiment 1, pigeons were trained and tested on vertically aligned arrays. In Experiment 2, pigeons were trained and tested on horizontally aligned arrays, and in Experiment 3, pigeons were trained and tested with vertical, horizontal and diagonally aligned arrays. The results indicate that preference for cue type depends upon the type of spatial information being encoded. Specifically, on vertical and diagonally aligned arrays, pigeons preferred global cues, whereas on horizontally aligned arrays, pigeons preferred local cues.

  10. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mec

  11. Methylphenidate attenuates limbic brain inhibition after cocaine-cues exposure in cocaine abusers.

    Directory of Open Access Journals (Sweden)

    Nora D Volkow

    Full Text Available Dopamine (phasic release) is implicated in conditioned responses. Imaging studies in cocaine abusers show decreases in striatal dopamine levels, which we hypothesize may enhance conditioned responses since tonic dopamine levels modulate phasic dopamine release. To test this we assessed the effects of increasing tonic dopamine levels (using oral methylphenidate) on brain activation induced by cocaine-cues in cocaine abusers. Brain metabolism (marker of brain function) was measured with PET and 18FDG in 24 active cocaine abusers tested four times; twice watching a Neutral video (nature scenes) and twice watching a Cocaine-cues video; each video was preceded once by placebo and once by methylphenidate (20 mg). The Cocaine-cues video increased craving to the same extent with placebo (68%) and with methylphenidate (64%). In contrast, SPM analysis of metabolic images revealed that differences between Neutral versus Cocaine-cues conditions were greater with placebo than methylphenidate; whereas with placebo the Cocaine-cues decreased metabolism (p<0.005) in left limbic regions (insula, orbitofrontal, accumbens) and right parahippocampus, with methylphenidate it only decreased in auditory and visual regions, which also occurred with placebo. Decreases in metabolism in these regions were not associated with craving; in contrast the voxel-wise SPM analysis identified significant correlations with craving in anterior orbitofrontal cortex (p<0.005), amygdala, striatum and middle insula (p<0.05). This suggests that methylphenidate's attenuation of brain reactivity to Cocaine-cues is distinct from that involved in craving. Cocaine-cues decreased metabolism in limbic regions (reflects activity over 30 minutes), which contrasts with activations reported by fMRI studies (reflects activity over 2-5 minutes) that may reflect long-lasting limbic inhibition following activation. Studies to evaluate the clinical significance of methylphenidate's blunting of cue-induced limbic

  12. Methylphenidate attenuates limbic brain inhibition after cocaine-cues exposure in cocaine abusers.

    Energy Technology Data Exchange (ETDEWEB)

    Volkow, N.D.; Wang, G.; Volkow, N.D.; Wang, G.-J.; Tomasi, D.; Telang, F.; Fowler, J.S.; Pradhan, K.; Jayne, M.; Logan, J.; Goldstein, R.Z.; Alia-Klein, N.; Wong, C.T.

    2010-07-01

    Dopamine (phasic release) is implicated in conditioned responses. Imaging studies in cocaine abusers show decreases in striatal dopamine levels, which we hypothesize may enhance conditioned responses since tonic dopamine levels modulate phasic dopamine release. To test this we assessed the effects of increasing tonic dopamine levels (using oral methylphenidate) on brain activation induced by cocaine-cues in cocaine abusers. Brain metabolism (marker of brain function) was measured with PET and {sup 18}FDG in 24 active cocaine abusers tested four times; twice watching a Neutral video (nature scenes) and twice watching a Cocaine-cues video; each video was preceded once by placebo and once by methylphenidate (20 mg). The Cocaine-cues video increased craving to the same extent with placebo (68%) and with methylphenidate (64%). In contrast, SPM analysis of metabolic images revealed that differences between Neutral versus Cocaine-cues conditions were greater with placebo than methylphenidate; whereas with placebo the Cocaine-cues decreased metabolism (p<0.005) in left limbic regions (insula, orbitofrontal, accumbens) and right parahippocampus, with methylphenidate it only decreased in auditory and visual regions, which also occurred with placebo. Decreases in metabolism in these regions were not associated with craving; in contrast the voxel-wise SPM analysis identified significant correlations with craving in anterior orbitofrontal cortex (p<0.005), amygdala, striatum and middle insula (p<0.05). This suggests that methylphenidate's attenuation of brain reactivity to Cocaine-cues is distinct from that involved in craving. Cocaine-cues decreased metabolism in limbic regions (reflects activity over 30 minutes), which contrasts with activations reported by fMRI studies (reflects activity over 2-5 minutes) that may reflect long-lasting limbic inhibition following activation. Studies to evaluate the clinical significance of methylphenidate's blunting of cue

  13. Midbrain auditory selectivity to natural sounds.

    Science.gov (United States)

    Wohlgemuth, Melville J; Moss, Cynthia F

    2016-03-01

    This study investigated auditory stimulus selectivity in the midbrain superior colliculus (SC) of the echolocating bat, an animal that relies on hearing to guide its orienting behaviors. Multichannel, single-unit recordings were taken across laminae of the midbrain SC of the awake, passively listening big brown bat, Eptesicus fuscus. Species-specific frequency-modulated (FM) echolocation sound sequences with dynamic spectrotemporal features served as acoustic stimuli along with artificial sound sequences matched in bandwidth, amplitude, and duration but differing in spectrotemporal structure. Neurons in dorsal sensory regions of the bat SC responded selectively to elements within the FM sound sequences, whereas neurons in ventral sensorimotor regions showed broad response profiles to natural and artificial stimuli. Moreover, a generalized linear model (GLM) constructed on responses in the dorsal SC to artificial linear FM stimuli failed to predict responses to natural sounds and vice versa, but the GLM produced accurate response predictions in ventral SC neurons. This result suggests that auditory selectivity in the dorsal extent of the bat SC arises through nonlinear mechanisms, which extract species-specific sensory information. Importantly, auditory selectivity appeared only in responses to stimuli containing the natural statistics of acoustic signals used by the bat for spatial orientation (sonar vocalizations), offering support for the hypothesis that sensory selectivity enables rapid species-specific orienting behaviors. The results of this study are the first, to our knowledge, to show auditory spectrotemporal selectivity to natural stimuli in SC neurons and serve to inform a more general understanding of mechanisms guiding sensory selectivity for natural, goal-directed orienting behaviors.
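The GLM analysis described above can be illustrated with a generic sketch: a Poisson GLM with a log link, fitted by gradient ascent on simulated spike counts and then used to predict responses. The design matrix, feature names, weights, and learning rate below are invented for illustration and are not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical design matrix: 500 stimulus frames x 3 spectrotemporal
# features (e.g., FM slope, bandwidth, level); all values illustrative.
X = rng.standard_normal((500, 3))
true_w = np.array([0.8, -0.5, 0.3])
y = rng.poisson(np.exp(X @ true_w))        # simulated spike counts

# Fit a Poisson GLM (log link) by gradient ascent on the log-likelihood.
w = np.zeros(3)
for _ in range(2000):
    rate = np.exp(X @ w)                   # predicted firing rate
    grad = X.T @ (y - rate) / len(y)       # gradient of mean log-likelihood
    w += 0.1 * grad

predicted = np.exp(X @ w)                  # model's response prediction
```

A GLM fitted this way is linear in the stimulus features (up to the link function), which is why it can fail on responses generated by nonlinear feature combinations, as reported for the dorsal SC.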

  14. Cue-elicited reward-seeking requires extracellular signal-regulated kinase activation in the nucleus accumbens.

    Science.gov (United States)

    Shiflett, Michael W; Martini, Ross P; Mauna, Jocelyn C; Foster, Rebecca L; Peet, Eloise; Thiels, Edda

    2008-02-01

    The motivation to seek out rewards can come under the control of stimuli associated with reward delivery. The ability of cues to motivate reward-seeking behavior depends on the nucleus accumbens (NAcc). The molecular mechanisms in the NAcc that underlie the ability of a cue to motivate reward-seeking are not well understood. We examined whether extracellular signal-regulated kinase (ERK), an important intracellular signaling pathway in learning and memory, has a role in these motivational processes. We first examined p42 ERK (ERK2) activation in the NAcc after rats were trained to associate an auditory stimulus with food delivery and found that, as a consequence of training, presentation of the auditory cue itself was sufficient to increase ERK2 activation in the NAcc. To examine whether inhibition of ERK in the NAcc prevents cue-induced reward-seeking, we infused an inhibitor of ERK, U0126, into the NAcc before assessing rats' instrumental responding in the presence versus absence of the conditioned cue. We found that, whereas vehicle-infused rats showed increased instrumental responding during cue presentation, rats infused with U0126 showed a profound impairment in cue-induced instrumental responding. In contrast, intra-NAcc U0126 infusion had no effect on rats' food-reinforced instrumental responding or their ability to execute conditioned approach behavior. Our results demonstrate learning-related changes in ERK signaling in the NAcc, and that disruption of ERK activation in this structure interferes with the incentive-motivational effects of conditioned stimuli. The molecular mechanisms described here may have implications for cue-elicited drug craving after repeated exposure to drugs of abuse.

  15. Visual capture of a stereo sound: Interactions between cue reliability, sound localization variability, and cross-modal bias.

    Science.gov (United States)

    Montagne, Christopher; Zhou, Yi

    2016-07-01

    Multisensory interactions involve coordination and sometimes competition between multiple senses. Vision usually dominates audition in spatial judgments when light and sound stimuli are presented from two different physical locations. This study investigated the influence of vision on the perceived location of a phantom sound source placed in a stereo sound field using a pair of loudspeakers emitting identical signals that were delayed or attenuated relative to each other. Results show that although a similar horizontal range (±45°) was reported for timing-modulated and level-modulated signals, listeners' localization performance showed greater variability for the timing signals. When visual stimuli were presented simultaneously with the auditory stimuli, listeners showed stronger visual bias for timing-modulated signals than level-modulated and single-speaker control signals. Trial-to-trial errors remained relatively stable over time, suggesting that sound localization uncertainty has an immediate and long-lasting effect on the across-modal bias. Binaural signal analyses further reveal that interaural differences of time and intensity, the two primary cues for sound localization in the azimuthal plane, are inherently more ambiguous for signals placed using timing. These results suggest that binaural ambiguity is intrinsically linked with localization variability and the strength of cross-modal bias in sound localization.
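The interaural time difference cue central to this abstract can be made concrete with a small simulation: a noise burst is delayed at one ear, and the ITD is recovered from the peak of the interaural cross-correlation. The sampling rate, delay, and lag range are illustrative choices, not the study's parameters.

```python
import numpy as np

fs = 44_100
rng = np.random.default_rng(0)
sig = rng.standard_normal(fs // 10)          # 100 ms noise burst

# Simulate a 0.3 ms interaural delay (right ear lags the left).
itd_samples = int(0.0003 * fs)               # ~13 samples at 44.1 kHz
left = sig
right = np.concatenate([np.zeros(itd_samples), sig[:-itd_samples]])

# Estimate the ITD from the peak of the interaural cross-correlation,
# searched over a physiologically plausible lag range (±1 ms).
max_lag = int(0.001 * fs)
lags = np.arange(-max_lag, max_lag + 1)
xcorr = [np.dot(left[max_lag:-max_lag],
                np.roll(right, -k)[max_lag:-max_lag]) for k in lags]
est_itd_ms = 1000 * lags[int(np.argmax(xcorr))] / fs
```

For broadband noise the cross-correlation has a single sharp peak; for narrowband or phase-ambiguous signals it has multiple comparable peaks, which is one way the binaural ambiguity described above can arise.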

  16. The acoustic and perceptual cues affecting melody segregation for listeners with a cochlear implant.

    Directory of Open Access Journals (Sweden)

    Jeremy Marozeau

    2013-11-01

    Our ability to listen selectively to single sound sources in complex auditory environments is termed ‘auditory stream segregation.’ This ability is affected by peripheral disorders such as hearing loss, as well as by plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant (CI) recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from the distractor notes while four physical properties of the distractor notes were changed. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances, and regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference on the perceptual dimension correlated with the temporal envelope for stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, these results show that differences in training, as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device), influence the way that listeners use different acoustic cues for segregating interleaved musical streams.
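The multidimensional scaling step, turning dissimilarity ratings into perceptual distances, can be sketched with classical (Torgerson) MDS in plain NumPy. The dissimilarity matrix below is invented for illustration; the study's actual ratings and MDS variant may differ.

```python
import numpy as np

def classical_mds(d, n_dims=2):
    """Classical (Torgerson) MDS: turn a dissimilarity matrix into
    coordinates whose pairwise distances approximate the input."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    b = -0.5 * j @ (d ** 2) @ j               # double-centred Gram matrix
    evals, evecs = np.linalg.eigh(b)
    order = np.argsort(evals)[::-1][:n_dims]  # largest eigenvalues first
    return evecs[:, order] * np.sqrt(np.maximum(evals[order], 0))

# Four hypothetical melody patterns rated pairwise for dissimilarity
# (symmetric, zero diagonal); the values are made up for illustration.
d = np.array([[0., 2., 4., 6.],
              [2., 0., 2., 4.],
              [4., 2., 0., 2.],
              [6., 4., 2., 0.]])
coords = classical_mds(d, n_dims=1)           # 1-D perceptual coordinates
```

Distances between the recovered coordinates reproduce the rated dissimilarities, which is what lets a subsequent regression relate physical cue changes to perceptual distance.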

  17. Representation of speech in human auditory cortex: is it special?

    Science.gov (United States)

    Steinschneider, Mitchell; Nourski, Kirill V; Fishman, Yonatan I

    2013-11-01

    Successful categorization of phonemes in speech requires that the brain analyze the acoustic signal along both spectral and temporal dimensions. Neural encoding of the stimulus amplitude envelope is critical for parsing the speech stream into syllabic units. Encoding of voice onset time (VOT) and place of articulation (POA), cues necessary for determining phonemic identity, occurs within shorter time frames. An unresolved question is whether the neural representation of speech is based on processing mechanisms that are unique to humans and shaped by learning and experience, or is based on rules governing general auditory processing that are also present in non-human animals. This question was examined by comparing the neural activity elicited by speech and other complex vocalizations in primary auditory cortex of macaques, who are limited vocal learners, with that in Heschl's gyrus, the putative location of primary auditory cortex in humans. Entrainment to the amplitude envelope is neither specific to humans nor to human speech. VOT is represented by responses time-locked to consonant release and voicing onset in both humans and monkeys. Temporal representation of VOT is observed both for isolated syllables and for syllables embedded in the more naturalistic context of running speech. The fundamental frequency of male speakers is represented by more rapid neural activity phase-locked to the glottal pulsation rate in both humans and monkeys. In both species, the differential representation of stop consonants varying in their POA can be predicted by the relationship between the frequency selectivity of neurons and the onset spectra of the speech sounds. These findings indicate that the neurophysiology of primary auditory cortex is similar in monkeys and humans despite their vastly different experience with human speech, and that Heschl's gyrus is engaged in general auditory, and not language-specific, processing. This article is part of a Special Issue entitled

  18. Responses of Eastern red-backed salamanders (Plethodon cinereus) to chemical cues of prey presented in soluble and volatile forms.

    Science.gov (United States)

    Telfer, A C; Laberge, F

    2013-04-10

    Terrestrial salamanders are able to detect prey items using chemical cues, but the nature of the cues involved is uncertain. This study aimed to tease apart the roles of the soluble and volatile components of prey cues detected by Eastern red-backed salamanders (Plethodon cinereus), on the assumption that these different components are detected by the vomeronasal (accessory) and main olfactory organs, respectively. Wild-caught salamanders were exposed to control or soluble and volatile cricket cues in two different behavioural assays conducted in the laboratory. The first series of assays focused on localized presentation of soluble cues on the substrate, and the second on point sources of volatile cues delivered through plastic tubes. Room temperature was varied across experiments. Salamanders increased chemoinvestigation of the substrate via nose-tapping when soluble prey cues were distributed non-uniformly on the substrate. In the warmer of two temperatures tested, salamanders additionally showed a spatial preference for location of soluble cue deposition. Attraction to a point source of volatile cues was not evident when examining the responses of salamanders grouped together; however, investigation of the volatile point source was significantly correlated with side preference only when both soluble cues and a volatile point source were present. The latter suggests that a subset of salamanders were attracted to the point source of volatile cues in the presence of soluble cues on the substrate. This study indicates that soluble prey cues alone are sufficient to trigger salamander foraging behaviour, and that temperature influences this foraging response. It supports the notion that the vomeronasal system plays an important role in prey detection, but suggests that volatile cues are also investigated by some salamanders when soluble prey cues have been detected.

  19. Effects of aging on peripheral and central auditory processing in rats.

    Science.gov (United States)

    Costa, Margarida; Lepore, Franco; Prévost, François; Guillemot, Jean-Paul

    2016-08-01

    Hearing loss is a hallmark sign in the elderly population. Decline in auditory perception provokes deficits in the ability to localize sound sources and reduces speech perception, particularly in noise. In addition to a loss of peripheral hearing sensitivity, changes in more complex central structures have also been demonstrated. In this context, this study examined the auditory directional maps in the deep layers of the superior colliculus of the rat. Hence, anesthetized Sprague-Dawley adult (10 months) and aged (22 months) rats underwent distortion-product otoacoustic emission (DPOAE) testing to assess cochlear function. Then, auditory brainstem responses (ABRs) were assessed, followed by extracellular single-unit recordings to determine age-related effects on central auditory functions. DPOAE amplitude levels were decreased in aged rats although they were still present between 3.0 and 24.0 kHz. ABR level thresholds in aged rats were significantly elevated at an early (cochlear nucleus - wave II) stage in the auditory brainstem. In the superior colliculus, thresholds were increased and the tuning widths of the directional receptive fields were significantly wider. Moreover, no systematic directional spatial arrangement was present among the neurons of the aged rats, implying that the topographical organization of the auditory directional map was abolished. These results suggest that the deterioration of the auditory directional spatial map can, to some extent, be attributable to age-related dysfunction at more central, perceptual stages of auditory processing.

  20. Behavioral Cues of Interpersonal Warmth

    Science.gov (United States)

    Bayes, Marjorie A.

    1972-01-01

    The results of this study suggest, first, that interpersonal warmth does seem to be a personality dimension which can be reliably judged and, second, that it was possible to define and demonstrate the relevance of a number of behavioral cues for warmth. (Author)

  1. Optimal cue integration in ants.

    Science.gov (United States)

    Wystrach, Antoine; Mangan, Michael; Webb, Barbara

    2015-10-07

    In situations with redundant or competing sensory information, humans have been shown to perform cue integration, weighting different cues according to their certainty in a quantifiably optimal manner. Ants have been shown to merge the directional information available from their path integration (PI) and visual memory, but as yet it is not clear that they do so in a way that reflects the relative certainty of the cues. In this study, we manipulate the variance of the PI home vector by allowing ants (Cataglyphis velox) to run different distances and testing their directional choice when the PI vector direction is put in competition with visual memory. Ants show progressively stronger weighting of their PI direction as PI length increases. The weighting is quantitatively predicted by modelling the expected directional variance of home vectors of different lengths and assuming optimal cue integration. However, a subsequent experiment suggests ants may not actually compute an internal estimate of the PI certainty, but are using the PI home vector length as a proxy.
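Optimal integration of two directional cues, as modelled in this study, amounts to inverse-variance weighting: each cue's estimate is weighted by its reliability (1/variance). A minimal sketch (angles in radians; the variances are purely illustrative, not the paper's fitted values):

```python
import math

def integrate_cues(theta_pi, var_pi, theta_vis, var_vis):
    """Combine two directional estimates by inverse-variance weighting.

    For directions, weight each cue's unit vector by its reliability
    (1/variance) and take the angle of the resultant vector.
    """
    w_pi, w_vis = 1.0 / var_pi, 1.0 / var_vis
    x = w_pi * math.cos(theta_pi) + w_vis * math.cos(theta_vis)
    y = w_pi * math.sin(theta_pi) + w_vis * math.sin(theta_vis)
    return math.atan2(y, x)

# A longer home vector implies lower PI directional variance, so the
# combined estimate is pulled closer to the PI direction (0 rad here)
# and away from the visual-memory direction (0.5 rad here).
short_run = integrate_cues(0.0, var_pi=0.4, theta_vis=0.5, var_vis=0.1)
long_run = integrate_cues(0.0, var_pi=0.05, theta_vis=0.5, var_vis=0.1)
```

The paper's final experiment qualifies this picture: the ants' behaviour is consistent with using home-vector length as a proxy for certainty rather than computing the variance itself.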

  2. Optimal assessment of multiple cues

    NARCIS (Netherlands)

    Fawcett, TW; Johnstone, RA

    2003-01-01

    In a wide range of contexts from mate choice to foraging, animals are required to discriminate between alternative options on the basis of multiple cues. How should they best assess such complex multicomponent stimuli? Here, we construct a model to investigate this problem, focusing on a simple case

  3. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  4. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  5. Orienting of Attention to Gaze Direction Cues in Rhesus Macaques: Species-specificity, and Effects of Cue Motion and Reward Predictiveness

    Directory of Open Access Journals (Sweden)

    Dian Yu

    2012-06-01

    Primates live in complex social groups and rely on social cues to direct their attention. For example, primates react faster to an unpredictable stimulus after seeing a conspecific looking in the direction of that stimulus. In the current study we tested the specificity of facial cues (gaze direction) for orienting attention and their interaction with other cues that are known to guide attention. In particular, we tested whether macaque monkeys only respond to gaze cues from conspecifics or if the effect generalizes across species. We found an attentional advantage of conspecific faces over human and cartoon faces. Because gaze cues are often conveyed by gesture, we also explored the effect of image motion (a simulated glance) on the orienting of attention in monkeys. We found that the simulated glance did not significantly enhance the speed of orienting for monkey face stimuli, but had a significant effect for images of human faces. Finally, because gaze cues presumably guide attention towards relevant or rewarding stimuli, we explored whether orienting of attention was modulated by reward predictiveness. When the cue predicted reward location, face and non-face cues were effective in speeding responses towards the cued location. This effect was strongest for conspecific faces. In sum, our results suggest that while conspecific gaze cues activate an intrinsic process that reflexively directs spatial attention, their effect is relatively small in comparison to other features including motion and reward predictiveness. It is possible that gaze cues are more important for decision-making and voluntary orienting than for reflexive orienting.

  6. How and when auditory action effects impair motor performance.

    Science.gov (United States)

    D'Ausilio, Alessandro; Brunetti, Riccardo; Delogu, Franco; Santonico, Cristina; Belardinelli, Marta Olivetti

    2010-03-01

    Music performance is characterized by complex cross-modal interactions, offering a remarkable window into training-induced long-term plasticity and multimodal integration processes. Previous research with pianists has shown that playing a musical score is affected by the concurrent presentation of musical tones. We investigated the nature of this audio-motor coupling by evaluating how congruent and incongruent cross-modal auditory cues affect motor performance at different time intervals. We found facilitation if a congruent sound preceded motor planning with a large Stimulus Onset Asynchrony (SOA -300 and -200 ms), whereas we observed interference when an incongruent sound was presented with shorter SOAs (-200, -100 and 0 ms). Interference and facilitation, instead of developing through time as opposite effects of the same mechanism, showed dissociable time-courses suggesting their derivation from distinct processes. It seems that the motor preparation induced by the auditory cue has different consequences on motor performance according to the congruency with the future motor state the system is planning and the degree of asynchrony between the motor act and the sound presentation. The temporal dissociation we found contributes to the understanding of how perception meets action in the context of audio-motor integration.

  7. Behavioral sensitivity to broadband binaural localization cues in the ferret.

    Science.gov (United States)

    Keating, Peter; Nodal, Fernando R; Gananandan, Kohilan; Schulz, Andreas L; King, Andrew J

    2013-08-01

    Although the ferret has become an important model species for studying both fundamental and clinical aspects of spatial hearing, previous behavioral work has focused on studies of sound localization and spatial release from masking in the free field. This makes it difficult to tease apart the role played by different spatial cues. In humans and other species, interaural time differences (ITDs) and interaural level differences (ILDs) play a critical role in sound localization in the azimuthal plane and also facilitate sound source separation in noisy environments. In this study, we used a range of broadband noise stimuli presented via customized earphones to measure ITD and ILD sensitivity in the ferret. Our behavioral data show that ferrets are extremely sensitive to changes in either binaural cue, with levels of performance approximating that found in humans. The measured thresholds were relatively stable despite extensive and prolonged (>16 weeks) testing on ITD and ILD tasks with broadband stimuli. For both cues, sensitivity was reduced at shorter durations. In addition, subtle effects of changing the stimulus envelope were observed on ITD, but not ILD, thresholds. Sensitivity to these cues also differed in other ways. Whereas ILD sensitivity was unaffected by changes in average binaural level or interaural correlation, the same manipulations produced much larger effects on ITD sensitivity, with thresholds declining when either of these parameters was reduced. The binaural sensitivity measured in this study can largely account for the ability of ferrets to localize broadband stimuli in the azimuthal plane. Our results are also broadly consistent with data from humans and confirm the ferret as an excellent experimental model for studying spatial hearing.
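Behavioral thresholds like the ITD thresholds reported here are typically read off a psychometric function. A minimal sketch, with invented two-alternative forced-choice data and a simple log-interpolation rule rather than the study's actual fitting procedure:

```python
import numpy as np

# Hypothetical 2AFC data: ITDs tested (microseconds) and the proportion
# of correct lateralization judgements at each value (values invented).
itd_us = np.array([5, 10, 20, 40, 80, 160])
p_correct = np.array([0.52, 0.60, 0.71, 0.86, 0.97, 1.00])

# Threshold = smallest ITD supporting 75% correct, found by linear
# interpolation on a log-ITD axis (a common convention, not necessarily
# the ferret study's exact procedure).
log_itd = np.log(itd_us)
threshold_us = float(np.exp(np.interp(0.75, p_correct, log_itd)))
```

The same recipe applies to ILD thresholds by substituting level differences (in dB) for `itd_us`; in practice a sigmoid fit is usually preferred over raw interpolation when trial counts allow.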

  8. Unimodal and crossmodal gradients of spatial attention

    DEFF Research Database (Denmark)

    Föcker, J.; Hötting, K.; Gondan, Matthias

    2010-01-01

    Behavioral and event-related potential (ERP) studies have shown that spatial attention is gradually distributed around the center of the attentional focus. The present study compared uni- and crossmodal gradients of spatial attention to investigate whether the orienting of auditory and visual...... spatial attention is based on modality specific or supramodal representations of space. Auditory and visual stimuli were presented from five speaker locations positioned in the right hemifield. Participants had to attend to the innermost or outmost right position in order to detect either visual...... or auditory deviant stimuli. Detection rates and event-related potentials (ERPs) indicated that spatial attention is distributed as a gradient. Unimodal spatial ERP gradients correlated with the spatial resolution of the modality. Crossmodal spatial gradients were always broader than the corresponding...

  9. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  10. Cue salience influences the use of height cues in reorientation in pigeons (Columba livia).

    Science.gov (United States)

    Du, Yu; Mahdi, Nuha; Paul, Breanne; Spetch, Marcia L

    2016-07-01

    Although orienting ability has been examined with numerous types of cues, most research has focused only on cues from the horizontal plane. The current study investigated pigeons' use of wall height, a vertical cue, in an open-field task and compared it with their use of horizontal cues. Pigeons were trained to locate food in 2 diagonal corners of a rectangular enclosure with 2 opposite high walls as height cues. Before each trial, pigeons were rotated to disorient them. In training, pigeons could use either the horizontal cues from the rectangular enclosure or the height information from the walls to locate the food. In testing, the apparatus was modified to provide (a) horizontal cues only, (b) height cues only, and (c) both height and horizontal cues in conflict. In Experiment 1 the low and high walls were 40 and 80 cm, respectively, whereas in Experiment 2 they were made more perceptually salient by shortening them to 20 and 40 cm. Pigeons accurately located the goal corners with horizontal cues alone in both experiments, but they searched accurately with height cues alone only in Experiment 2. When the height cues conflicted with horizontal cues, pigeons preferred the horizontal cues over the height cues in Experiment 1 but not in Experiment 2, suggesting that perceptual salience influences the relative weighting of cues.

  11. Brain networks of novelty-driven involuntary and cued voluntary auditory attention shifting.

    Science.gov (United States)

    Huang, Samantha; Belliveau, John W; Tengshe, Chinmayi; Ahveninen, Jyrki

    2012-01-01

    In everyday life, we need a capacity to flexibly shift attention between alternative sound sources. However, relatively little work has been done to elucidate the mechanisms of attention shifting in the auditory domain. Here, we used a mixed event-related/sparse-sampling fMRI approach to investigate this essential cognitive function. In each 10-sec trial, subjects were instructed to wait for an auditory "cue" signaling the location where a subsequent "target" sound was likely to be presented. The target was occasionally replaced by an unexpected "novel" sound in the uncued ear, to trigger involuntary attention shifting. To maximize the attention effects, cues, targets, and novels were embedded within dichotic 800-Hz vs. 1500-Hz pure-tone "standard" trains. The sound of clustered fMRI acquisition (starting at t = 7.82 sec) served as a controlled trial-end signal. Our approach revealed notable activation differences between the conditions. Cued voluntary attention shifting activated the superior intraparietal sulcus (IPS), whereas novelty-triggered involuntary orienting activated the inferior IPS and certain subareas of the precuneus. Clearly more widespread activations were observed during voluntary than involuntary orienting in the premotor cortex, including the frontal eye fields. Moreover, we found evidence for a frontoinsular-cingular attentional control network, consisting of the anterior insula, inferior frontal cortex, and medial frontal cortices, which were activated during both target discrimination and voluntary attention shifting. Finally, novels and targets activated much wider areas of superior temporal auditory cortices than shifting cues.

  12. How food cues can enhance and inhibit motivation to obtain and consume food.

    Science.gov (United States)

    Colagiuri, Ben; Lovibond, Peter F

    2015-01-01

    Learning may play an important role in over-eating. One example is Pavlovian-to-instrumental transfer (PIT), whereby reward cues facilitate responding to obtain that reward. Whilst there is increasing research indicating PIT for food in humans, these studies have exclusively tested PIT under instrumental extinction (i.e. when the food is no longer available), which may reduce their ecological validity. To address this, we conducted two experiments exploring PIT for food in humans when tested under instrumental reinforcement. Participants first underwent Pavlovian discrimination training with an auditory cue paired with a chocolate reward (CS+) and another auditory cue unpaired (CS-). In instrumental training participants learnt to press a button to receive the chocolate reward on a VR10 schedule. In the test phase, each CS was presented whilst participants maintained the opportunity to press the button to receive chocolate. In Experiment 1, the PIT test was implemented after up to 20 min of instrumental training (satiation) whereas in Experiment 2 it was implemented after only 4 min of instrumental training. In both experiments there was evidence for differential PIT, but the pattern differed according to the rate of responding at the time of the PIT test. In low baseline responders the CS+ facilitated both button press responding and consumption, whereas in high baseline responders the CS- suppressed responding. These findings suggest that both excitatory and inhibitory associations may be learnt during PIT training and that the expression of these associations depends on motivation levels at the time the cues are encountered. Particularly concerning is that a food-paired cue can elicit increased motivation to obtain and consume food even when the participant is highly satiated and no longer actively seeking food, as this may be one mechanism by which over-consumption is maintained.

  13. Enhanced representation of spectral contrasts in the primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Nicolas Catz

    2013-06-01

    The role of early auditory processing may be to extract some elementary features from an acoustic mixture in order to organize the auditory scene. To accomplish this task, the central auditory system may rely on the fact that sensory objects are often composed of spectral edges, i.e. regions where the stimulus energy changes abruptly over frequency. The processing of acoustic stimuli may benefit from a mechanism enhancing the internal representation of spectral edges. While the visual system is thought to rely heavily on this mechanism (enhancing spatial edges), it is still unclear whether a related process plays a significant role in audition. We investigated the cortical representation of spectral edges, using acoustic stimuli composed of multi-tone pips whose time-averaged spectral envelope contained suppressed or enhanced regions. Importantly, the stimuli were designed such that neural response properties could be assessed as a function of stimulus frequency during stimulus presentation. Our results suggest that the representation of acoustic spectral edges is enhanced in the auditory cortex, and that this enhancement is sensitive to the characteristics of the spectral contrast profile, such as depth, sharpness and width. Spectral edges are maximally enhanced for sharp contrast and large depth. Cortical activity was also suppressed at frequencies within the suppressed region. Notably, the suppression of firing was larger at frequencies nearby the lower edge of the suppressed region than at the upper edge. Overall, the present study gives critical insights into the processing of spectral contrasts in the auditory system.

  14. Auditory-visual integration of emotional signals in a virtual environment for cynophobia.

    Science.gov (United States)

    Taffou, Marine; Chapoulie, Emmanuelle; David, Adrien; Guerchouche, Rachid; Drettakis, George; Viaud-Delmon, Isabelle

    2012-01-01

Cynophobia (dog phobia) has relevant components in both the visual and auditory modalities. In order to investigate the efficacy of virtual reality (VR) exposure-based treatment for cynophobia, we studied the efficiency of auditory-visual environments in generating presence and emotion. We conducted an evaluation test with healthy participants sensitive to cynophobia in order to assess the capacity of auditory-visual virtual environments (VE) to generate fear reactions. Our application combines high-fidelity visual stimulation displayed in an immersive space with 3D sound. This specificity enables us to present and spatially manipulate fearful stimuli in the auditory modality, the visual modality, or both. Our specific presentation of animated dog stimuli creates an environment that is highly arousing, suggesting that VR is a promising tool for cynophobia treatment and that manipulating auditory-visual integration might provide a way to modulate affect.

  15. Storing maternal memories: hypothesizing an interaction of experience and estrogen on sensory cortical plasticity to learn infant cues.

    Science.gov (United States)

    Banerjee, Sunayana B; Liu, Robert C

    2013-10-01

Much of the literature on maternal behavior has focused on the role of infant experience and hormones in a canonical subcortical circuit for maternal motivation and maternal memory. Although early studies demonstrated that the cerebral cortex also plays a significant role in maternal behaviors, little has been done to explore what that role may be. Recent work, though, has provided evidence that the cortex, particularly sensory cortices, contains correlates of sensory memories of infant cues, consistent with classical studies of experience-dependent sensory cortical plasticity in non-maternal paradigms. By reviewing the literature from both the maternal behavior and sensory cortical plasticity fields, focusing on the auditory modality, we hypothesize that maternal hormones (predominantly estrogen) may act to prime auditory cortical neurons for a longer-lasting neural trace of infant vocal cues, thereby facilitating recognition and discrimination. This could then more efficiently activate the subcortical circuit to elicit and sustain maternal behavior.

  16. Effects of pitch on auditory number comparisons.

    Science.gov (United States)

    Campbell, Jamie I D; Scheepers, Florence

    2015-05-01

Three experiments investigated interactions between auditory pitch and the numerical quantities represented by spoken English number words. In Experiment 1, participants heard a pair of sequential auditory numbers in the range zero to ten. They pressed a left-side or right-side key to indicate whether the second number was lower or higher in numerical value. The vocal pitches of the two numbers either ascended or descended, so that pitch change was congruent or incongruent with number change. The error rate was higher on incongruent than on congruent trials. The distance effect on RT (i.e., slower responses for numerically near than far number pairs) occurred with pitch ascending but not descending. In Experiment 2, to determine whether these effects depended on the left/right spatial mapping of responses, participants responded "yes" if the second number was higher and "no" if it was lower. Again, participants made more number comparison errors when number and pitch were incongruent, but there was no distance × pitch order effect. To pursue the latter, in Experiment 3, participants were tested with response buttons assigned left-smaller and right-larger ("normal" spatial mapping) or the reverse mapping. Participants who received normal mapping first showed a distance effect with pitch ascending but not descending, as in Experiment 1, whereas participants who received reverse mapping first showed a distance effect with pitch descending but not ascending. We propose that the number and pitch dimensions of stimuli both activated spatial representations, and that spatial incongruities induced strategy shifts from quantity comparison to order processing.

  17. Moving in time: Bayesian causal inference explains movement coordination to auditory beats.

    Science.gov (United States)

    Elliott, Mark T; Wing, Alan M; Welchman, Andrew E

    2014-07-07

    Many everyday skilled actions depend on moving in time with signals that are embedded in complex auditory streams (e.g. musical performance, dancing or simply holding a conversation). Such behaviour is apparently effortless; however, it is not known how humans combine auditory signals to support movement production and coordination. Here, we test how participants synchronize their movements when there are potentially conflicting auditory targets to guide their actions. Participants tapped their fingers in time with two simultaneously presented metronomes of equal tempo, but differing in phase and temporal regularity. Synchronization therefore depended on integrating the two timing cues into a single-event estimate or treating the cues as independent and thereby selecting one signal over the other. We show that a Bayesian inference process explains the situations in which participants choose to integrate or separate signals, and predicts motor timing errors. Simulations of this causal inference process demonstrate that this model provides a better description of the data than other plausible models. Our findings suggest that humans exploit a Bayesian inference process to control movement timing in situations where the origin of auditory signals needs to be resolved.
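The integrate-versus-segregate decision this abstract describes can be sketched as a causal-inference computation. The following is a minimal illustration, not the authors' model: the prior probability of a common cause, the noise values, and the uniform window assumed for independent causes are all illustrative choices.

```python
import math

def causal_inference_timing(t1, t2, sigma1, sigma2, p_common=0.5, window=0.5):
    """Model-averaged timing estimate for two metronome onsets (seconds).

    If the cues share a cause, they are fused by precision weighting;
    if not, the more reliable cue alone guides the tap. The posterior
    probability of a common cause averages the two strategies.
    """
    var_sum = sigma1**2 + sigma2**2
    # Likelihood of the onset discrepancy under a single shared cause
    like_c1 = math.exp(-(t1 - t2)**2 / (2 * var_sum)) / math.sqrt(2 * math.pi * var_sum)
    # Under independent causes, treat the discrepancy as uniform over a window
    like_c2 = 1.0 / window
    post_c1 = (p_common * like_c1) / (p_common * like_c1 + (1 - p_common) * like_c2)
    # Precision-weighted fusion (optimal if there really is one cause)
    w1 = (1 / sigma1**2) / (1 / sigma1**2 + 1 / sigma2**2)
    fused = w1 * t1 + (1 - w1) * t2
    # Segregation: follow the lower-noise cue
    single = t1 if sigma1 <= sigma2 else t2
    return post_c1 * fused + (1 - post_c1) * single, post_c1
```

With a small onset discrepancy the posterior favors a common cause and the estimate is pulled toward the fused value; with a large discrepancy it collapses onto the more reliable metronome, mirroring the integrate/separate behavior reported in the study.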

  18. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 in both spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  19. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  20. Changes in auditory perceptions and cortex resulting from hearing recovery after extended congenital unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Jill B Firszt

    2013-12-01

Full Text Available Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g., cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after, surgery. Measurements were of sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single-event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery, and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominant monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. 
Results support future research of unilateral auditory deprivation effects and plasticity, with consideration for length of deprivation, age at hearing correction, degree and type

  1. Auditory Neuropathy - A Case of Auditory Neuropathy after Hyperbilirubinemia

    Directory of Open Access Journals (Sweden)

    Maliheh Mazaher Yazdi

    2007-12-01

Full Text Available Background and Aim: Auditory neuropathy is a hearing disorder in which peripheral hearing is normal, but the eighth nerve and brainstem are abnormal. By clinical definition, patients with this disorder have normal OAEs but exhibit an absent or severely abnormal ABR. Auditory neuropathy was first reported in the late 1970s, when different methods could identify a discrepancy between an absent ABR and measurable hearing thresholds. Speech understanding difficulties are worse than can be predicted from other tests of hearing function. Auditory neuropathy may also affect vestibular function. Case Report: This article presents electrophysiological and behavioral data from a case of auditory neuropathy in a child with normal hearing after hyperbilirubinemia, over a 5-year follow-up. Audiological findings demonstrate remarkable changes after multidisciplinary rehabilitation. Conclusion: Auditory neuropathy may involve damage to the inner hair cells, the specialized sensory cells in the inner ear that transmit information about sound through the nervous system to the brain. Other causes may include faulty connections between the inner hair cells and the nerve leading from the inner ear to the brain, or damage to the nerve itself. People with auditory neuropathy have OAE responses but an absent ABR, and a hearing loss that can be permanent, worsen, or improve.

  2. Tactile stimulation and hemispheric asymmetries modulate auditory perception and neural responses in primary auditory cortex.

    Science.gov (United States)

    Hoefer, M; Tyll, S; Kanowski, M; Brosch, M; Schoenfeld, M A; Heinze, H-J; Noesselt, T

    2013-10-01

Although multisensory integration has been an important area of recent research, most studies have focused on audiovisual integration. Importantly, however, the combination of audition and touch can guide our behavior just as effectively, which we studied here using psychophysics and functional magnetic resonance imaging (fMRI). We tested whether task-irrelevant tactile stimuli would enhance auditory detection, and whether hemispheric asymmetries would modulate these audiotactile benefits using lateralized sounds. Spatially aligned task-irrelevant tactile stimuli could occur either synchronously or asynchronously with the sounds. Auditory detection was enhanced by non-informative synchronous and asynchronous tactile stimuli, if presented on the left side. Elevated fMRI signals to left-sided synchronous bimodal stimulation were found in primary auditory cortex (A1). Adjacent regions (planum temporale, PT) expressed enhanced BOLD responses for synchronous and asynchronous left-sided bimodal conditions. Additional connectivity analyses seeded in right-hemispheric A1 and PT for both bimodal conditions showed enhanced connectivity with right-hemispheric thalamic, somatosensory, and multisensory areas that scaled with subjects' performance. Our results indicate that functional asymmetries interact with audiotactile interplay, which can be observed for left-lateralized stimulation in the right hemisphere. There, audiotactile interplay recruits a functional network of unisensory cortices, and the strength of these functional network connections is directly related to subjects' perceptual sensitivity.

  3. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... CAPD often have trouble maintaining attention, although health, motivation, and attitude also can play a role. Auditory ... programs. Several computer-assisted programs are geared toward children with APD. They mainly help the brain do ...

  4. Measuring the performance of visual to auditory information conversion.

    Directory of Open Access Journals (Sweden)

    Shern Shiou Tan

Full Text Available BACKGROUND: Visual to auditory conversion systems have been in existence for several decades. Besides being among the front runners in providing visual capabilities to blind users, the auditory cues generated from image sonification systems are still easier to learn and adapt to compared to other similar techniques. Other advantages include low cost, easy customizability, and universality. However, every system developed so far has its own set of strengths and weaknesses. In order to improve these systems further, we propose an automated and quantitative method to measure the performance of such systems. With these quantitative measurements, it is possible to gauge the relative strengths and weaknesses of different systems and rank the systems accordingly. METHODOLOGY: Performance is measured by both the interpretability and the information preservation of visual to auditory conversions. Interpretability is measured by computing the correlation of inter-image distance (IID) and inter-sound distance (ISD), whereas information preservation is computed by applying information theory to measure the entropy of both visual and corresponding auditory signals. These measurements provide a basis and some insights on how the systems work. CONCLUSIONS: With an automated interpretability measure as a standard, more image sonification systems can be developed, compared, and then improved. Even though the measure does not test systems as thoroughly as carefully designed psychological experiments, a quantitative measurement like the one proposed here can compare systems to a certain degree without incurring much cost. Underlying this research is the hope that a major breakthrough in image sonification systems will allow blind users to cost-effectively regain enough visual function to lead secure and productive lives.
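The two measures described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the choice of Pearson correlation for the IID/ISD relationship and a simple histogram estimate of Shannon entropy are assumptions, as are all function names.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length distance lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def interpretability(image_dists, sound_dists):
    """Correlate pairwise inter-image and inter-sound distances (same pair order)."""
    return pearson(image_dists, sound_dists)

def entropy(signal, bins=8):
    """Shannon entropy (bits) of a signal via a fixed-width histogram."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0  # avoid zero width for constant signals
    counts = [0] * bins
    for v in signal:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(signal)
    return -sum(c / n * math.log2(c / n) for c in counts if c)
```

A conversion whose sound distances track its image distances scores an interpretability near 1, and a sonified signal whose entropy approaches that of the source image preserves more information.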

  5. Evoked potential correlates of selective attention with multi-channel auditory inputs

    Science.gov (United States)

    Schwent, V. L.; Hillyard, S. A.

    1975-01-01

    Ten subjects were presented with random, rapid sequences of four auditory tones which were separated in pitch and apparent spatial position. The N1 component of the auditory vertex evoked potential (EP) measured relative to a baseline was observed to increase with attention. It was concluded that the N1 enhancement reflects a finely tuned selective attention to one stimulus channel among several concurrent, competing channels. This EP enhancement probably increases with increased information load on the subject.

  6. Surface Flow from Visual Cues

    OpenAIRE

    Petit, Benjamin,; Letouzey, Antoine; Boyer, Edmond; Franco, Jean-Sébastien

    2011-01-01

    International audience; In this paper we study the estimation of dense, instantaneous 3D motion fields over a non-rigidly moving surface observed by multi-camera systems. The motivation arises from multi-camera applications that require motion information, for arbitrary subjects, in order to perform tasks such as surface tracking or segmentation. To this aim, we present a novel framework that allows to efficiently compute dense 3D displacement fields using low level visual cues and geometric con...

  7. Phonetic training with acoustic cue manipulations: A comparison of methods for teaching English /r/-/l/ to Japanese adults

    Science.gov (United States)

    Iverson, Paul; Hazan, Valerie; Bannister, Kerry

    2005-11-01

    Recent work [Iverson et al. (2003) Cognition, 87, B47-57] has suggested that Japanese adults have difficulty learning English /r/ and /l/ because they are overly sensitive to acoustic cues that are not reliable for /r/-/l/ categorization (e.g., F2 frequency). This study investigated whether cue weightings are altered by auditory training, and compared the effectiveness of different training techniques. Separate groups of subjects received High Variability Phonetic Training (natural words from multiple talkers), and 3 techniques in which the natural recordings were altered via signal processing (All Enhancement, with F3 contrast maximized and closure duration lengthened; Perceptual Fading, with F3 enhancement reduced during training; and Secondary Cue Variability, with variation in F2 and durations increased during training). The results demonstrated that all of the training techniques improved /r/-/l/ identification by Japanese listeners, but there were no differences between the techniques. Training also altered the use of secondary acoustic cues; listeners became biased to identify stimuli as English /l/ when the cues made them similar to the Japanese /r/ category, and reduced their use of secondary acoustic cues for stimuli that were dissimilar to Japanese /r/. The results suggest that both category assimilation and perceptual interference affect English /r/ and /l/ acquisition.

  8. The semantic representation of event information depends on the cue modality: an instance of meaning-based retrieval.

    Science.gov (United States)

    Karlsson, Kristina; Sikström, Sverker; Willander, Johan

    2013-01-01

    The semantic content, or the meaning, is the essence of autobiographical memories. In comparison to previous research, which has mainly focused on the phenomenological experience and the age distribution of retrieved events, the present study provides a novel view on the retrieval of event information by quantifying the information as semantic representations. We investigated the semantic representation of sensory cued autobiographical events and studied the modality hierarchy within the multimodal retrieval cues. The experiment comprised a cued recall task, where the participants were presented with visual, auditory, olfactory or multimodal retrieval cues and asked to recall autobiographical events. The results indicated that the three different unimodal retrieval cues generate significantly different semantic representations. Further, the auditory and the visual modalities contributed the most to the semantic representation of the multimodally retrieved events. Finally, the semantic representation of the multimodal condition could be described as a combination of the three unimodal conditions. In conclusion, these results suggest that the meaning of the retrieved event information depends on the modality of the retrieval cues.

  9. Sexual selection in the squirrel treefrog Hyla squirella: the role of multimodal cue assessment in female choice

    Science.gov (United States)

    Taylor, Ryan C.; Buchanan, Bryant W.; Doherty, Jessie L.

    2007-01-01

    Anuran amphibians have provided an excellent system for the study of animal communication and sexual selection. Studies of female mate choice in anurans, however, have focused almost exclusively on the role of auditory signals. In this study, we examined the effect of both auditory and visual cues on female choice in the squirrel treefrog. Our experiments used a two-choice protocol in which we varied male vocalization properties, visual cues, or both, to assess female preferences for the different cues. Females discriminated against high-frequency calls and expressed a strong preference for calls that contained more energy per unit time (faster call rate). Females expressed a preference for the visual stimulus of a model of a calling male when call properties at the two speakers were held the same. They also showed a significant attraction to a model possessing a relatively large lateral body stripe. These data indicate that visual cues do play a role in mate attraction in this nocturnal frog species. Furthermore, this study adds to a growing body of evidence that suggests that multimodal signals play an important role in sexual selection.

  10. Watch out! Magnetoencephalographic evidence for early modulation of attention orienting by fearful gaze cueing.

    Directory of Open Access Journals (Sweden)

    Fanny Lachat

Full Text Available Others' gaze and emotional facial expression are important cues for the process of attention orienting. Here, we investigated with magnetoencephalography (MEG) whether the combination of averted gaze and fearful expression may elicit a selectively early effect of attention orienting on the brain responses to targets. We used the direction of gaze of centrally presented fearful and happy faces as the spatial attention orienting cue in a Posner-like paradigm, where the subjects had to detect a target checkerboard presented at gazed-at (valid trials) or non-gazed-at (invalid trials) locations of the screen. We showed that the combination of averted gaze and fearful expression resulted in a very early attention orienting effect in the form of additional parietal activity between 55 and 70 ms for the valid versus invalid targets following fearful gaze cues. No such effect was obtained for the targets following happy gaze cues. This early cue-target validity effect selective of fearful gaze cues involved the left superior parietal region and the left lateral middle occipital region. These findings provide the first evidence for an effect of attention orienting induced by fearful gaze in the time range of C1. In doing so, they demonstrate the selective impact of combined gaze and fearful expression cues in the process of attention orienting.

  11. Contextual Cueing Improves Attentional Guidance, Even When Guidance Is Supposedly Optimal.

    Science.gov (United States)

    Harris, Anthony M; Remington, Roger W

    2017-02-23

Visual search through previously encountered contexts typically produces reduced reaction times compared with search through novel contexts. This contextual cueing benefit is well established, but there is debate regarding its underlying mechanisms. Eye-tracking studies have consistently shown a reduced number of fixations with repetition, supporting improvements in attentional guidance as the source of contextual cueing. However, contextual cueing benefits have been shown in conditions in which attentional guidance should already be optimal, namely when attention is captured to the target location by an abrupt onset, or under pop-out conditions. These results have been used to argue for a response-related account of contextual cueing. Here, we combine eye tracking with response time to examine the mechanisms behind contextual cueing in spatially cued and pop-out conditions. Three experiments find consistent response time benefits with repetition, which appear to be driven almost entirely by a reduction in number of fixations, supporting improved attentional guidance as the mechanism behind contextual cueing. No differences were observed in the time between fixating the target and responding, our proxy for response-related processes. Furthermore, the correlation between contextual cueing magnitude and the reduction in number of fixations on repeated contexts approaches 1. These results argue strongly that attentional guidance is facilitated by familiar search contexts, even when guidance is near-optimal.

  12. Attentional demands influence vocal compensations to pitch errors heard in auditory feedback.

    Directory of Open Access Journals (Sweden)

    Anupreet K Tumber

Full Text Available Auditory feedback is required to maintain fluent speech. At present, it is unclear how attention modulates auditory feedback processing during ongoing speech. In this event-related potential (ERP) study, participants vocalized /a/ while they heard their vocal pitch suddenly shifted downward by half a semitone in both single- and dual-task conditions. During the single-task condition, participants passively viewed a visual stream for cues to start and stop vocalizing. In the dual-task condition, participants vocalized while they identified target stimuli in a visual stream of letters. The presentation rate of the visual stimuli was manipulated in the dual-task condition in order to produce a low, intermediate, and high attentional load. Visual target identification accuracy was lowest in the high attentional load condition, indicating that attentional load was successfully manipulated. Results further showed that participants who were exposed to the single-task condition prior to the dual-task condition produced larger vocal compensations during the single-task condition. Thus, when participants' attention was divided, less attention was available for the monitoring of their auditory feedback, resulting in smaller compensatory vocal responses. However, P1-N1-P2 ERP responses were not affected by divided attention, suggesting that the effect of attentional load was not on the auditory processing of pitch-altered feedback; instead it interfered with the integration of auditory and motor information, or with motor control itself.

  13. Sex differences in brain structure in auditory and cingulate regions

    OpenAIRE

    Brun, Caroline C.; Lepore, Natasha; Luders, Eileen; Chou, Yi-Yu; Madsen, Sarah K.; Toga, Arthur W; Thompson, Paul M.

    2009-01-01

    We applied a new method to visualize the three-dimensional profile of sex differences in brain structure based on MRI scans of 100 young adults. We compared 50 men with 50 women, matched for age and other relevant demographics. As predicted, left hemisphere auditory and language-related regions were proportionally expanded in women versus men, suggesting a possible structural basis for the widely replicated sex differences in language processing. In men, primary visual, and visuo-spatial asso...

  14. Cueing the Virtual Storyteller: Analysis of cue phrase usage in fairy tales

    NARCIS (Netherlands)

    Penning, Manon; Theune, Mariët; Busemann, S.

    2007-01-01

    An existing taxonomy of Dutch cue phrases, designed for use in story generation, was validated by analysing cue phrase usage in a corpus of classical fairy tales. The analysis led to some adaptations of the original taxonomy.

  15. Moving to music: Effects of heard and imagined musical cues on movement-related brain activity

    Directory of Open Access Journals (Sweden)

    Rebecca S Schaefer

    2014-09-01

Full Text Available Music is commonly used to facilitate or support movement, and increasingly used in movement rehabilitation. Additionally, there is some evidence to suggest that music imagery, which is reported to lead to brain signatures similar to music perception, may also assist movement. However, it is not yet known whether either imagined or musical cueing changes the way in which the motor system of the human brain is activated during simple movements. Here, functional Magnetic Resonance Imaging (fMRI) was used to compare neural activity during wrist flexions performed to either heard or imagined music with self-pacing of the same movement without any cueing. Focusing specifically on the motor network of the brain, analyses were performed within a mask of BA4, BA6, the basal ganglia (putamen, caudate, and pallidum), the motor nuclei of the thalamus, and the whole cerebellum. Results revealed that moving to music compared with self-paced movement resulted in significantly increased activation in left cerebellum VI. Moving to imagined music led to significantly more activation in pre-supplementary motor area (pre-SMA) and right globus pallidus, relative to self-paced movement. When the music and imagery cueing conditions were contrasted directly, movements in the music condition showed significantly more activity in left hemisphere cerebellum VII and right hemisphere and vermis of cerebellum IX, while the imagery condition revealed more significant activity in pre-SMA. These results suggest that cueing movement with actual or imagined music impacts upon engagement of motor network regions during the movement, and that heard and imagined cues can modulate movement in subtly different ways. These results may have implications for the applicability of auditory cueing in movement rehabilitation for different patient populations.

  16. Using electrophysiology to demonstrate that cueing affects long-term memory storage over the short term.

    Science.gov (United States)

    Maxcey, Ashleigh M; Fukuda, Keisuke; Song, Won S; Woodman, Geoffrey F

    2015-10-01

    As researchers who study working memory, we often assume that participants keep a representation of an object in working memory when we present a cue that indicates that the object will be tested in a couple of seconds. This intuitively accounts for how well people can remember a cued object, relative to their memory for that same object presented without a cue. However, it is possible that this superior memory does not purely reflect storage of the cued object in working memory. We tested the hypothesis that cues presented during a stream of objects, followed by a short retention interval and immediate memory test, can change how information is handled by long-term memory. We tested this hypothesis by using a family of frontal event-related potentials believed to reflect long-term memory storage. We found that these frontal indices of long-term memory were sensitive to the task relevance of objects signaled by auditory cues, even when the objects repeated frequently, such that proactive interference was high. Our findings indicate the problematic nature of assuming process purity in the study of working memory, and demonstrate that frequent stimulus repetitions fail to isolate the role of working memory mechanisms.

  17. Cue-switch costs in task-switching: cue priming or control processes?

    Science.gov (United States)

    Grange, James A; Houghton, George

    2010-09-01

    In the explicitly cued task-switching paradigm, two cues per task allow separation of costs associated with switching cues from costs of switching tasks. Whilst task-switch costs have become controversial, cue-switch costs are robust. The processes that contribute to cue-switch costs are under-specified in the literature: they could reflect perceptual priming of cue properties, or priming of control processes that form relevant working memory (WM) representations of task demands. Across two experiments we manipulated cue-transparency in an attention-switching design to test the contrasting hypotheses of cue-switch costs, and show that such costs emerge from control processes of establishing relevant WM representations, rather than perceptual priming of the cue itself. When the cues were maximally transparent, cue-switch costs were eradicated. We discuss the results in terms of recent theories of cue encoding, and provide a formal definition of cue-transparency in switching designs and its relation to WM representations that guide task performance.

  18. Effect of stimulus hemifield on free-field auditory saltation.

    Science.gov (United States)

    Ishigami, Yoko; Phillips, Dennis P

    2008-07-01

    Auditory saltation is the orderly misperception of the spatial location of repetitive click stimuli emitted from two successive locations when the inter-click intervals (ICIs) are sufficiently short. The clicks are perceived as originating not only from the actual source locations, but also from locations between them. In two tasks, the present experiment compared free-field auditory saltation for 90 degrees excursions centered in the frontal, rear, left and right acoustic hemifields, by measuring the ICI at which subjects report 50% illusion strength (subjective task) and the ICI at which subjects could not distinguish real motion from saltation (objective task). A comparison of the saltation illusion for excursions spanning the midline (i.e. for frontal or rear hemifields) with that for stimuli in the lateral hemifields (left or right) revealed that the illusion was weaker for the midline-straddling conditions (i.e. the illusion was restricted to shorter ICIs). This may reflect the contribution of two perceptual channels to the task in the midline conditions (as opposed to one in the lateral hemifield conditions), or the fact that the temporal dynamics of localization differ between the midline and lateral hemifield conditions. A subsidiary comparison of saltation supported in the left and right auditory hemifields, and therefore by the right and left auditory forebrains, revealed no difference.

  19. Spatial Language and Children’s Spatial Landmark Use

    Directory of Open Access Journals (Sweden)

    Amber A. Ankowski

    2012-01-01

    Full Text Available We examined how spatial language affected search behavior in a landmark spatial search task. In Experiment 1, two- to six-year-old children were trained to find a toy in the center of a square array of four identical landmarks. Children heard one of three spatial language cues once during the initial training trial (“here,” “in the middle,” “next to this one”). After search performance reached criterion, children received a probe test trial in which the landmark array was expanded. In Experiment 2, two- to four-year-old children participated in the search task and also completed a language comprehension task. Results revealed that children’s spatial language comprehension scores and spatial language cues heard during training trials were related to children’s performance in the search task.

  20. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke, mostly described after lesions of the brain stem, very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross sectional study, 641 stroke patients were followed in the period between 1996–2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after cortical stroke. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  1. Guiding Attention by Cooperative Cues

    Institute of Scientific and Technical Information of China (English)

    KangWoo Lee

    2008-01-01

    A common assumption in visual attention is based on the rationale of "limited capacity of information processing". From this viewpoint there is little consideration of how different information channels or modules cooperate, because cells in processing stages are forced to compete for the limited resource. To examine the mechanism behind the cooperative behavior of information channels, a computational model of selective attention is implemented based on two hypotheses. Unlike the traditional view of visual attention, the cooperative behavior is assumed to be a dynamic integration process between the bottom-up and top-down information. Furthermore, top-down information is assumed to provide a contextual cue during the selection process and to guide the attentional allocation among many bottom-up candidates. The results from a series of simulations with still and video images showed some interesting properties that could not be explained by the competitive aspect of selective attention alone.

  2. How rats combine temporal cues.

    Science.gov (United States)

    Guilhardi, Paulo; Keen, Richard; MacInnis, Mika L M; Church, Russell M

    2005-05-31

    The procedures for classical and operant conditioning, and for many timing procedures, involve the delivery of reinforcers that may be related to the time of previous reinforcers and responses, and to the time of onsets and terminations of stimuli. The behavior resulting from such procedures can be described as bouts of responding that occur in some pattern at some rate. A packet theory of timing and conditioning is described that accounts for such behavior under a wide range of procedures. Applications include the food searching by rats in Skinner boxes under conditions of fixed and random reinforcement, brief and sustained stimuli, and several response-food contingencies. The approach is used to describe how multiple cues from reinforcers and stimuli combine to determine the rate and pattern of response bouts.

  3. Cross-Cultural Nonverbal Cue Immersive Training

    Science.gov (United States)

    2008-12-01

    Global Assessment, Orlando, Florida; University of Central Florida, Orlando, Florida; Army Research Institute. ...technologies incorporating mixed reality training may be used to promote social cooperative learning. As a global community...communicated either consciously or unconsciously through various forms of nonverbal cues such as body posture and facial expressions. Nonverbal cues...

  4. Dylan Pritchett, Storyteller. Cue Sheet for Students.

    Science.gov (United States)

    Evans, Karen L. B.

    Designed to be used before and after attending a storytelling performance by Dylan Pritchett, this cue sheet presents information about the performance and suggests activities that can be done with classmates, friends, or family members. The cue sheet discusses where and why people tell stories, what makes a story good for telling, what makes a…

  5. Children's recognition of emotions from vocal cues

    NARCIS (Netherlands)

    Sauter, D.A.; Panattoni, C.; Happé, F.

    2013-01-01

    Emotional cues contain important information about the intentions and feelings of others. Despite a wealth of research into children's understanding of facial signals of emotions, little research has investigated the developmental trajectory of interpreting affective cues in the voice. In this study

  6. Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence

    Science.gov (United States)

    Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D.; Chait, Maria

    2016-01-01

    To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence—the coincidence of sound elements in and across time—is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals (“stochastic figure-ground”: SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from one chord to another. Occasional tone repetitions across chords are perceived as “figures” popping out of a stochastic “ground.” Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing from about 150 ms after figure onset, that reflect the emergence of the “figure” from the randomly varying “ground.” Neural sources underlying this bottom-up driven figure-ground segregation were localized to the planum temporale and the intraparietal sulcus, demonstrating that the latter area, outside the “classic” auditory system, is also involved in the early stages of auditory scene analysis. PMID:27325682
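
    The stimulus logic described above can be illustrated with a short sketch. All parameters below (tone-pool size, chord counts, figure size, figure onset) are illustrative assumptions, not the authors' exact values; the point is only that the repeated "figure" components are the unique set that stays coherent across chords:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Pool of possible pure-tone components (log-spaced frequencies, an assumption).
    pool = np.geomspace(200, 7000, 60)

    n_chords, ground_size, fig_size, fig_onset = 20, 10, 4, 10
    figure_components = rng.choice(pool, size=fig_size, replace=False)

    chords = []
    for i in range(n_chords):
        # Each chord draws fresh random "ground" components...
        ground = rng.choice(pool, size=ground_size, replace=False)
        if i >= fig_onset:
            # ...and, after figure onset, the same components repeat in every chord.
            chord = np.concatenate([ground, figure_components])
        else:
            chord = ground
        chords.append(np.unique(chord))

    # Components present in every post-onset chord are the temporally coherent ones.
    late = [set(c) for c in chords[fig_onset:]]
    coherent = set.intersection(*late)
    print(sorted(coherent))
    ```

    Randomly drawn ground components almost never recur in all post-onset chords, so the intersection recovers (essentially only) the figure, which is the cue the temporal-coherence account says listeners exploit.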

  7. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  8. Music As a Sacred Cue? Effects of Religious Music on Moral Behavior.

    Science.gov (United States)

    Lang, Martin; Mitkidis, Panagiotis; Kundt, Radek; Nichols, Aaron; Krajčíková, Lenka; Xygalatas, Dimitris

    2016-01-01

    Religion can have an important influence in moral decision-making, and religious reminders may deter people from unethical behavior. Previous research indicated that religious contexts may increase prosocial behavior and reduce cheating. However, the perceptual-behavioral link between religious contexts and decision-making lacks thorough scientific understanding. This study adds to the current literature by testing the effects of purely audial religious symbols (instrumental music) on moral behavior across three different sites: Mauritius, the Czech Republic, and the USA. Participants were exposed to one of three kinds of auditory stimuli (religious, secular, or white noise), and subsequently were given a chance to dishonestly report on solved mathematical equations in order to increase their monetary reward. The results showed cross-cultural differences in the effects of religious music on moral behavior, as well as a significant interaction between condition and religiosity across all sites, suggesting that religious participants were more influenced by the auditory religious stimuli than non-religious participants. We propose that religious music can function as a subtle cue associated with moral standards via cultural socialization and ritual participation. Such associative learning can charge music with specific meanings and create sacred cues that influence normative behavior. Our findings provide preliminary support for this view, which we hope further research will investigate more closely.

  9. MODELLING OF FACILITATIVE EDUCATIONAL ENVIRONMENT FOR STUDENTS WITH AUDITORY IMPERCEPTION

    Directory of Open Access Journals (Sweden)

    V. L. EFIMOVA

    2015-01-01

    Full Text Available The article describes the theoretical basis and practical recommendations for modelling the facilitative educational environment for elementary school pupils with learning difficulties. It is shown that 80% of elementary school pupils with learning difficulties have problems related to auditory imperceptions. At the same time, the peripheral hearing of these students is usually normal. Auditory imperception has a negative impact on all types of educational activities, as educational material is mainly based on aural reception. The practical recommendations are aimed at changing the objective environment and the communicative strategies of all adults involved in educational activities of pupils in order to create conditions facilitating the aural reception of information by pupils. To create a facilitative environment, the following measures are proposed: improvement of the acoustic characteristics of the learning premises, the use of visual cues, change of the communicative strategies of adults, the use of special equipment in the classroom. The author suggests measures for creating the facilitating environment at home for children with aural imperceptions when they do their homework.

  10. Auditory Hallucinations Nomenclature and Classification

    NARCIS (Netherlands)

    Blom, Jan Dirk; Sommer, Iris E. C.

    2010-01-01

    Introduction: The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an

  11. Nigel: A Severe Auditory Dyslexic

    Science.gov (United States)

    Cotterell, Gill

    1976-01-01

    Reported is the case study of a boy with severe auditory dyslexia who received remedial treatment from the age of four and progressed through courses at a technical college and a 3-year apprenticeship course in mechanics by the age of eighteen. (IM)

  12. Quantifying attentional modulation of auditory-evoked cortical responses from single-trial electroencephalography

    Directory of Open Access Journals (Sweden)

    Inyong eChoi

    2013-04-01

    Full Text Available Selective auditory attention is essential for human listeners to be able to communicate in multi-source environments. Selective attention is known to modulate the neural representation of the auditory scene, boosting the representation of a target sound relative to the background, but the strength of this modulation, and the mechanisms contributing to it, are not well understood. Here, listeners performed a behavioral experiment demanding sustained, focused spatial auditory attention while we measured cortical responses using electroencephalography (EEG). We presented three concurrent melodic streams; listeners were asked to attend and analyze the melodic contour of one of the streams, randomly selected from trial to trial. In a control task, listeners heard the same sound mixtures, but performed the contour judgment task on a series of visual arrows, ignoring all auditory streams. We found that the cortical responses could be fit as a weighted sum of event-related potentials evoked by the stimulus onsets in the competing streams. The weighting to a given stream was roughly 10 dB higher when it was attended compared to when another auditory stream was attended; during the visual task, the auditory gains were intermediate. We then used a template-matching classification scheme to classify single-trial EEG results. We found that in all subjects, we could determine which stream the subject was attending significantly better than by chance. By directly quantifying the effect of selective attention on auditory cortical responses, these results reveal that focused auditory attention both suppresses the response to an unattended stream and enhances the response to an attended stream. The single-trial classification results add to the growing body of literature suggesting that auditory attentional modulation is sufficiently robust that it could be used as a control mechanism in brain-computer interfaces.
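
    A template-matching classifier of the general kind mentioned above can be sketched in a few lines. Everything here is a toy illustration (synthetic "templates" and noise level are assumptions, not the authors' pipeline): each single trial is assigned to whichever class template it correlates with best.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_samples = 40, 200
    t = np.linspace(0, 1, n_samples)

    # Hypothetical class templates: average evoked responses from training trials
    # for "attend stream A" vs. "attend stream B" (synthetic waveforms here).
    template_a = np.sin(2 * np.pi * 4 * t)
    template_b = np.sin(2 * np.pi * 7 * t)

    def classify(trial, templates):
        """Assign a single trial to the template with the highest correlation."""
        scores = [np.corrcoef(trial, tpl)[0, 1] for tpl in templates]
        return int(np.argmax(scores))

    # Simulate noisy single trials in which stream A was attended.
    trials = template_a + 0.8 * rng.standard_normal((n_trials, n_samples))
    labels = [classify(tr, [template_a, template_b]) for tr in trials]
    accuracy = labels.count(0) / n_trials
    print(f"fraction assigned to the attended stream: {accuracy:.0%}")
    ```

    Because the attended-stream template dominates the evoked response, correlation with the correct template wins well above chance even at this single-trial noise level, which is the property that makes the scheme plausible for brain-computer interfaces.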

  13. Action experience changes attention to kinematic cues

    Directory of Open Access Journals (Sweden)

    Courtney eFilippi

    2016-02-01

    Full Text Available The current study used remote corneal reflection eye-tracking to examine the relationship between motor experience and action anticipation in 13-month-old infants. To measure online anticipation of actions, infants watched videos where the actor’s hand provided kinematic information (in its orientation) about the type of object that the actor was going to reach for. The actor’s hand orientation either matched the orientation of a rod (congruent cue) or did not match the orientation of the rod (incongruent cue). To examine relations between motor experience and action anticipation, we used a 2 (reach first vs. observe first) x 2 (congruent kinematic cue vs. incongruent kinematic cue) between-subjects design. We show that 13-month-old infants in the observe first condition spontaneously generate rapid online visual predictions to congruent hand orientation cues and do not visually anticipate when presented incongruent cues. We further demonstrate that the speed that these infants generate predictions to congruent motor cues is correlated with their own ability to pre-shape their hands. Finally, we demonstrate that following reaching experience, infants generate rapid predictions to both congruent and incongruent hand shape cues—suggesting that short-term experience changes attention to kinematics.

  14. It Depends Who Is Watching You: 3-D Agent Cues Increase Fairness.

    Directory of Open Access Journals (Sweden)

    Jan Krátký

    Full Text Available Laboratory and field studies have demonstrated that exposure to cues of intentional agents in the form of eyes can increase prosocial behavior. However, previous research mostly used 2-dimensional depictions as experimental stimuli. Thus far no study has examined the influence of the spatial properties of agency cues on this prosocial effect. To investigate the role of dimensionality of agency cues on fairness, 345 participants engaged in a decision-making task in a naturalistic setting. The experimental treatment included a 3-dimensional pseudo-realistic model of a human head and a 2-dimensional picture of the same object. The control stimuli consisted of a real plant and its 2-D image. Our results partly support the findings of previous studies that cues of intentional agents increase prosocial behavior. However, this effect was only found for the 3-D cues, suggesting that dimensionality is a critical variable in triggering these effects in real-world settings. Our research sheds light on a hitherto unexplored aspect of the effects of environmental cues and their morphological properties on decision-making.

  15. Cues of maternal condition influence offspring selfishness.

    Directory of Open Access Journals (Sweden)

    Janine W Y Wong

    Full Text Available The evolution of parent-offspring communication was mostly studied from the perspective of parents responding to begging signals conveying information about offspring condition. Parents should respond to begging because of the differential fitness returns obtained from their investment in offspring that differ in condition. For analogous reasons, offspring should adjust their behavior to cues/signals of parental condition: parents that differ in condition pay differential costs of care and, hence, should provide different amounts of food. In this study, we experimentally tested in the European earwig (Forficula auricularia) if cues of maternal condition affect offspring behavior in terms of sibling cannibalism. We experimentally manipulated female condition by providing them with different amounts of food, kept nymph condition constant, allowed for nymph exposure to chemical maternal cues over extended time, quantified nymph survival (deaths being due to cannibalism) and extracted and analyzed the females' cuticular hydrocarbons (CHC). Nymph survival was significantly affected by chemical cues of maternal condition, and this effect depended on the timing of breeding. Cues of poor maternal condition enhanced nymph survival in early broods, but reduced nymph survival in late broods, and vice versa for cues of good condition. Furthermore, female condition affected the quantitative composition of their CHC profile which in turn predicted nymph survival patterns. Thus, earwig offspring are sensitive to chemical cues of maternal condition and nymphs from early and late broods show opposite reactions to the same chemical cues. Together with former evidence on maternal sensitivities to condition-dependent nymph chemical cues, our study shows context-dependent reciprocal information exchange about condition between earwig mothers and their offspring, potentially mediated by cuticular hydrocarbons.

  16. Sensory habituation of auditory receptor neurons: implications for sound localization.

    Science.gov (United States)

    Givois, V; Pollack, G S

    2000-09-01

    Auditory receptor neurons exhibit sensory habituation; their responses decline with repeated stimulation. We studied the effects of sensory habituation on the neural encoding of sound localization cues using crickets as a model system. In crickets, Teleogryllus oceanicus, sound localization is based on binaural comparison of stimulus intensity. There are two potential codes at the receptor-neuron level for interaural intensity difference: interaural difference in response strength, i.e. spike rate and/or count, and interaural difference in response latency. These are affected differently by sensory habituation. When crickets are stimulated with cricket-song-like trains of sound pulses, response strength declines for successive pulses in the train, and the decrease becomes more pronounced as the stimulus intensity increases. Response decrement is thus greater for receptors serving the ear ipsilateral to the sound source, where intensity is higher, resulting in a decrease in the interaural difference in response strength. Sensory habituation also affects response latency, which increases for responses to successive sound pulses in the stimulus train. The change in latency is independent of intensity, and thus is similar for receptors serving both ears. As a result, interaural latency difference is unaffected by sensory habituation and may be a more reliable cue for sound localization.
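
    The asymmetry between the two candidate codes can be made concrete with a toy model. All numbers below are illustrative assumptions, not fits to the cricket data: response strength decays across pulses more steeply at higher intensity, while latency shifts by the same amount at both ears.

    ```python
    # Toy model of the two candidate interaural localization codes under habituation.

    def spike_count(intensity_db, pulse_index, base=0.5, decrement_per_db=0.001):
        """Response strength declines across pulses, more steeply at higher intensity."""
        decrement = decrement_per_db * intensity_db
        return intensity_db * max(0.0, base - decrement * pulse_index) / 10

    def latency_ms(pulse_index, base=10.0, shift_per_pulse=0.5):
        """Response latency grows across pulses, independent of intensity."""
        return base + shift_per_pulse * pulse_index

    ipsi_db, contra_db = 70, 60  # the ear nearer the source receives the louder signal

    strength_diff_first = spike_count(ipsi_db, 0) - spike_count(contra_db, 0)
    strength_diff_fifth = spike_count(ipsi_db, 4) - spike_count(contra_db, 4)
    latency_diff = latency_ms(4) - latency_ms(4)  # identical shift at both ears

    print(strength_diff_first, strength_diff_fifth, latency_diff)
    ```

    Under these assumptions the interaural response-strength difference collapses across the pulse train, while the interaural latency difference is untouched, mirroring the paper's conclusion that the latency cue is the more habituation-robust one.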

  17. Intermodal auditory, visual, and tactile attention modulates early stages of neural processing.

    Science.gov (United States)

    Karns, Christina M; Knight, Robert T

    2009-04-01

    We used event-related potentials (ERPs) and gamma band oscillatory responses (GBRs) to examine whether intermodal attention operates early in the auditory, visual, and tactile modalities. To control for the effects of spatial attention, we spatially coregistered all stimuli and varied the attended modality across counterbalanced blocks in an intermodal selection task. In each block, participants selectively responded to either auditory, visual, or vibrotactile stimuli from the stream of intermodal events. Auditory and visual ERPs were modulated at the latencies of early cortical processing, but attention manifested later for tactile ERPs. For ERPs, auditory processing was modulated at the latency of the Na (29 msec), which indexes early cortical or thalamocortical processing and the subsequent P1 (90 msec) ERP components. Visual processing was modulated at the latency of the early phase of the C1 (62-72 msec) thought to be generated in the primary visual cortex and the subsequent P1 and N1 (176 msec). Tactile processing was modulated at the latency of the N160 (165 msec) likely generated in the secondary association cortex. Intermodal attention enhanced early sensory GBRs for all three modalities: auditory (onset 57 msec), visual (onset 47 msec), and tactile (onset 27 msec). Together, these results suggest that intermodal attention enhances neural processing relatively early in the sensory stream independent from differential effects of spatial and intramodal selective attention.

  18. The perception of speech modulation cues in lexical tones is guided by early language-specific experience

    Directory of Open Access Journals (Sweden)

    Laurianne eCabrera

    2015-08-01

    Full Text Available A number of studies showed that infants reorganize their perception of speech sounds according to their native language categories during their first year of life. Still, information is lacking about the contribution of basic auditory mechanisms to this process. This study aimed to evaluate when native language experience starts to noticeably affect the perceptual processing of basic acoustic cues (i.e., frequency-modulation (FM) and amplitude-modulation (AM) information) known to be crucial for speech perception in adults. The discrimination of a lexical-tone contrast (rising versus low) was assessed in 6- and 10-month-old infants learning either French or Mandarin using a visual habituation paradigm. The lexical tones were presented in two conditions designed to either keep intact or to severely degrade the FM and fine spectral cues needed to accurately perceive voice-pitch trajectory. A third condition was designed to assess the discrimination of the same voice-pitch trajectories using click trains containing only the FM cues related to the fundamental frequency (F0) in French- and Mandarin-learning 10-month-old infants. Results showed that the younger infants of both language groups and the Mandarin-learning 10-month-olds discriminated the intact lexical-tone contrast while French-learning 10-month-olds failed. However, only the French 10-month-olds discriminated degraded lexical tones when FM, and thus voice-pitch cues were reduced. Moreover, Mandarin-learning 10-month-olds were found to discriminate the pitch trajectories as presented in click trains better than French infants. Altogether, these results reveal that the perceptual reorganization occurring during the first year of life for lexical tones is coupled with changes in the auditory ability to use speech modulation cues.

  19. Cue-Specific Reactivity in Experienced Gamblers

    OpenAIRE

    2009-01-01

    To examine whether gambling cue reactivity is cue-specific, 47 scratch-off lottery players and 47 horse race gamblers were presented with video clips of their preferred and non-preferred modes of gambling, and two control stimuli including an exciting car race and a mental stressor task while heart rates, excitement, and urge to gamble were being measured. Heart rates for both groups of gamblers were highest to the mental stressor and did not differ in response to the other three cues. Excite...

  20. Neural representation of concurrent harmonic sounds in monkey primary auditory cortex: implications for models of auditory scene analysis.

    Science.gov (United States)

    Fishman, Yonatan I; Steinschneider, Mitchell; Micheyl, Christophe

    2014-09-10

    The ability to attend to a particular sound in a noisy environment is an essential aspect of hearing. To accomplish this feat, the auditory system must segregate sounds that overlap in frequency and time. Many natural sounds, such as human voices, consist of harmonics of a common fundamental frequency (F0). Such harmonic complex tones (HCTs) evoke a pitch corresponding to their F0. A difference in pitch between simultaneous HCTs provides a powerful cue for their segregation. The neural mechanisms underlying concurrent sound segregation based on pitch differences are poorly understood. Here, we examined neural responses in monkey primary auditory cortex (A1) to two concurrent HCTs that differed in F0 such that they are heard as two separate "auditory objects" with distinct pitches. We found that A1 can resolve, via a rate-place code, the lower harmonics of both HCTs, a prerequisite for deriving their pitches and for their perceptual segregation. Onset asynchrony between the HCTs enhanced the neural representation of their harmonics, paralleling their improved perceptual segregation in humans. Pitches of the concurrent HCTs could also be temporally represented by neuronal phase-locking at their respective F0s. Furthermore, a model of A1 responses using harmonic templates could qualitatively reproduce psychophysical data on concurrent sound segregation in humans. Finally, we identified a possible intracortical homolog of the "object-related negativity" recorded noninvasively in humans, which correlates with the perceptual segregation of concurrent sounds. Findings indicate that A1 contains sufficient spectral and temporal information for segregating concurrent sounds based on differences in pitch.
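
    A rate-place "harmonic template" of the kind invoked above can be sketched as a harmonic sieve. The toy spectrum, 1 Hz bins, harmonic counts, and tolerance below are all illustrative assumptions: each candidate F0 is scored by the spectral energy falling near its harmonics, and the fundamentals of the two concurrent harmonic complex tones score highest.

    ```python
    import numpy as np

    freqs = np.arange(50, 4000, 1.0)      # spectral axis in Hz (1 Hz bins)
    spectrum = np.zeros_like(freqs)

    # Two concurrent harmonic complex tones with different fundamentals (200, 250 Hz).
    for f0 in (200.0, 250.0):
        for h in range(1, 11):
            spectrum[np.abs(freqs - h * f0).argmin()] += 1.0

    def template_score(f0, freqs, spectrum, n_harm=10, tol=2.0):
        """Sum spectral energy within +/- tol Hz of the first n_harm harmonics of f0."""
        score = 0.0
        for h in range(1, n_harm + 1):
            score += spectrum[np.abs(freqs - h * f0) <= tol].sum()
        return score

    # Score every candidate fundamental against the mixed spectrum.
    candidates = np.arange(100.0, 400.0, 1.0)
    scores = [template_score(f0, freqs, spectrum) for f0 in candidates]
    best = candidates[int(np.argmax(scores))]
    print(f"best-matching F0: {best:.0f} Hz")
    ```

    The two true fundamentals tie for the top score (each collects all of its own harmonics plus the bins the tones share), while subharmonics such as 100 Hz collect only a fraction of the energy; this is the sense in which resolved harmonics in a rate-place code suffice to recover both pitches.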

  1. Is the auditory evoked P2 response a biomarker of learning?

    Directory of Open Access Journals (Sweden)

    Kelly eTremblay

    2014-02-01

    Full Text Available Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography and magnetoencephalography have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEP), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences; including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared it to participants, matched in time, and stimulus exposure, that did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What’s more, these effects were retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN) wave 600-900 ms post-stimulus onset, post-training, exclusively for the group that learned to identify the pre-voicing contrast.

  2. Is the auditory evoked P2 response a biomarker of learning?

    Science.gov (United States)

    Tremblay, Kelly L; Ross, Bernhard; Inoue, Kayo; McClannahan, Katrina; Collet, Gregory

    2014-01-01

    Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography (EEG) and magnetoencephalography (MEG) have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEP), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences, including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared it to participants, matched in time and stimulus exposure, who did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What's more, these effects were retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself.
A further finding was the expression of a late negativity (LN) wave 600-900 ms post-stimulus onset, post-training, exclusively for the group that learned to identify the pre-voiced contrast.

  3. Neural entrainment to rhythmically-presented auditory, visual and audio-visual speech in children

    Directory of Open Access Journals (Sweden)

    Alan James Power

    2012-07-01

    Full Text Available Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal ‘samples’ of information from the speech stream at different rates, phase-resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (‘phase locking’). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase-locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically-developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable ‘ba’, presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a talking head). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the ‘ba’ stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a ‘ba’ in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling.
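The isochronous stream with rhythm violations described above can be sketched in code. This is a minimal illustration with assumed values: the function name, the 0.2 s delay, and the 8-stimulus train are invented for the example; only the 2 Hz rate comes from the record.

```python
def make_onsets(n_stimuli, rate_hz=2.0, violation_index=None, delay_s=0.2):
    """Onset times (in seconds) for an isochronous stimulus train.

    rate_hz and delay_s are illustrative values, not those of the study.
    A violation delays a single onset, breaking the isochronous rhythm.
    """
    period = 1.0 / rate_hz
    onsets = [i * period for i in range(n_stimuli)]
    if violation_index is not None:
        onsets[violation_index] += delay_s
    return onsets

regular = make_onsets(8)                      # strictly isochronous 'ba' train
violated = make_onsets(8, violation_index=5)  # the 6th 'ba' arrives late
```

Only the delayed onset differs between the two trains, so any neural or behavioral response difference can be attributed to the rhythm violation itself.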

  4. Contribution of auditory working memory to speech understanding in mandarin-speaking cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Duoduo Tao

    Full Text Available To investigate how auditory working memory relates to speech perception performance by Mandarin-speaking cochlear implant (CI) users. Auditory working memory and speech perception were measured in Mandarin-speaking CI and normal-hearing (NH) participants. Working memory capacity was measured using forward digit span and backward digit span; working memory efficiency was measured using articulation rate. Speech perception was assessed with: (a) word-in-sentence recognition in quiet, (b) word-in-sentence recognition in speech-shaped steady noise at +5 dB signal-to-noise ratio, (c) Chinese disyllable recognition in quiet, and (d) Chinese lexical tone recognition in quiet. Self-reported school rank was also collected regarding performance in schoolwork. There was large inter-subject variability in auditory working memory and speech performance for CI participants. Working memory and speech performance were significantly poorer for CI than for NH participants. All three working memory measures were strongly correlated with each other for both CI and NH participants. Partial correlation analyses were performed on the CI data while controlling for demographic variables. Working memory efficiency was significantly correlated only with sentence recognition in quiet when working memory capacity was partialled out. Working memory capacity was correlated with disyllable recognition and school rank when efficiency was partialled out. There was no correlation between working memory and lexical tone recognition in the present CI participants. Mandarin-speaking CI users experience significant deficits in auditory working memory and speech performance compared with NH listeners. The present data suggest that auditory working memory may contribute to CI users' difficulties in speech understanding.
The present pattern of results with Mandarin-speaking CI users is consistent with previous auditory working memory studies with English-speaking CI users, suggesting that the lexical
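The partial-correlation logic used in this record, correlating two measures after removing the linear influence of a control variable from each, can be sketched as follows. The function names and toy data are invented for illustration; this is not the study's analysis code.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def residuals(y, z):
    """Residuals of y after regressing out its linear dependence on z."""
    n = len(y)
    mz, my = sum(z) / n, sum(y) / n
    beta = (sum((a - mz) * (b - my) for a, b in zip(z, y))
            / sum((a - mz) ** 2 for a in z))
    return [b - (my + beta * (a - mz)) for a, b in zip(z, y)]

def partial_r(x, y, z):
    """Correlation of x and y with the control variable z partialled out."""
    return pearson_r(residuals(x, z), residuals(y, z))
```

For example, `partial_r(capacity_scores, recognition_scores, age)` would estimate the capacity-recognition association with age held constant (variable names hypothetical).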

  5. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens;

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse...

  6. Synchronization and leadership in string quartet performance: a case study of auditory and visual cues

    Directory of Open Access Journals (Sweden)

    Renee Timmers

    2014-06-01

    Full Text Available Temporal coordination between members of a string quartet was investigated across repeated performances of an excerpt of Haydn’s string quartet in G Major, Op. 77 No. 1. Cross-correlations between interbeat intervals of performances at different lags showed a unidirectional dependence of Viola on Violin I, and of Violin I on Cello. Bidirectional dependence was observed for the relationships between Violin II and Cello, and between Violin II and Viola. Dependencies self-reported by the players after the performances reflected these measured dependencies more closely than the dependencies attributed to them by the other players, which instead showed more typical leader-follower patterns in which Violin I leads. On the other hand, primary leadership from Violin I was observed in an analysis of the bow speed characteristics preceding the first tone onset: the anticipatory movement of Violin I set the tempo of the excerpt. Taken together, the results show a more complex and differentiated pattern of dependencies than expected from a traditional role division of leadership, suggesting several avenues for further research.
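The lagged cross-correlation analysis described above can be sketched schematically: correlate one player's interbeat intervals with another's shifted by a lag, and if the correlation peaks at a positive lag, the first series leads. The function name and toy data below are invented for the example; the study's actual data and procedure may differ.

```python
import math

def lagged_corr(a, b, lag):
    """Pearson correlation of a[i] with b[i + lag] over the overlapping region.

    A strong correlation at lag > 0 suggests that series `a` predicts later
    values of series `b` (i.e., `a` leads and `b` follows).
    """
    if lag >= 0:
        x, y = a[: len(a) - lag], b[lag:]
    else:
        x, y = a[-lag:], b[: len(b) + lag]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((p - mx) * (q - my) for p, q in zip(x, y))
    den = math.sqrt(sum((p - mx) ** 2 for p in x) *
                    sum((q - my) ** 2 for q in y))
    return num / den

# Toy interbeat intervals (s): `follower` copies `leader` one beat later
# (plus a constant offset), so the cross-correlation peaks at lag 1.
leader = [0.50, 0.52, 0.48, 0.55, 0.47, 0.53, 0.49, 0.54]
follower = [0.50] + [ibi + 0.01 for ibi in leader[:-1]]
```

Comparing the correlation profile at positive versus negative lags for each pair of players yields the unidirectional or bidirectional dependencies reported in the abstract.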

  7. Self-generated auditory feedback as a cue to support rhythmic motor stability

    DEFF Research Database (Denmark)

    Krupenia, Stas S.; Hoffmann, Pablo F.; Zalmanov, Hagar

    2011-01-01

    A goal of the SKILLS project is to develop Virtual Reality (VR)-based training simulators for different application domains, one of which is juggling. Within this context the value of multimodal VR environments for skill acquisition is investigated. In this study, we investigated whether it was n...

  8. Perceptual separation of transparent motion components: the interaction of motion, luminance and shape cues.

    Science.gov (United States)

    Meso, Andrew Isaac; Durant, Szonya; Zanker, Johannes M

    2013-09-01

    Transparency is perceived when two or more objects or surfaces can be separated by the visual system whilst they are presented in the same region of the visual field at the same time. This segmentation of distinct entities on the basis of overlapping local visual cues poses an interesting challenge for the understanding of cortical information processing. In psychophysical experiments, we studied stimuli that contained randomly positioned disc elements, moving at two different speeds in the same direction, to analyse the interaction of cues during the perception of motion transparency. The current work extends findings from previous experiments with sine wave luminance gratings, which only vary in one spatial dimension. The reported experiments manipulate low-level cues, like differences in speed or luminance, and what are likely to be higher level cues such as the relative size of the elements or the superposition rules that govern overlapping regions. The mechanism responsible for separation appears to be mediated by a combination of the relevant and available cues. Where perceived transparency is stronger, the neural representations of components are inferred to be more distinguishable from each other across what appear to be multiple cue dimensions. The disproportionately large effect of the type of disc superposition on transparency strength suggests that, with this manipulation, there may be enhanced separation beyond what would be expected from the linear combination of low-level cues, in a process we term labelling. A mechanism for transparency perception consistent with the current results would require a minimum of three stages; in addition to the local motion detection and global pooling and separation of motion signals, findings suggest a powerful additional role of higher level separation cues.

  9. Relative saliency of pitch versus phonetic cues in infancy

    Science.gov (United States)

    Cardillo, Gina; Kuhl, Patricia; Sundara, Megha

    2005-09-01

    Infants in their first year are highly sensitive to different acoustic components of speech, including phonetic detail and pitch information. The present investigation examined whether relative sensitivity to these two dimensions changes during this period, as the infant acquires language-specific phonetic categories. If pitch and phonetic discrimination are hierarchical, then the relative salience of pitch and phonetic change may become reversed between 8 and 12 months of age. Thirty-two- and 47-week-old infants were tested using an auditory preference paradigm in which they first heard a recording of a person singing a 4-note song (i.e., "go-bi-la-tu") and were then presented with both the familiar and an unfamiliar, modified version of that song. Modifications were either a novel pitch order (keeping syllables constant) or a novel syllable order (keeping melody constant). Compared to the younger group, older infants were predicted to show greater relative sensitivity to syllable order than pitch order, in accordance with an increased tendency to attend to linguistically relevant information (phonetic patterns) as opposed to cues that are initially more salient (pitch patterns). Preliminary data show trends toward the predicted interaction, with preference patterns commensurate with previously reported data. [Work supported by the McDonnell Foundation and NIH.]

  10. Attentional bias for food cues in binge eating disorder.

    Science.gov (United States)

    Schmitz, Florian; Naumann, Eva; Trentowska, Monika; Svaldi, Jennifer

    2014-09-01

    The aim of the present study was to investigate an attentional bias toward food stimuli in binge eating disorder (BED). To this end, a BED and a weight-matched control group (CG) completed a clarification task and a spatial cueing paradigm. The clarification task revealed that food stimuli were detected faster than neutral stimuli, and that this difference was more pronounced in BED than in the CG. The spatial cueing paradigm indicated a stimulus engagement effect in the BED group but not in the CG, suggesting that an early locus in stimulus processing contributes to differences between BED patients and obese controls. Both groups experienced difficulty disengaging attention from food stimuli, and this effect was only descriptively larger in the BED group. The effects obtained in both paradigms were found to be correlated with reported severity of BED symptoms. Of note, this relationship was partially mediated by the arousal associated with food stimuli relative to neutral stimuli, as predicted by an incentive sensitization account.

  11. Multisensory cueing for enhancing orientation information during flight.

    Science.gov (United States)

    Albery, William B

    2007-05-01

    The U.S. Air Force still regards spatial disorientation (SD) and loss of situational awareness (SA) as major contributing factors in operational Class A aircraft mishaps ($1M in aircraft loss and/or pilot fatality). Air Force Safety Agency data show 71 Class A SD mishaps from 1991-2004 in both fixed and rotary-wing aircraft. These mishaps resulted in 62 fatalities and an aircraft cost of over $2.0B. These losses account for 21% of the USAF's Class A mishaps during that 14-yr period. Even non-mishap SD events negatively impact aircrew performance and reduce mission effectiveness. A multisensory system has been developed called the Spatial Orientation Retention Device (SORD) to enhance the aircraft attitude information to the pilot. SORD incorporates multisensory aids including helmet mounted symbology and tactile and audio cues. SORD has been prototyped and demonstrated in the Air Force Research Laboratory at Wright-Patterson AFB, OH. The technology has now been transitioned to a Rotary Wing Brownout program. This paper discusses the development of SORD and a potential application, including an augmented cognition application. Unlike automatic ground collision avoidance systems, SORD does not take over the aircraft if a pre-set altitude is broached by the pilot; rather, SORD provides complementary attitude cues to the pilot via the tactile, audio, and visual systems that allow the pilot to continue flying through disorienting conditions.

  12. Cue Summation Enables Perceptual Grouping

    Science.gov (United States)

    Persike, Malte; Meinhardt, Gunter

    2008-01-01

    In order to study how basic features interact in shape perception, subjects detected target figure presence and identified target figure shape in a combined 2-alternative forced-choice task. Target figures were embedded in Gabor random fields and were defined by feature contrast in spatial frequency, orientation, or both. Two figure classes were…

  13. Kin-informative recognition cues in ants

    DEFF Research Database (Denmark)

    Nehring, Volker; Evison, Sophie E F; Santorelli, Lorenzo A

    2011-01-01

    Although social groups are characterized by cooperation, they are also often the scene of conflict. In non-clonal systems, the reproductive interests of group members will differ and individuals may benefit by exploiting the cooperative efforts of other group members. However, such selfish...... behaviour is thought to be rare in one of the classic examples of cooperation--social insect colonies--because the colony-level costs of individual selfishness select against cues that would allow workers to recognize their closest relatives. In accord with this, previous studies of wasps and ants have...... found little or no kin information in recognition cues. Here, we test the hypothesis that social insects do not have kin-informative recognition cues by investigating the recognition cues and relatedness of workers from four colonies of the ant Acromyrmex octospinosus. Contrary to the theoretical...

  14. Impaired timing adjustments in response to time-varying auditory perturbation during connected speech production in persons who stutter.

    Science.gov (United States)

    Cai, Shanqing; Beal, Deryk S; Ghosh, Satrajit S; Guenther, Frank H; Perkell, Joseph S

    2014-02-01

    Auditory feedback (AF), the speech signal received by a speaker's own auditory system, contributes to the online control of speech movements. Recent studies based on AF perturbation provided evidence for abnormalities in the integration of auditory error with ongoing articulation and phonation in persons who stutter (PWS), but stopped short of examining connected speech. This is a crucial limitation considering the importance of sequencing and timing in stuttering. In the current study, we imposed time-varying perturbations on AF while PWS and fluent participants uttered a multisyllabic sentence. Two distinct types of perturbations were used to separately probe the control of the spatial and temporal parameters of articulation. While PWS exhibited only subtle anomalies in the AF-based spatial control, their AF-based fine-tuning of articulatory timing was substantially weaker than normal, especially in early parts of the responses, indicating slowness in the auditory-motor integration for temporal control.

  15. Research of Visual and Auditory Stimulation Based on Environment Reset on Hemi-spatial Neglect in Patients with Cerebral Apoplexy

    Institute of Scientific and Technical Information of China (English)

    韩宇花; 陶希; 邓景贵; 刘佳; 宋涛; 何娟

    2014-01-01

    Objective: To explore the effect of visual and auditory stimulation based on environment reset on hemi-spatial neglect (HSN) in patients with cerebral apoplexy. Methods: A total of 49 stroke patients with HSN admitted between March 2010 and September 2012 were randomly divided into an observation group (n=27) and a control group (n=22). Both groups received conventional therapy. No special requirements were placed on the ward and rehabilitation environment of the control group, whereas the bed placement and rehabilitation environment of the observation group were reset. The degree of HSN was assessed with the line bisection (LB) and line cancellation (LC) tests, neurological impairment was evaluated with the National Institutes of Health Stroke Scale (NIHSS), and activities of daily living (ADL) were evaluated with the modified Barthel Index (MBI) before treatment and after 4 and 8 weeks of treatment. Results: At 4 and 8 weeks, LB, LC, and NIHSS scores in both groups were lower than before treatment, and MBI scores were higher (P<0.05). At 8 weeks, LB and NIHSS scores in both groups, and LC scores in the observation group, were lower than at 4 weeks, while MBI scores were higher; LB and LC scores in the observation group were lower than in the control group, and MBI scores were higher (P<0.05). Conclusion: Visual and auditory stimulation based on environment reset is beneficial for stroke patients with HSN and can improve ADL ability, but may have little effect on neurological impairment.

  16. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David Pérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. For example, in stimulus-specific adaptation, neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis, and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  17. Auditory adaptation improves tactile frequency perception.

    Science.gov (United States)

    Crommett, Lexi E; Pérez-Bellido, Alexis; Yau, Jeffrey M

    2017-01-11

    Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals shape tactile processing is unclear: perceptual interactions between contemporaneous sounds and vibrations are consistent with multiple neural mechanisms. Here we used a crossmodal adaptation paradigm, which separated auditory and tactile stimulation in time, to test the hypothesis that tactile frequency perception depends on neural circuits that also process auditory frequency. We reasoned that auditory adaptation effects would transfer to touch only if signals from both senses converge on common representations. We found that auditory adaptation can improve tactile frequency discrimination thresholds. This occurred only when adaptor and test frequencies overlapped. In contrast, auditory adaptation did not influence tactile intensity judgments. Thus, auditory adaptation enhances touch in a frequency- and feature-specific manner. A simple network model in which tactile frequency information is decoded from sensory neurons that are susceptible to auditory adaptation recapitulates these behavioral results. Our results imply that the neural circuits supporting tactile frequency perception also process auditory signals. This finding is consistent with the notion of supramodal operators performing canonical operations, like temporal frequency processing, regardless of input modality.

  18. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Full Text Available Auditory dysfunction is a common clinical symptom that can induce profound effects on the quality of life of those affected. Cerebrovascular disease (CVD is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The numbers of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often include the auditory system, resulting in various types of auditory dysfunctions, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  19. The auditory brainstem is a barometer of rapid auditory learning.

    Science.gov (United States)

    Skoe, E; Krizman, J; Spitzer, E; Kraus, N

    2013-07-23

    To capture patterns in the environment, neurons in the auditory brainstem rapidly alter their firing based on the statistical properties of the soundscape. How this neural sensitivity relates to behavior is unclear. We tackled this question by combining neural and behavioral measures of statistical learning, a general-purpose learning mechanism governing many complex behaviors including language acquisition. We recorded complex auditory brainstem responses (cABRs) while human adults implicitly learned to segment patterns embedded in an uninterrupted sound sequence based on their statistical characteristics. The brainstem's sensitivity to statistical structure was measured as the change in the cABR between a patterned and a pseudo-randomized sequence composed from the same set of sounds but differing in their sound-to-sound probabilities. Using this methodology, we provide the first demonstration that behavioral indices of rapid learning relate to individual differences in brainstem physiology. We found that neural sensitivity to statistical structure manifested along a continuum, from adaptation to enhancement, where cABR enhancement (patterned > pseudo-random) tracked with greater rapid statistical learning than adaptation did. Short- and long-term auditory experiences (days to years) are known to promote brainstem plasticity, and here we provide a conceptual advance by showing that the brainstem is also integral to rapid learning occurring over minutes.
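The patterned versus pseudo-randomized sequences described above can be sketched as follows. In this toy version (sound labels, triplet inventory, and function names are invented; the record does not specify its exact sequence structure), a patterned stream is built from whole triplets so that some sound-to-sound transition probabilities equal 1, while the pseudo-random stream shuffles the same sounds to flatten those probabilities.

```python
import random

# Illustrative sound labels grouped into triplets; within a triplet the
# transition probability is 1.0 (e.g., "A" is always followed by "B").
TRIPLETS = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I")]

def patterned_stream(n_triplets, seed=0):
    """Uninterrupted stream built from whole, randomly ordered triplets."""
    rng = random.Random(seed)
    stream = []
    for _ in range(n_triplets):
        stream.extend(rng.choice(TRIPLETS))
    return stream

def pseudo_random_stream(n_triplets, seed=0):
    """Same sounds and length, but fully shuffled, so the consistent
    sound-to-sound probabilities of the patterned stream are destroyed."""
    rng = random.Random(seed)
    stream = patterned_stream(n_triplets, seed)
    rng.shuffle(stream)
    return stream

def transition_prob(stream, first, second):
    """Empirical P(next sound is `second` | current sound is `first`)."""
    pairs = list(zip(stream, stream[1:]))
    after_first = [q for p, q in pairs if p == first]
    return after_first.count(second) / len(after_first) if after_first else 0.0
```

Because the two streams contain exactly the same sounds, any difference in the evoked response can be attributed to their statistical structure rather than their acoustics.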

  20. Gender differences in craving and cue reactivity to smoking and negative affect/stress cues.

    Science.gov (United States)

    Saladin, Michael E; Gray, Kevin M; Carpenter, Matthew J; LaRowe, Steven D; DeSantis, Stacia M; Upadhyaya, Himanshu P

    2012-01-01

    There is evidence that women may be less successful when attempting to quit smoking than men. One potential contributory cause of this gender difference is differential craving and stress reactivity to smoking- and negative affect/stress-related cues. The present human laboratory study investigated the effects of gender on reactivity to smoking and negative affect/stress cues by exposing nicotine dependent women (n = 37) and men (n = 53) smokers to two active cue types, each with an associated control cue: (1) in vivo smoking cues and in vivo neutral control cues, and (2) imagery-based negative affect/stress script and a neutral/relaxing control script. Both before and after each cue/script, participants provided subjective reports of smoking-related craving and affective reactions. Heart rate (HR) and skin conductance (SC) responses were also measured. Results indicated that participants reported greater craving and SC in response to smoking versus neutral cues and greater subjective stress in response to the negative affect/stress versus neutral/relaxing script. With respect to gender differences, women evidenced greater craving, stress and arousal ratings and lower valence ratings (greater negative emotion) in response to the negative affect/stressful script. While there were no gender differences in responses to smoking cues, women trended towards higher arousal ratings. Implications of the findings for treatment and tobacco-related morbidity and mortality are discussed.

  1. Nipping cue reactivity in the bud: baclofen prevents limbic activation elicited by subliminal drug cues.

    Science.gov (United States)

    Young, Kimberly A; Franklin, Teresa R; Roberts, David C S; Jagannathan, Kanchana; Suh, Jesse J; Wetherill, Reagan R; Wang, Ze; Kampman, Kyle M; O'Brien, Charles P; Childress, Anna Rose

    2014-04-02

    Relapse is a widely recognized and difficult-to-treat feature of the addictions. Substantial evidence implicates cue-triggered activation of the mesolimbic dopamine system as an important contributing factor. Even drug cues presented outside of conscious awareness (i.e., subliminally) produce robust activation within this circuitry, indicating the sensitivity and vulnerability of the brain to potentially problematic reward signals. Because pharmacological agents that prevent these early cue-induced responses could play an important role in relapse prevention, we examined whether baclofen, a GABAB receptor agonist that reduces mesolimbic dopamine release and conditioned drug responses in laboratory animals, could inhibit mesolimbic activation elicited by subliminal cocaine cues in cocaine-dependent individuals. Twenty cocaine-dependent participants were randomized to receive baclofen (60 mg/d; 20 mg t.i.d.) or placebo. Event-related BOLD fMRI and a backward-masking paradigm were used to examine the effects of baclofen on subliminal cocaine (vs neutral) cues. Sexual and aversive cues were included to examine specificity. We observed that baclofen-treated participants displayed significantly less activation in response to subliminal cocaine (vs neutral) cues, but not sexual or aversive (vs neutral) cues, than placebo-treated participants in a large interconnected bilateral cluster spanning the ventral striatum, ventral pallidum, amygdala, midbrain, and orbitofrontal cortex. These results suggest that baclofen may inhibit the earliest type of drug cue-induced motivational processing, that which occurs outside of awareness, before it evolves into a less manageable state.

  2. Design guidelines for the use of audio cues in computer interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Sumikawa, D.A.; Blattner, M.M.; Joy, K.I.; Greenberg, R.M.

    1985-07-01

    A logical next step in the evolution of the computer-user interface is the incorporation of sound, thereby engaging our sense of hearing in our communication with the computer. This allows our visual and auditory capacities to work in unison, leading to a more effective and efficient interpretation of information received from the computer than by sight alone. In this paper we examine earcons, audio cues used in the computer-user interface to provide information and feedback to the user about computer entities (these include messages and functions, as well as states and labels). The material in this paper is part of a larger study that recommends guidelines for the design and use of audio cues in the computer-user interface. The complete work examines the disciplines of music, psychology, communication theory, advertising, and psychoacoustics to discover how sound is utilized and analyzed in those areas. The resulting information is organized according to the theory of semiotics, the theory of signs, into the syntax, semantics, and pragmatics of communication by sound. Here we present design guidelines for the syntax of earcons. Earcons are constructed from motives, short sequences of notes with a specific rhythm and pitch, embellished by timbre, dynamics, and register. Compound earcons and family earcons are introduced; these are related motives that serve to identify a family of related cues. Examples of earcons are given.
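The motive-based structure described in the abstract can be sketched as data: a motive is a short sequence of (pitch, duration) notes, a family of earcons shares a motive while varying register, and a compound earcon concatenates motives. All names and numeric values below are invented for illustration; they are not taken from the guidelines themselves.

```python
# A motive: a short sequence of (pitch in Hz, duration in beats) pairs.
# The notes and values are hypothetical examples, not the paper's earcons.
MOTIVE_ERROR = [(440.0, 0.5), (330.0, 0.5), (440.0, 1.0)]

def transpose(motive, semitones):
    """Shift a motive's register: family earcons can share a rhythm and
    contour while differing in register (or timbre, dynamics)."""
    factor = 2 ** (semitones / 12)
    return [(pitch * factor, dur) for pitch, dur in motive]

def compound(*motives):
    """A compound earcon: related motives played in succession."""
    return [note for m in motives for note in m]

# A hypothetical 'file error' cue: the error motive an octave higher,
# i.e., the same rhythm and contour in a new register.
FILE_ERROR = transpose(MOTIVE_ERROR, 12)
```

Listeners can then learn the shared motive as the family identity ("error") and the register or timbre variation as the family member ("file error").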

  3. Processing of location and pattern changes of natural sounds in the human auditory cortex.

    Science.gov (United States)

    Altmann, Christian F; Bledowski, Christoph; Wibral, Michael; Kaiser, Jochen

    2007-04-15

    Parallel cortical pathways have been proposed for the processing of auditory pattern and spatial information, respectively. We tested this segregation with human functional magnetic resonance imaging (fMRI) and separate electroencephalographic (EEG) recordings in the same subjects who listened passively to four sequences of repetitive spatial animal vocalizations in an event-related paradigm. Transitions between sequences constituted either a change of auditory pattern, location, or both pattern+location. This procedure allowed us to investigate the cortical correlates of natural auditory "what" and "where" changes independent of differences in the individual stimuli. For pattern changes, we observed significantly increased fMRI responses along the bilateral anterior superior temporal gyrus and superior temporal sulcus, the planum polare, lateral Heschl's gyrus and anterior planum temporale. For location changes, significant increases of fMRI responses were observed in bilateral posterior superior temporal gyrus and planum temporale. An overlap of these two types of changes occurred in the lateral anterior planum temporale and posterior superior temporal gyrus. The analysis of source event-related potentials (ERPs) revealed faster processing of location than pattern changes. Thus, our data suggest that passive processing of auditory spatial and pattern changes is dissociated both temporally and anatomically in the human brain. The predominant role of more anterior aspects of the superior temporal lobe in sound identity processing supports the role of this area as part of the auditory pattern processing stream, while spatial processing of auditory stimuli appears to be mediated by the more posterior parts of the superior temporal lobe.

  4. Direct and Indirect Cues to Knowledge States during Word Learning

    Science.gov (United States)

    Saylor, Megan M.; Carroll, C. Brooke

    2009-01-01

    The present study investigated three-year-olds' sensitivity to direct and indirect cues to others' knowledge states for word learning purposes. Children were given either direct, physical cues to knowledge or indirect, verbal cues to knowledge. Preschoolers revealed a better ability to learn words from a speaker following direct, physical cues to…

  5. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    Science.gov (United States)

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.

  6. [Age differences of event-related potentials in the perception of successive and spatial components of auditory information].

    Science.gov (United States)

    Portnova, G V; Martynova, O V; Ivanitskiĭ, G A

    2014-01-01

    The perception of spatial and successive contexts of auditory information develops during human ontogeny. We compared event-related potentials (ERPs) recorded in 5- to 6-year-old children (N = 15) and adults (N = 15) in response to a digit series with omitted digits to explore age differences in the perception of successive auditory information. In addition, ERPs in response to the sound of falling drops delivered binaurally were obtained to examine the spatial context of auditory information. The ERPs obtained from the omitted digits significantly differed in the amplitude and latency of the N200 and P300 components between adults and children, which supports the hypothesis that the perception of a successive auditory structure is less automated in children than in adults. Although no significant differences were found in adults, the sound of falling drops presented to the left ears of children elicited ERPs with earlier latencies and higher amplitudes of the P300 and N400 components in the right temporal area. Stimulation of the right ear caused an increased amplitude of the N100 component in children. Thus, the observed differences in auditory ERPs of children and adults reflect developmental changes in the perception of spatial and successive auditory information.

  7. Speech distortion measure based on auditory properties

    Institute of Scientific and Technical Information of China (English)

    CHEN Guo; HU Xiulin; ZHANG Yunyu; ZHU Yaoting

    2000-01-01

    The Perceptual Spectrum Distortion (PSD) measure, based on the auditory properties of human hearing, is presented to measure speech distortion. The PSD measure calculates a speech distortion distance by simulating human auditory properties, converting the short-time speech power spectrum to an auditory perceptual spectrum. Preliminary simulation experiments comparing it with the Itakura measure have been carried out. The results show that the PSD measure is a preferable speech distortion measure and more consistent with subjective assessment of speech quality.
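The abstract does not give the PSD algorithm in detail. A minimal sketch of the general idea, mapping a short-time power spectrum onto coarse, log-compressed auditory-style bands and measuring distance there, might look like the following; the band layout and constants are illustrative assumptions, not the published PSD definition:

```python
import math

def perceptual_spectrum(power_spectrum, n_bands=8):
    """Group a short-time power spectrum into coarse bands that widen
    toward high frequencies (a crude stand-in for critical bands), then
    log-compress each band's energy as a rough loudness transform."""
    n = len(power_spectrum)
    # Geometric-style band edges: narrow at low, wide at high frequencies.
    edges = [round(n * (2 ** (b / n_bands) - 1)) for b in range(n_bands + 1)]
    bands = []
    for lo, hi in zip(edges, edges[1:]):
        energy = sum(power_spectrum[lo:max(hi, lo + 1)])
        bands.append(math.log10(energy + 1e-12))
    return bands

def psd_distance(spectrum_a, spectrum_b):
    """Distortion distance: Euclidean distance between the two
    auditory perceptual spectra."""
    pa = perceptual_spectrum(spectrum_a)
    pb = perceptual_spectrum(spectrum_b)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pa, pb)))
```

Comparing spectra in this compressed band domain, rather than bin by bin, is what lets such a measure weight distortions roughly the way a listener would.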

  8. Auditory evoked potentials and multiple sclerosis

    OpenAIRE

    Carla Gentile Matas; Sandro Luiz de Andrade Matas; Caroline Rondina Salzano de Oliveira; Isabela Crivellaro Gonçalves

    2010-01-01

    Multiple sclerosis (MS) is an inflammatory, demyelinating disease that can affect several areas of the central nervous system. Damage along the auditory pathway can alter its integrity significantly. Therefore, it is important to investigate the auditory pathway, from the brainstem to the cortex, in individuals with MS. OBJECTIVE: The aim of this study was to characterize auditory evoked potentials in adults with MS of the remittent-recurrent type. METHOD: The study comprised 25 individuals w...

  9. Cortical encoding and neurophysiological tracking of intensity and pitch cues signaling English stress patterns in native and nonnative speakers.

    Science.gov (United States)

    Chung, Wei-Lun; Bidelman, Gavin M

    2016-01-01

    We examined cross-language differences in neural encoding and tracking of intensity and pitch cues signaling English stress patterns. Auditory mismatch negativities (MMNs) were recorded in English and Mandarin listeners in response to contrastive English pseudowords whose primary stress occurred either on the first or second syllable (i.e., "nocTICity" vs. "NOCticity"). The contrastive syllable stress elicited two consecutive MMNs in both language groups, but English speakers demonstrated larger responses to stress patterns than Mandarin speakers. Correlations between the amplitude of ERPs and continuous changes in the running intensity and pitch of speech assessed how well each language group's brain activity tracked these salient acoustic features of lexical stress. We found that English speakers' neural responses tracked intensity changes in speech more closely than Mandarin speakers (higher brain-acoustic correlation). Findings demonstrate more robust and precise processing of English stress (intensity) patterns in early auditory cortical responses of native relative to nonnative speakers.
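The brain-acoustic correlation described above is, at its core, a correlation between a running neural-response series and a running acoustic-feature series. A minimal sketch using plain Pearson correlation over aligned samples (the study's exact estimator is not specified in this abstract):

```python
import math

def pearson_r(neural, acoustic):
    """Pearson correlation between a running neural-response series and
    a running acoustic-feature series (e.g., intensity), sampled at the
    same aligned time points. Higher |r| = closer neural tracking."""
    n = len(neural)
    mx = sum(neural) / n
    my = sum(acoustic) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(neural, acoustic))
    var_x = sum((a - mx) ** 2 for a in neural)
    var_y = sum((b - my) ** 2 for b in acoustic)
    return cov / math.sqrt(var_x * var_y)
```

Under this scheme, the finding that English speakers tracked intensity more closely than Mandarin speakers corresponds to a higher r between their ERP amplitude series and the running intensity of the speech.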

  10. An Eye Tracking Comparison of External Pointing Cues and Internal Continuous Cues in Learning with Complex Animations

    Science.gov (United States)

    Boucheix, Jean-Michel; Lowe, Richard K.

    2010-01-01

    Two experiments used eye tracking to investigate a novel cueing approach for directing learner attention to low salience, high relevance aspects of a complex animation. In the first experiment, comprehension of a piano mechanism animation containing spreading-colour cues was compared with comprehension obtained with arrow cues or no cues. Eye…

  11. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Full Text Available Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

  12. Auditory Training and Its Effects upon the Auditory Discrimination and Reading Readiness of Kindergarten Children.

    Science.gov (United States)

    Cullen, Minga Mustard

    The purpose of this investigation was to evaluate the effects of a systematic auditory training program on the auditory discrimination ability and reading readiness of 55 white, middle/upper middle class kindergarten students. Following pretesting with the "Wepman Auditory Discrimination Test,""The Clymer-Barrett Prereading Battery," and the…

  13. Effects of Methylphenidate (Ritalin) on Auditory Performance in Children with Attention and Auditory Processing Disorders.

    Science.gov (United States)

    Tillery, Kim L.; Katz, Jack; Keller, Warren D.

    2000-01-01

    A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…

  14. Central auditory function of deafness genes.

    Science.gov (United States)

    Willaredt, Marc A; Ebbers, Lena; Nothwang, Hans Gerd

    2014-06-01

    The highly variable benefit of hearing devices is a serious challenge in auditory rehabilitation. Various factors contribute to this phenomenon such as the diversity in ear defects, the different extent of auditory nerve hypoplasia, the age of intervention, and cognitive abilities. Recent analyses indicate that, in addition, central auditory functions of deafness genes have to be considered in this context. Since reduced neuronal activity acts as the common denominator in deafness, it is widely assumed that peripheral deafness influences development and function of the central auditory system in a stereotypical manner. However, functional characterization of transgenic mice with mutated deafness genes demonstrated gene-specific abnormalities in the central auditory system as well. A frequent function of deafness genes in the central auditory system is supported by a genome-wide expression study that revealed significant enrichment of these genes in the transcriptome of the auditory brainstem compared to the entire brain. Here, we will summarize current knowledge of the diverse central auditory functions of deafness genes. We furthermore propose the intimately interwoven gene regulatory networks governing development of the otic placode and the hindbrain as a mechanistic explanation for the widespread expression of these genes beyond the cochlea. We conclude that better knowledge of central auditory dysfunction caused by genetic alterations in deafness genes is required. In combination with improved genetic diagnostics becoming currently available through novel sequencing technologies, this information will likely contribute to better outcome prediction of hearing devices.

  15. Stimulation of the human auditory nerve with optical radiation

    Science.gov (United States)

    Fishman, Andrew; Winkler, Piotr; Mierzwinski, Jozef; Beuth, Wojciech; Izzo Matic, Agnella; Siedlecki, Zygmunt; Teudt, Ingo; Maier, Hannes; Richter, Claus-Peter

    2009-02-01

    A novel, spatially selective method to stimulate cranial nerves has been proposed: contact-free stimulation with optical radiation. The radiation source is an infrared pulsed laser. This case report is the first to show that optical stimulation of the auditory nerve is possible in the human. The ethical approach to conducting any measurements or tests in humans requires efficacy and safety studies in animals, which were conducted in gerbils. This report represents the first step in a translational research project to initiate a paradigm shift in neural interfaces. A patient was selected who required surgical removal of a large meningioma angiomatum (WHO grade I) by a planned transcochlear approach. Prior to cochlear ablation by drilling and subsequent tumor resection, the cochlear nerve was stimulated with a pulsed infrared laser at low radiation energies. Stimulation with optical radiation evoked compound action potentials from the human auditory nerve. Stimulation of the auditory nerve with infrared laser pulses is thus possible in the human inner ear. This finding is an important step in translating results from animal experiments to humans and furthers the development of a novel interface that uses optical radiation to stimulate neurons. Additional measurements are required to optimize the stimulation parameters.

  16. Temporal visual cues aid speech recognition

    DEFF Research Database (Denmark)

    Zhou, Xiang; Ross, Lars; Lehn-Schiøler, Tue;

    2006-01-01

    that it is the temporal synchronicity of the visual input that aids parsing of the auditory stream. More specifically, we expected that purely temporal information, which does not convey information such as place of articulation, may facilitate word recognition. METHODS: To test this prediction we used temporal features...... of audio to generate an artificial talking-face video and measured word recognition performance on simple monosyllabic words. RESULTS: When presenting words together with the artificial video we find that word recognition is improved over purely auditory presentation. The effect is significant (p...

  17. Ipsilateral inhibition and contralateral facilitation of simple reaction time to non-foveal visual targets from non-informative visual cues.

    Science.gov (United States)

    Tassinari, G; Biscaldi, M; Marzi, C A; Berlucchi, G

    1989-05-01

    Orienting to an extrafoveal light cue without foveating it induces a temporary inhibition of responses to subsequent targets presented in the same visual hemifield, as evidenced by the fact that reaction time (RT) to targets ipsilateral to the cue relative to fixation is longer than RT to targets contralateral to the cue. This study tested the hypothesis that ipsilateral RT inhibition is associated with contralateral RT facilitation by attempting to divide the difference between ipsilateral and contralateral RTs into costs and benefits. A neutral condition suited to this purpose should involve a cue that does not require a lateral orientation. Such a neutral condition was provided by measuring RT to lateralized light targets following a central overhead auditory cue (experiment 1) or a foveal visual cue (experiment 2). In both experiments RT in the neutral condition was intermediate between ipsilateral and contralateral RTs, and the difference reached significance in the second experiment. Benefits over the neutral condition measured in the contralateral condition were thus associated with costs in the ipsilateral condition. These results suggest that a reciprocal antagonism between opposite turning tendencies underlies the organization of covert orienting. They also agree with general multi-channel theories of selective attention, according to which the facilitation of given channels is an obligatory accompaniment of the inhibition of other competing channels and vice versa.

  18. Bayesian integration of position and orientation cues in perception of biological and non-biological forms.

    Science.gov (United States)

    Thurman, Steven M; Lu, Hongjing

    2014-01-01

    Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely-sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in specific terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic form analysis.
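The trade-off described above, where position cues increasingly dominate as orientation becomes less reliable, is the signature of reliability-weighted cue combination. In the Gaussian special case, Bayesian integration reduces to inverse-variance weighting; a minimal scalar sketch (the paper's actual probabilistic template-matching model is richer than this):

```python
def integrate_cues(mu_pos, var_pos, mu_ori, var_ori):
    """Combine position and orientation estimates of the same quantity by
    inverse-variance weighting: the Gaussian special case of Bayesian cue
    integration, where the less reliable cue receives the smaller weight."""
    w_pos = (1.0 / var_pos) / (1.0 / var_pos + 1.0 / var_ori)
    mu = w_pos * mu_pos + (1.0 - w_pos) * mu_ori
    # The fused estimate is more reliable than either cue alone.
    var = 1.0 / (1.0 / var_pos + 1.0 / var_ori)
    return mu, var
```

Increasing var_ori pushes the combined estimate toward the position cue, reproducing the behavioral pattern reported: position dominates perception as the relative uncertainty of orientation grows, and vice versa.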

  19. Biophysical Cueing and Vascular Endothelial Cell Behavior

    Directory of Open Access Journals (Sweden)

    Joshua A. Wood

    2010-03-01

    Full Text Available Human vascular endothelial cells (VECs) line the vessels of the body and are critical for the maintenance of vessel integrity and the trafficking of biochemical cues. They are fundamental structural elements and are central to the signaling environment. Alterations in the normal functioning of the VEC population are associated with a number of vascular disorders, among which are some of the leading causes of death both in the United States and abroad. VECs attach to their underlying stromal elements through a specialization of the extracellular matrix, the basement membrane. The basement membrane provides signaling cues to the VECs through its chemical constituents, by serving as a reservoir for cytoactive factors, and through its intrinsic biophysical properties. This specialized matrix is composed of a topographically rich 3D felt-like network of fibers and pores on the nano (1–100 nm) and submicron (100–1,000 nm) size scales. The basement membrane provides biophysical cues to the overlying VECs through its intrinsic topography as well as through its local compliance (relative stiffness). These biophysical cues modulate VEC adhesion, migration, proliferation, differentiation, and the cytoskeletal signaling network of the individual cells. This review focuses on the impact of biophysical cues on VEC behaviors and demonstrates the need for their consideration in future vascular studies and the design of improved prosthetics.

  20. Word segmentation with universal prosodic cues.

    Science.gov (United States)

    Endress, Ansgar D; Hauser, Marc D

    2010-09-01

    When listening to speech from one's native language, words seem to be well separated from one another, like beads on a string. When listening to a foreign language, in contrast, words seem almost impossible to extract, as if there was only one bead on the same string. This contrast reveals that there are language-specific cues to segmentation. The puzzle, however, is that infants must be endowed with a language-independent mechanism for segmentation, as they ultimately solve the segmentation problem for any native language. Here, we approach the acquisition problem by asking whether there are language-independent cues to segmentation that might be available to even adult learners who have already acquired a native language. We show that adult learners recognize words in connected speech when only prosodic cues to word-boundaries are given from languages unfamiliar to the participants. In both artificial and natural speech, adult English speakers, with no prior exposure to the test languages, readily recognized words in natural languages with critically different prosodic patterns, including French, Turkish and Hungarian. We suggest that, even though languages differ in their sound structures, they carry universal prosodic characteristics. Further, these language-invariant prosodic cues provide a universally accessible mechanism for finding words in connected speech. These cues may enable infants to start acquiring words in any language even before they are fine-tuned to the sound structure of their native language.

  1. Auditory and visual connectivity gradients in frontoparietal cortex.

    Science.gov (United States)

    Braga, Rodrigo M; Hellyer, Peter J; Wise, Richard J S; Leech, Robert

    2017-01-01

    A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal-ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior-anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and was speculated that such an arrangement allows for top-down modulation of modality-specific information to occur within higher-order cortex. This could provide a potentially faster and more efficient pathway by which top-down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions, rather than long-range connections to sensory cortices. Hum Brain Mapp 38:255-270, 2017. © 2016 Wiley Periodicals, Inc.

  2. The effect of visual cues on difficulty ratings for segregation of musical streams in listeners with impaired hearing.

    Directory of Open Access Journals (Sweden)

    Hamish Innes-Brown

    Full Text Available BACKGROUND: Enjoyment of music is an important part of life that may be degraded for people with hearing impairments, especially those using cochlear implants. The ability to follow separate lines of melody is an important factor in music appreciation. This ability relies on effective auditory streaming, which is much reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues could reduce the subjective difficulty of segregating a melody from interleaved background notes in normally hearing listeners, those using hearing aids, and those using cochlear implants. METHODOLOGY/PRINCIPAL FINDINGS: Normally hearing listeners (N = 20), hearing aid users (N = 10), and cochlear implant users (N = 11) were asked to rate the difficulty of segregating a repeating four-note melody from random interleaved distracter notes. The pitch of the background notes was gradually increased or decreased throughout blocks, providing a range of difficulty from easy (with a large pitch separation between melody and distracter) to impossible (with the melody and distracter completely overlapping). Visual cues were provided on half the blocks, and difficulty ratings for blocks with and without visual cues were compared between groups. Visual cues reduced the subjective difficulty of extracting the melody from the distracter notes for normally hearing listeners and cochlear implant users, but not hearing aid users. CONCLUSION/SIGNIFICANCE: Simple visual cues may improve the ability of cochlear implant users to segregate lines of music, thus potentially increasing their enjoyment of music. More research is needed to determine what type of acoustic cues to encode visually in order to optimise the benefits they may provide.

  3. The Auditory-Visual Speech Benefit on Working Memory in Older Adults with Hearing Impairment

    Directory of Open Access Journals (Sweden)

    Jana B. Frtusova

    2016-04-01

    Full Text Available This study examined the effect of auditory-visual (AV) speech stimuli on working memory in hearing-impaired participants (HIP) in comparison to age- and education-matched normal elderly controls (NEC). Participants completed a working memory n-back task (0- to 2-back) in which sequences of digits were presented in visual-only (i.e., speech-reading), auditory-only (A-only), and AV conditions. Auditory event-related potentials (ERPs) were collected to assess the relationship between perceptual and working memory processing. The behavioural results showed that both groups were faster in the AV condition in comparison to the unisensory conditions. The ERP data showed perceptual facilitation in the AV condition, in the form of reduced amplitudes and latencies of the auditory N1 and/or P1 components, in the HIP group. Furthermore, a working memory ERP component, the P3, peaked earlier for both groups in the AV condition compared to the A-only condition. In general, the HIP group showed a more robust AV benefit; however, the NECs showed a dose-response relationship between perceptual facilitation and working memory improvement, especially for facilitation of processing speed. Two measures, reaction time and P3 amplitude, suggested that the presence of visual speech cues may have helped the HIP group to counteract the demanding auditory processing, to the level that no group differences were evident during the AV modality despite lower performance during the A-only condition. Overall, this study provides support for the theory of an integrated perceptual-cognitive system. The practical significance of these findings is also discussed.

  4. A comparative study of simple auditory reaction time in blind (congenitally) and sighted subjects

    Directory of Open Access Journals (Sweden)

    Pritesh Hariprasad Gandhi

    2013-01-01

    Full Text Available Background: Reaction time is the time interval between the application of a stimulus and the appearance of an appropriate voluntary response by a subject. It involves stimulus processing, decision making, and response programming. Reaction time has been widely studied because of its implications in sports physiology and because its practical consequences may be of great importance; for example, a slower than normal reaction time while driving can have grave results. Objective: To study simple auditory reaction time in congenitally blind subjects and in age- and sex-matched sighted subjects, and to compare the simple auditory reaction time between the two groups. Materials and Methods: The study was carried out in two groups: the first comprised 50 congenitally blind subjects and the second 50 healthy controls. It was conducted on a Multiple Choice Reaction Time Apparatus, Inco Ambala Ltd. (accuracy ± 0.001 s), with subjects in a sitting position, at Government Medical College and Hospital, Bhavnagar and at a Blind School, PNR campus, Bhavnagar, Gujarat, India. Observations/Results: Simple auditory reaction time responses to four different types of sound (horn, bell, ring, and whistle) were recorded in both groups. According to our study, there is no significant difference in reaction time between congenitally blind and normal healthy persons. Conclusion: Blind individuals commonly utilize tactual and auditory cues for information and orientation, and their reliance on touch and audition, together with more practice in using these modalities to guide behavior, is often reflected in better performance by blind relative to sighted participants in tactile or auditory discrimination tasks; however, there is no difference in reaction time between congenitally blind and sighted people.

  5. Segregation and integration of auditory streams when listening to multi-part music.

    Directory of Open Access Journals (Sweden)

    Marie Ragert

    Full Text Available In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music, in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings, and the interaction of the temporal and structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of

  6. Topographical cues regulate the crosstalk between MSCs and macrophages

    Science.gov (United States)

    Vallés, Gema; Bensiamar, Fátima; Crespo, Lara; Arruebo, Manuel; Vilaboa, Nuria; Saldaña, Laura

    2015-01-01

    Implantation of scaffolds may elicit a host foreign body response triggered by monocyte/macrophage lineage cells. Growing evidence suggests that topographical cues of scaffolds play an important role in MSC functionality. In this work, we examined whether surface topographical features can regulate paracrine interactions that MSCs establish with macrophages. Three-dimensional (3D) topography sensing drives MSCs into a spatial arrangement that stimulates the production of the anti-inflammatory proteins PGE2 and TSG-6. Compared to two-dimensional (2D) settings, 3D arrangement of MSCs co-cultured with macrophages leads to an important decrease in the secretion of soluble factors related to inflammation and chemotaxis, including IL-6 and MCP-1. Attenuation of MCP-1 secretion in 3D co-cultures correlates with a decrease in the accumulation of its mRNA levels in MSCs and macrophages. Using neutralizing antibodies, we identified that the interplay between PGE2, IL-6, TSG-6 and MCP-1 in the co-cultures is strongly influenced by the micro-architecture that supports MSCs. The local inflammatory milieu provided by 3D-arranged MSCs in co-cultures induces a decrease in monocyte migration as compared to monolayer cells. This effect is partially mediated by reduced levels of IL-6 and MCP-1, proteins that up-regulate each other's secretion. Our findings highlight the importance of topographical cues in the soluble factor-guided communication between MSCs and macrophages. PMID:25453943

  7. Coordinated sensor cueing for chemical plume detection

    Science.gov (United States)

    Abraham, Nathan J.; Jensenius, Andrea M.; Watkins, Adam S.; Hawthorne, R. Chad; Stepnitz, Brian J.

    2011-05-01

    This paper describes an organic data fusion and sensor cueing approach for Chemical, Biological, Radiological, and Nuclear (CBRN) sensors. The Joint Warning and Reporting Network (JWARN) uses a hardware component referred to as the JWARN Component Interface Device (JCID). The Edgewood Chemical and Biological Center (ECBC) has developed a small footprint and open architecture solution for the JCID capability called JCID-on-a-Chip (JoaC). The JoaC program aims to reduce the cost and complexity of the JCID by shrinking the necessary functionality down to a small single board computer. This effort focused on development of a fusion and cueing algorithm organic to the JoaC hardware. By embedding this capability in the JoaC, sensors have the ability to receive and process cues from other sensors without the use of a complex and costly centralized infrastructure. Additionally, the JoaC software is hardware agnostic, as evidenced by its drop-in inclusion in two different system-on-a-chip platforms including Windows CE and LINUX environments. In this effort, through a partnership between JPM-CA, JHU/APL, and ECBC, the authors implemented and demonstrated a new algorithm for cooperative detection and localization of a chemical agent plume. This experiment used a pair of mobile Joint Services Lightweight Standoff Chemical Agent Detector (JSLSCAD) units which were controlled by fusion and cueing algorithms hosted on a JoaC. The algorithms embedded in the JoaC enabled the two sensor systems to perform cross cueing and cooperatively form a higher fidelity estimate of chemical releases by combining sensor readings. Additionally, each JSLSCAD had the ability to focus its search on smaller regions than those required by a single sensor system by using the cross cue information from the other sensor.

  8. Autosomal recessive hereditary auditory neuropathy

    Institute of Scientific and Technical Information of China (English)

    王秋菊; 顾瑞; 曹菊阳

    2003-01-01

    Objectives: Auditory neuropathy (AN) is a sensorineural hearing disorder characterized by absent or abnormal auditory brainstem responses (ABRs) and normal cochlear outer hair cell function as measured by otoacoustic emissions (OAEs). Many risk factors are thought to be involved in its etiology and pathophysiology. Three Chinese pedigrees with familial AN are presented herein to demonstrate involvement of genetic factors in AN etiology. Methods: Probands of the above-mentioned pedigrees, who had been diagnosed with AN, were evaluated and followed up in the Department of Otolaryngology Head and Neck Surgery, China PLA General Hospital. Their family members were studied and the pedigree diagrams were established. History of illness, physical examination, pure tone audiometry, acoustic reflex, ABRs, and transient evoked and distortion-product otoacoustic emissions (TEOAEs and DPOAEs) were obtained from members of these families. DPOAE changes under the influence of contralateral sound stimuli were observed by presenting continuous white noise to the non-recording ear to examine the function of the auditory efferent system. Some subjects received a vestibular caloric test, computed tomography (CT) scan of the temporal bone, and electrocardiography (ECG) to exclude other possible neuropathy disorders. Results: In most affected subjects, hearing loss of various degrees and speech discrimination difficulties started at 10 to 16 years of age. Their audiological evaluation showed absence of acoustic reflexes and ABRs. As expected in AN, these subjects exhibited near-normal cochlear outer hair cell function as shown in TEOAE and DPOAE recordings. Pure-tone audiometry revealed hearing loss ranging from mild to severe in these patients. Autosomal recessive inheritance patterns were observed in the three families. In Pedigrees Ⅰ and Ⅱ, two affected brothers were found respectively, while in Pedigree Ⅲ, two sisters were affected. All the patients were otherwise normal without

  9. Auditory hallucinations in nonverbal quadriplegics.

    Science.gov (United States)

    Hamilton, J

    1985-11-01

    When a system for communicating with nonverbal, quadriplegic, institutionalized residents was developed, it was discovered that many were experiencing auditory hallucinations. Nine cases are presented in this study. The "voices" described have many similar characteristics, the primary one being that they give authoritarian commands that tell the residents how to behave and to which the residents feel compelled to respond. Both the relationship of this phenomenon to the theoretical work of Julian Jaynes and its effect on the lives of the residents are discussed.

  10. Narrow, duplicated internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, T. [Servico de Neurorradiologia, Hospital Garcia de Orta, Avenida Torrado da Silva, 2801-951, Almada (Portugal); Shayestehfar, B. [Department of Radiology, UCLA Oliveview School of Medicine, Los Angeles, California (United States); Lufkin, R. [Department of Radiology, UCLA School of Medicine, Los Angeles, California (United States)

    2003-05-01

    A narrow internal auditory canal (IAC) constitutes a relative contraindication to cochlear implantation because it is associated with aplasia or hypoplasia of the vestibulocochlear nerve or its cochlear branch. We report an unusual case of a narrow, duplicated IAC, divided by a bony septum into a superior relatively large portion and an inferior stenotic portion, in which we could identify only the facial nerve. This case adds support to the association between a narrow IAC and aplasia or hypoplasia of the vestibulocochlear nerve. The normal facial nerve argues against the hypothesis that the narrow IAC is the result of a primary bony defect which inhibits the growth of the vestibulocochlear nerve. (orig.)

  11. Mapping tonotopy in human auditory cortex

    NARCIS (Netherlands)

    van Dijk, Pim; Langers, Dave R M; Moore, BCJ; Patterson, RD; Winter, IM; Carlyon, RP; Gockel, HE

    2013-01-01

    Tonotopy is arguably the most prominent organizational principle in the auditory pathway. Nevertheless, the layout of tonotopic maps in humans is still debated. We present neuroimaging data that robustly identify multiple tonotopic maps in the bilateral auditory cortex. In contrast with some earlier

  12. Bilateral duplication of the internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Weon, Young Cheol; Kim, Jae Hyoung; Choi, Sung Kyu [Seoul National University College of Medicine, Department of Radiology, Seoul National University Bundang Hospital, Seongnam-si (Korea); Koo, Ja-Won [Seoul National University College of Medicine, Department of Otolaryngology, Seoul National University Bundang Hospital, Seongnam-si (Korea)

    2007-10-15

    Duplication of the internal auditory canal is an extremely rare temporal bone anomaly that is believed to result from aplasia or hypoplasia of the vestibulocochlear nerve. We report bilateral duplication of the internal auditory canal in a 28-month-old boy with developmental delay and sensorineural hearing loss. (orig.)

  13. Primary Auditory Cortex Regulates Threat Memory Specificity

    Science.gov (United States)

    Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…

  14. Further Evidence of Auditory Extinction in Aphasia

    Science.gov (United States)

    Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim

    2013-01-01

    Purpose: Preliminary research ( Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (M[subscript age] = 53.19 years)…

  15. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  16. Memory for location and visual cues in white-eared hummingbirds Hylocharis leucotis

    Directory of Open Access Journals (Sweden)

    Guillermo PÉREZ, Carlos LARA, José VICCON-PALE, Martha SIGNORET-POILLON

    2011-08-01

    Full Text Available In nature hummingbirds face floral resources whose availability, quality and quantity can vary spatially and temporally. Thus, they must constantly make foraging decisions about which patches, plants and flowers to visit, partly as a function of the nectar reward. The uncertainty of these decisions would possibly be reduced if an individual could remember locations or use visual cues to avoid revisiting recently depleted flowers. In the present study, we carried out field experiments with white-eared hummingbirds Hylocharis leucotis to evaluate their use of locations or visual cues when foraging on natural flowers of Penstemon roseus. We evaluated the use of spatial memory by observing birds while they were foraging between two plants and within a single plant. Our results showed that hummingbirds prefer to use location when foraging between two plants, but they also use visual cues to efficiently locate unvisited rewarded flowers when they feed on a single plant. However, in the absence of visual cues, in both experiments birds mainly used the location of previously visited flowers to make subsequent visits. Our data suggest that hummingbirds are capable of learning and employ this flexibility depending on the environmental conditions faced and the information acquired in previous visits [Current Zoology 57 (4): 468–476, 2011].

  17. Active listening for spatial orientation in a complex auditory scene.

    Directory of Open Access Journals (Sweden)

    Cynthia F Moss

    2006-04-01

    Full Text Available To successfully negotiate a complex environment, an animal must control the timing of motor behaviors in coordination with dynamic sensory information. Here, we report on adaptive temporal control of vocal-motor behavior in an echolocating bat, Eptesicus fuscus, as it captured tethered insects close to background vegetation. Recordings of the bat's sonar vocalizations were synchronized with high-speed video images that were used to reconstruct the bat's three-dimensional flight path and the positions of target and vegetation. When the bat encountered the difficult task of taking insects as close as 10-20 cm from the vegetation, its behavior changed significantly from that under open room conditions. Its success rate decreased by about 50%, its time to initiate interception increased by a factor of ten, and its high repetition rate "terminal buzz" decreased in duration by a factor of three. Under all conditions, the bat produced prominent sonar "strobe groups," clusters of echolocation pulses with stable intervals. In the final stages of insect capture, the bat produced strobe groups at a higher incidence when the insect was positioned near clutter. Strobe groups occurred at all phases of the wingbeat (and inferred respiration) cycle, challenging the hypothesis of strict synchronization between respiration and sound production in echolocating bats. The results of this study provide a clear demonstration of temporal vocal-motor control that directly impacts the signals used for perception.

  18. Cognitive Cues are More Compelling than Facial Cues in Determining Adults' Reactions towards Young Children

    Directory of Open Access Journals (Sweden)

    Carlos Hernández Blasi

    2015-04-01

    Full Text Available Previous research has demonstrated the significant influence that both children's facial features (Lorenz, 1943) and children's cognitive expressions (Bjorklund, Hernández Blasi, and Periss, 2010) have on adults' perception of young children. However, until now, these two types of cues have been studied independently. The present study contrasted these two types of cues simultaneously in a group of college students. To this purpose, we designed five experimental conditions (Consistent, Inconsistent, Mature-Face, Immature-Face, and Faces-Only) in which we varied the presentation of a series of mature and immature vignettes (including two previously studied types of thinking: natural thinking and supernatural thinking) associated with a series of more mature and less mature children's faces. Performance in these conditions was contrasted with data from a Vignettes-Only condition taken from Bjorklund et al. (2010). Results indicated that cognitive cues were more powerful than facial cues in determining adults' perceptions of young children. From an evolutionary developmental perspective, we suggest that facial cues are more relevant to adults during infancy than during the preschool period, when, with the development of spoken language, the verbalized expressions of children's thoughts become the principal cues influencing adults' perceptions, with facial cues playing a more secondary role.

  19. Speech perception as complex auditory categorization

    Science.gov (United States)

    Holt, Lori L.

    2002-05-01

    Despite a long and rich history of categorization research in cognitive psychology, very little work has addressed the issue of complex auditory category formation. This is especially unfortunate because the general underlying cognitive and perceptual mechanisms that guide auditory category formation are of great importance to understanding speech perception. I will discuss a new methodological approach to examining complex auditory category formation that specifically addresses issues relevant to speech perception. This approach utilizes novel nonspeech sound stimuli to gain full experimental control over listeners' history of experience. As such, the course of learning is readily measurable. Results from this methodology indicate that the structure and formation of auditory categories are a function of the statistical input distributions of sound that listeners hear, aspects of the operating characteristics of the auditory system, and characteristics of the perceptual categorization system. These results have important implications for phonetic acquisition and speech perception.
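The abstract's claim that category structure tracks the statistical input distributions of sound can be made concrete with a toy distributional learner. The sketch below is hypothetical (the function names and the single-dimension Gaussian assumption are mine, not the author's): it estimates one Gaussian per category from exposure tokens along an acoustic dimension (e.g., a frequency in Hz) and labels a new sound by maximum log-likelihood.

```python
import numpy as np

def fit_categories(samples_by_category):
    """Estimate a Gaussian (mean, variance) per category from exposure tokens."""
    return {c: (np.mean(x), np.var(x) + 1e-12)
            for c, x in samples_by_category.items()}

def classify(stimulus, categories):
    """Label a new sound by maximum Gaussian log-likelihood over categories."""
    def loglik(mu, var):
        return -0.5 * (np.log(2 * np.pi * var) + (stimulus - mu) ** 2 / var)
    return max(categories, key=lambda c: loglik(*categories[c]))
```

With overlapping distributions, the same rule yields graded, boundary-sensitive categorization, which is the kind of behavior distributional accounts of phonetic acquisition emphasize.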

  20. Effects of auditory rhythm and music on gait disturbances in Parkinson’s disease

    Directory of Open Access Journals (Sweden)

    Aidin eAshoori

    2015-11-01

    Full Text Available Gait abnormalities such as shuffling steps, start hesitation, and freezing are common and often incapacitating symptoms of Parkinson’s disease (PD) and other parkinsonian disorders. Pharmacological and surgical approaches have only limited efficacy in treating these gait disorders. Rhythmic auditory stimulation (RAS), such as playing marching music or dance therapy, has been shown to be a safe, inexpensive, and effective method of improving gait in PD patients. However, RAS that adapts to patients’ movements may be more effective than the rigid, fixed-tempo RAS used in most studies. In addition to auditory cueing, immersive virtual reality technologies that utilize interactive computer-generated systems through wearable devices are increasingly used for improving brain-body interaction and sensory-motor integration. Using multisensory cues, these therapies may be particularly suitable for the treatment of parkinsonian freezing and other gait disorders. In this review, we examine the affected neurological circuits underlying gait and temporal processing in PD patients and summarize the current studies demonstrating the effects of RAS on improving these gait deficits.
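The contrast between fixed-tempo and adaptive RAS can be illustrated with a minimal update rule. The sketch below is a hypothetical illustration, not a published algorithm: the `gain` parameter and the moving-average target are assumptions. It nudges the cue's inter-beat interval toward the patient's recently measured step intervals, so the metronome follows the gait instead of imposing a rigid tempo.

```python
def adapt_tempo(current_ibi, step_intervals, gain=0.3):
    """One update of an adaptive metronome: move the inter-beat interval (IBI,
    in seconds) toward the mean of the patient's recent step intervals.
    gain = 0 reproduces fixed-tempo RAS; gain = 1 matches the gait immediately."""
    target = sum(step_intervals) / len(step_intervals)
    return current_ibi + gain * (target - current_ibi)
```

Applied once per beat, the cue tempo converges geometrically onto the patient's cadence; clamping the IBI to a safe range would be an obvious refinement in practice.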

  1. Auditory enhancement of visual perception at threshold depends on visual abilities.

    Science.gov (United States)

    Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène

    2011-06-17

    Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events is a long-standing debate. Here we revisit this question, by testing the influence of auditory stimuli on visual detection threshold, in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient to improve perceptual performance when an isolated modality is deficient.

  2. Neural representation in the auditory midbrain of the envelope of vocalizations based on a peripheral ear model

    Directory of Open Access Journals (Sweden)

    Thilo eRode

    2013-10-01

    Full Text Available The auditory midbrain implant (AMI) consists of a single-shank array (20 sites) for stimulation along the tonotopic axis of the central nucleus of the inferior colliculus (ICC) and has been safely implanted in deaf patients who cannot benefit from a cochlear implant (CI). The AMI improves lip-reading abilities and environmental awareness in the implanted patients. However, the AMI cannot achieve the high levels of speech perception possible with the CI. It appears the AMI can transmit sufficient spectral cues but only limited temporal cues required for speech understanding. Currently, the AMI uses a CI-based strategy, which was originally designed to stimulate each frequency region along the cochlea with amplitude-modulated pulse trains matching the envelope of the bandpass-filtered sound components. However, it is unclear if this type of stimulation with only a single site within each frequency lamina of the ICC can elicit sufficient temporal cues for speech perception. Speech understanding in quiet, at least, is still possible with envelope cues as low as 50 Hz. Therefore, we investigated how ICC neurons follow the bandpass-filtered envelope structure of natural stimuli in ketamine-anesthetized guinea pigs. We identified a subset of ICC neurons that could closely follow the envelope structure (up to ~100 Hz) of a diverse set of species-specific calls, which was revealed by using a peripheral ear model to estimate the true bandpass-filtered envelopes observed by the brain. Although previous studies have suggested a complex neural transformation from the auditory nerve to the ICC, our data suggest that the brain maintains a robust temporal code in a subset of ICC neurons matching the envelope structure of natural stimuli. Clinically, these findings suggest that a CI-based strategy may still be effective for the AMI if the appropriate neurons are entrained to the envelope of the acoustic stimulus and can transmit sufficient temporal cues to higher
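The CI-style processing the abstract describes, bandpass filtering followed by envelope extraction, can be sketched per channel as follows. This is an illustrative simplification, not the AMI's actual signal chain: the FFT-based brick-wall filters and the 100 Hz envelope cutoff are my assumptions, chosen to match the modulation rates discussed above.

```python
import numpy as np

def bandpass_envelope(signal, fs, f_lo, f_hi, env_cutoff=100.0):
    """One CI-style channel: bandpass the signal, take the magnitude of the
    analytic signal (Hilbert transform via FFT), then low-pass the envelope
    so only slow modulations (below env_cutoff, in Hz) remain."""
    n = len(signal)
    # brick-wall bandpass in the frequency domain
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(signal)
    x = np.fft.irfft(spec * ((freqs >= f_lo) & (freqs <= f_hi)), n)
    # analytic signal: zero negative frequencies, double positive ones
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    env = np.abs(np.fft.ifft(X * h))
    # smooth the envelope: discard components above env_cutoff
    E = np.fft.rfft(env)
    E[freqs > env_cutoff] = 0.0
    return np.fft.irfft(E, n)
```

In a full CI-style strategy this envelope would then amplitude-modulate the pulse train delivered to the electrode site for that frequency band.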

  3. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Energy Technology Data Exchange (ETDEWEB)

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
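The core idea, matching gradient phase angles over all model positions at once with the FFT, can be sketched as follows. This is a minimal illustration under stated assumptions (the function names are mine, and it handles a single fixed model orientation; the method described sweeps orientations as well): directional-derivative vectors are represented as unit complex numbers, so the sum of cosines of phase differences over the template support becomes the real part of an FFT-based cross-correlation.

```python
import numpy as np

def unit_gradient_field(img, eps=1e-9):
    """Directional-derivative vectors at each pixel, as unit complex numbers
    (zero where the gradient magnitude is negligible)."""
    gy, gx = np.gradient(img.astype(float))
    g = gx + 1j * gy
    mag = np.abs(g)
    out = np.zeros_like(g)
    out[mag > eps] = g[mag > eps] / mag[mag > eps]
    return out

def phase_match_surface(image, template):
    """Similarity value vs. model position: at each shift, the score is the sum
    of cos(image phase - template phase) over the template's edge pixels,
    evaluated for all shifts at once via circular FFT correlation."""
    F = unit_gradient_field(image)
    T = np.zeros_like(F)
    T[:template.shape[0], :template.shape[1]] = unit_gradient_field(template)
    # correlation(F, T) = IFFT( FFT(F) * conj(FFT(T)) ); real part sums cosines
    return np.real(np.fft.ifft2(np.fft.fft2(F) * np.conj(np.fft.fft2(T))))
```

Peaks in the returned match surface then correspond to candidate object positions, mirroring the thumbnail-cueing step described above.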

  4. THE EFFECTS OF SALICYLATE ON AUDITORY EVOKED POTENTIAL AMPLITUDE FROM THE AUDITORY CORTEX AND AUDITORY BRAINSTEM

    Institute of Scientific and Technical Information of China (English)

    Brian Sawka; SUN Wei

    2014-01-01

    Tinnitus has often been studied using salicylate in animal models, as it is capable of inducing temporary hearing loss and tinnitus. Studies have recently observed enhancement of auditory evoked responses of the auditory cortex (AC) post salicylate treatment, which is also shown to be related to tinnitus-like behavior in rats. The aim of this study was to observe whether enhancements of the AC post salicylate treatment are also present at structures in the brainstem. Four male Sprague Dawley rats with AC-implanted electrodes were tested for both AC and auditory brainstem response (ABR) recordings pre and post 250 mg/kg intraperitoneal injections of salicylate. The responses were recorded as the peak-to-trough amplitudes of P1-N1 (AC), ABR wave V, and ABR wave II. AC responses resulted in statistically significant enhancement of amplitude at 2 hours post salicylate with 90 dB stimuli tone bursts of 4, 8, 12, and 20 kHz. Wave V of ABR responses at 90 dB resulted in a statistically significant reduction of amplitude 2 hours post salicylate and a mean decrease of amplitude of 31% for 16 kHz. Wave II amplitudes at 2 hours post treatment were significantly reduced for 4, 12, and 20 kHz stimuli at 90 dB SPL. Our results suggest that the enhancement changes of the AC related to salicylate-induced tinnitus are generated superior to the level of the inferior colliculus and may originate in the AC.

  5. Neuromagnetic fields reveal cortical plasticity when learning an auditory discrimination task.

    Science.gov (United States)

    Cansino, S; Williamson, S J

    1997-08-01

    Auditory evoked neuromagnetic fields of the primary and association auditory cortices were recorded while subjects learned to discriminate small differences in frequency and intensity between two consecutive tones. When discrimination was no better than chance, evoked field patterns across the scalp manifested no significant differences between correct and incorrect responses. However, when performance was correct on at least 75% of the trials, the spatial pattern of magnetic field differed significantly between correct and incorrect responses during the first 70 ms following the onset of the second tone. In this respect, the magnetic field pattern predicted when the subject would make an incorrect judgment more than 100 ms prior to indicating the judgment by a button press. One subject improved discrimination for much smaller differences between stimuli after 200 h of training. Evidence of cortical plasticity with improved discrimination is provided by an accompanying decrease of the relative magnetic field amplitude of the 100 ms response components in the primary and association auditory cortices.

  6. Cue-specific reactivity in experienced gamblers.

    Science.gov (United States)

    Wulfert, Edelgard; Maxson, Julie; Jardin, Bianca

    2009-12-01

    To examine whether gambling cue reactivity is cue-specific, 47 scratch-off lottery players and 47 horse race gamblers were presented with video clips of their preferred and nonpreferred modes of gambling, and two control stimuli including an exciting car race and a mental stressor task while heart rates, excitement, and urge to gamble were being measured. Heart rates for both groups of gamblers were highest to the mental stressor and did not differ in response to the other three cues. Excitement for both groups was highest in response to the action cues (horse race and car chase). Urge to gamble was significantly higher for each group to their preferred mode of gambling. A post hoc exploratory analysis comparing social gamblers (n = 54) and probable pathological gamblers (n = 40) revealed a similar pattern of responses. However, pathological gamblers reported overall significantly higher urges to gamble than social gamblers. As urges have been shown to play a pivotal role in addictive behaviors and relapse, the current findings may have implications for the development of gambling problems and relapse after successful treatment.

  7. Effects of similarity on environmental context cueing.

    Science.gov (United States)

    Smith, Steven M; Handy, Justin D; Angello, Genna; Manzano, Isabel

    2014-01-01

    Three experiments examined the prediction that context cues which are similar to study contexts can facilitate episodic recall, even if those cues are never seen before the recall test. Environmental context cueing effects have typically produced such small effect sizes that influences of moderating factors, such as the similarity between encoding and retrieval contexts, would be difficult to observe experimentally. Videos of environmental contexts, however, can be used to produce powerful context-dependent memory effects, particularly when only one memory target is associated with each video context, intentional item-context encoding is encouraged, and free recall tests are used. Experiment 1 showed that a not previously viewed video of the study context provided an effective recall cue, although it was not as effective as the originally viewed video context. Experiments 2 and 3 showed that videos of environments that were conceptually similar to encoding contexts (e.g., both were videos of ball field games) also cued recall, but not as well if the encoding contexts were given specific labels (e.g., "home run") incompatible with test contexts (e.g., a soccer scene). A fourth experiment that used incidental item-context encoding showed that video context reinstatement has a robust effect on paired associate memory, indicating that the video context reinstatement effect does not depend on interactive item-context encoding or free recall testing.

  8. Visual Cues and Listening Effort: Individual Variability

    Science.gov (United States)

    Picou, Erin M.; Ricketts, Todd A; Hornsby, Benjamin W. Y.

    2011-01-01

    Purpose: To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Method: Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and…

  9. Relationship between Sympathetic Skin Responses and Auditory Hypersensitivity to Different Auditory Stimuli.

    Science.gov (United States)

    Kato, Fumi; Iwanaga, Ryoichiro; Chono, Mami; Fujihara, Saori; Tokunaga, Akiko; Murata, Jun; Tanaka, Koji; Nakane, Hideyuki; Tanaka, Goro

    2014-07-01

    [Purpose] Auditory hypersensitivity has been widely reported in patients with autism spectrum disorders. However, the neurological background of auditory hypersensitivity is currently not clear. The present study examined the relationship between sympathetic nervous system responses and auditory hypersensitivity induced by different types of auditory stimuli. [Methods] We exposed 20 healthy young adults to six different types of auditory stimuli. The amounts of palmar sweating resulting from the auditory stimuli were compared between groups with (hypersensitive) and without (non-hypersensitive) auditory hypersensitivity. [Results] Although no group × type of stimulus × first stimulus interaction was observed for the extent of reaction, a significant type of stimulus × first stimulus interaction was noted for the extent of reaction. For an 80 dB-6,000 Hz stimulus, the trends for palmar sweating differed between the groups. For the first stimulus, the variance became larger in the hypersensitive group than in the non-hypersensitive group. [Conclusion] Subjects who regularly felt excessive reactions to auditory stimuli tended to have excessive sympathetic responses to repeated loud noises compared with subjects who did not feel excessive reactions. People with auditory hypersensitivity may be classified into several subtypes depending on their reaction patterns to auditory stimuli.

  10. The (un)clear effects of invalid retro-cues.

    Directory of Open Access Journals (Sweden)

    Marcel eGressmann

    2016-03-01

    Full Text Available Studies with the retro-cue paradigm have shown that validly cueing objects in visual working memory long after encoding can still benefit performance on subsequent change detection tasks. With regard to the effects of invalid cues, the literature is less clear. Some studies reported costs, others did not. We here revisit two recent studies that made interesting suggestions concerning invalid retro-cues: one study suggested that costs only occur for larger set sizes, and another study suggested that inclusion of invalid retro-cues diminishes the retro-cue benefit. New data from one experiment and a reanalysis of published data are provided to address these conclusions. The new data clearly show costs (and benefits) that were independent of set size, and the reanalysis suggests no influence of the inclusion of invalid retro-cues on the retro-cue benefit. Thus, previous interpretations should be taken with some caution at present.

  11. Temporal pattern of acoustic imaging noise asymmetrically modulates activation in the auditory cortex.

    Science.gov (United States)

    Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M

    2016-01-01

    This study investigated the hemisphere-specific effects of the temporal pattern of imaging-related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise, corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer-duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging-related acoustic noise. Right primary auditory cortex responses were significantly larger under all conditions. This asymmetry of response to imaging-related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range rather than differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation.

  12. Acoustic communication in Panthera tigris: A study of tiger vocalization and auditory receptivity

    Science.gov (United States)

    Walsh, Edward J.; Wang, Lily M.; Armstrong, Douglas L.; Curro, Thomas; Simmons, Lee G.; McGee, Joann

    2003-04-01

    Acoustic communication represents a primary mode of interaction within the sub-species of Panthera tigris and it is commonly known that their vocal repertoire consists of a relatively wide range of utterances that include roars, growls, grunts, hisses and chuffling, vocalizations that are in some cases produced with extraordinary power. P. tigris vocalizations are known to contain significant amounts of acoustic energy over a wide spectral range, with peak output occurring in a low frequency bandwidth in the case of roars. von Muggenthaler (2000) has also shown that roars and other vocal productions uttered by P. tigris contain energy in the infrasonic range. While it is reasonable to assume that low and infrasonic acoustic cues are used as communication signals among conspecifics in the wild, it is clearly necessary to demonstrate that members of the P. tigris sub-species are responsive to low and infrasonic acoustic signals. The auditory brainstem response has proven to be an effective tool in the characterization of auditory performance among tigers and the results of an ongoing study of both the acoustical properties of P. tigris vocalizations and their auditory receptivity support the supposition that tigers are not only responsive to low frequency stimulation, but exquisitely so.

  13. Large cross-sectional study of presbycusis reveals rapid progressive decline in auditory temporal acuity.

    Science.gov (United States)

    Ozmeral, Erol J; Eddins, Ann C; Frisina, D Robert; Eddins, David A

    2016-07-01

    The auditory system relies on extraordinarily precise timing cues for the accurate perception of speech and music and for object identification. Epidemiological research has documented the age-related progressive decline in hearing sensitivity that is known to be a major health concern for the elderly. Although smaller investigations indicate that auditory temporal processing also declines with age, such measures have not been included in larger studies. Temporal gap detection thresholds (TGDTs; an index of auditory temporal resolution) measured in 1071 listeners (aged 18-98 years) were shown to decline at a minimum rate of 1.05 ms (15%) per decade. Age was a significant predictor of TGDT when controlling for audibility (partial correlation) and when restricting analyses to persons with normal hearing sensitivity (n = 434). The TGDTs were significantly better for males (3.5 ms; 51%) than for females when averaged across the life span. These results highlight the need for indices of temporal processing in diagnostics, as treatment targets, and as factors in models of aging.
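
    The partial-correlation analysis mentioned above (age predicting gap thresholds while statistically controlling for audibility) can be sketched with the residual method. The data below are synthetic and every coefficient is illustrative, not taken from the study:

```python
import math
import random

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def residuals(v, z):
    """Residuals of v after removing its linear regression on z."""
    n = len(v)
    mz, mv = sum(z) / n, sum(v) / n
    beta = sum((a - mz) * (b - mv) for a, b in zip(z, v)) / sum((a - mz) ** 2 for a in z)
    return [b - (mv + beta * (a - mz)) for a, b in zip(z, v)]

def partial_corr(x, y, z):
    """Correlation of x and y with the linear effect of z removed from both."""
    return pearson(residuals(x, z), residuals(y, z))

# Synthetic cohort: gap thresholds worsen with age; audibility also worsens with age.
random.seed(1)
age = [random.uniform(18, 98) for _ in range(500)]
audibility = [0.3 * a + random.gauss(0, 5) for a in age]   # hypothetical hearing-level proxy
tgdt = [2.0 + 0.105 * a + 0.05 * h + random.gauss(0, 1)    # 0.105 ms/year ~ 1.05 ms/decade, illustrative
        for a, h in zip(age, audibility)]

r_partial = partial_corr(age, tgdt, audibility)
print(round(r_partial, 2))  # age remains a strong predictor with audibility partialled out
```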

  14. Auditory-somatosensory temporal sensitivity improves when the somatosensory event is caused by voluntary body movement

    Directory of Open Access Journals (Sweden)

    Norimichi Kitagawa

    2016-12-01

    Full Text Available When we actively interact with the environment, it is crucial that we perceive a precise temporal relationship between our own actions and sensory effects to guide our body movements. Thus, we hypothesized that voluntary movements improve perceptual sensitivity to the temporal disparity between auditory and movement-related somatosensory events compared to when they are delivered passively to sensory receptors. In the voluntary condition, participants voluntarily tapped a button, and a noise burst was presented at various onset asynchronies relative to the button press. The participants made either 'sound-first' or 'touch-first' responses. We found that temporal order judgment (TOJ) performance in the voluntary condition (as indexed by the just noticeable difference) was significantly better (M = 42.5 ms ± 3.8 s.e.m.) than when their finger was passively stimulated (passive condition: M = 66.8 ms ± 6.3 s.e.m.). We further examined whether the performance improvement with voluntary action can be attributed to the prediction of the timing of the stimulation from sensory cues (sensory-based prediction), to kinesthetic cues contained in voluntary action, and/or to the prediction of stimulation timing from the efference copy of the motor command (motor-based prediction). When the participant's finger was moved passively to press the button (involuntary condition) and when three noise bursts were presented before the target burst at regular intervals (predictable condition), TOJ performance was not improved relative to the passive condition. These results suggest that the improvement in sensitivity to the temporal disparity between somatosensory and auditory events caused by voluntary action cannot be attributed to sensory-based prediction or kinesthetic cues. Rather, prediction from the efference copy of the motor command would be crucial for improving temporal sensitivity.
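
    The just noticeable difference (JND) reported above is conventionally read off a psychometric function fitted to the proportion of 'sound-first' responses across onset asynchronies. A minimal sketch, assuming a cumulative-Gaussian model; the response proportions and parameter ranges are entirely invented, not taken from the paper:

```python
import math

def gauss_cdf(x, mu, sigma):
    """Cumulative Gaussian evaluated via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical 'sound-first' response proportions per stimulus onset asynchrony (ms);
# positive SOA means the sound trailed the touch. Data are illustrative only.
soas = [-120, -80, -40, 0, 40, 80, 120]
p_sound_first = [0.05, 0.12, 0.30, 0.50, 0.72, 0.90, 0.96]

# Least-squares grid fit of a cumulative Gaussian: mu is the point of subjective
# simultaneity (PSS); sigma sets the slope of the psychometric function.
best_mu, best_sigma, best_err = None, None, float("inf")
for mu in range(-30, 31):          # PSS candidates, 1-ms steps
    for sigma in range(5, 151):    # slope candidates, 1-ms steps
        err = sum((gauss_cdf(s, mu, sigma) - p) ** 2
                  for s, p in zip(soas, p_sound_first))
        if err < best_err:
            best_mu, best_sigma, best_err = mu, sigma, err

# JND as the 50%-to-75% distance on the fitted curve: 0.6745 * sigma.
jnd = 0.6745 * best_sigma
print(best_mu, best_sigma, round(jnd, 1))
```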

  15. Auditory filters at low-frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Møller, Henrik

    2009-01-01

    Prediction and assessment of low-frequency noise problems requires information about the auditory filter characteristics at low-frequencies. Unfortunately, data at low-frequencies is scarce and practically no results have been published for frequencies below 100 Hz. Extrapolation of ERB results......-ear transfer function), the asymmetry of the auditory filter changed from steeper high-frequency slopes at 1000 Hz to steeper low-frequency slopes below 100 Hz. Increasing steepness at low-frequencies of the middle-ear high-pass filter is thought to cause this effect. The dynamic range of the auditory filter...

  16. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2014-01-01

    Conclusion: Based on the obtained results, a significant reduction in auditory memory was seen in the aged group, and the Persian version of the dichotic auditory-verbal memory test, like many other auditory-verbal memory tests, showed the effect of aging on auditory-verbal memory performance.

  17. Configural Effect in Multiple-Cue Probability Learning

    Science.gov (United States)

    Edgell, Stephen E.; Castellan, N. John, Jr.

    1973-01-01

    In a nonmetric multiple-cue probability learning task involving 2 binary cue dimensions, it was found that Ss can learn to use configural or pattern information (a) when only the configural information is relevant, and (b) when, in addition to the configural information, one or both of the cue dimensions are relevant. (Author/RK)

  18. Effects of Typographical Cues on Reading and Recall of Text.

    Science.gov (United States)

    Lorch, Robert F., Jr.; And Others

    1995-01-01

    Effects of typographical cues on text memory were investigated in 2 experiments involving 204 college students. Findings demonstrated that effects of typographical cues on memory were mediated by effects on attention during reading. Typographical cues appeared to increase attention only to the signaled content, resulting in better memory. (SLD)

  19. Responsiveness of Nigerian Students to Pictorial Depth Cues.

    Science.gov (United States)

    Evans, G. S.; Seddon, G. M.

    1978-01-01

    Three groups of Nigerian high school and college students were tested for response to four pictorial depth cues. Students had more difficulty with cues concerning the relative size of objects and the foreshortening of straight lines than with cues involving overlap of lines and distortion of the angles between lines. (Author/JEG)

  20. Emotional pictures and sounds: A review of multimodal interactions of emotion cues in multiple domains

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2014-12-01

    Full Text Available In everyday life, multiple sensory channels jointly trigger emotional experiences and one channel may alter processing in another channel. For example, seeing an emotional facial expression and hearing the voice's emotional tone will jointly create the emotional experience. This example, where auditory and visual input is related to social communication, has gained considerable attention from researchers. However, interactions of visual and auditory emotional information are not limited to social communication but can extend to much broader contexts including human, animal, and environmental cues. In this article, we review current research on audiovisual emotion processing beyond face-voice stimuli to develop a broader perspective on multimodal interactions in emotion processing. We argue that current concepts of multimodality should be extended to consider an ecologically valid variety of stimuli in audiovisual emotion processing. Therefore, we provide an overview of studies in which emotional sounds and interactions with complex pictures of scenes were investigated. In addition to behavioral studies, we focus on neuroimaging, electrophysiological, and peripheral physiological findings. Furthermore, we integrate these findings and identify similarities and differences. We conclude with suggestions for future research.

  1. Use of auditory learning to manage listening problems in children

    OpenAIRE

    Moore, David R.; Halliday, Lorna F.; Amitay, Sygal

    2008-01-01

    This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers hav...

  2. Ambiguous Tilt and Translation Motion Cues after Space Flight and Otolith Assessment during Post-Flight Re-Adaptation

    Science.gov (United States)

    Wood, Scott J.; Clarke, A. H.; Harm, D. L.; Rupert, A. H.; Clement, G. R.

    2009-01-01

    Adaptive changes during space flight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions following G transitions. These studies are designed to examine both the physiological basis and the operational implications of disorientation and tilt-translation disturbances following short-duration space flights.

  3. Material differences of auditory source retrieval:Evidence from event-related potential studies

    Institute of Scientific and Technical Information of China (English)

    NIE AiQing; GUO ChunYan; SHEN MoWei

    2008-01-01

    Two event-related potential experiments were conducted to investigate the temporal and spatial distributions of the old/new effects for the item recognition task and the auditory source retrieval task, using pictures and Chinese characters as stimuli, respectively. Stimuli were presented at the center of the screen with their names read out simultaneously by either a female or a male voice during the study phase, and two tests were then performed separately. One test task was to differentiate the old items from the new ones; the other was to judge items read out by a certain voice during the study phase as targets and the other items as non-targets. The results showed that the old/new effect of the auditory source retrieval task was more sustained over time than that of the item recognition task in both experiments, and the spatial distribution of the former effect was wider than that of the latter. Both experiments recorded a reliable old/new effect over the prefrontal cortex during the source retrieval task. However, there were some differences in the old/new effect for the auditory source retrieval task between pictures and Chinese characters, and LORETA source analysis indicated that the differences might be rooted in the temporal lobe. These findings demonstrate that the relationship of the old/new effects between the item recognition task and the auditory source retrieval task supports the dual-process model; the spatial and temporal distributions of the old/new effect elicited by the auditory source retrieval task are regulated by both the features of the experimental material and the perceptual attributes of the voice.

  4. Task-switching effects for visual and auditory pro- and antisaccades: evidence for a task-set inertia.

    Science.gov (United States)

    Heath, Matthew; Starrs, Faryn; Macpherson, Ewan; Weiler, Jeffrey

    2015-01-01

    The completion of an antisaccade delays the reaction time (RT) of a subsequent prosaccade; however, the converse switch does not influence RT. In accounting for this result, the task-set inertia hypothesis contends that antisaccades engender a persistent nonstandard task-set that delays the planning of a subsequent prosaccade. In contrast, the coordinate system transformation hypothesis asserts that the transformation required to construct a mirror-symmetrical target representation persistently inhibits prosaccade planning. The authors tested the latter hypothesis by examining switch-costs for pro- and antisaccades directed to visual (i.e., the stimuli used in previous work) and auditory targets. Notably, auditory cues are specified in a head-centered frame of reference prior to their conversion into the retinocentric coordinates necessary for saccade output. Thus, if the coordinate system transformation hypothesis is correct, then auditory pro- and antisaccades should elicit a bidirectional switch-cost because each requires a coordinate transformation. RTs for visual and auditory modalities showed a reliable prosaccade switch-cost of equivalent magnitude. Moreover, performance (e.g., movement time) and kinematic (e.g., velocity) variables indicated the switch-cost was restricted to response planning. As such, the results are incompatible with the coordinate system transformation hypothesis and therefore provide convergent evidence that a task-set inertia contributes to the prosaccade switch-cost.

  5. Introspective responses to cues and motivation to reduce cigarette smoking influence state and behavioral responses to cue exposure.

    Science.gov (United States)

    Veilleux, Jennifer C; Skinner, Kayla D

    2016-09-01

    In the current study, we aimed to extend smoking cue-reactivity research by evaluating delay discounting as an outcome of cigarette cue exposure. We also separated introspection in response to cues (e.g., self-reporting craving and affect) from cue exposure alone, to determine if introspection changes behavioral responses to cigarette cues. Finally, we included measures of quit motivation and resistance to smoking to assess motivational influences on cue exposure. Smokers were invited to participate in an online cue-reactivity study. Participants were randomly assigned to view smoking images or neutral images, and were randomized to respond to cues with either craving and affect questions (i.e., introspection) or filler questions. Following cue exposure, participants completed a delay discounting task and then reported state affect, craving, and resistance to smoking, as well as an assessment of quit motivation. We found that after controlling for trait impulsivity, participants who introspected on craving and affect showed higher delay discounting, irrespective of cue type, but we found no effect of response condition on subsequent craving (i.e., craving reactivity). We also found that motivation to quit interacted with experimental conditions to predict state craving and state resistance to smoking. Although asking about craving during cue exposure did not increase later craving, it resulted in greater discounting of delayed rewards. Overall, our findings suggest the need to further assess the implications of introspection and motivation for behavioral outcomes of cue exposure.
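
    The delay discounting outcome used above is conventionally summarized with a hyperbolic model: the subjective value of a reward of amount A delayed by D is V = A / (1 + kD), where a larger k means steeper discounting. A small illustration; the amounts, delays, and k values here are arbitrary, not the study's data:

```python
def hyperbolic_value(amount, delay_days, k):
    """Subjective present value of a delayed reward under hyperbolic discounting."""
    return amount / (1.0 + k * delay_days)

# A steep discounter (high k) devalues a delayed $100 far more than a shallow one.
steep = hyperbolic_value(100, 30, k=0.10)    # -> 25.0
shallow = hyperbolic_value(100, 30, k=0.01)  # -> ~76.9
print(round(steep, 1), round(shallow, 1))
```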

  6. [Approaches to therapy of auditory agnosia].

    Science.gov (United States)

    Fechtelpeter, A; Göddenhenrich, S; Huber, W; Springer, L

    1990-01-01

    In a 41-year-old stroke patient with bitemporal brain damage, we found severe signs of auditory agnosia 6 months after onset. Recognition of environmental sounds was extremely impaired when tested in a multiple-choice sound-picture matching task, whereas auditory discrimination between sounds and picture identification by written names were almost undisturbed. In a therapy experiment, we tried to enhance sound recognition via semantic categorization and association, imitation of sounds, and analysis of auditory features. The stimulation of conscious auditory analysis proved to be increasingly effective over a 4-week period of therapy. We were able to show that the patient's improvement was not simply an effect of practice; it was stable and carried over to untrained items.

  7. Environment for Auditory Research Facility (EAR)

    Data.gov (United States)

    Federal Laboratory Consortium — EAR is an auditory perception and communication research center enabling state-of-the-art simulation of various indoor and outdoor acoustic environments. The heart...

  8. Effect of omega-3 on auditory system

    Directory of Open Access Journals (Sweden)

    Vida Rahimi

    2014-01-01

    Full Text Available Background and Aim: Omega-3 fatty acids have structural and biological roles in the body's various systems, and numerous studies have investigated them. The auditory system is affected as well. The aim of this article was to review research on the effect of omega-3 on the auditory system. Methods: We searched the Medline, Google Scholar, PubMed, Cochrane Library, and SID search engines with the keywords "auditory" and "omega-3", and read textbooks on this subject published between 1970 and 2013. Conclusion: Both excess and deficient amounts of dietary omega-3 fatty acids can harm fetal and infant growth and the development of the brain and central nervous system, especially the auditory system. It is important to determine the adequate dosage of omega-3.

  9. Cue-Based Feeding in the NICU.

    Science.gov (United States)

    Whetten, Cynthia H

    In NICU settings, caring for neonates born as early as 23 weeks gestation presents unique challenges for caregivers. Traditionally, preterm infants who are learning to orally feed take a predetermined volume of breast milk or formula at scheduled intervals, regardless of their individual ability to coordinate each feeding. Evidence suggests that this volume-driven feeding model should be replaced with a more individualized, developmentally appropriate practice. Evidence from the literature suggests that preterm infants fed via cue-based feeding reach full oral feeding status faster than their volume-feeding counterparts and have shorter lengths of stay in the hospital. Changing practice to infant-driven or cue-based feedings in the hospital setting requires staff education, documentation, and team-based communication.

  10. Cleaning MEG artifacts using external cues.

    Science.gov (United States)

    Tal, I; Abeles, M

    2013-07-15

    Using EEG, ECoG, MEG, and microelectrodes to record brain activity is prone to multiple artifacts. The main power line (mains line), video equipment, mechanical vibrations and activities outside the brain are the most common sources of artifacts. MEG amplitudes are low, and even small artifacts distort recordings. In this study, we show how these artifacts can be efficiently removed by recording external cues during MEG recordings. These external cues are subsequently used to register the precise times or spectra of the artifacts. The results indicate that these procedures preserve both the spectra and the time domain wave-shapes of the neuromagnetic signal, while successfully reducing the contribution of the artifacts to the target signals without reducing the rank of the data.
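
    One simple way to exploit an external reference recording of this kind is to regress it out of each channel by least squares. This toy sketch is not the paper's full procedure (which registers artifact times and spectra from the cues); the 50 Hz "mains" cue and the 11 Hz "brain" component are invented for illustration:

```python
import math

def regress_out(signal, reference):
    """Remove the least-squares projection of reference from signal; return (cleaned, gain)."""
    gain = sum(s * r for s, r in zip(signal, reference)) / sum(r * r for r in reference)
    return [s - gain * r for s, r in zip(signal, reference)], gain

fs = 1000.0                       # sample rate (Hz)
t = [i / fs for i in range(2000)]
mains = [math.sin(2 * math.pi * 50.0 * x) for x in t]        # external reference channel
brain = [0.1 * math.sin(2 * math.pi * 11.0 * x) for x in t]  # 11 Hz "neural" signal
meg = [b + 0.8 * m for b, m in zip(brain, mains)]            # contaminated channel

cleaned, gain = regress_out(meg, mains)
residual = max(abs(c - b) for c, b in zip(cleaned, brain))
print(round(gain, 3), residual < 1e-6)  # recovered mains gain; cleaned trace matches "brain"
```

    The regression preserves any brain component orthogonal to the reference, which is why such methods leave the neuromagnetic spectra and wave-shapes largely intact.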

  11. Teaching hand-washing with pictorial cues

    Directory of Open Access Journals (Sweden)

    Timo Saloviita

    2016-01-01

    Full Text Available Applied behavior analysis has been shown to be an effective means of teaching daily living skills to individuals with intellectual disability. In the present study, pictorial cues based on task analysis, a system of least prompts, and social reinforcement were used to teach a man with mild intellectual disability to wash his hands correctly. An ABAB reversal design was used, with follow-up after two weeks. The results show a rapid increase in hand-washing skill.

  12. A critical period for auditory thalamocortical connectivity

    DEFF Research Database (Denmark)

    Rinaldi Barkat, Tania; Polley, Daniel B; Hensch, Takao K

    2011-01-01

    connectivity by in vivo recordings and day-by-day voltage-sensitive dye imaging in an acute brain slice preparation. Passive tone-rearing modified response strength and topography in mouse primary auditory cortex (A1) during a brief, 3-d window, but did not alter tonotopic maps in the thalamus. Gene...... locus of change for the tonotopic plasticity. The evolving postnatal connectivity between thalamus and cortex in the days following hearing onset may therefore determine a critical period for auditory processing....

  13. Integration of polarization and chromatic cues in the insect sky compass.

    Science.gov (United States)

    el Jundi, Basil; Pfeiffer, Keram; Heinze, Stanley; Homberg, Uwe

    2014-06-01

    Animals relying on a celestial compass for spatial orientation may use the position of the sun, the chromatic or intensity gradient of the sky, the polarization pattern of the sky, or a combination of these cues as compass signals. Behavioral experiments in bees and ants, indeed, showed that direct sunlight and sky polarization play a role in sky compass orientation, but the relative importance of these cues is species-specific. Intracellular recordings from polarization-sensitive interneurons in the desert locust and monarch butterfly suggest that inputs from different eye regions, including polarized-light input through the dorsal rim area of the eye and chromatic/intensity gradient input from the main eye, are combined at the level of the medulla to create a robust compass signal. Conflicting input from the polarization and chromatic/intensity channels, resulting from eccentric receptive fields, is eliminated at the level of the anterior optic tubercle and central complex through internal compensation for changing solar elevations, which requires input from a circadian clock. Across several species, the central complex likely serves as an internal sky compass, combining E-vector information with other celestial cues. Descending neurons, likewise, respond both to zenithal polarization and to unpolarized cues in an azimuth-dependent way.

  14. Brain dynamic mechanisms on the visual attention scale with Chinese characters cues

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The temporal dynamics evoked in the brain by the scale of visual attention, cued by Chinese characters, were studied by recording event-related potentials (ERPs). With the orientation of visual attention fixed, 14 healthy young participants performed a search task in which the search array was preceded by the Chinese character cues "大, 中, 小" (large, medium, small). 128-channel scalp ERPs were recorded to study the role the scale of visual attention plays in visual spatial attention. The results showed no significant difference in the ERP components evoked by the three Chinese character cues except for the inferoposterior N2 latency. The P2 and N2 amplitudes and latencies evoked by targets differed significantly across the large, medium, and small cues, while the P1 and N1 components showed no significant difference. The results suggest that processing of the scale of visual attention mainly involves the P2 and N2 components, whereas the P1 and N1 components are mainly related to the processing of visual orientation information.

  15. Children Use Wealth Cues to Evaluate Others.

    Directory of Open Access Journals (Sweden)

    Kristin Shutts

    Full Text Available Wealth differences between individuals are ubiquitous in modern society, and often serve as the basis for biased social evaluations among adults. The present research probed whether children use cues that are commonly associated with wealth differences in society to guide their consideration of others. In Study 1, 4-5-year-old participants from diverse racial backgrounds expressed preferences for children who were paired with high-wealth cues; White children in Study 1 also matched high-wealth stimuli with White faces. Study 2 conceptually replicated the preference effect from Study 1, and showed that young children (4-6 years) also use material wealth indicators to guide their inferences about people's relative standing in other domains (i.e., competence and popularity). Study 3 revealed that children (5-9 years) use a broad range of wealth cues to guide their evaluations of, and actions toward, unfamiliar people. Further, biased responses were not attenuated among children whose families were lower in socioeconomic status. Often overlooked by those who study children's attitudes and stereotypes, social class markers appear to influence evaluations, inferences, and behavior early in development.

  16. Children Use Wealth Cues to Evaluate Others.

    Science.gov (United States)

    Shutts, Kristin; Brey, Elizabeth L; Dornbusch, Leah A; Slywotzky, Nina; Olson, Kristina R

    2016-01-01

    Wealth differences between individuals are ubiquitous in modern society, and often serve as the basis for biased social evaluations among adults. The present research probed whether children use cues that are commonly associated with wealth differences in society to guide their consideration of others. In Study 1, 4-5-year-old participants from diverse racial backgrounds expressed preferences for children who were paired with high-wealth cues; White children in Study 1 also matched high-wealth stimuli with White faces. Study 2 conceptually replicated the preference effect from Study 1, and showed that young children (4-6 years) also use material wealth indicators to guide their inferences about people's relative standing in other domains (i.e., competence and popularity). Study 3 revealed that children (5-9 years) use a broad range of wealth cues to guide their evaluations of, and actions toward, unfamiliar people. Further, biased responses were not attenuated among children whose families were lower in socioeconomic status. Often overlooked by those who study children's attitudes and stereotypes, social class markers appear to influence evaluations, inferences, and behavior early in development.

  17. Simple ears-flexible behavior: Information processing in the moth auditory pathway

    Institute of Scientific and Technical Information of China (English)

    Gerit PFUHL; Blanka KALINOVA; Irena VALTEROVA; Bente G.BERG

    2015-01-01

    Lepidoptera evolved tympanic ears in response to echolocating bats. Comparative studies have shown that moth ears evolved many times independently from chordotonal organs. With only 1 to 4 receptor cells, they are among the simplest hearing organs. The small number of receptors does not imply simplicity, however, neither in behavior nor in the neural circuit. Behaviorally, the response to ultrasound is far from a simple reflex. Moths' escape behavior is modulated by a variety of cues, especially pheromones, which can alter the auditory response. Neurally, the receptor cell(s) diverge onto many interneurons, enabling parallel processing and feature extraction. Ascending interneurons and sound-sensitive brain neurons innervate a neuropil in the ventrolateral protocerebrum. Further, recent electrophysiological data provide the first glimpses into how the acoustic response is modulated, as well as how ultrasound influences the other senses. So far, the auditory pathway has been studied in noctuids. The findings agree well with common computational principles found in other insects. However, moth ears also show unique mechanical and neural adaptations. Here, we first describe the variety of moths' auditory behavior, especially the co-option of ultrasonic signals for intraspecific communication. Second, we describe the current knowledge of the neural pathway gained from noctuid moths. Finally, we argue that Galleriinae, which show negative and positive phonotaxis, are an interesting model species for future electrophysiological studies of the auditory pathway and multimodal sensory integration, and so are ideally suited for the study of the evolution of behavioral mechanisms given a few receptors [Current Zoology 61 (2): 292-302, 2015].

  18. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have a specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normally fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits; this abnormal timing may underlie their disfluency.

  19. Auditory processing in fragile x syndrome.

    Science.gov (United States)

    Rotschafer, Sarah E; Razak, Khaleel A

    2014-01-01

    Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral, and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle response is seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning, and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable, and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. That similarly abnormal responses are present in Fmr1 KO mice suggests that the underlying cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than the more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional, and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, and circuit-level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  20. Auditory Processing in Fragile X Syndrome

    Directory of Open Access Journals (Sweden)

    Sarah E Rotschafer

    2014-02-01

    Full Text Available Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle is also seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. That similarly abnormal responses are present in Fmr1 KO mice suggests that the underlying cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than the more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular and circuit-level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  1. Auditory model inversion and its application

    Institute of Scientific and Technical Information of China (English)

    ZHAO Heming; WANG Yongqi; CHEN Xueqin

    2005-01-01

    The auditory model has been applied to several areas of speech signal processing and appears to be effective. This paper presents the inverse transform of each stage of one widely used auditory model. First, the correlogram is inverted and phase information is reconstructed by repeated iterations in order to obtain the auditory-nerve firing rate. The next step is to recover the negative parts of the signal via the reverse process of half-wave rectification (HWR). Finally, the inner hair cell/synapse model and the Gammatone filters are inverted. Thus the inversion of the whole auditory model is achieved. An application to noisy speech enhancement based on this auditory model inversion algorithm is proposed. Experiments show that the method is effective in reducing noise, and particularly so when the SNR of the noisy speech is low, where it outperforms other methods. The auditory model inversion method given in this paper is therefore applicable to speech enhancement.
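    The iterative recovery step described above (undoing half-wave rectification on a band-limited signal) can be sketched as alternating projections: alternately enforce the assumed frequency band and restore the known positive samples. This is a minimal illustrative sketch, not the paper's implementation; the function name, the fractional band specification, and the iteration count are assumptions.

```python
import numpy as np

def invert_hwr(rect, band=(0.004, 0.008), n_iter=200):
    """Recover the negative portions of a half-wave-rectified,
    band-limited signal by alternating projections:
    (1) project onto the assumed frequency band (given as
        fractions of the sampling rate),
    (2) keep the known positive samples fixed."""
    n = len(rect)
    x = rect.copy()
    lo, hi = int(band[0] * n), int(band[1] * n)
    for _ in range(n_iter):
        # Projection 1: zero all spectral bins outside the band
        X = np.fft.rfft(x)
        mask = np.zeros_like(X)
        mask[lo:hi] = 1.0
        x = np.fft.irfft(X * mask, n)
        # Projection 2: samples where the rectified signal is
        # positive are known exactly; elsewhere keep only the
        # reconstructed negative part
        x = np.where(rect > 0, rect, np.minimum(x, 0.0))
    return x
```

    For a band-limited test signal (e.g. a rectified sinusoid) the iteration converges geometrically, restoring the missing negative half-waves.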

  2. Auditory dysfunction associated with solvent exposure

    Directory of Open Access Journals (Sweden)

    Fuente Adrian

    2013-01-01

    Full Text Available Abstract Background A number of studies have demonstrated that solvents may induce auditory dysfunction. However, there is still little knowledge regarding the main signs and symptoms of solvent-induced hearing loss (SIHL). The aim of this research was to investigate the association between solvent exposure and adverse effects on peripheral and central auditory functioning with a comprehensive audiological test battery. Methods Seventy-two solvent-exposed workers and 72 non-exposed workers were selected to participate in the study. The test battery comprised pure-tone audiometry (PTA), transient evoked otoacoustic emissions (TEOAE), Random Gap Detection (RGD) and the Hearing-in-Noise Test (HINT). Results Solvent-exposed subjects presented with poorer mean test results than non-exposed subjects. Bivariate and multivariate linear regression model analyses were performed, with one model constructed independently for each auditory outcome (PTA, TEOAE, RGD and HINT). For all of the models, solvent exposure was significantly associated with the auditory outcome. Age also appeared significantly associated with some auditory outcomes. Conclusions This study provides further evidence of the possible adverse effects of solvents on peripheral and central auditory functioning. A discussion of these effects and of the utility of selected hearing tests to assess SIHL is included.

  3. Long Latency Auditory Evoked Potentials during Meditation.

    Science.gov (United States)

    Telles, Shirley; Deepeshwar, Singh; Naveen, Kalkuni Visweswaraiah; Pailoor, Subramanya

    2015-10-01

    The auditory sensory pathway has been studied in meditators using midlatency and short-latency auditory evoked potentials. The present study evaluated long latency auditory evoked potentials (LLAEPs) during meditation. Sixty male participants, aged between 18 and 31 years (group mean±SD, 20.5±3.8 years), were assessed in 4 mental states based on descriptions in the traditional texts. They were (a) random thinking, (b) nonmeditative focusing, (c) meditative focusing, and (d) meditation. The order of the sessions was randomly assigned. The LLAEP components studied were P1 (40-60 ms), N1 (75-115 ms), P2 (120-180 ms), and N2 (180-280 ms). For each component, the peak amplitude and peak latency were measured from the prestimulus baseline. There was a significant decrease in the peak latency of the P2 component during and after meditation. The results suggest that meditation facilitates the processing of information in the auditory association cortex, whereas fewer neurons were recruited during random thinking and non-meditative focused thinking at the level of the secondary auditory cortex, auditory association cortex, and anterior cingulate cortex.

  4. Task-specific modulation of human auditory evoked responses in a delayed-match-to-sample task

    Directory of Open Access Journals (Sweden)

    Feng eRong

    2011-05-01

    Full Text Available In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in the human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants performed an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial-filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect located in the vicinity of the left auditory cortex. For the right auditory cortex, a suppression effect was observed in both the DMS and control tasks. Furthermore, coherence analysis revealed a DMS-specific enhancement of beta-band (12-20 Hz) functional interaction between sources in the left auditory cortex and those in the left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.

  5. The use of auditory and visual context in speech perception by listeners with normal hearing and listeners with cochlear implants

    Directory of Open Access Journals (Sweden)

    Matthew eWinn

    2013-11-01

    Full Text Available There is a wide range of acoustic and visual variability across different talkers and different speaking contexts. Listeners with normal hearing accommodate that variability in ways that facilitate efficient perception, but it is not known whether listeners with cochlear implants can do the same. In this study, listeners with normal hearing (NH) and listeners with cochlear implants (CIs) were tested for accommodation to auditory and visual phonetic contexts created by gender-driven speech differences as well as vowel coarticulation and lip rounding in both consonants and vowels. Accommodation was measured as the shifting of perceptual boundaries between /s/ and /ʃ/ sounds in various contexts, as modeled by mixed-effects logistic regression. Owing to the spectral contrasts thought to underlie these context effects, CI listeners were predicted to perform poorly, but showed considerable success. Listeners with cochlear implants not only showed sensitivity to auditory cues to gender, they were also able to use visual cues to gender (i.e., faces) as a supplement or proxy for information in the acoustic domain, in a pattern that was not observed for listeners with normal hearing. Spectrally-degraded stimuli heard by listeners with normal hearing generally did not elicit strong context effects, underscoring the limitations of noise vocoders and/or the importance of experience with electric hearing. Visual cues for consonant lip rounding and vowel lip rounding were perceived in a manner consistent with coarticulation and were generally used more heavily by listeners with CIs. Results suggest that listeners with cochlear implants are able to accommodate various sources of acoustic variability either by attending to appropriate acoustic cues or by inferring them via the visual signal.
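    A perceptual boundary of the kind described above can be estimated by fitting a logistic psychometric function to the categorical responses along a stimulus continuum and solving for the 50% point. The sketch below is a simplified, hypothetical illustration (a plain two-parameter logistic fit by gradient ascent, not the mixed-effects model used in the study); the function name and parameters are assumptions.

```python
import numpy as np

def fit_boundary(steps, responses, n_iter=2000, lr=0.3):
    """Fit P(/sh/ response) = sigmoid(b0 + b1 * step) to binary
    (or proportion) responses along a continuum, and return the
    category boundary: the step at which P = 0.5."""
    x = np.asarray(steps, float)
    y = np.asarray(responses, float)
    xc = x - x.mean()              # centre the predictor for stability
    b0 = b1 = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * xc)))
        b0 += lr * np.mean(y - p)          # gradient of the mean log-likelihood
        b1 += lr * np.mean((y - p) * xc)
    return x.mean() - b0 / b1              # step where the fit crosses 0.5
```

    A context effect would then show up as a shift of this boundary between fits to the different context conditions.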

  6. Beethoven's Last Piano Sonata and Those Who Follow Crocodiles: Cross-Domain Mappings of Auditory Pitch in a Musical Context

    Science.gov (United States)

    Eitan, Zohar; Timmers, Renee

    2010-01-01

    Though auditory pitch is customarily mapped in Western cultures onto spatial verticality (high-low), both anthropological reports and cognitive studies suggest that pitch may be mapped onto a wide variety of other domains. We collected a total of 35 pitch mappings and investigated in four experiments how these mappings are used and…

  7. Effect of auditory feedback differs according to side of hemiparesis: a comparative pilot study

    Directory of Open Access Journals (Sweden)

    Bensmail Djamel

    2009-12-01

    Full Text Available Abstract Background Following stroke, patients frequently demonstrate loss of motor control and function and altered kinematic parameters of reaching movements. Feedback is an essential component of rehabilitation, and auditory feedback of kinematic parameters may be a useful tool for rehabilitation of reaching movements at the impairment level. The aim of this study was to investigate the effect of 2 types of auditory feedback on the kinematics of reaching movements in hemiparetic stroke patients and to compare differences between patients with right (RHD) and left hemisphere damage (LHD). Methods 10 healthy controls, 8 stroke patients with LHD and 8 with RHD were included. Patient groups had similar levels of upper limb function. Two types of auditory feedback (spatial and simple) were developed and provided online during reaching movements to 9 targets in the workspace. Kinematics of the upper limb were recorded with an electromagnetic system. Kinematics were compared between groups (Mann-Whitney test) and the effect of auditory feedback on kinematics was tested within each patient group (Friedman test). Results In the patient groups, peak hand velocity was lower, the number of velocity peaks was higher, and movements were more curved than in the healthy group. Despite a similar clinical level, kinematics differed between the LHD and RHD groups. Peak velocity was similar, but LHD patients had fewer velocity peaks and less curved movements than RHD patients. The addition of auditory feedback improved the curvature index in patients with RHD but worsened peak velocity, the number of velocity peaks and the curvature index in LHD patients. No difference between types of feedback was found in either patient group. Conclusion In stroke patients, side of lesion should be considered when examining arm reaching kinematics. Further studies are necessary to evaluate differences in responses to auditory feedback between patients with lesions in opposite hemispheres.

  8. Expression and function of scleraxis in the developing auditory system.

    Directory of Open Access Journals (Sweden)

    Zoe F Mann

    Full Text Available A study of genes expressed in the developing inner ear identified the bHLH transcription factor Scleraxis (Scx) in the developing cochlea. Previous work has demonstrated an essential role for Scx in the differentiation and development of tendons, ligaments and cells of chondrogenic lineage. Expression in the cochlea has been shown previously; however, the functional role of Scx in the cochlea is unknown. Using a Scx-GFP reporter mouse line, we examined the spatial and temporal patterns of Scx expression in the developing cochlea between embryonic day 13.5 and postnatal day 25. Embryonically, Scx is expressed broadly throughout the cochlear duct and surrounding mesenchyme, and at postnatal ages becomes restricted to the inner hair cells and the interdental cells of the spiral limbus. Deletion of Scx results in hearing impairment, indicated by elevated auditory brainstem response (ABR) thresholds and diminished distortion product otoacoustic emission (DPOAE) amplitudes across a range of frequencies. No changes in either gross cochlear morphology or expression of the Scx target genes Col2A, Bmp4 or Sox9 were observed in Scx(-/-) mutants, suggesting that the auditory defects observed in these animals may be a result of unidentified Scx-dependent processes within the cochlea.

  9. Auditory function in vestibular migraine

    Directory of Open Access Journals (Sweden)

    John Mathew

    2016-01-01

    Full Text Available Introduction: Vestibular migraine (VM) is a vestibular syndrome seen in patients with migraine, characterized by short spells of spontaneous or positional vertigo lasting from a few seconds to weeks. Migraine and VM are considered to result from chemical abnormalities in the serotonin pathway. Neuhauser's diagnostic criteria for vestibular migraine are widely accepted. Research on VM is still limited and few studies have been published on this topic. Materials and Methods: This study has two parts. In the first part, we conducted a retrospective chart review of eighty consecutive patients diagnosed with vestibular migraine and determined the frequency of auditory dysfunction in these patients. The second part was a prospective case-control study in which we compared the audiological parameters of thirty patients diagnosed with VM with those of thirty normal controls to look for significant differences. Results: The frequency of vestibular migraine in our population is 22%. The frequency of hearing loss in VM is 33%. Conclusion: There is a significant difference between cases and controls with regard to the presence of distortion product otoacoustic emissions in both ears. This finding suggests that the hearing loss in VM is cochlear in origin.

  10. An Investigation of Spatial Hearing in Children with Normal Hearing and with Cochlear Implants and the Impact of Executive Function

    Science.gov (United States)

    Misurelli, Sara M.

    The ability to analyze an "auditory scene" (that is, to selectively attend to a target source while simultaneously segregating and ignoring distracting information) is one of the most important and complex skills utilized by normal hearing (NH) adults. The NH adult auditory system and brain work rather well to segregate auditory sources in adverse environments. However, for some children and individuals with hearing loss, selectively attending to one source in noisy environments can be extremely challenging. In a normal auditory system, information arriving at each ear is integrated, and these binaural cues aid speech understanding in noise. A growing number of individuals who are deaf now receive cochlear implants (CIs), which supply hearing through electrical stimulation of the auditory nerve. In particular, bilateral cochlear implants (BiCIs) are becoming more prevalent, especially in children. However, because CI sound processing lacks both fine-structure cues and coordination between stimulation at the two ears, binaural cues may be either absent or inconsistent. For children with NH and with BiCIs, this difficulty in segregating sources is of particular concern because their learning and development commonly occur within the context of complex auditory environments. This dissertation explores the ability of children with NH and with BiCIs to function in everyday noisy environments. The goals of this work are to (1) investigate source segregation abilities in children with NH and with BiCIs; (2) examine the effect of target-interferer similarity and the benefits of source segregation for children with NH and with BiCIs; (3) investigate measures of executive function that may predict performance in complex and realistic auditory tasks of source segregation for listeners with NH; and (4) examine source segregation abilities in NH listeners, from school-age to adults.

  11. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    Directory of Open Access Journals (Sweden)

    Marc R. Kamke

    2014-06-01

    Full Text Available The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  12. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    Science.gov (United States)

    Kamke, Marc R.; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945

  13. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli.

    Science.gov (United States)

    Kamke, Marc R; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  14. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Full Text Available Abstract Background Auditory sustained responses have recently been suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low-frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity, but the relationship between the amplitudes of auditory evoked sustained responses and the repetition rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results Sustained fields were elicited by white noise and by repeating frozen noise stimuli with repetition rates of 5, 10, 50, 200 and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition-rate range, the response amplitudes at the 5 Hz repetition rate were significantly larger than at 500 Hz. Conclusions The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained over a wide range of repetition rates. The persistence of periodicity sensitivity below the pitch range suggests that, in addition to processing the fundamental frequency of the voice, sustained field generators can also resolve low-frequency temporal modulations in the speech envelope.
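    A "repeating frozen noise" stimulus of the kind described above is periodic at the repetition rate but noise-like within each period: one fixed noise segment of length 1/rate is tiled for the whole stimulus duration. The sketch below is an illustrative construction under assumed parameters (function name, sampling rate, and seeding are not from the study).

```python
import numpy as np

def frozen_periodic_noise(rate_hz, duration_s, fs=44100, seed=0):
    """Build a repeating frozen-noise stimulus: draw one segment of
    Gaussian white noise lasting 1/rate_hz, then tile it for the
    full duration. The result is periodic at rate_hz."""
    rng = np.random.default_rng(seed)
    seg_len = int(round(fs / rate_hz))       # samples per period
    segment = rng.standard_normal(seg_len)   # the "frozen" segment
    n_total = int(round(fs * duration_s))
    reps = int(np.ceil(n_total / seg_len))
    return np.tile(segment, reps)[:n_total]
```

    Setting rate_hz to 5, 10, 50, 200 or 500 while keeping the seed fixed yields the family of repetition rates used in the study; a fresh (untiled) noise vector gives the aperiodic white-noise control.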

  15. Electrical brain imaging evidences left auditory cortex involvement in speech and non-speech discrimination based on temporal features

    Directory of Open Access Journals (Sweden)

    Jancke Lutz

    2007-12-01

    Full Text Available Abstract Background Speech perception is based on a variety of spectral and temporal acoustic features available in the acoustic signal. Voice-onset time (VOT) is considered an important cue that is cardinal for phonetic perception. Methods In the present study, we recorded and compared scalp auditory evoked potentials (AEPs) in response to consonant-vowel syllables (CVs) with varying voice-onset times (VOTs) and non-speech analogues with varying noise-onset times (NOTs). In particular, we aimed to investigate the spatio-temporal pattern of acoustic feature processing underlying elemental speech perception and to relate this temporal processing mechanism to specific activations of the auditory cortex. Results Results show that the characteristic AEP waveform in response to consonant-vowel syllables is on a par with those of non-speech sounds with analogous temporal characteristics. The amplitudes of the N1a and N1b components of the auditory evoked potentials correlated significantly with the duration of the VOT in CV syllables and, likewise, with the duration of the NOT in non-speech sounds. Furthermore, current density maps indicate overlapping supratemporal networks involved in the perception of both speech and non-speech sounds, with a bilateral activation pattern during the N1a time window and leftward asymmetry during the N1b time window. An elaborate regional statistical analysis of the activation over the middle and posterior portions of the supratemporal plane (STP) revealed strong left-lateralized responses over the middle STP for both the N1a and N1b components, and a functional leftward asymmetry over the posterior STP for the N1b component. Conclusion The present data demonstrate overlapping spatio-temporal brain responses during the perception of temporal acoustic cues in both speech and non-speech sounds. Source estimation evidences a preponderant role of the left middle and posterior auditory cortex in speech and non-speech discrimination based on temporal features.

  16. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect, in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive, but recent reports have challenged this view. Here we study the effect of visual spatial attention on the McGurk effect. By presenting a movie of two faces symmetrically displaced to each side of a central fixation point and dubbed with a single auditory speech track, we were able to discern the influences… integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception prior to audiovisual integration and that the effect propagates through audiovisual integration…

  17. Relative contributions of spatial weighting, explicit knowledge and proprioception to hand localisation during positional ambiguity.

    Science.gov (United States)

    Bellan, Valeria; Gilpin, Helen R; Stanton, Tasha R; Dagsdóttir, Lilja K; Gallace, Alberto; Lorimer Moseley, G

    2017-02-01

    When vision and proprioception are rendered incongruent during a hand localisation task, vision is initially weighted more than proprioception in determining location, and proprioception gains more weighting over time. However, it is not known whether, under these incongruency conditions, particular areas of space are also weighted more heavily than others, nor whether explicit knowledge of the sensory incongruence (i.e. disconfirming the perceived location of the hand) modulates the effect. Here, we hypothesised that both non-informative inputs coming from one side of space and explicit knowledge of the sensory incongruence would modulate the perceived location of the limb. Specifically, we expected spatial weighting to shift hand localisation towards the weighted area of space, and we expected greater weighting of proprioceptive input once perceived location was demonstrated to be inaccurate. We manipulated spatial weighting using an established auditory cueing paradigm (Experiment 1, n = 18) and sensory incongruence using the 'disappearing hand trick' (Experiment 2, n = 9). Our first hypothesis was not supported: spatial weighting did not modulate hand localisation. Our second hypothesis was only partially supported: disconfirmation of hand position did lead to more accurate localisations, even if participants were still unaware of their hand position. This raised the possibility that, rather than disconfirmation, a simple movement of the hand in view could update the sensory-motor system by immediately increasing the weighting of proprioceptive input relative to visual input. This third hypothesis was then confirmed (Experiment 3, n = 9). These results suggest that hand localisation is robust in the face of differential weighting of space, but open to modulation in a modality-specific manner when one sense (vision) is rendered inaccurate.

  18. Common mechanisms of spatial attention in memory and perception: a tactile dual-task study.

    Science.gov (United States)

    Katus, Tobias; Andersen, Søren K; Müller, Matthias M

    2014-03-01

    Orienting attention to locations in mnemonic representations engages processes that functionally and anatomically overlap the neural circuitry guiding prospective shifts of spatial attention. The attention-based rehearsal account predicts that the requirement to withdraw attention from a memorized location impairs memory accuracy. In a dual-task study, we simultaneously presented retro-cues and pre-cues to guide spatial attention in short-term memory (STM) and perception, respectively. The spatial direction of each cue was independent of the other. The locations indicated by the combined cues could be compatible (same hand) or incompatible (opposite hands). Incompatible directional cues decreased lateralized activity in brain potentials evoked by visual cues, indicating interference in the generation of prospective attention shifts. The detection of external stimuli at the prospectively cued location was impaired when the memorized location was part of the perceptually ignored hand. The disruption of attention-based rehearsal by means of incompatible pre-cues reduced memory accuracy and affected encoding of tactile test stimuli at the retrospectively cued hand. These findings highlight the functional significance of spatial attention for spatial STM. The bidirectional interactions between both tasks demonstrate that spatial attention is a shared neural resource of a capacity-limited system that regulates information processing in internal and external stimulus representations.

  19. Different mechanisms are responsible for dishabituation of electrophysiological auditory responses to a change in acoustic identity than to a change in stimulus location.

    Science.gov (United States)

    Smulders, Tom V; Jarvis, Erich D

    2013-11-01

    Repeated exposure to an auditory stimulus leads to habituation of the electrophysiological and immediate-early-gene (IEG) expression response in the auditory system. A novel auditory stimulus reinstates this response in a form of dishabituation. This has been interpreted as the start of new memory formation for this novel stimulus. Changes in the location of an otherwise identical auditory stimulus can also dishabituate the IEG expression response. This has been interpreted as an integration of stimulus identity and stimulus location into a single auditory object, encoded in the firing patterns of the auditory system. In this study, we further tested this hypothesis. Using chronic multi-electrode arrays to record multi-unit activity from the auditory system of awake and behaving zebra finches, we found that habituation occurs to repeated exposure to the same song and dishabituation with a novel song, similar to that described in head-fixed, restrained animals. A large proportion of recording sites also showed dishabituation when the same auditory stimulus was moved to a novel location. However, when the song was randomly moved among 8 interleaved locations, habituation occurred independently of the continuous changes in location. In contrast, when 8 different auditory stimuli were interleaved all from the same location, a separate habituation occurred to each stimulus. This result suggests that neuronal memories of the acoustic identity and spatial location are different, and that allocentric location of a stimulus is not encoded as part of the memory for an auditory object, while its acoustic properties are. We speculate that, instead, the dishabituation that occurs with a change from a stable location of a sound is due to the unexpectedness of the location change, and might be due to different underlying mechanisms than the dishabituation and separate habituations to different acoustic stimuli.

  20. Current status of auditory aging and anti-aging research.

    Science.gov (United States)

    Ruan, Qingwei; Ma, Cheng; Zhang, Ruxin; Yu, Zhuowei

    2014-01-01

    The development of presbycusis, or age-related hearing loss, is determined by a combination of genetic and environmental factors. The auditory periphery exhibits a progressive, bilateral, symmetrical reduction of auditory sensitivity to sound from high to low frequencies. The central auditory nervous system shows age-related decline in cognitive abilities, including difficulties in speech discrimination and reduced central auditory processing, ultimately resulting in auditory perceptual abnormalities. The pathophysiological mechanisms of presbycusis include excitotoxicity, oxidative stress, inflammation, and aging- and oxidative stress-induced DNA damage that results in apoptosis in the auditory pathway. However, the originating signals that trigger these mechanisms remain unclear. For instance, it is still unknown whether insulin is involved in auditory aging. Auditory aging has preclinical lesions, which manifest as asymptomatic loss of peripheral auditory nerves and changes in the plasticity of the central auditory nervous system. Currently, the diagnosis of these preclinical, reversible lesions depends on the detection of auditory impairment by functional imaging and the identification of physiological and molecular biological markers. However, despite recent improvements in the application of these markers, they remain under-utilized in clinical practice. The application of anti-senescent approaches to the prevention of auditory aging has produced inconsistent results. Future research will focus on the identification of markers for the diagnosis of preclinical auditory aging and the development of effective interventions.

  1. Experience and information loss in auditory and visual memory.

    Science.gov (United States)

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  2. Spatial and Nonspatial Escape Strategies in the Barnes Maze

    Science.gov (United States)

    Harrison, Fiona E.; Reiserer, Randall S.; Tomarken, Andrew J.; McDonald, Michael P.

    2006-01-01

    The Barnes maze is a spatial memory task that requires subjects to learn the position of a hole that can be used to escape the brightly lit, open surface of the maze. Two experiments assessed the relative importance of spatial (extra-maze) versus proximal visible cues in solving the maze. In Experiment 1, four groups of mice were trained either…

  3. The Emergence of Flexible Spatial Strategies in Young Children

    Science.gov (United States)

    Waismeyer, Anna S.; Jacobs, Lucia F.

    2013-01-01

    The development of spatial navigation in children depends not only on remembering which landmarks lead to a goal location but also on developing strategies to deal with changes in the environment or imperfections in memory. Using cue combination methods, the authors examined 3- and 4-year-old children's memory for different types of spatial cues…
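
    The "cue combination methods" mentioned in this record are usually analysed against the statistically optimal benchmark, in which each cue is weighted by its reliability (inverse variance). The sketch below illustrates that standard benchmark only; it is an assumed illustration, not the authors' own analysis code.

```python
def combine_cues(estimates, sigmas):
    """Reliability-weighted (minimum-variance) combination of cue estimates.

    Each cue i contributes an estimate with standard deviation sigmas[i];
    its weight is proportional to the reliability 1 / sigma_i**2.
    """
    reliabilities = [1.0 / s ** 2 for s in sigmas]
    total = sum(reliabilities)
    combined = sum(r * x for r, x in zip(reliabilities, estimates)) / total
    combined_sigma = (1.0 / total) ** 0.5  # never worse than the best single cue
    return combined, combined_sigma
```

    For example, combining a landmark-based estimate at 10 (sigma 1) with a less reliable estimate at 20 (sigma 2) yields 12, pulled towards the more reliable cue.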

  4. Visual cues to female physical attractiveness.

    OpenAIRE

    Tovée, M. J.; Maisey, D. S.; Emery, J. L.; Cornelissen, P. L.

    1999-01-01

    Evolutionary psychology suggests that a woman's sexual attractiveness is based on cues of health and reproductive potential. In recent years, research has focused on the ratio of the width of the waist to the width of the hips (the waist-to-hip ratio, WHR). A low WHR (i.e. a curvaceous body) is believed to correspond to the optimal fat distribution for high fertility, and so this shape should be highly attractive. In this paper we present evidence that weight scaled for height (the body mass ...

  5. Low-Level Flight Simulation: Vertical Cues

    Science.gov (United States)

    1983-09-01

    The ASPT visual system software automatically drops them from the scene at altitudes above 2000 feet AGL. ... vertical (V) field of view. The ASPT has a -15° view over the nose, -37° over the left side, and -15° over the right side. (The aircraft field of ... simulation, the ASPT/F-16 provided several instructional features that were used in this study. A video display of the HUD (Figures 1 and 2) and forward

  6. Spatial planning

    OpenAIRE

    Dimitrov, Nikola; Koteski, Cane

    2016-01-01

    The professional book "Spatial Planning" contains chapters on: space; the concept and definition of space; space as a system; spatial economics; the economic essence of space; spatial planning; social determinants of spatial planning; spatial planning as a process; factors of development and elements in spatial planning; the methodology, components and content of spatial planning; stages and types of preparation of spatial plans; spatial planning and industrialization; industrialization, urbanization and s...

  7. Sequential Monte Carlo tracking of the marginal artery by multiple cue fusion and random forest regression.

    Science.gov (United States)

    Cherry, Kevin M; Peplinski, Brandon; Kim, Lauren; Wang, Shijun; Lu, Le; Zhang, Weidong; Liu, Jianfei; Wei, Zhuoshi; Summers, Ronald M

    2015-01-01

    Given the potential importance of marginal artery localization in automated registration in computed tomography colonography (CTC), we have devised a semi-automated method of marginal vessel detection employing sequential Monte Carlo tracking (also known as particle filter tracking) by multiple cue fusion based on intensity, vesselness, organ detection, and minimum spanning tree information for poorly enhanced vessel segments. We then employed a random forest algorithm for intelligent cue fusion and decision making, which achieved high sensitivity and robustness. After applying a vessel pruning procedure to the tracking results, we achieved statistically significantly improved precision compared to a baseline Hessian detection method (2.7% versus 75.2%), combining machine-learning cue fusion (random forest) with a sequential Monte Carlo tracking mechanism. In so doing, we present the effective application of an anatomical probability map to vessel pruning as well as a supplementary spatial coordinate system for colonic segmentation and registration when this task has been confounded by colon lumen collapse.
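
    The tracking approach described in this record can be sketched as a generic particle filter whose per-particle weights come from fused image cues. Everything below is an illustrative toy (a 2-D grid, a random-walk motion model, and multiplicative cue fusion standing in for the paper's learned random-forest fusion), not the authors' implementation.

```python
import numpy as np

def track_vessel(cue_maps, n_particles=500, step_sigma=1.5, n_steps=20, seed=0):
    """Toy sequential Monte Carlo (particle filter) tracker on 2-D cue maps.

    cue_maps: list of equally shaped 2-D arrays (e.g. intensity, vesselness).
    Returns one estimated (row, col) position per iteration.
    """
    rng = np.random.default_rng(seed)
    h, w = cue_maps[0].shape
    # Initialise particles around the strongest response of the first cue.
    start = np.unravel_index(np.argmax(cue_maps[0]), (h, w))
    particles = np.array(start, dtype=float) + rng.normal(0.0, 1.0, (n_particles, 2))
    path = []
    for _ in range(n_steps):
        # Predict: random-walk motion model, clipped to the image bounds.
        particles += rng.normal(0.0, step_sigma, particles.shape)
        particles[:, 0] = np.clip(particles[:, 0], 0, h - 1)
        particles[:, 1] = np.clip(particles[:, 1], 0, w - 1)
        rows, cols = particles[:, 0].astype(int), particles[:, 1].astype(int)
        # Update: fuse cues multiplicatively (the paper instead learns the
        # fusion with a random forest over intensity, vesselness, etc.).
        weights = np.ones(n_particles)
        for cue in cue_maps:
            weights *= cue[rows, cols] + 1e-12
        weights /= weights.sum()
        # Estimate, then resample to avoid particle degeneracy.
        path.append(tuple(weights @ particles))
        particles = particles[rng.choice(n_particles, n_particles, p=weights)]
    return path
```

    Resampling after each weighted estimate keeps the particle set concentrated on high-likelihood vessel pixels, which is what lets the tracker bridge poorly enhanced segments.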

  8. Laboratory-based, cue-elicited craving and cue reactivity as predictors of naturally occurring smoking behavior.

    Science.gov (United States)

    Carpenter, Matthew J; Saladin, Michael E; DeSantis, Stacia; Gray, Kevin M; LaRowe, Steven D; Upadhyaya, Himanshu P

    2009-01-01

    Cigarette craving, one hallmark sign of nicotine dependence, is often measured in laboratory settings using cue reactivity methods. How lab measures of cue reactivity relate to real world smoking behavior is unclear, particularly among non-treatment seeking smokers. Within a larger study of hormonal effects on cue reactivity (N=78), we examined the predictive relationship of cue reactivity to smoking, each measured in several ways. Results indicated that cue-evoked craving in response to stressful imagery, and to a lesser extent, in vivo smoking cues, significantly predicted smoking behavior during the week following testing. However, this predictive relationship was absent upon controlling for reactivity to neutral cues. Nicotine dependence may moderate the relationship between cue reactivity and actual smoking, such that this predictive relationship is less robust among highly dependent smokers than among smokers low in nicotine dependence. The question of whether cue-elicited craving predicts smoking among smokers not in treatment is best answered with a qualified yes, depending on how craving is manipulated and measured. Our findings highlight important methodological and theoretical considerations for cue reactivity research.

  9. Effects of Caffeine on Auditory Brainstem Response

    Directory of Open Access Journals (Sweden)

    Saleheh Soleimanian

    2008-06-01

    Background and Aim: Blocking of adenosine receptors in the central nervous system by caffeine can increase the level of neurotransmitters such as glutamate. As adenosine receptors are present in almost all brain areas, including the central auditory pathway, caffeine may alter conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2 and 3 mg/kg BW caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed by Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared to the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after 2 and 3 mg/kg BW caffeine consumption. Wave I latency decreased significantly after 3 mg/kg BW caffeine consumption (p<0.01). Conclusion: The increase in glutamate level resulting from adenosine receptor blockade brings about changes in conduction in the central auditory pathway.

  10. Facilitated auditory detection for speech sounds.

    Science.gov (United States)

    Signoret, Carine; Gaudrain, Etienne; Tillmann, Barbara; Grimault, Nicolas; Perrin, Fabien

    2011-01-01

    If it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudo-words, and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from sub-threshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest a correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudo-words) were better detected than non-phonological stimuli (complex sounds), presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudo-words was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.
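
    Stimuli that are "energetically matched", as in this record, are commonly equalized in RMS level. The helper below is a minimal, assumed sketch of such matching, not the authors' actual stimulus-preparation procedure.

```python
import math

def match_rms(signal, target_rms):
    """Scale a waveform so its root-mean-square level equals target_rms."""
    rms = math.sqrt(sum(s * s for s in signal) / len(signal))
    if rms == 0.0:
        raise ValueError("cannot rescale a silent signal")
    return [s * (target_rms / rms) for s in signal]
```

    Equalizing RMS this way removes gross level differences between word, pseudo-word, and complex-sound stimuli, so any detection advantage cannot be a simple loudness effect.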

  12. Absence of auditory 'global interference' in autism.

    Science.gov (United States)

    Foxton, Jessica M; Stewart, Mary E; Barnard, Louise; Rodgers, Jacqui; Young, Allan H; O'Brien, Gregory; Griffiths, Timothy D

    2003-12-01

    There has been considerable recent interest in the cognitive style of individuals with Autism Spectrum Disorder (ASD). One theory, that of weak central coherence, concerns an inability to combine stimulus details into a coherent whole. Here we test this theory in the case of sound patterns, using a new definition of the details (local structure) and the coherent whole (global structure). Thirteen individuals with a diagnosis of autism or Asperger's syndrome and 15 control participants were administered auditory tests, where they were required to match local pitch direction changes between two auditory sequences. When the other local features of the sequence pairs were altered (the actual pitches and relative time points of pitch direction change), the control participants obtained lower scores compared with when these details were left unchanged. This can be attributed to interference from the global structure, defined as the combination of the local auditory details. In contrast, the participants with ASD did not obtain lower scores in the presence of such mismatches. This was attributed to the absence of interference from an auditory coherent whole. The results are consistent with the presence of abnormal interactions between local and global auditory perception in ASD.

  13. Retrosplenial Cortical Neurons Encode Navigational Cues, Trajectories and Reward Locations During Goal Directed Navigation.

    Science.gov (United States)

    Vedder, Lindsey C; Miller, Adam M P; Harrison, Marc B; Smith, David M

    2016-07-29

    The retrosplenial cortex (RSC) plays an important role in memory and spatial navigation. It shares functional similarities with the hippocampus, including the presence of place fields and lesion-induced impairments in spatial navigation, and the RSC is an important source of visual-spatial input to the hippocampus. Recently, the RSC has been the target of intense scrutiny among investigators of human memory and navigation. fMRI and lesion data suggest an RSC role in the ability to use landmarks to navigate to goal locations. However, no direct neurophysiological evidence of encoding navigational cues has been reported so the specific RSC contribution to spatial cognition has been uncertain. To examine this, we trained rats on a T-maze task in which the reward location was explicitly cued by a flashing light and we recorded RSC neurons as the rats learned. We found that RSC neurons rapidly encoded the light cue. Additionally, RSC neurons encoded the reward and its location, and they showed distinct firing patterns along the left and right trajectories to the goal. These responses may provide key information for goal-directed navigation, and the loss of these signals may underlie navigational impairments in subjects with RSC damage.

  14. Consumer Involvement and Knowledge Influence on Wine Choice Cue Utilisation

    DEFF Research Database (Denmark)

    Bruwer, Johan; Chrysochou, Polymeros; Lesschaeve, Isabelle

    2017-01-01

    Purpose: The purpose of this paper is to examine the utilisation of product choice cues in a retail environment and the impact of consumer involvement on this utilisation. It further investigates the impact of product knowledge on product choice cue utilisation and its moderating role on the impact of consumer involvement. Design/methodology/approach: The case of wine as an exemplary product category is considered, given the importance and variability of choice cues that have been found to affect product choice. Analysis is conducted on survey data from a sample of wine consumers in Ontario, Canada. Product choice cues are grouped into extrinsic, intrinsic and marketing mix. The importance of how these cues are influenced by different dimensions of consumer involvement is illustrated. Findings: The results show that product knowledge has a positive impact on intrinsic product cue utilisation...

  15. Blood cues induce antipredator behavior in Nile tilapia conspecifics.

    Directory of Open Access Journals (Sweden)

    Rodrigo Egydio Barreto

    In this study, we show that the fish Nile tilapia displays an antipredator response to chemical cues present in the blood of conspecifics. This is the first report of an alarm response induced by blood-borne chemical cues in fish. There is a body of evidence showing that chemical cues from epidermal 'club' cells elicit an alarm reaction in fish. However, these 'club' cell cues are restricted to certain species of fish. Thus, as a parsimonious explanation, we assume that an alarm response to blood cues is a generalized response among animals, because it occurs in mammals, birds and protostomian animals. Moreover, our results suggest that researchers must use caution when studying chemically induced alarm reactions, because it is difficult to separate club cell cues from traces of blood.

  16. Reactivity to alcohol cues: isolating the role of perceived availability.

    Science.gov (United States)

    MacKillop, James; Lisman, Stephen A

    2005-08-01

    Perceived availability of a substance has been proposed to play a role in cue reactivity by both traditional classical conditioning models and S. T. Tiffany's (1990) cognitive processing model (CPM) of substance use. This study investigated the role of availability information on alcohol cue reactivity. Subjects were 134 heavy drinkers in a 2 x 2 between-subjects design, crossing cues (alcohol vs. neutral) and availability information (availability vs. unavailability). The results indicated significant main effects for cue type, with alcohol cues eliciting greater reactivity on multiple measures, and an interaction effect on the Alcohol Urge Questionnaire (M. J. Bohn, D. D. Krahn, & B. B. Staehler, 1995), such that exposure to alcohol cues in conjunction with unavailability information elicited a greater urge. This was largely a result of changes in self-reported craving and was interpreted as consistent with the CPM. Alternative methodologies and limitations are discussed.

  17. Effect of task-related continuous auditory feedback during learning of tracking motion exercises

    Directory of Open Access Journals (Sweden)

    Rosati Giulio

    2012-10-01

    Background: This paper presents the results of a set of experiments in which we used continuous auditory feedback to augment motor training exercises. This feedback modality is mostly underexploited in current robotic rehabilitation systems, which usually implement only very basic auditory interfaces. Our hypothesis is that properly designed continuous auditory feedback could be used to represent temporal and spatial information that could, in turn, improve performance and motor learning. Methods: We implemented three different experiments on healthy subjects, who were asked to track a target on a screen by moving an input device (controller) with their hand. Different visual and auditory feedback modalities were envisaged. The first experiment investigated whether continuous task-related auditory feedback can help improve performance to a greater extent than error-related audio feedback, or visual feedback alone. In the second experiment we used sensory substitution to compare different types of auditory feedback with equivalent visual feedback, in order to find out whether mapping the same information onto a different sensory channel (the visual channel) yielded effects comparable with those gained in the first experiment. The final experiment applied a continuously changing visuomotor transformation between the controller and the screen and mapped kinematic information, computed in either coordinate system (controller or video), to the audio channel, in order to investigate which information was more relevant to the user. Results: Task-related audio feedback significantly improved performance with respect to visual feedback alone, whilst error-related feedback did not. Secondly, performance in audio tasks was significantly better with respect to the equivalent sensory-substituted visual tasks. Finally, with respect to visual feedback alone, video-task-related sound feedback decreased the tracking error during the learning of a novel

  18. Specific cue reactivity on computer game-related cues in excessive gamers.

    Science.gov (United States)

    Thalemann, R; Wölfling, K; Grüsser, S M

    2007-06-01

    It has been posited that excessive computer game playing behavior, referred to as computer game addiction, meets criteria that have been internationally established to define drug addiction. Nevertheless, there have been no psychophysiological investigations of the underlying mechanisms available to support the characterization of excessive computer gaming as behavioral addiction. To investigate whether excessive computer gaming parallels learning processes in development and maintenance (which are assumed to underlie drug addiction), the authors obtained a psychophysiological assessment of the (learned) emotional processing of computer game-relevant and -irrelevant cues. For this purpose, electroencephalographic recordings in excessive and casual computer game players were conducted. Significant between-group differences in event-related potentials evoked by computer game related-cues were found at parietal regions and point to an increased emotional processing of these cues in excessive pathological players compared with casual players. These results are in concordance with the suggestion that addiction is characterized and maintained through sensitization of the mesolimbic dopaminergic system along with incentive salience of specific addiction-associated cues.

  19. The effect of background music in auditory health persuasion

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    2013-01-01

    In auditory health persuasion, threatening information regarding health is communicated by voice only. One relevant context of auditory persuasion is the addition of background music. There are different mechanisms through which background music might influence persuasion, for example through mood (

  20. Auditory imagery and the poor-pitch singer.

    Science.gov (United States)

    Pfordresher, Peter Q; Halpern, Andrea R

    2013-08-01

    The vocal imitation of pitch by singing requires one to plan laryngeal movements on the basis of anticipated target pitch events. This process may rely on auditory imagery, which has been shown to activate motor planning areas. As such, we hypothesized that poor-pitch singing, although not typically associated with deficient pitch perception, may be associated with deficient auditory imagery. Participants vocally imitated simple pitch sequences by singing, discriminated pitch pairs on the basis of pitch height, and completed an auditory imagery self-report questionnaire (the Bucknell Auditory Imagery Scale). The percentage of trials participants sang in tune correlated significantly with self-reports of vividness for auditory imagery, although not with the ability to control auditory imagery. Pitch discrimination was not predicted by auditory imagery scores. The results thus support a link between auditory imagery and vocal imitation.

  1. Intradermal melanocytic nevus of the external auditory canal.

    Science.gov (United States)

    Alves, Renato V; Brandão, Fabiano H; Aquino, José E P; Carvalho, Maria R M S; Giancoli, Suzana M; Younes, Eduado A P

    2005-01-01

    Intradermal nevi are common benign pigmented skin tumors. Their occurrence within the external auditory canal is uncommon. The clinical and pathologic features of an intradermal nevus arising within the external auditory canal are presented, and the literature reviewed.

  2. Subliminal Cues While Teaching: HCI Technique for Enhanced Learning

    OpenAIRE

    Pierre Chalfoun; Claude Frasson

    2011-01-01

    This paper presents results from an empirical study conducted with a subliminal teaching technique aimed at enhancing learner's performance in Intelligent Systems through the use of physiological sensors. This technique uses carefully designed subliminal cues (positive) and miscues (negative) and projects them under the learner's perceptual visual threshold. A positive cue, called answer cue, is a hint aiming to enhance the learner's inductive reasoning abilities and projected in a way to hel...

  3. The simultaneous perception of auditory-tactile stimuli in voluntary movement.

    Science.gov (United States)

    Hao, Qiao; Ogata, Taiki; Ogawa, Ken-Ichiro; Kwon, Jinhwan; Miyake, Yoshihiro

    2015-01-01

    The simultaneous perception of multimodal information in the environment during voluntary movement is very important for effective reactions to the environment. Previous studies have found that voluntary movement affects the simultaneous perception of auditory and tactile stimuli. However, the results of these experiments are not completely consistent, and the differences may be attributable to methodological differences in the previous studies. In this study, we investigated the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli using a temporal order judgment task with voluntary movement, involuntary movement, and no movement. To eliminate the potential effect of stimulus predictability and the effect of spatial information associated with large-scale movement in the previous studies, we randomized the interval between the start of movement and the first stimulus, and used small-scale movement. As a result, the point of subjective simultaneity (PSS) during voluntary movement shifted from the tactile stimulus being first during involuntary movement or no movement to the auditory stimulus being first. The just noticeable difference (JND), an indicator of temporal resolution, did not differ across the three conditions. These results indicate that voluntary movement itself affects the PSS in auditory-tactile simultaneous perception, but it does not influence the JND. In the discussion of these results, we suggest that simultaneous perception may be affected by the efference copy.
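
    The PSS and JND reported in studies like this one are read off a psychometric function fitted to the proportion of "auditory first" responses across stimulus onset asynchronies (SOAs). Below is a minimal sketch under the common cumulative-Gaussian assumption, using a probit-linear least-squares fit; it is an assumed illustration, not the authors' analysis code.

```python
from statistics import NormalDist

def fit_toj(soas, p_auditory_first):
    """Fit a cumulative Gaussian to temporal order judgment data.

    soas: SOAs in ms (auditory onset minus tactile onset).
    p_auditory_first: proportion of 'auditory first' responses per SOA.
    Returns (pss, jnd): the 50% point and the 50%-to-84% distance (sigma).
    """
    nd = NormalDist()
    # Probit transform; clamp proportions away from 0/1 to keep z finite.
    z = [nd.inv_cdf(min(max(p, 0.01), 0.99)) for p in p_auditory_first]
    n = len(soas)
    mx, mz = sum(soas) / n, sum(z) / n
    # Least-squares line z = slope * soa + intercept, so that
    # PSS = -intercept / slope and sigma (the JND) = 1 / slope.
    slope = (sum((x - mx) * (y - mz) for x, y in zip(soas, z))
             / sum((x - mx) ** 2 for x in soas))
    intercept = mz - slope * mx
    return -intercept / slope, 1.0 / slope
```

    A PSS shifted towards "auditory first", as reported for voluntary movement, means the tactile stimulus must lead by that amount for the two stimuli to be judged simultaneous, while an unchanged JND means temporal resolution is unaffected.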

  4. Analogues of simple and complex cells in rhesus monkey auditory cortex.

    Science.gov (United States)

    Tian, Biao; Kuśmierek, Paweł; Rauschecker, Josef P

    2013-05-01

    Receptive fields (RFs) of neurons in primary visual cortex have traditionally been subdivided into two major classes: "simple" and "complex" cells. Simple cells were originally defined by the existence of segregated subregions within their RF that respond to either the on- or offset of a light bar and by spatial summation within each of these regions, whereas complex cells had ON and OFF regions that were coextensive in space [Hubel DH, et al. (1962) J Physiol 160:106-154]. Although other definitions based on the linearity of response modulation have since been proposed [Movshon JA, et al. (1978) J Physiol 283:53-77; Skottun BC, et al. (1991) Vision Res 31(7-8):1079-1086], the segregation of ON and OFF subregions has remained an important criterion for the distinction between simple and complex cells. Here we report that response profiles of neurons in primary auditory cortex of monkeys show a similar distinction: one group of cells has segregated ON and OFF subregions in frequency space, whereas another group shows ON and OFF responses within largely overlapping response profiles. This observation is intriguing for two reasons: (i) spectrotemporal dissociation in the auditory domain provides a basic neural mechanism for the segregation of sounds, a fundamental prerequisite for auditory figure-ground discrimination; and (ii) the existence of similar types of RF organization in visual and auditory cortex would support the existence of a common canonical processing algorithm within cortical columns.

  5. Comparison of Auditory Event-Related Potential P300 in Sighted and Early Blind Individuals

    Directory of Open Access Journals (Sweden)

    Fatemeh Heidari

    2010-06-01

    Background and Aim: Following early visual deprivation, the neural network involved in processing auditory spatial information undergoes a profound reorganization. To investigate this process, event-related potentials provide accurate information about the time course of neural activation as well as perceptual and cognitive processes. In this study, the latency and amplitude of the auditory P300 were compared between sighted and early blind individuals aged 18-25 years. Methods: In this cross-sectional study, the auditory P300 was measured in a conventional oddball paradigm using two tone-burst stimuli (1000 and 2000 Hz) in 40 sighted subjects and 19 early blind subjects with a mean age of 20.94 years. Results: The mean latency of P300 in early blind subjects was significantly shorter than in sighted subjects (p=0.00). There was no significant difference in amplitude between the two groups (p>0.05). Conclusion: The reduced P300 latency in early blind subjects compared with sighted subjects probably indicates that automatic processing and information categorization are faster in early blind subjects because of sensory compensation. It seems that neural plasticity increases the rate of auditory processing and attention in early blind individuals.
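    The oddball paradigm used in such P300 studies presents a frequent standard tone interleaved with a rare deviant tone. A minimal sequence-generation sketch (the deviant probability and the no-consecutive-deviants constraint are common conventions assumed here, not details reported in the study):

```python
import random

def oddball_sequence(n_trials=200, p_deviant=0.2, seed=0):
    """Generate a two-tone oddball sequence as a list of frequencies in Hz.

    Standard = 1000 Hz, deviant = 2000 Hz (as in the study above).
    Assumed constraints: the sequence starts with a standard and no two
    deviants occur in a row, so each deviant stays unpredictable and rare.
    """
    rng = random.Random(seed)
    seq, prev_deviant = [], True  # forces a standard on the first trial
    for _ in range(n_trials):
        deviant = (not prev_deviant) and rng.random() < p_deviant
        seq.append(2000 if deviant else 1000)
        prev_deviant = deviant
    return seq
```

    The P300 is then obtained by averaging EEG epochs time-locked to the rare 2000 Hz tones; its latency is the quantity compared between the sighted and early blind groups.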

  6. Magnetic information calibrates celestial cues during migration.

    Science.gov (United States)

    Sandberg; Bäckman; Moore; Lõhmus

    2000-10-01

    Migratory birds use celestial and geomagnetic directional information to orient on their way between breeding and wintering areas. Cue-conflict experiments involving these two orientation cue systems have shown that directional information can be transferred from one system to the other by calibration. We designed experiments with four species of North American songbirds to: (1) examine whether these species calibrate orientation information from one system to the other; and (2) determine whether there are species-specific differences in calibration. Migratory orientation was recorded with two different techniques, cage tests and free-flight release tests, during autumn migration. Cage tests at dusk in the local geomagnetic field revealed species-specific differences: red-eyed vireo, Vireo olivaceus, and northern waterthrush, Seiurus noveboracensis, selected seasonally appropriate southerly directions whereas indigo bunting, Passerina cyanea, and grey catbird, Dumetella carolinensis, oriented towards the sunset direction. When tested in deflected magnetic fields, vireos and waterthrushes responded by shifting their orientation according to the deflection of the magnetic field, but buntings and catbirds failed to show any response to the treatment. In release tests, all four species showed that they had recalibrated their star compass on the basis of the magnetic field they had just experienced in the cage tests. Since release tests were done in the local geomagnetic field it seems clear that once the migratory direction is determined, most likely during the twilight period, the birds use their recalibrated star compass for orientation at departure. Copyright 2000 The Association for the Study of Animal Behaviour.
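    Directional responses from cage and release tests such as these are conventionally summarized with circular statistics: a mean bearing and a mean vector length indicating how tightly the headings cluster. A minimal sketch, not tied to this study's actual analysis pipeline:

```python
import math

def mean_vector(directions_deg):
    """Mean direction and mean vector length r for circular data
    (e.g. vanishing bearings from free-flight release tests).

    Each bearing is treated as a unit vector; r ranges from 0 (uniform
    scatter) to 1 (all birds heading the same way). The Rayleigh
    statistic for testing non-uniform orientation is Z = n * r**2.
    """
    n = len(directions_deg)
    sx = sum(math.cos(math.radians(d)) for d in directions_deg) / n
    sy = sum(math.sin(math.radians(d)) for d in directions_deg) / n
    r = math.hypot(sx, sy)
    mean_dir = math.degrees(math.atan2(sy, sx)) % 360.0
    return mean_dir, r
```

    Comparing mean bearings between control and deflected-field conditions is how a calibration shift like the one reported above would be quantified.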

  7. Attention to health cues on product packages

    DEFF Research Database (Denmark)

    Orquin, Jacob Lund; Scholderer, Joachim

    2011-01-01

    The objectives of the study were (a) to examine which information and design elements on dairy product packages operate as cues in consumer evaluations of product healthfulness, and (b) to measure the degree to which consumers voluntarily attend to these elements during product choice. Visual attention was measured by means of eye-tracking. Task (free viewing, product healthfulness evaluation, and purchase likelihood evaluation) and product (five different yoghurt products) were varied in a mixed within-between subjects design. The free viewing condition served as a baseline against which ... during purchase likelihood evaluations. The study also revealed that the probability that a consumer will read the nutrition label during the purchase decision process is associated with gender, body mass index and health motivation.

  8. Interpreting prosodic cues in discourse context

    Science.gov (United States)

    Brown, Meredith; Salverda, Anne Pier; Gunlogson, Christine; Tanenhaus, Michael K.

    2014-01-01

    Two visual-world experiments investigated whether and how quickly discourse-based expectations about the prosodic realization of spoken words modulate interpretation of acoustic-prosodic cues. Experiment 1 replicated effects of segmental lengthening on activation of onset-embedded words (e.g. pumpkin) using resynthetic manipulation of duration and fundamental frequency (F0). In Experiment 2, the same materials were preceded by instructions establishing information-structural differences between competing lexical alternatives (i.e. repeated vs. newly-assigned thematic roles) in critical instructions. Eye-movements generated upon hearing the critical target word revealed a significant interaction between information structure and target-word realization: Segmental lengthening and pitch excursion elicited more fixations to the onset-embedded competitor when the target word remained in the same thematic role, but not when its thematic role changed. These results suggest that information structure modulates the interpretation of acoustic-prosodic cues by influencing expectations about fine-grained acoustic-phonetic properties of the unfolding utterance. PMID:25599081

  9. What determines auditory distraction? On the roles of local auditory changes and expectation violations.

    Directory of Open Access Journals (Sweden)

    Jan P Röer

    Both the acoustic variability of a distractor sequence and the degree to which it violates expectations are important determinants of auditory distraction. In four experiments we examined the relative contribution of local auditory changes on the one hand and expectation violations on the other hand in the disruption of serial recall by irrelevant sound. We present evidence for a greater disruption by auditory sequences ending in unexpected steady state distractor repetitions compared to auditory sequences with expected changing state end