Selective attention is the mechanism that allows one to focus on a particular stimulus, for instance a single conversation in a noisy room, while filtering out a range of other stimuli. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were discriminable either by frequency content alone or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response varied between participants and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore, they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.
Widmann, Andreas; Schröger, Erich
The present study was designed to investigate ERP effects of auditory spatial attention in a sustained attention condition (where the to-be-attended location is defined in a blockwise manner) and in a transient attention condition (where the to-be-attended location is defined in a trial-by-trial manner). Lateralization in the azimuth plane was manipulated (a) via monaural presentation of left- and right-ear sounds, (b) via interaural intensity differences, (c) via interaural time differences, (d) via an artificial-head recording, and (e) via free-field stimulation. Ten participants were presented with frequent Nogo- and infrequent Go-stimuli. In one half of the experiment participants were instructed to press a button if they detected a Go-stimulus at a predefined side (sustained attention); in the other half they were required to detect Go-stimuli following an arrow cue at the cued side (transient attention). Results revealed negative differences (Nd) between ERPs elicited by to-be-attended and to-be-ignored sounds in all conditions. These Nd effects were larger for the sustained than for the transient attention condition, indicating that attentional selection according to spatial criteria is improved when subjects can focus on one and the same location for a series of stimuli.
Getzmann, Stephan; Jasny, Julian; Falkenstein, Michael
Verbal communication in a "cocktail-party situation" is a major challenge for the auditory system. In particular, changes in target speaker usually impair speech perception. Here, we investigated whether speech cues indicating a subsequent change in target speaker reduce the costs of switching in younger and older adults. We employed event-related potential (ERP) measures and a speech perception task in which sequences of short words were simultaneously presented by four speakers. Changes in target speaker were either unpredictable or semantically cued by a word within the target stream. Cued changes resulted in a smaller performance decline than uncued changes in both age groups. The ERP analysis revealed shorter latencies in the change-related N400 and late positive complex (LPC) after cued changes, suggesting an acceleration in context updating and attention switching. Thus, both younger and older listeners used semantic cues to prepare for changes in the speaker setting. Copyright © 2016 Elsevier Inc. All rights reserved.
Mohammad-Ali Nikouei Mahani
In our daily life, we continually exploit already learned multisensory associations and form new ones when facing novel situations. Improving our associative learning results in higher cognitive capabilities. We experimentally and computationally studied the learning performance of healthy subjects in a visual-auditory associative learning task across active learning, attention-cueing learning, and passive learning modes. According to our results, the learning mode had no significant effect on learning associations of congruent pairs. In addition, subjects' performance in learning congruent samples was not correlated with their vigilance score. Nevertheless, the vigilance score was significantly correlated with learning performance on non-congruent pairs. Moreover, in the last block of the passive learning mode, subjects made significantly more mistakes, judging non-congruent pairs as associated, and consciously reported lower confidence. These results indicate that attention and activity equally enhanced visual-auditory associative learning for non-congruent pairs, while the false alarm rate in the passive learning mode did not decrease after the second block. We investigated the cause of the higher false alarm rate in the passive learning mode by using a computational model composed of a reinforcement learning module and a memory-decay module. The results suggest that a higher rate of memory decay is the source of the additional mistakes and the lower reported confidence for non-congruent pairs in the passive learning mode.
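The abstract describes a model that couples a reinforcement-learning module with a memory-decay module. A minimal sketch of that idea is shown below; it is our illustration, not the authors' code, and the parameter names (`alpha`, `decay`) and the decision threshold are assumptions.

```python
# Illustrative sketch: an association strength that decays between trials
# and is updated by an RL-style prediction error within trials.

def update_association(w, observed_together, alpha=0.3, decay=0.1):
    """One trial: decay the stored association, then move it toward the
    observed outcome (1.0 = pair seen together, 0.0 = not)."""
    w = (1.0 - decay) * w                 # memory-decay module
    target = 1.0 if observed_together else 0.0
    w = w + alpha * (target - w)          # reinforcement-learning module
    return w

def judge_associated(w, threshold=0.5):
    """Report a pair as 'associated' once the decayed strength exceeds
    an (assumed) decision threshold."""
    return w >= threshold
```

Under this sketch, a larger `decay` erodes weakly learned (e.g., non-congruent) associations between trials, which is the mechanism the authors propose for the elevated false alarm rate in passive learning.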
Arjona, Antonio; Gómez, Carlos M
Preparatory activity based on a priori probabilities generated in previous trials and on subjective expectancies would produce an attentional bias. However, preparation can be correct (valid) or incorrect (invalid) depending on the actual target stimulus. The alternation effect refers to the subjective expectancy that a target will not be repeated in the same position, causing RTs to increase if the target location is repeated. The present experiment, using Posner's central cue paradigm, tries to demonstrate that not only the credibility of the cue but also the expectancy about the next position of the target is updated on a trial-by-trial basis. Sequences of trials were analyzed. The results indicated an increase in RT benefits when sequences of two and three valid trials occurred. The analysis of errors indicated an increase in anticipatory behavior, which grows as the number of valid trials increases. On the other hand, there was also an RT benefit when a trial was preceded by trials in which the position of the target changed with respect to the current trial (alternation effect). Sequences of two alternations or two repetitions were faster than sequences of trials in which a pattern of repetition or alternation was broken. Taken together, these results suggest that in Posner's central cue paradigm, with regard to anticipatory activity, the credibility of the external cue and of the endogenously anticipated patterns of target location is constantly updated. The results suggest that Bayesian rules operate in the generation of anticipatory activity as a function of the previous trial's outcome, but also on biases or prior beliefs like the "gambler's fallacy".
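One standard way to formalize trial-by-trial updating of cue credibility is a Beta-Bernoulli model: each valid or invalid trial adds a count, and the posterior mean tracks the cue's estimated validity. This is an assumed formalization for illustration, not the model fitted in the study.

```python
# Hypothetical Beta-Bernoulli sketch of trial-by-trial cue-credibility
# updating. Beta(a, b) is the posterior over P(cue is valid).

def update_credibility(a, b, cue_was_valid):
    """Add one observation to the Beta(a, b) posterior."""
    return (a + 1, b) if cue_was_valid else (a, b + 1)

def expected_validity(a, b):
    """Posterior mean: the current estimate of the cue's validity."""
    return a / (a + b)
```

A run of valid trials pushes the expected validity up, which would predict the growing RT benefits (and growing anticipatory errors) reported after sequences of valid trials.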
Dillon A Hambrook
The process of resolving mixtures of several sounds into their separate individual streams is known as auditory scene analysis, and it remains a challenging task for computational systems. It is well known that animals use binaural differences in arrival time and intensity at the two ears to find the arrival angle of sounds in the azimuthal plane, and this localization function has sometimes been considered sufficient to enable the un-mixing of complex scenes. However, the ability of such systems to resolve distinct sound sources in both space and frequency remains limited. The neural computations for detecting interaural time difference (ITD) have been well studied and have served as the inspiration for computational auditory scene analysis systems; however, a crucial limitation of ITD models is that they produce ambiguous or "phantom" images in the scene. This has been thought to limit their usefulness at frequencies above about 1 kHz in humans. We present a simple Bayesian model, and an implementation on a robot, that uses ITD information recursively. The model makes use of head rotations to show that ITD information is sufficient to unambiguously resolve sound sources in both space and frequency. Contrary to commonly held assumptions about sound localization, we show that the ITD cue used with high-frequency sound can provide accurate and unambiguous localization and resolution of competing sounds. Our findings suggest that an "active hearing" approach could be useful in robotic systems that operate in natural, noisy settings. We also suggest that neurophysiological models of sound localization in animals could benefit from revision to include the influence of top-down memory and sensorimotor integration across head rotations.
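The core idea, recursive Bayesian use of ITD across head rotations, can be sketched in a few lines. To first order, ITD is proportional to the sine of the source angle relative to the head, so mirror-symmetric angles are confounded from a single orientation; a second head orientation breaks the tie. The grid spacing and likelihood width below are illustrative assumptions, not the paper's parameters.

```python
# Toy sketch (not the paper's implementation) of recursive Bayesian
# azimuth estimation from a front-back-ambiguous ITD cue.

import math

ANGLES = list(range(-180, 180, 5))     # candidate source azimuths (degrees)

def itd_like(source_az, head_az):
    """Normalized ITD proxy: sin of the source angle relative to the head,
    which confounds front-back mirrored directions."""
    return math.sin(math.radians(source_az - head_az))

def update(prior, observed_itd, head_az, sigma=0.05):
    """One Bayesian step: reweight each candidate azimuth by how well it
    predicts the observed ITD under a Gaussian likelihood."""
    post = [p * math.exp(-((itd_like(a, head_az) - observed_itd) ** 2)
                         / (2 * sigma ** 2))
            for p, a in zip(prior, ANGLES)]
    z = sum(post)
    return [p / z for p in post]

def localize(true_az, head_azimuths):
    """Observe the (noise-free) ITD at several head orientations and
    return the maximum-a-posteriori azimuth."""
    belief = [1.0 / len(ANGLES)] * len(ANGLES)
    for h in head_azimuths:
        belief = update(belief, itd_like(true_az, h), h)
    return ANGLES[belief.index(max(belief))]
```

With a single head orientation, a source at 30° is indistinguishable from its mirror image at 150°; after one rotation the posterior collapses onto the true direction, which is the disambiguation mechanism the abstract describes.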
Kaya, Emine Merve; Elhilali, Mounya
Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information, a phenomenon referred to as the 'cocktail party problem'. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by 'bottom-up' sensory-driven factors, as well as 'top-down' task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape, with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listening for announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias
Visual processing is most effective at the location of our attentional focus. It has long been known that various spatial cues can direct visuospatial attention and influence the detection of auditory targets. Cross-modal cueing, however, seems to depend on the type of the visual cue: facilitation … that the perception of multisensory signals is modulated by a single, supramodal system operating in a top-down manner (Experiment 1). In contrast, bottom-up control of attention, as observed in the exogenous cueing task of Experiment 2, mainly exerts its influence through modality-specific subsystems. Experiment 3 …
Borra, Tobias; Versnel, Huib; Kemner, Chantal; van Opstal, A John; van Ee, Raymond
After hearing a tone, the human auditory system becomes more sensitive to similar tones than to other tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone. Intriguingly, this "octave effect" not only occurs for physically presented tones, but even persists for the missing fundamental in complex tones, and for imagined tones. Our results suggest neural interactions combining octave-related frequencies, likely located in nonprimary cortical regions. We speculate that this connectivity scheme evolved from exposure to natural vibrations containing octave-related spectral peaks, e.g., as produced by vocal cords.
Christian F Altmann
Ranging of auditory objects relies on several acoustic cues and is possibly modulated by additional visual information. Sound pressure level can serve as a cue for distance perception because it decreases with increasing distance. In this magnetoencephalography (MEG) experiment, we tested whether psychophysical loudness judgments and N1m MEG responses are modulated by visual distance cues. To this end, we paired noise bursts at different sound pressure levels with synchronous visual cues at different distances. We hypothesized that noise bursts paired with far visual cues would be perceived as louder and result in increased N1m amplitudes compared to a pairing with close visual cues. The rationale behind this was that listeners might compensate for the visually induced object distance when processing loudness. Psychophysically, we observed no significant modulation of loudness judgments by visual cues. However, N1m MEG responses at about 100 ms after stimulus onset were significantly stronger for far versus close visual cues in the left auditory cortex. N1m responses in the right auditory cortex increased with increasing sound pressure level but were not modulated by visual distance cues. Thus, our results suggest an audio-visual interaction in the left auditory cortex that is possibly related to cue integration for auditory distance processing.
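The premise that sound pressure level falls with distance can be made concrete: for an idealized point source in the free field, SPL follows the inverse-distance law, dropping about 6 dB per doubling of distance. The sketch below states that standard relation; the function name is ours.

```python
# Inverse-distance law for a point source in the free field:
# SPL(d) = SPL(d_ref) - 20 * log10(d / d_ref), i.e. ~6 dB per doubling.

import math

def spl_at_distance(spl_ref, d_ref, d):
    """SPL in dB at distance d (metres), given spl_ref measured at d_ref,
    assuming a point source with no reflections or air absorption."""
    return spl_ref - 20.0 * math.log10(d / d_ref)
```

For example, a burst measured at 70 dB SPL at 1 m would be about 64 dB SPL at 2 m under these idealized conditions.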
Jin Joo Lee; Cynthia Breazeal; David DeSteno
Current state-of-the-art approaches to emotion recognition primarily focus on modeling the nonverbal expressions of the sole individual without reference to contextual elements such as the co-presence of a partner. In this paper, we demonstrate that accurate inference of listeners' social-emotional state of attention depends on accounting for the nonverbal behaviors of their storytelling partner, namely their speaker cues. To gain a deeper understanding of the role of speaker cues in attention inference, we conduct investigations into real-world interactions of children (5–6 years old) storytelling with their peers. Through in-depth analysis of human–human interaction data, we first identify nonverbal speaker cues (i.e., backchannel-inviting cues) and listener responses (i.e., backchannel feedback). We then demonstrate how speaker cues can modify the interpretation of attention-related backchannels as well as serve as a means to regulate the responsiveness of listeners. We discuss the design implications of our findings toward our primary goal of developing attention recognition models for storytelling robots, and we argue that social robots can proactively use speaker cues to form more accurate inferences about the attentive state of their human partners.
Cancela, Jorge; Moreno, Eugenio M; Arredondo, Maria T; Bonato, Paolo
Recent works have shown that Parkinson's disease (PD) patients can benefit greatly from rehabilitation exercises based on audio cueing and music therapy. In particular, gait can benefit from repetitive sessions of exercises using auditory cues. Nevertheless, most experiments to date have used a metronome as the auditory stimulus. In this work, Human-Computer Interaction methodologies were used to design new cues that could foster the long-term engagement of PD patients in these repetitive routines. The study was also extended to commercial music and musical pieces by analyzing features and characteristics that could strengthen the engagement of PD patients in rehabilitation tasks.
Rick Van Der Zwan
Johnson and Tassinary (2005) proposed that visually perceived sex is signalled by structural or form cues. They suggested also that biological motion cues signal sex, but do so indirectly. We previously have shown that auditory cues can mediate visual sex perception (van der Zwan et al., 2009). Here we demonstrate that structural cues to body shape are alone sufficient for visual sex discriminations, but that biological motion cues alone are not. Interestingly, biological motions can resolve ambiguous structural cues to sex, but so can olfactory cues, even when those cues are not salient. To accommodate these findings we propose an alternative model of the processes mediating visual sex discriminations: form cues can be used directly if they are available and unambiguous. If there is any ambiguity, other sensory cues are used to resolve it, suggesting there may exist sex detectors that are stimulus-independent.
Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision and of proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through the use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 'training' steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration, corresponding to training) with all feedback removed. Accuracy was comparable for visual cues (mean percentage error = 11.5%; SD ± 7.0%) and auditory cues (mean percentage error = 12.9%; SD ± 11.8%). Visual cues elicited a high degree of accuracy both in training and in follow-up un-cued tasks; despite the novelty of the auditory cues, subjects' mean accuracy approached that for visual cues, and initial results suggest that a limited amount of practice using auditory cues can improve performance.
The current study used remote corneal-reflection eye-tracking to examine the relationship between motor experience and action anticipation in 13-month-old infants. To measure online anticipation of actions, infants watched videos in which the actor's hand provided kinematic information (in its orientation) about the type of object that the actor was going to reach for. The actor's hand orientation either matched the orientation of a rod (congruent cue) or did not match the orientation of the rod (incongruent cue). To examine relations between motor experience and action anticipation, we used a 2 (reach first vs. observe first) × 2 (congruent vs. incongruent kinematic cue) between-subjects design. We show that 13-month-old infants in the observe-first condition spontaneously generate rapid online visual predictions to congruent hand orientation cues and do not visually anticipate when presented with incongruent cues. We further demonstrate that the speed with which these infants generate predictions to congruent motor cues is correlated with their own ability to pre-shape their hands. Finally, we demonstrate that following reaching experience, infants generate rapid predictions to both congruent and incongruent hand shape cues, suggesting that short-term experience changes attention to kinematics.
Optimal utilization of acoustic cues during auditory categorization is a vital skill, particularly when informative cues become occluded or degraded. Consequently, the acoustic environment requires flexible choosing and switching amongst available cues. The present study targets the brain functions underlying such changes in cue utilization. Participants performed a categorization task with immediate feedback on acoustic stimuli from two categories that varied in duration and spectral properties, while we simultaneously recorded Blood Oxygenation Level Dependent (BOLD) responses in fMRI and electroencephalograms (EEGs). In the first half of the experiment, categories could be best discriminated by spectral properties. Halfway through the experiment, spectral degradation rendered the stimulus duration the more informative cue. Behaviorally, degradation decreased the likelihood of utilizing spectral cues. Spectrally degrading the acoustic signal led to increased alpha power compared to nondegraded stimuli. The EEG-informed fMRI analyses revealed that alpha power correlated with BOLD changes in inferior parietal cortex and right posterior superior temporal gyrus (including planum temporale). In both areas, spectral degradation led to a weaker coupling of the BOLD response to behavioral utilization of the spectral cue. These data provide converging evidence from behavioral modeling, electrophysiology, and hemodynamics that (a) increased alpha power mediates the inhibition of uninformative (here, spectral) stimulus features, and that (b) the parietal attention network supports optimal cue utilization in auditory categorization. The results highlight the complex cortical processing of auditory categorization under realistic listening challenges.
Schreiner, Thomas; Lehmann, Mick; Rasch, Björn
It is now widely accepted that re-exposure to memory cues during sleep reactivates memories and can improve later recall. However, the underlying mechanisms are still unknown. As reactivation during wakefulness renders memories sensitive to updating, it remains an intriguing question whether reactivated memories during sleep also become susceptible to incorporating further information after the cue. Here we show that the memory benefits of cueing Dutch vocabulary during sleep are in fact completely blocked when memory cues are directly followed by either correct or conflicting auditory feedback, or a pure tone. In addition, immediate (but not delayed) auditory stimulation abolishes the characteristic increases in oscillatory theta and spindle activity typically associated with successful reactivation during sleep as revealed by high-density electroencephalography. We conclude that plastic processes associated with theta and spindle oscillations occurring during a sensitive period immediately after the cue are necessary for stabilizing reactivated memory traces during sleep.
Emine Merve Kaya
Bottom-up attention is a sensory-driven selection mechanism that directs perception towards a subset of the stimulus considered salient, or attention-grabbing. Most studies of bottom-up auditory attention have adapted frameworks similar to visual attention models, whereby local or global contrast is a central concept in defining salient elements in a scene. In the current study, we take a more fundamental approach to modeling auditory attention, providing the first examination of the space of auditory saliency spanning pitch, intensity, and timbre, and shedding light on complex interactions among these features. Informed by psychoacoustic results, we develop a computational model of auditory saliency implementing a novel attentional framework, guided by processes hypothesized to take place in the auditory pathway. In particular, the model tests the hypothesis that perception tracks the evolution of sound events in a multidimensional feature space and flags any deviation from background statistics as salient. Predictions from the model corroborate the relationship between bottom-up auditory attention and statistical inference, and argue for a potential role of predictive coding as a mechanism for saliency detection in acoustic scenes.
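The hypothesis that salient events are deviations from background statistics can be illustrated with a toy detector: track running statistics of a feature (e.g., intensity) and flag frames that deviate strongly. This is our simplification of the idea, not the paper's model; the z-score threshold is an arbitrary choice.

```python
# Toy saliency sketch: flag a frame as salient when its feature value
# deviates from the running background mean by many standard deviations.

def salient_frames(values, threshold=3.0, eps=1e-6):
    """Return indices of values flagged as salient against the background
    statistics accumulated so far (Welford's online mean/variance)."""
    flags, mean, var, n = [], 0.0, 0.0, 0
    for i, x in enumerate(values):
        if n >= 2:
            std = (var / (n - 1)) ** 0.5
            if abs(x - mean) > threshold * (std + eps):
                flags.append(i)
        # update the background statistics with the new observation
        n += 1
        delta = x - mean
        mean += delta / n
        var += delta * (x - mean)
    return flags
```

In the paper's richer model the same logic runs jointly over pitch, intensity, and timbre, so that interactions among features, not just single-feature contrast, determine saliency.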
… Both high- and low-visualizers benefited equally well from attention cueing. However, an interaction effect in one subtest was observed, indicating that attention cueing can result in learning interference among high-visualizers …
McCloy, Daniel R; Lau, Bonnie K; Larson, Eric; Pratt, Katherine A I; Lee, Adrian K C
Successful speech communication often requires selective attention to a target stream amidst competing sounds, as well as the ability to switch attention among multiple interlocutors. However, auditory attention switching negatively affects both target detection accuracy and reaction time, suggesting that attention switches carry a cognitive cost. Pupillometry is one method of assessing mental effort or cognitive load. Two experiments were conducted to determine whether the effort associated with attention switches is detectable in the pupillary response. In both experiments, pupil dilation, target detection sensitivity, and reaction time were measured; the task required listeners to either maintain or switch attention between two concurrent speech streams. Secondary manipulations explored whether switch-related effort would increase when auditory streaming was harder. In experiment 1, spatially distinct stimuli were degraded by simulating reverberation (compromising across-time streaming cues), and target-masker talker gender match was also varied. In experiment 2, diotic streams separable by talker voice quality and pitch were degraded by noise vocoding, and the time allotted for mid-trial attention switching was varied. All trial manipulations had some effect on target detection sensitivity and/or reaction time; however, only the attention-switching manipulation affected the pupillary response: greater dilation was observed in trials requiring switching attention between talkers.
Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten
In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli.
Hari M Bharadwaj
Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimated how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including the hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory results.
Manolas, Christos; Pauletto, Sandra
Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. Sound holds considerable potential both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience's judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
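The two cues studied, overall volume attenuation and high-frequency loss, can be sketched as a simple signal-processing chain: scale the signal by an inverse-distance gain, then low-pass it with a cutoff that falls with distance. The distance-to-cutoff mapping below is purely illustrative, not a mapping from the article.

```python
# Rough sketch of applying two distance cues to a mono signal:
# inverse-distance volume attenuation plus a one-pole low-pass filter
# standing in for high-frequency loss. Mappings are assumptions.

import math

def apply_distance_cues(samples, distance_m, sample_rate=48000):
    """Attenuate by the inverse-distance law and low-pass with a cutoff
    that falls as the simulated source moves farther away."""
    gain = 1.0 / max(distance_m, 1.0)            # volume attenuation
    cutoff = 16000.0 / max(distance_m, 1.0)      # assumed HF roll-off
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (gain * x - y)              # one-pole low-pass step
        out.append(y)
    return out
```

Rendering the same sound at increasing simulated distances yields progressively quieter and duller versions, which is the perceptual manipulation the experiments probe.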
Attentional blink (AB) describes a phenomenon whereby correct identification of a first target impairs the processing of a second target (the probe) presented nearby in time. Evidence suggests that explicit attention orienting in the time domain can attenuate the AB. Here, we used scalp-recorded event-related potentials to examine whether auditory AB is also sensitive to implicit temporal attention orienting. Expectations were set up implicitly by varying the probability (80% or 20%) that the probe would occur at the +2 or +8 position following target presentation. Participants showed a significant AB, which was reduced with increased probe probability at the +2 position. The probe probability effect was paralleled by an increase in P3b amplitude elicited by the probe. The results suggest that implicit temporal attention orienting can facilitate short-term consolidation of the probe and attenuate auditory AB.
Gibson, J M; Watkins, M J
An experiment is reported in which subjects first heard a list of words and then tried to identify these same words from degraded utterances. Paralleling previous findings in the visual modality, the probability of identifying a given utterance was reduced when the utterance was immediately preceded by other, more degraded, utterances of the same word. A second experiment replicated this "cue-depreciation effect" and in addition found the effect to be weakened, if not eliminated, when the target word was not included in the initial list or when the test was delayed by two days.
Karla Maria Ibraim da Freiria Elias
OBJECTIVE: To verify auditory selective attention in children with stroke. METHODS: Dichotic tests of binaural separation (non-verbal and consonant-vowel) and binaural integration (digits and the Staggered Spondaic Words Test, SSW) were applied to 13 children (7 boys, aged 7 to 16 years) with unilateral stroke confirmed by neurological examination and neuroimaging. RESULTS: Attention performance showed significant differences from the control group in both kinds of tests. In the non-verbal test, identification at the ear opposite the lesion in the free recall stage was diminished and, in the following stages, a difficulty in directing attention was detected. In the consonant-vowel test, a modification in perceptual asymmetry and difficulty in focusing in the attended stages was found. In the digits and SSW tests, ipsilateral, contralateral and bilateral deficits were detected, depending on the characteristics of the lesions and the demands of the task. CONCLUSION: Stroke caused auditory attention deficits when dealing with simultaneous sources of auditory information.
Cooney, Sarah Maeve; Brady, Nuala; Ryan, Katie
Across three experiments, we examined the efficacy of three cues from the human body (body orientation, head turning, and eye-gaze direction) to shift an observer's attention in space. Using a modified Posner cueing paradigm, we replicate previous findings of gender differences in the gaze-cueing effect, whereby female but not male participants responded significantly faster to validly cued than to invalidly cued targets. In contrast to previous studies, we report a robust cueing effect for both male and female participants when head-turning direction was used as the central cue, whereas oriented bodies proved ineffectual as attention cues for both males and females. These results are discussed with reference to the time course of central cueing effects, gender differences in spatial attention, and current models of how cues from the human body are combined to judge another person's direction of attention.
Pitchers, Kyle K; Wood, Taylor R; Skrzynski, Cari J; Robinson, Terry E; Sarter, Martin
In humans, reward cues, including drug cues in individuals with addiction, are especially effective in biasing attention towards them, so much so that they can disrupt ongoing task performance. It is not known, however, whether this happens in rats. To address this question, we developed a behavioral paradigm to assess the capacity of an auditory drug (cocaine) cue to evoke cocaine-seeking behavior, thus distracting thirsty rats from performing a well-learned sustained attention task (SAT) to obtain a water reward. First, it was determined that an auditory cocaine cue (tone-CS) reinstated drug-seeking equally in sign-trackers (STs) and goal-trackers (GTs), which otherwise vary in their propensity to attribute incentive salience to a localizable drug cue. Next, we tested the ability of an auditory cocaine cue to disrupt SAT performance in STs and GTs. Rats were trained to self-administer cocaine intravenously using an intermittent-access self-administration procedure known to produce a progressive increase in motivation for cocaine, escalation of intake, and strong discriminative stimulus control over drug-seeking behavior. When presented alone, the auditory discriminative stimulus elicited cocaine-seeking behavior while rats were performing the SAT, but it was not sufficiently disruptive to impair SAT performance. In contrast, if cocaine was available in the presence of the cue, or when administered non-contingently, SAT performance was severely disrupted. We suggest that performance on a relatively automatic, stimulus-driven task, such as the basic version of the SAT used here, may be difficult to disrupt with a drug cue alone; a task that requires more top-down cognitive control may be needed.
Menning, Hans; Ackermann, Hermann; Hertrich, Ingo; Mathiak, Klaus
Previous studies have shown that cross-modal processing affects perception at a variety of neuronal levels. In this study, event-related brain responses were recorded via whole-head magnetoencephalography (MEG). Spatial auditory attention was directed via tactile pre-cues (primes) to one of four locations in the peripersonal space (left and right hand versus face). Auditory stimuli were white noise bursts, convoluted with head-related transfer functions, which ensured spatial perception of the four locations. Tactile primes (200-300 ms prior to acoustic onset) were applied randomly to one of these locations. Attentional load was controlled by three different visual distraction tasks. The auditory P50m (about 50 ms after stimulus onset) showed a significant "proximity" effect (larger responses to face stimulation) as well as a "contralaterality" effect between side of stimulation and hemisphere. The tactile primes essentially reduced both the P50m and N100m components. However, facial tactile pre-stimulation yielded an enhanced ipsilateral N100m. These results show that earlier responses are mainly governed by exogenous stimulus properties, whereas cross-sensory interaction is spatially selective at a later (endogenous) processing stage.
Lodhia, Veema; Hautus, Michael J; Johnson, Blake W; Brock, Jon
The auditory processing atypicalities experienced by many individuals with autism spectrum disorder might be understood in terms of difficulties parsing the sound energy arriving at the ears into discrete auditory 'objects'. Here, we asked whether autistic adults are able to make use of two important spatial cues to auditory object formation: the relative timing and amplitude of sound energy at the left and right ears. Using electroencephalography, we measured the brain responses of 15 autistic adults and 15 age- and verbal-IQ-matched control participants as they listened to dichotic pitch stimuli, white noise stimuli in which interaural timing or amplitude differences applied to a narrow frequency band of noise typically lead to the perception of a pitch sound that is spatially segregated from the noise. Responses were contrasted with those to stimuli in which timing and amplitude cues were removed. Consistent with our previous studies, autistic adults failed to show a significant object-related negativity (ORN) for timing-based pitch, although their ORN was not significantly smaller than that of the control group. Autistic participants did show an ORN to amplitude cues, indicating that they do not experience a general impairment in auditory object formation. However, their P400 response, thought to index the later, attention-dependent aspects of auditory object formation, was missing. These findings provide further evidence of atypical auditory object processing in autism, with potential implications for understanding the perceptual and communication difficulties associated with the condition.
Mulckhuyse, Manon; Crombez, Geert
In the emotional spatial cueing task, a peripheral cue (either emotional or non-emotional) is presented before target onset. A stronger cue validity effect with an emotional relative to a non-emotional cue (i.e., more efficient responding to validly cued targets relative to invalidly cued targets) is taken as an indication of emotional modulation of attentional processes. However, results from previous emotional spatial cueing studies are not consistent. Some studies find an effect at the validly cued location (shorter reaction times compared to a non-emotional cue), whereas other studies find an effect at the invalidly cued location (longer reaction times compared to a non-emotional cue). In the current paper, we explore which parameters affect emotional modulation of the cue validity effect in the spatial cueing task. Results from five experiments in healthy volunteers led to the conclusion that a threatening spatial cue affected motor rather than attentional processes. A possible mechanism is that a strongly aversive cue stimulus decreases reaction times through stronger action preparation. Consequently, when the response is spatially congruent with the peripheral cue, a stronger cue validity effect can be obtained due to stronger response priming. The implications for future research are discussed.
Obrzut, John E; Boliek, Carol A; Asbjornsen, Arve
This study addresses the effects of verbal versus nonverbal (tone) shifts of attention on dichotic listening (DL) performance with children. Theoretically, a tonal cue may be more effective in increasing attention than a verbal cue following instruction. The inconsistency of studies reporting substantial effects of attention on ear asymmetries in children with or without learning disabilities (LDs) may be due to a developmental difference in their ability to use verbal or tone cues to select stimuli for recall. Participants included 30 right-handed children (15 control, 15 with LDs) with a mean age of 10.8 years. Each participant received 60 trials of a monaural tone cue task, 60 trials of a binaural verbal cue task, and 60 trials of a monaural verbal cue task, to direct attention to either the left or right ear before the presentation of consonant-vowel syllable pairs in a DL task. A factorial design analysis of variance yielded a significant right-ear advantage for both groups. More important, the Group x Task interaction was found to be significant, indicating that group performance on ear scores was dependent on type of cueing condition. Whereas all 3 cue conditions were effective in orienting attention for control participants, larger shifts were apparent under both binaural and monaural verbal instructional cue conditions. In contrast, participants with LD showed larger shifts of attention under the tonal cue condition. These results show that control participants have greater ability to focus attention with the use of a verbal cue, whereas participants with LD show greater ability to orient attention with the use of a tone cue in reducing error rates in DL performance.
Hansen, Kirstin Anderson; Maxwell, Alyssa; Siebert, Ursula; Larsen, Ole Næsbye; Wahlberg, Magnus
In-air hearing in birds has been thoroughly investigated. Sound provides birds with auditory information for species and individual recognition from their complex vocalizations, as well as cues while foraging and for avoiding predators. Some 10% of existing bird species obtain their food under the water surface. Whether some of these birds make use of acoustic cues while underwater is unknown. An interesting species in this respect is the great cormorant (Phalacrocorax carbo), one of the most effective marine predators, which relies on the aquatic environment for food year round. Here, its underwater hearing abilities were investigated using psychophysics, where the bird learned to detect the presence or absence of a tone while submerged. The greatest sensitivity was found at 2 kHz, with an underwater hearing threshold of 71 dB re 1 μPa rms. The great cormorant is better at hearing underwater than expected, and its hearing thresholds are comparable to those of seals and toothed whales in the frequency band 1-4 kHz. This opens up the possibility of cormorants and other aquatic birds having special adaptations for underwater hearing and making use of underwater acoustic cues from, e.g., conspecifics, their surroundings, as well as prey and predators.
Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens
…recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image. Here, we systematically investigated whether incongruent auditory and visual room-related cues affected sound externalization in terms of perceived distance, azimuthal localization, and compactness.
Tumber, Anupreet K; Scheerer, Nichole E; Jones, Jeffery A
Auditory feedback is required to maintain fluent speech. At present, it is unclear how attention modulates auditory feedback processing during ongoing speech. In this event-related potential (ERP) study, participants vocalized /a/ while they heard their vocal pitch suddenly shifted downward by half a semitone in both single- and dual-task conditions. During the single-task condition, participants passively viewed a visual stream for cues to start and stop vocalizing. In the dual-task condition, participants vocalized while they identified target stimuli in a visual stream of letters. The presentation rate of the visual stimuli was manipulated in the dual-task condition to produce low, intermediate, and high attentional loads. Visual target identification accuracy was lowest in the high attentional load condition, indicating that attentional load was successfully manipulated. Results further showed that participants who were exposed to the single-task condition prior to the dual-task condition produced larger vocal compensations during the single-task condition. Thus, when participants' attention was divided, less attention was available for the monitoring of their auditory feedback, resulting in smaller compensatory vocal responses. However, P1-N1-P2 ERP responses were not affected by divided attention, suggesting that attentional load did not affect the auditory processing of pitch-altered feedback but instead interfered with the integration of auditory and motor information, or with motor control itself.
In this article we present a review of current literature on adaptations to altered head-related auditory localization cues. Localization cues can be altered through ear blocks, ear molds, electronic hearing devices and altered head-related transfer functions. Three main methods have been used to induce auditory space adaptation: sound exposure, training with feedback, and explicit training. Adaptations induced by training, rather than exposure, are consistently faster. Studies on localization with altered head-related cues have reported poor initial localization, but improved accuracy and discriminability with training. Also, studies that displaced the auditory space by altering cue values reported adaptations in perceived source position to compensate for such displacements. Auditory space adaptations can last for a few months even without further contact with the learned cues. In most studies, localization with the subject’s own unaltered cues remained intact despite the adaptation to a second set of cues. Generalization is observed from trained to untrained sound source positions, but there is mixed evidence regarding cross-frequency generalization. Multiple brain areas might be involved in auditory space adaptation processes, but the auditory cortex may play a critical role. Auditory space plasticity may involve context-dependent cue reweighting.
Orquin, Jacob Lund
Purpose As part of a larger project aiming at improving healthy food choice among consumers, four studies were carried out to identify packaging cues that communicate product healthfulness. Methods Study 1 was an eye tracking experiment using a 5x3 group mixed design where the stimuli (five diffe...
Orquin, Jacob Lund; Scholderer, Joachim
The objectives of the study were (a) to examine which information and design elements on dairy product packages operate as cues in consumer evaluations of product healthfulness, and (b) to measure the degree to which consumers voluntarily attend to these elements during product choice. Visual...
Lawo, Vera; Koch, Iring
Using a novel task-switching variant of dichotic selective listening, we examined age-related differences in the ability to intentionally switch auditory attention between 2 speakers defined by their sex. In our task, young (M age = 23.2 years) and older adults (M age = 66.6 years) performed a numerical size categorization on spoken number words. The task-relevant speaker was indicated by a cue prior to auditory stimulus onset. The cuing interval was either short or long and varied randomly trial by trial. We found clear performance costs with instructed attention switches. These auditory attention switch costs decreased with prolonged cue-stimulus interval. Older adults were generally much slower (but not more error prone) than young adults, but switching-related effects did not differ across age groups. These data suggest that the ability to intentionally switch auditory attention in a selective listening task is not compromised in healthy aging. We discuss the role of modality-specific factors in age-related differences.
Kondo, Hirohito M; Toshima, Iwaki; Pressnitzer, Daniel; Kashino, Makio
The perceptual organization of auditory scenes is a hard but important problem to solve for human listeners. It is thus likely that cues from several modalities are pooled for auditory scene analysis, including sensory-motor cues related to the active exploration of the scene. We previously reported a strong effect of head motion on auditory streaming. Streaming refers to an experimental paradigm where listeners hear sequences of pure tones, and rate their perception of one or more subjective sources called streams. To disentangle the effects of head motion (changes in acoustic cues at the ear, subjective location cues, and motor cues), we used a robotic telepresence system, Telehead. We found that head motion induced perceptual reorganization even when the acoustic scene had not changed. Here we reanalyzed the same data to probe the time course of sensory-motor integration. We show that motor cues had a different time course compared to acoustic or subjective location cues: motor cues impacted perceptual organization earlier and for a shorter time than other cues, with successive positive and negative contributions to streaming. An additional experiment controlled for the effects of volitional anticipatory components, and found that arm or leg movements did not have any impact on scene analysis. These data provide a first investigation of the time course of the complex integration of sensory-motor cues in an auditory scene analysis task, and they suggest a loose temporal coupling between the different mechanisms involved.
McLaughlin, Susan A; Higgins, Nathan C; Stecker, G Christopher
Interaural level and time differences (ILD and ITD), the primary binaural cues for sound localization in azimuth, are known to modulate the tuned responses of neurons in mammalian auditory cortex (AC). The majority of these neurons respond best to cue values that favor the contralateral ear, such that contralateral bias is evident in the overall population response and thereby expected in population-level functional imaging data. Human neuroimaging studies, however, have not consistently found contralaterally biased binaural response patterns. Here, we used functional magnetic resonance imaging (fMRI) to parametrically measure ILD and ITD tuning in human AC. For ILD, contralateral tuning was observed, using both univariate and multivoxel analyses, in posterior superior temporal gyrus (pSTG) in both hemispheres. Response-ILD functions were U-shaped, revealing responsiveness to both contralateral and—to a lesser degree—ipsilateral ILD values, consistent with rate coding by unequal populations of contralaterally and ipsilaterally tuned neurons. In contrast, for ITD, univariate analyses showed modest contralateral tuning only in left pSTG, characterized by a monotonic response-ITD function. A multivoxel classifier, however, revealed ITD coding in both hemispheres. Although sensitivity to ILD and ITD was distributed in similar AC regions, the differently shaped response functions and different response patterns across hemispheres suggest that basic ILD and ITD processes are not fully integrated in human AC. The results support opponent-channel theories of ILD but not necessarily ITD coding, the latter of which may involve multiple types of representation that differ across hemispheres.
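For readers unfamiliar with the binaural cues this study manipulates, the ITD side can be illustrated with the classic spherical-head (Woodworth) approximation, which predicts the maximum ITD of roughly 650 μs at 90° azimuth; the head radius below is a textbook value, not a parameter taken from this study.

```python
import math

HEAD_RADIUS_M = 0.0875   # typical adult head radius (textbook assumption)
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 °C

def woodworth_itd(azimuth_deg):
    """Interaural time difference (seconds) for a source at a given
    azimuth, using Woodworth's spherical-head model:
    ITD = (a / c) * (theta + sin(theta)), valid for 0-90 degrees."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS_M / SPEED_OF_SOUND * (theta + math.sin(theta))
```

At 0° the model gives no time difference; at 90° it gives about 0.66 ms, consistent with the ITD magnitudes typically used in human imaging studies.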
Shi, Jinfu; Weng, Xuchu; He, Sheng; Jiang, Yi
The human visual system is extremely sensitive to biological signals around us. In the current study, we demonstrate that biological motion walking direction can induce robust reflexive attentional orienting. Following a brief presentation of a central point-light walker walking toward either the left or the right, observers' performance…
Kunert, Richard; Jongman, Suzanne R
Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear, however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhythmic salience. In support, 2 experiments reported here show reduced response times to visual letter strings shown at auditory rhythm peaks, compared with rhythm troughs. However, we argue that an account invoking the entrainment of general attention should further predict rhythm entrainment to also influence memory for visual stimuli. In 2 pseudoword memory experiments we find evidence against this prediction. Whether a pseudoword is shown during an auditory rhythm peak or not is irrelevant for its later recognition memory in silence. Other attention manipulations, dividing attention and focusing attention, did result in a memory effect. This raises doubts about the suggested attentional nature of rhythm entrainment. We interpret our findings as support for auditory rhythm perception being based on auditory-motor entrainment, not general attention entrainment.
Liemburg, Edith J.; Vercammen, Ans; Ter Horst, Gert J.; Curcic-Blake, Branislava; Knegtering, Henderikus; Aleman, Andre
Brain circuits involved in language processing have been suggested to be compromised in patients with schizophrenia. This does not only include regions subserving language production and perception, but also auditory processing and attention. We investigated resting state network connectivity of
van Reekum, C.M.; van den Berg, H.; Frijda, N.H.
A cross-modal paradigm was chosen to test the hypothesis that affective olfactory and auditory cues paired with neutral visual stimuli bearing no resemblance or logical connection to the affective cues can evoke preference shifts in those stimuli. Neutral visual stimuli of abstract paintings were
Ahveninen, Jyrki; Huang, Samantha; Belliveau, John W.; Chang, Wei-Tang; Hämäläinen, Matti
In everyday listening situations, we need to constantly switch between alternative sound sources and engage attention according to cues that match our goals and expectations. The exact neuronal bases of these processes are poorly understood. We investigated oscillatory brain networks controlling auditory attention using cortically constrained fMRI-weighted magnetoencephalography/electroencephalography (MEG/EEG) source estimates. During consecutive trials, subjects were instructed to shift attention based on a cue, presented in the ear where a target was likely to follow. To promote audiospatial attention effects, the targets were embedded in streams of dichotically presented standard tones. Occasionally, an unexpected novel sound occurred opposite to the cued ear, to trigger involuntary orienting. According to our cortical power correlation analyses, increased frontoparietal/temporal 30–100 Hz gamma activity at 200–1400 ms after cued orienting predicted fast and accurate discrimination of subsequent targets. This sustained correlation effect, possibly reflecting voluntary engagement of attention after the initial cue-driven orienting, spread from the temporoparietal junction, anterior insula, and inferior frontal (IFC) cortices to the right frontal eye fields. Engagement of attention to one ear resulted in a significantly stronger increase of 7.5–15 Hz alpha in the ipsilateral than contralateral parieto-occipital cortices 200–600 ms after the cue onset, possibly reflecting crossmodal modulation of the dorsal visual pathway during audiospatial attention. Comparisons of cortical power patterns also revealed significant increases of sustained right medial frontal cortex theta power, right dorsolateral prefrontal cortex and anterior insula/IFC beta power, and medial parietal cortex and posterior cingulate cortex gamma activity after cued vs. novelty-triggered orienting (600–1400 ms). Our results reveal sustained oscillatory patterns associated with voluntary
Picton, T. W.; Hillyard, S. A.
Attention directed toward auditory stimuli, in order to detect an occasional fainter 'signal' stimulus, caused a substantial increase in the N1 (83 msec) and P2 (161 msec) components of the auditory evoked potential without any change in preceding components. This evidence shows that human auditory attention is not mediated by a peripheral gating mechanism. The evoked response to the detected signal stimulus also contained a large P3 (450 msec) wave that was topographically distinct from the preceding components. This late positive wave could also be recorded in response to a detected omitted stimulus in a regular train and therefore seemed to index a stimulus-independent perceptual decision process.
Mulckhuyse, Manon; Talsma, D.; Theeuwes, Jan
The present study shows that an abrupt onset cue that is not consciously perceived can cause attentional facilitation followed by inhibition at the cued location. The observation of this classic biphasic effect of facilitation followed by inhibition of return (IOR) suggests that the subliminal cue
The present study examines how various types of attention cueing and cognitive preference affect learners' comprehension of the cardiovascular system and their cognitive load. EFL learners were randomly assigned to one of four conditions: non-signal, static-blood-signal, static-blood-static-arrow-signal, and animation-signal. The results indicated that…
Dotov, D G; Bayard, S; Cochen de Cock, V; Geny, C; Driss, V; Garrigue, G; Bardy, B; Dalla Bella, S
Rhythmic auditory cueing improves certain gait symptoms of Parkinson's disease (PD). Cues are typically stimuli or beats with a fixed inter-beat interval. We show that isochronous cueing has an unwanted side effect in that it exacerbates one of the motor symptoms characteristic of advanced PD. Whereas the parameters of the stride cycle of healthy walkers and early-stage patients possess a persistent correlation in time, or long-range correlation (LRC), isochronous cueing renders stride-to-stride variability random. Random stride-cycle variability is also associated with reduced gait stability and lack of flexibility. To investigate how to prevent patients from acquiring a random stride-cycle pattern, we tested rhythmic cueing that mimics the properties of variability found in healthy gait (biological variability). PD patients (n=19) and age-matched healthy participants (n=19) walked with three rhythmic cueing stimuli: isochronous, with random variability, and with biological variability (LRC). Synchronization was not instructed. The persistent correlation in gait was preserved only with stimuli with biological variability, equally for patients and controls. Notably, the individual's tendency to synchronize steps with beats determined the extent of the negative effects of isochronous and random cues on gait dynamics during cueing. The beneficial effects of biological variability provide useful guidelines for improving existing cueing treatments.
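One standard way to produce cue sequences with long-range correlation, like the "biological variability" stimuli described above, is spectral synthesis of 1/f noise. The sketch below is our own illustration of that technique; the mean interval, SD, and spectral exponent are arbitrary illustrative values, not the stimulus parameters of the study.

```python
import numpy as np

def lrc_intervals(n, mean_s=0.5, sd_s=0.02, beta=1.0, seed=0):
    """Generate n inter-beat intervals with long-range correlation by
    spectral synthesis of 1/f^beta noise (beta=1 gives the persistent,
    fractal-like correlations reported for healthy gait)."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)      # power spectrum ~ 1/f^beta
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    spectrum = amp * np.exp(1j * phases)      # random phases, 1/f amplitudes
    noise = np.fft.irfft(spectrum, n)
    noise = (noise - noise.mean()) / noise.std()  # standardize
    return mean_s + sd_s * noise              # scale to desired mean/SD
```

Setting beta=0 instead would yield the random (uncorrelated) condition, and a constant interval corresponds to the isochronous condition.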
Georg F Meyer
We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments, which have emerged from research in multisensory perception, provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study. Experiment 1: detection and categorisation of auditory and kinematic motion cues; Experiment 2: performance evaluation in a target-tracking task; Experiment 3: transferable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task-dependent. The same auditory cues that can be discriminated and are optimally integrated in Experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (Experiment 2). In Experiment 3, however, we demonstrate that the auditory cue leads to significant, transferable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues.
Kidd, Celeste; Piantadosi, Steven T.; Aslin, Richard N.
Infants must learn about many cognitive domains (e.g., language, music) from auditory statistics, yet capacity limits on their cognitive resources restrict the quantity that they can encode. Previous research has established that infants can attend to only a subset of available acoustic input. Yet few previous studies have directly examined infant…
We conducted a human fear conditioning experiment in which three different color cues were followed by an aversive electric shock on 0, 50, and 100% of the trials, and thus induced low (L), partial (P), and high (H) shock expectancy, respectively. The cues differed with respect to the strength of their shock association (L < P < H) and the uncertainty of their prediction (L < P > H). During conditioning we measured pupil dilation and ocular fixations to index differences in the attentional processing of the cues. After conditioning, the shock-associated colors were introduced as irrelevant distracters during visual search for a shape target while shocks were no longer administered, and we analyzed the cues’ potential to capture and hold overt attention automatically. Our findings suggest that fear conditioning creates an automatic attention bias for the conditioned cues that depends on their correlation with the aversive outcome. This bias was exclusively linked to the strength of the cues’ shock association for the early attentional processing of cues in the visual periphery, but additionally was influenced by the uncertainty of the shock prediction after participants fixated on the cues. These findings are in accord with attentional learning theories that formalize how associative learning shapes automatic attention.
Lochbuehler, Kirsten; Otten, Roy; Voogd, Hubert; Engels, Rutger C M E
Research has shown that children with smoking parents are more likely to initiate smoking than children with non-smoking parents. So far, these effects have been explained through genetic factors, modelling and norm-setting processes. However, it is also possible that parental smoking affects smoking initiation through automatic cognitive processes. Therefore, we examined whether children with a smoking parent focus longer, faster and more often on smoking cues. The children were given two movie clips to watch, during which their attention to smoking cues was assessed with eye-tracking technology. Results showed that children with a smoking parent focused more often and longer on smoking cues compared with children with non-smoking parents. No correlations between attentional bias and explicit smoking cognitions were found. In conclusion, results suggest that parental smoking affects children's attention to smoking cues. These findings may indicate that parental smoking instigates automatic cognitive processes in children who have not experimented with smoking, and possibly even before explicit smoking cognitions become more favourable.
Roberts, Katherine L.; Andersen, Tobias; Kyllingsbæk, Søren
Mathematical and computational models have provided useful insights into normal and impaired visual attention, but less progress has been made in modelling auditory attention. We are developing a Theory of Auditory Attention (TAA), based on an influential visual model, the Theory of Visual Attention (TVA). … the auditory data, producing good estimates of the rate at which information is encoded (C), the minimum exposure duration required for processing to begin (t0), and the relative attentional weight to targets versus distractors (α). Future work will address the issue of target-distractor confusion, and extend …
…interaural level and interaural envelope timing (weak cues for left-right direction). This work was published in Acta Acustica united with Acustica: Durlach NI, Mason CR, Gallun FJ, Shinn-Cunningham BG, Colburn HS, and Kidd G Jr. Informational masking for … Acta Acust united Acustica 2005; 91:967-9.
Soveri, Anna; Tallus, Jussi; Laine, Matti; Nyberg, Lars; Bäckman, Lars; Hugdahl, Kenneth; Tuomainen, Jyrki; Westerhausen, René; Hämäläinen, Heikki
We studied the effects of training on auditory attention in healthy adults with a speech perception task involving dichotically presented syllables. Training involved bottom-up manipulation (facilitating responses from the harder-to-report left ear through a decrease of right-ear stimulus intensity), top-down manipulation (focusing attention on the left-ear stimuli through instruction), or their combination. The results showed significant training-related effects for top-down training. These effects were evident as higher overall accuracy rates in the forced-left dichotic listening (DL) condition that sets demands on attentional control, as well as a response shift toward left-sided reports in the standard DL task. Moreover, a transfer effect was observed in an untrained auditory-spatial attention task involving bilateral stimulation where top-down training led to a relatively stronger focus on left-sided stimuli. Our results indicate that training of attentional control can modulate the allocation of attention in the auditory space in adults. Malleability of auditory attention in healthy adults raises the issue of potential training gains in individuals with attentional deficits.
Van der Burg, Erik; Nieuwenstein, Mark R.; Theeuwes, Jan; Olivers, Christian N. L.
In the present study we investigated whether a task-irrelevant distractor can induce a visual attentional blink pattern. Participants were asked to detect only a visual target letter (A, B, or C) and to ignore the preceding auditory, visual, or audiovisual distractor. An attentional blink was
Vachon, Francois; Hughes, Robert W.; Jones, Dylan M.
The role of memory in behavioral distraction by auditory attentional capture was investigated: We examined whether capture is a product of the novelty of the capturing event (i.e., the absence of a recent memory for the event) or its violation of learned expectancies on the basis of a memory for an event structure. Attentional capture--indicated…
Simple and unambiguous visual cues (e.g. an arrow) can be used to trigger covert shifts of visual attention away from the center of gaze. The processing of visual stimuli is enhanced at the attended location. Covert shifts of attention modulate the power of cerebral oscillations in the alpha band over parietal and occipital regions. These modulations are sufficiently robust to be decoded on a single-trial basis from electroencephalography (EEG) signals. It is often assumed that covert attention shifts are under voluntary control and also occur in more natural and complex environments, but there is no direct evidence to support this assumption. We address this important issue by using random-dot stimuli to cue one of two opposite locations, where a visual target is presented. We contrast two conditions in which the random-dot motion is either predictive of the target location or contains ambiguous information. Behavioral results show attention shifts in anticipation of the visual target in both conditions. In addition, these attention shifts involve similar neural sources, and the EEG can be decoded on a single-trial basis. These results shed new light on the behavioral and neural correlates of visuospatial attention, with implications for Brain-Computer Interfaces (BCI) based on covert attention shifts.
Gómez-González, J; Martín-Casas, P; Cano-de-la-Cuerda, R
To review the available scientific evidence about the effectiveness of auditory cues during gait initiation and turning in patients with Parkinson's disease. We conducted a literature search in the following databases: Brain, PubMed, Medline, CINAHL, Scopus, Science Direct, Web of Science, Cochrane Database of Systematic Reviews, Cochrane Library Plus, CENTRAL, Trip Database, PEDro, DARE, OTseeker, and Google Scholar. We included all studies published between 2007 and 2016 and evaluating the influence of auditory cues on independent gait initiation and turning in patients with Parkinson's disease. The methodological quality of the studies was assessed with the Jadad scale. We included 13 studies, all of which had a low methodological quality (Jadad scale score≤2). In these studies, high-intensity, high-frequency auditory cues had a positive impact on gait initiation and turning. More specifically, they 1) improved spatiotemporal and kinematic parameters; 2) decreased freezing, turning duration, and falls; and 3) increased gait initiation speed, muscle activation, and gait speed and cadence in patients with Parkinson's disease. We need studies of better methodological quality to establish the Parkinson's disease stage in which auditory cues are most beneficial, as well as to determine the most effective type and frequency of the auditory cue during gait initiation and turning in patients with Parkinson's disease. Copyright © 2016 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.
Hink, R. F.; Van Voorhis, S. T.; Hillyard, S. A.; Smith, T. S.
The sensitivity of the scalp-recorded, auditory evoked potential to selective attention was examined while subjects responded to stimuli presented to one ear (focused attention) and to both ears (divided attention). The amplitude of the N1 component was found to be largest to stimuli in the ear upon which attention was to be focused, smallest to stimuli in the ear to be ignored, and intermediate to stimuli in both ears when attention was divided. The results are interpreted as supporting a capacity model of attention.
Sergio L Schmidt,1,2 Ana Lucia Novais Carvaho,3 Eunice N Simoes2 1Department of Neurophysiology, State University of Rio de Janeiro, Rio de Janeiro, 2Neurology Department, Federal University of the State of Rio de Janeiro, Rio de Janeiro, 3Department of Psychology, Fluminense Federal University, Niteroi, Brazil Abstract: The relationship between handedness and attentional performance is poorly understood. Continuous performance tests (CPTs) using visual stimuli are commonly used to assess subjects suffering from attention deficit hyperactivity disorder (ADHD). However, auditory CPTs are considered more useful than visual ones to evaluate classroom attentional problems. A previous study reported that there was a significant effect of handedness on students’ performance on a visual CPT. Here, we examined whether handedness would also affect CPT performance using only auditory stimuli. From an initial sample of 337 students, 11 matched pairs were selected. Repeated ANOVAs showed a significant effect of handedness on attentional performance that was exhibited even in the control group. Left-handers made more commission errors than right-handers. The results were interpreted considering that the association between ADHD and handedness reflects that consistent left-handers are less lateralized and have decreased interhemispheric connections. Auditory attentional data suggest that left-handers have problems in the impulsive/hyperactivity domain. In ADHD, clinical therapeutics and rehabilitation must take handedness into account because consistent sinistrals are more impulsive than dextrals. Keywords: attention, ADHD, consistent left-handers, auditory attention, continuous performance test
Dalton, Polly; Santangelo, Valerio; Spence, Charles
A growing body of research now demonstrates that working memory plays an important role in controlling the extent to which irrelevant visual distractors are processed during visual selective attention tasks (e.g., Lavie, Hirst, De Fockert, & Viding, 2004). Recently, it has been shown that the successful selection of tactile information also depends on the availability of working memory (Dalton, Lavie, & Spence, 2009). Here, we investigate whether working memory plays a role in auditory selective attention. Participants focused their attention on short continuous bursts of white noise (targets) while attempting to ignore pulsed bursts of noise (distractors). Distractor interference in this auditory task, as measured in terms of the difference in performance between congruent and incongruent distractor trials, increased significantly under high (vs. low) load in a concurrent working-memory task. These results provide the first evidence demonstrating a causal role for working memory in reducing interference by irrelevant auditory distractors.
Background and Aim: Bilingualism, as one of the discussed issues in psychology and linguistics, can influence speech processing. Of the several tests for assessing auditory processing, the dichotic digit test has been designed to study divided auditory attention. Our study was performed to compare auditory attention between Iranian bilingual and monolingual young adults. Methods: This cross-sectional study was conducted on 60 students, including 30 Turkish-Persian bilinguals and 30 Persian monolinguals, aged 18 to 30 years, of both genders. The dichotic digit test was performed on young individuals with normal peripheral hearing and right-hand preference. Results: No significant correlation was found between the results of the dichotic digit test of monolinguals and bilinguals (p=0.195), or between the results of the right and left ears in the monolingual (p=0.460) and bilingual (p=0.054) groups. The mean score of women was significantly higher than that of men (p=0.031). Conclusion: There was no significant difference between bilinguals and monolinguals in divided auditory attention; it seems that acquisition of a second language at a younger age has no noticeable effect on this type of auditory attention.
Trenado, Carlos; Haab, Lars; Strauss, Daniel J
Auditory evoked cortical potentials (AECPs) have been consolidated as a diagnostic tool in audiology. Further applications of this technique lie in experimental neuropsychology, neuroscience, and psychiatry, e.g., for attention deficit disorder, schizophrenia, or for studying tinnitus decompensation. In particular, numerous psychophysiological studies have emphasized their dynamic characteristics in relation to exogenous and endogenous attention. However, the effect of corticothalamic feedback dynamics on neural correlates of focal and nonfocal attention, and its large-scale effect reflected in AECPs, is far from being understood. To address this issue, we model neural correlates of auditory selective attention reflected in AECPs by using corticothalamic feedback dynamics. In our framework, we make use of a well-known multiscale model of evoked potentials, for which we define for the first time a neurofunctional map of relevant corticothalamic loops to the hearing path. Such loops are in turn coupled to our proposed probabilistic scheme of auditory selective attention. It is concluded that our model represents a promising approach to gaining a deeper understanding of the neurodynamics of auditory attention and might be used as an efficient forward model to support hypotheses obtained in experimental paradigms involving AECPs.
Berenson, Kathy R; Gyurak, Anett; Ayduk, Ozlem; Downey, Geraldine; Garner, Matthew J; Mogg, Karin; Bradley, Brendan P; Pine, Daniel S
Two studies tested the hypothesis that Rejection Sensitivity (RS) increases vulnerability to disruption of attention by social threat cues, as would be consistent with prior evidence that it motivates individuals to prioritize detecting and managing potential rejection at a cost to other personal and interpersonal goals. In Study 1, RS predicted disruption of ongoing goal-directed attention by social threat but not negative words in an Emotional Stroop task. In Study 2, RS predicted attentional avoidance of threatening but not pleasant faces in a Visual Probe task. Threat-avoidant attention was also associated with features of borderline personality disorder. This research extends understanding of processes by which RS contributes to a self-perpetuating cycle of interpersonal problems and distress.
Picolini, Mirela Machado
Introduction: Sustained auditory attention is crucial for the development of some communication skills and learning. Objective: To evaluate the effect of time of day and type of school attended on children's ability to sustain auditory attention. Method: We performed a prospective study of 50 volunteer children of both sexes, aged 7 years, with normal hearing, no learning or behavioral problems, and no complaints of attention. These participants underwent the Ability Test of Sustained Auditory Attention (SAAAT). Performance was evaluated by total score and the decrease of vigilance. Statistical analysis used analysis of variance (ANOVA) with a significance level of 5% (p<0.05). Results: Relative to the test norms for the age group evaluated, there was a statistically significant difference for errors of inattention (p=0.041; p=0.027) and total error score (p=0.033; p=0.024) across assessment periods and school types, respectively. Conclusion: Children evaluated in the afternoon and children studying in public schools showed poorer sustained auditory attention.
Singh, Gurjit; Pichora-Fuller, M Kathleen; Schneider, Bruce A
The effects of directing, switching, and misdirecting auditory spatial attention in a complex listening situation were investigated in 8 younger and 8 older listeners with normal-hearing sensitivity below 4 kHz. In two companion experiments, a target sentence was presented from one spatial location and two competing sentences were presented simultaneously, one from each of two different locations. Pretrial, listeners were informed of the call-sign cue that identified which of the three sentences was the target and of the probability of the target sentence being presented from each of the three possible locations. Four different probability conditions varied in the likelihood of the target being presented at the left, center, and right locations. In Experiment 1, four timing conditions were tested: the original (unedited) sentences (which contained about 300 msec of filler speech between the call-sign cue and the onset of the target words), or modified (edited) sentences with silent pauses of 0, 150, or 300 msec replacing the filler speech. In Experiment 2, when the cued sentence was presented from an unlikely (side) listening location, for half of the trials the listener's task was to report target words from the cued sentence (cue condition); for the remaining trials, the listener's task was to report target words from the sentence presented from the opposite, unlikely (side) listening location (anticue condition). In Experiment 1, for targets presented from the likely (center) location, word identification was better for the unedited than for modified sentences. For targets presented from unlikely (side) locations, word identification was better when there was more time between the call-sign cue and target words. All listeners benefited similarly from the availability of more compared with less time and the presentation of continuous compared with interrupted speech. In Experiment 2, the key finding was that age-related performance deficits were observed in
Laraway, Lee Ann
To examine differences between the auditory selective attention abilities of normal and cerebral-palsied individuals, 23 cerebral-palsied and 23 normal subjects (ages 5-21) were asked to repeat a series of 30 items in the presence of intermittent white noise. Results indicated that cerebral-palsied individuals perform significantly more poorly when the…
Mahajan, Yatin; Davis, Chris; Kim, Jeesun
Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex. PMID:25334021
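As a concrete illustration of the frequency-tagging stimuli described above, the following Python sketch builds two white-noise streams amplitude-modulated at 16 Hz and 23.5 Hz for dichotic presentation. This is an assumption-laden reconstruction, not the authors' stimulus code: the sampling rate, duration, and sinusoidal modulator shape are illustrative choices.

```python
import numpy as np

def am_noise(mod_hz, dur=1.0, fs=44100, depth=1.0, seed=0):
    """White noise amplitude-modulated at mod_hz; the modulation rate 'tags'
    the stream so its steady state response can be tracked in the EEG spectrum."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    carrier = rng.standard_normal(t.size)
    # Sinusoidal envelope ranging from (1 - depth)/2 up to (1 + depth)/2.
    envelope = (1.0 + depth * np.sin(2 * np.pi * mod_hz * t)) / 2.0
    return envelope * carrier

# Dichotic stimulus: each ear carries a differently tagged noise stream.
left = am_noise(16.0, seed=1)    # left-ear tag
right = am_noise(23.5, seed=2)   # right-ear tag
stereo = np.stack([left, right], axis=1)  # (samples, channels) for playback
```

Attending to one ear would then be expected to modulate the EEG power at that ear's tagging frequency, which is the logic the study exploits.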
Hitch, G J
The auditory suffix effect (SE), in which recall of the terminal items of a sequence is impaired by presenting a redundant item at the end of the sequence, has been attributed to the displacement of information from auditory sensory storage. However, the SE may result entirely from unnecessary processing of the redundant item due to a failure of attentional control. Two studies examined this possibility using visual presentation to minimize the importance of sensory storage as a source of information. Experiment I first demonstrated a visual SE and showed that its magnitude did not vary when background illumination was altered, a factor which affects the duration of sensory storage. Experiment II used auditory as well as visual presentation and tested the hypothesis that training subjects to ignore the suffix would reduce the SE. Training was achieved by interpolating redundant items identical to the suffix within sequences. It abolished the visual SE but left the auditory SE unaffected. The visual SE, therefore, is not solely determined by the physical characteristics of the suffix, and cannot be based on erasure in sensory storage. The auditory data, on the other hand, were consistent with the erasure hypothesis. It was concluded that an SE does not of itself demonstrate the involvement of sensory storage, and, in particular, the visual SE appears to reflect the degree to which the redundant item can be excluded from focal attention.
Toet, A.; Houtkamp, J.M.; Meulen, R. van der
We investigated whether manipulation of visual and auditory depth and speed cues can affect a user’s sense of risk for a low-cost nonimmersive virtual environment (VE) representing a highway environment with traffic incidents. The VE is currently used in an examination program to assess procedural
Geoffrey A Coalson
Adults who stutter (AWS) are less accurate in their immediate repetition of novel phonological sequences compared to adults who do not stutter (AWNS). The present study examined whether manipulation of the following two aspects of traditional nonword repetition tasks unmasks distinct weaknesses in phonological working memory in AWS: (1) presentation of stimuli with less-frequent stress patterns, and (2) removal of auditory-orthographic cues immediately prior to response. Fifty-two participants (26 AWS, 26 AWNS) produced 12 bisyllabic nonwords in the presence of corresponding auditory-orthographic cues (i.e., immediate repetition task) and in the absence of auditory-orthographic cues (i.e., short-term recall task). Half of each cohort (13 AWS, 13 AWNS) were exposed to the stimuli with high-frequency trochaic stress, and half (13 AWS, 13 AWNS) were exposed to identical stimuli with lower-frequency iambic stress. No differences in immediate repetition accuracy for trochaic or iambic nonwords were observed for either group. However, AWS were less accurate when recalling iambic nonwords than trochaic nonwords in the absence of auditory-orthographic cues. Manipulation of two factors which may minimize phonological demand during standard nonword repetition tasks increased the number of errors in AWS compared to AWNS. These findings suggest greater vulnerability in phonological working memory in AWS, even when producing nonwords as short as two syllables.
Ghai, Shashank; Ghai, Ishan; Schmitz, Gerd; Effenberg, Alfred O
The use of rhythmic auditory cueing to enhance gait performance in parkinsonian patients is an emerging area of interest. Different theories and underlying neurophysiological mechanisms have been suggested to account for the enhancement in motor performance. However, a consensus on its effects, the characteristics of effective stimuli, and training dosage has still not been reached. A systematic review and meta-analysis was carried out to analyze the effects of different auditory feedback on gait and postural performance in patients affected by Parkinson's disease. Systematic identification of published literature was performed adhering to PRISMA guidelines, from inception until May 2017, in the online databases Web of Science, PEDro, EBSCO, MEDLINE, Cochrane, EMBASE, and PROQUEST. Of 4204 records, 50 studies involving 1892 participants met our inclusion criteria. The analysis revealed an overall positive effect on gait velocity and stride length, and a negative effect on cadence, with application of auditory cueing. Neurophysiological mechanisms, training dosage, effects of higher information-processing constraints, and use of cueing as an adjunct to medication are thoroughly discussed. The present review bridges gaps in the literature by suggesting application of rhythmic auditory cueing in conventional rehabilitation approaches to enhance motor performance and quality of life in the parkinsonian community.
Varnet, Léo; Knoblauch, Kenneth; Serniclaes, Willy; Meunier, Fanny; Hoen, Michel
Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remains undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique that allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses on the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization with two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.
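The logic of a classification image can be conveyed with a toy reverse-correlation simulation. The published Auditory Classification Image technique uses a penalized generalized linear model on time-frequency noise fields; the simplified sketch below, with made-up dimensions and a simulated listener, only illustrates the core idea of correlating trial-by-trial noise with categorization responses.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins = 2000, 64                # n_bins: flattened time-frequency bins
template = np.zeros(n_bins)
template[20:24] = 1.0                      # bins the simulated listener relies on

# Per-trial noise field added to the (implicit) speech stimulus.
noise = rng.standard_normal((n_trials, n_bins))

# Simulated categorization ('da' vs 'ga'): the decision is driven by the
# noise energy falling in the template bins, plus internal noise.
resp = (noise @ template + 0.5 * rng.standard_normal(n_trials)) > 0

# Classification image: mean noise on one response minus mean on the other.
# Bins that influenced the decision stand out from the flat background.
cimg = noise[resp].mean(axis=0) - noise[~resp].mean(axis=0)
```

In the real technique the recovered image is tested for significance (e.g., with the cluster-based nonparametric test mentioned above) rather than read off directly.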
O'Brien, Jennifer L; Lister, Jennifer J; Fausto, Bernadette A; Clifton, Gregory K; Edwards, Jerri D
Auditory cognitive training (ACT) improves attention in older adults; however, the underlying neurophysiological mechanisms are still unknown. The present study examined the effects of ACT on the P3b event-related potential reflecting attention allocation (amplitude) and speed of processing (latency) during stimulus categorization and the P1-N1-P2 complex reflecting perceptual processing (amplitude and latency). Participants completed an auditory oddball task before and after 10 weeks of ACT (n = 9) or a no contact control period (n = 15). Parietal P3b amplitudes to oddball stimuli decreased at post-test in the trained group as compared to those in the control group, and frontal P3b amplitudes show a similar trend, potentially reflecting more efficient attentional allocation after ACT. No advantages for the ACT group were evident for auditory perceptual processing or speed of processing in this small sample. Our results provide preliminary evidence that ACT may enhance the efficiency of attention allocation, which may account for the positive impact of ACT on the everyday functioning of older adults.
Dai, Lengshi; Shinn-Cunningham, Barbara G
Listeners with normal hearing thresholds (NHTs) differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in the cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials (ERPs) from the scalp (reflecting cortical responses to sound) and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. These results support the hypothesis that behavioral abilities amongst listeners with NHTs can arise due to both subcortical coding differences and differences in attentional control, depending on stimulus characteristics.
Watson, Christopher J G; Carlile, Simon; Kelly, Heather; Balachandar, Kapilesh
The capacity of healthy adult listeners to accommodate to altered spectral cues to the source locations of broadband sounds has now been well documented. In recent years we have demonstrated that the degree and speed of accommodation are improved by using an integrated sensory-motor training protocol under anechoic conditions. Here we demonstrate that the learning which underpins the localization performance gains during the accommodation process using anechoic broadband training stimuli generalizes to environmentally relevant scenarios. As before, alterations to monaural spectral cues were produced by fitting participants with custom-made outer ear molds, worn during waking hours. Following acute degradations in localization performance, participants then underwent daily sensory-motor training to improve localization accuracy using broadband noise stimuli over ten days. Participants not only demonstrated post-training improvements in localization accuracy for broadband noises presented in the same set of positions used during training, but also for stimuli presented in untrained locations, for monosyllabic speech sounds, and for stimuli presented in reverberant conditions. These findings shed further light on the neuroplastic capacity of healthy listeners, and represent the next step in the development of training programs for users of assistive listening devices which degrade localization acuity by distorting or bypassing monaural cues.
Altvater-Mackensen, Nicole; Grossmann, Tobias
Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…
Wickens, Christopher D.; Pringle, Heather L.; Merlo, James
Integration of Information Sources of Varying Weights: The Effect of Display Features and Attention Cueing. …arriving cue (primacy), sometimes in favor of the last (recency), and sometimes in favor of both, with cues arriving in the middle of a sequence…
Koelewijn, T.; Bronkhorst, A.; Theeuwes, J.
There is debate in the crossmodal cueing literature as to whether capture of visual attention by means of sound is a fully automatic process. Recent studies show that when visual attention is endogenously focused, sound still captures attention. The current study investigated whether there is
Cappe, Céline; Thut, Gregor; Romei, Vincenzo; Murray, Micah M
An object's motion relative to an observer can confer ethologically meaningful information. Approaching or looming stimuli can signal threats/collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and more recently multisensory looming stimuli, none has investigated whether there is integration of looming signals between modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings. Maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. A new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perceptions and action is therefore supported.
Ramirez, Jason J; Monti, Peter M; Colwill, Ruth M
The effect of alcohol-cue exposure on eliciting craving has been well documented, and numerous theoretical models assert that craving is a clinically significant construct central to the motivation and maintenance of alcohol-seeking behavior. Furthermore, some theories propose a relationship between craving and attention, such that cue-induced increases in craving bias attention toward alcohol cues, which, in turn, perpetuates craving. This study examined the extent to which alcohol cues induce craving and bias attention toward alcohol cues among underage college-student drinkers. We designed within-subject cue-reactivity and visual-probe tasks to assess in vivo alcohol-cue exposure effects on craving and attentional bias on 39 undergraduate college drinkers (ages 18-20). Participants expressed greater subjective craving to drink alcohol following in vivo cue exposure to a commonly consumed beer compared with water exposure. Furthermore, following alcohol-cue exposure, participants exhibited greater attentional biases toward alcohol cues as measured by a visual-probe task. In addition to the cue-exposure effects on craving and attentional bias, within-subject differences in craving across sessions marginally predicted within-subject differences in attentional bias. Implications for both theory and practice are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Baumgartner, Robert; Reed, Darrin K; Tóth, Brigitta; Best, Virginia; Majdak, Piotr; Colburn, H Steven; Shinn-Cunningham, Barbara
Studies of auditory looming bias have shown that sources increasing in intensity are more salient than sources decreasing in intensity. Researchers have argued that listeners are more sensitive to approaching sounds compared with receding sounds, reflecting an evolutionary pressure. However, these studies only manipulated overall sound intensity; therefore, it is unclear whether looming bias is truly a perceptual bias for changes in source distance, or only in sound intensity. Here we demonstrate both behavioral and neural correlates of looming bias without manipulating overall sound intensity. In natural environments, the pinnae induce spectral cues that give rise to a sense of externalization; when spectral cues are unnatural, sounds are perceived as closer to the listener. We manipulated the contrast of individually tailored spectral cues to create sounds of similar intensity but different naturalness. We confirmed that sounds were perceived as approaching when spectral contrast decreased, and perceived as receding when spectral contrast increased. We measured behavior and electroencephalography while listeners judged motion direction. Behavioral responses showed a looming bias in that responses were more consistent for sounds perceived as approaching than for sounds perceived as receding. In a control experiment, looming bias disappeared when spectral contrast changes were discontinuous, suggesting that perceived motion in distance and not distance itself was driving the bias. Neurally, looming bias was reflected in an asymmetry of late event-related potentials associated with motion evaluation. Hence, both our behavioral and neural findings support a generalization of the auditory looming bias, representing a perceptual preference for approaching auditory objects.
Sanders, Lisa D; Stevens, Courtney; Coch, Donna; Neville, Helen J
Behavioral and electrophysiological evidence suggests that the development of selective attention extends over the first two decades of life. However, much of this research may underestimate the attention abilities of young children. By providing strong, redundant attention cues, we show that sustained endogenous selective attention has similar effects on ERP indices of auditory processing in adults and children as young as 3 years old. All participants were cued to selectively attend to one of two simultaneously presented stories that differed in location (left/right), voice (male/female), and content. The morphology of the ERP waveforms elicited by probes embedded in the stories was very different for adults, who showed a typical positive-negative-positive pattern in the 300 ms after probe onset, and children, who showed a single broad positivity during this epoch. However, for 3- to 5-year-olds, 6- to 8-year-olds, and adults, probes in the attended story elicited larger amplitude ERPs beginning around 100 ms after probe onset. This attentional modulation of exogenously driven components was longer in duration for the youngest children. In addition, attended linguistic probes elicited a larger negativity 300-500 ms for all groups, indicative of additional attentional processing. These data show that with adequate cues, even children as young as 3 years old can selectively attend to one auditory stream while ignoring another and that doing so alters auditory sensory processing at an early stage. Furthermore, they suggest that the neural mechanisms by which selective attention affects auditory processing are remarkably adult-like by this age.
Soltanparast, Sanaz; Jafari, Zahra; Sameni, Seyed Jalal; Salehi, Masoud
The purpose of the present study was to evaluate the psychometric properties (validity and reliability) of the Persian version of the Sustained Auditory Attention Capacity Test (SAACT) in children with attention deficit hyperactivity disorder. The Persian version of the SAACT was constructed to assess sustained auditory attention using the method provided by Feniman and colleagues (2007). The test assesses a child's attentional deficit by determining inattention and impulsiveness errors, the total test score, and an attention span reduction index. To determine the validity and reliability of both the Rey Auditory Verbal Learning Test and the Persian SAACT, 46 normal children and 41 children with attention deficit hyperactivity disorder (ADHD), all right-handed, aged between 7 and 11 years, and of both genders, were evaluated. In assessing convergent validity, a significant negative correlation was found between the three parts of the Rey Auditory Verbal Learning Test (first trial, fifth trial, and immediate recall) and all indicators of the SAACT except attention span reduction. Comparing test scores between the normal and ADHD groups, discriminant validity analysis showed significant differences in all indicators of the test except attention span reduction. The Persian version of the SAACT thus has good validity and reliability, in agreement with other established tests, and can be used to identify children with attention deficits and those suspected of having Attention Deficit Hyperactivity Disorder.
Kühnis, Jürg; Elmer, Stefan; Meyer, Martin; Jäncke, Lutz
Here, we applied a multi-feature mismatch negativity (MMN) paradigm in order to systematically investigate the neuronal representation of vowels and temporally manipulated CV syllables in a homogeneous sample of string players and non-musicians. Based on previous work indicating an increased sensitivity of the musicians' auditory system, we expected to find that musically trained subjects will elicit increased MMN amplitudes in response to temporal variations in CV syllables, namely voice-onset time (VOT) and duration. In addition, since different vowels are principally distinguished by means of frequency information and musicians are superior in extracting tonal (and thus frequency) information from an acoustic stream, we also expected to provide evidence for an increased auditory representation of vowels in the experts. In line with our hypothesis, we could show that musicians are not only advantaged in the pre-attentive encoding of temporal speech cues, but most notably also in processing vowels. Additional "just noticeable difference" measurements suggested that the musicians' perceptual advantage in encoding speech sounds was more likely driven by the generic constitutional properties of a highly trained auditory system, rather than by its specialisation for speech representations per se. These results shed light on the origin of the often reported advantage of musicians in processing a variety of speech sounds. Copyright © 2013 Elsevier Ltd. All rights reserved.
Full Text Available Selective attention to a spatial location has been shown to enhance perception and facilitate behaviour for events at attended locations. However, selection relies not only on where but also on when an event occurs. Recently, interest has turned to how intrinsic neural oscillations in the brain entrain to rhythms in our environment, and stimuli appearing in or out of sync with a rhythm have been shown to modulate perception and performance. Temporal expectations created by rhythms and spatial attention are two processes that have independently been shown to affect stimulus processing, but it remains largely unknown how, and if, they interact. In four separate tasks, this study investigated the effects of voluntary spatial attention and bottom-up temporal expectations created by rhythms in both unimodal and crossmodal conditions. In each task the participant used an informative cue, either colour or pitch, to direct their covert spatial attention to the left or right, and to respond as quickly as possible to a target. The lateralized target (visual or auditory) was then presented at the attended or unattended side. Importantly, although not task relevant, the cue was a rhythm of either flashes or beeps. The target was presented in or out of sync (early or late) with the rhythmic cue. The results showed participants were faster responding to spatially attended compared to unattended targets in all tasks. Moreover, there was an effect of rhythmic cueing upon response times in both unimodal and crossmodal conditions. Responses were faster to targets presented in sync with the rhythm compared to when they appeared too early in both crossmodal tasks. That is, rhythmic stimuli in one modality influenced the temporal expectancy in the other modality, suggesting temporal expectancies created by rhythms are crossmodal. Interestingly, there was no interaction between top-down spatial attention and rhythmic cueing in any task, suggesting these two processes largely influenced
Morris, Richard; Griffiths, Oren; Le Pelley, Michael E.; Weickert, Thomas W.
Many modern learning theories assume that the amount of attention to a cue depends on how well that cue predicted important events in the past. Schizophrenia is associated with deficits in attention and recent theories of psychosis have argued that positive symptoms such as delusions and hallucinations are related to a failure of selective attention. However, evidence demonstrating that attention to irrelevant cues is related to positive symptoms in schizophrenia is lacking. We used a novel method of measuring attention to nonpredictive (and thus irrelevant) cues in a causal learning test (Le Pelley ME, McLaren IP. Learned associability and associative change in human causal learning. Q J Exp Psychol B. 2003;56:68–79) to assess whether healthy adults and people with schizophrenia discriminate previously predictive and nonpredictive cues. In a series of experiments with independent samples, we demonstrated: (1) when people with schizophrenia who had severe positive symptoms successfully distinguished between predictive and nonpredictive cues during training, they failed to discriminate between predictive and nonpredictive cues relative to healthy adults during subsequent testing and (2) learning about nonpredictive cues was correlated with more severe positive symptoms scores in schizophrenia. These results suggest that positive symptoms of schizophrenia are related to increased attention to nonpredictive cues during causal learning. This deficit in selective attention results in learning irrelevant causal associations and may be the basis of positive symptoms in schizophrenia. PMID:22267535
Frischen, Alexandra; Bayliss, Andrew P; Tipper, Steven P
During social interactions, people's eyes convey a wealth of information about their direction of attention and their emotional and mental states. This review aims to provide a comprehensive overview of past and current research into the perception of gaze behavior and its effect on the observer. This encompasses the perception of gaze direction and its influence on perception of the other person, as well as gaze-following behavior such as joint attention, in infant, adult, and clinical populations. Particular focus is given to the gaze-cueing paradigm that has been used to investigate the mechanisms of joint attention. The contribution of this paradigm has been significant and will likely continue to advance knowledge across diverse fields within psychology and neuroscience. Copyright 2007 APA
Boyd, Alan W.; Whitmer, William M; Soraghan, John J.; Akeroyd, Michael A
Hearing-aid wearers have reported sound source locations as being perceptually internalized (i.e., inside their head). The contribution of hearing-aid design to internalization has, however, received little attention. This experiment compared the sensitivity of hearing-impaired (HI) and normal-hearing (NH) listeners to externalization cues when listening with their own ears and with simulated behind-the-ear (BTE) hearing aids in increasingly complex listening situations and with reduced pinna cues. Participants rated t...
Brodie, Matthew A D; Dean, Roger T; Beijer, Tim R; Canning, Colleen G; Smith, Stuart T; Menant, Jasmine C; Lord, Stephen R
Unsteady gait and falls are major problems for people with Parkinson's disease (PD). Symmetric auditory cues at altered cadences have been used to improve walking speed or step length. However, few people are exactly symmetric in terms of morphology or movement patterns, and the effects of symmetric cueing on gait steadiness are inconclusive. We investigated whether matching auditory cue symmetry or asymmetry to an individual's intrinsic symmetry or asymmetry affects gait steadiness, gait symmetry, and comfort with cues in people with PD, healthy age-matched controls (HAM), and young adults. Thirty participants (10 with PD, 11 HAM aged 66 years, and 9 young aged 30 years) completed five baseline walks (no cues) and twenty-five cued walks at habitual cadence but different symmetries. Outcomes included gait steadiness (step time variability and smoothness by harmonic ratios), walking speed, symmetry, comfort, and cue lag times. Without cues, PD participants had slower and less steady gait than HAM or young participants. Gait symmetry was distinct from gait steadiness, unaffected by cue symmetry or a diagnosis of PD, but associated with aging. All participants maintained their preferred gait symmetry and lag times independent of cue symmetry. When cues were matched to the individual's habitual gait symmetry and cadence, gait steadiness improved in the PD group, deteriorated in the HAM controls, and was unchanged in the young. Gait outcomes worsened for the two PD participants who reported discomfort with cued walking and had high New Freezing of Gait scores. It cannot be assumed that all individuals benefit equally from auditory cues: symmetry-matched auditory cues compensated for unsteady gait in most people with PD, but interfered with gait steadiness in older people without basal ganglia deficits.
Roberts, K. L.; Andersen, Tobias; Kyllingsbæk, Søren
We report initial progress towards creating an auditory analogue of a mathematical model of visual attention: the ‘Theory of Visual Attention’ (TVA; Bundesen, 1990). TVA is one of the best established models of visual attention. It assumes that visual stimuli are initially processed in parallel...... to the data produces the following parameters: the minimum amount of information required for target identification (t0); the rate at which information is encoded, assuming an exponential function (v); the relative attentional weight to targets versus distractors (α); and the capacity of VSTM (K). TVA has...
Teng, Y; Vyazovska, O V; Wasserman, E A
We deployed the Multiple Necessary Cues (MNC) discrimination task to see if pigeons can simultaneously attend to four different dimensions of complex visual stimuli. Specifically, we trained eight pigeons on a simultaneous discrimination to peck only 1 of 16 compound stimuli created from all possible combinations of two stimulus values from four separable visual dimensions: shape (circle/square), size (large/small), line orientation (horizontal/vertical), and brightness (dark/light). Some pigeons had CLHD (circle, large, horizontal, dark) as the positive stimulus (S+), whereas others had SSVL (square, small, vertical, light) as the S+. All eight pigeons acquired the MNC discrimination, suggesting that they had attended to all four dimensions. Learning rates were similar across the four dimensions, with learning along the orientation dimension being slightly faster than along the other three. The more dimensions along which the S-s differed from the S+, the faster learning proceeded, suggesting an added benefit from increasing perceptual disparities between the S-s and the S+. Of particular note, evidence of attentional tradeoffs among the four dimensions was much weaker with the simultaneous task than with the successive task. We consider several reasons for this empirical disparity. Copyright © 2014 Elsevier B.V. All rights reserved.
de Koning, Bjorn B.; Tabbers, Huib K.; Rikers, Remy M. J. P.; Paas, Fred
This paper examines the transferability of successful cueing approaches from text and static visualization research to animations. Theories of visual attention and learning as well as empirical evidence for the instructional effectiveness of attention cueing are reviewed and, based on Mayer's theory of multimedia learning, a framework was…
Full Text Available BACKGROUND: The ability to separate two interleaved melodies is an important factor in music appreciation. This ability is greatly reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues, musical training or musical context could have an effect on this ability, and potentially improve music appreciation for the hearing impaired. METHODS: Musicians (N = 18) and non-musicians (N = 19) were asked to rate the difficulty of segregating a four-note repeating melody from interleaved random distracter notes. Visual cues were provided on half the blocks, and two musical contexts were tested, with the overlap between melody and distracter notes either gradually increasing or decreasing. CONCLUSIONS: Visual cues, musical training, and musical context all affected the difficulty of extracting the melody from a background of interleaved random distracter notes. Visual cues were effective in reducing the difficulty of segregating the melody from distracter notes, even in individuals with no musical training. These results are consistent with theories that indicate an important role for central (top-down) processes in auditory streaming mechanisms, and suggest that visual cues may help the hearing-impaired enjoy music.
Ruggles, Dorea; Shinn-Cunningham, Barbara
Listeners can selectively attend to a desired target by directing attention to known target source features, such as location or pitch. Reverberation, however, reduces the reliability of the cues that allow a target source to be segregated and selected from a sound mixture. Given this, it is likely that reverberant energy interferes with selective auditory attention. Anecdotal reports suggest that the ability to focus spatial auditory attention degrades even with early aging, yet there is little evidence that middle-aged listeners have behavioral deficits on tasks requiring selective auditory attention. The current study was designed to look for individual differences in selective attention ability and to see if any such differences correlate with age. Normal-hearing adults, ranging in age from 18 to 55 years, were asked to report a stream of digits located directly ahead in a simulated rectangular room. Simultaneous, competing masker digit streams were simulated at locations 15° left and right of center. The level of reverberation was varied to alter task difficulty by interfering with localization cues (increasing localization blur). Overall, performance was best in the anechoic condition and worst in the high-reverberation condition. Listeners nearly always reported a digit from one of the three competing streams, showing that reverberation did not render the digits unintelligible. Importantly, inter-subject differences were extremely large. These differences, however, were not significantly correlated with age, memory span, or hearing status. These results show that listeners with audiometrically normal pure tone thresholds differ in their ability to selectively attend to a desired source, a task important in everyday communication. Further work is necessary to determine if these differences arise from differences in peripheral auditory function or in more central function.
Oray, Serkan; Lu, Zhong-Lin; Dawson, Michael E
To investigate the cross-modal nature of the exogenous attention system, we studied how involuntary attention in the visual modality affects ERPs elicited by sudden onset of events in the auditory modality. Relatively loud auditory white noise bursts were presented to subjects with random and long inter-trial intervals. The noise bursts were either presented alone, or paired with a visual stimulus with a visual-to-auditory onset asynchrony of 120 ms. In a third condition, the visual stimuli were shown alone. All three conditions, auditory alone, visual alone, and paired visual/auditory, were randomly inter-mixed and presented with equal probabilities. Subjects were instructed to fixate on a point in front of them without task instructions concerning either the auditory or visual stimuli. ERPs were recorded from 28 scalp sites throughout every experimental session. Compared to ERPs in the auditory alone condition, pairing the auditory noise bursts with the visual stimulus reduced the amplitude of the auditory N100 component at Cz by 40% and the auditory P200/P300 component at Cz by 25%. No significant topographical change was observed in the scalp distributions of the N100 and P200/P300. Our results suggest that involuntary attention to visual stimuli suppresses early sensory (N100) as well as late cognitive (P200/P300) processing of sudden auditory events. The activation of the exogenous attention system by sudden auditory onset can be modified by involuntary visual attention in a cross-modal, passive prepulse inhibition paradigm.
Tremblay, Kelly L.; Shahin, Antoine J.; Picton, Terence; Ross, Bernhard
Objective: Auditory training alters neural activity in humans, but it is unknown if these alterations are specific to the trained cue. The objective of this study was to determine if enhanced cortical activity was specific to the trained voice-onset-time (VOT) stimuli ‘mba’ and ‘ba’, or whether it generalized to the control stimulus ‘a’ that did not contain the trained cue. Methods: Thirteen adults were trained to identify a 10 ms VOT cue that differentiated the two experimental stimuli. We recorded event-related potentials (ERPs) evoked by three different speech sounds, ‘ba’, ‘mba’, and ‘a’, before and after six days of VOT training. Results: The P2 wave increased in amplitude after training for both control and experimental stimuli, but the effects differed between stimulus conditions. Whereas the effects of training on P2 amplitude were greatest in the left hemisphere for the trained stimuli, enhanced P2 activity was seen in both hemispheres for the control stimulus. In addition, subjects with enhanced pre-training N1 amplitudes were more responsive to training and showed the most perceptual improvement. Conclusion: Both stimulus-specific and general effects of training can be measured in humans. An individual’s pre-training N1 response might predict their capacity for improvement. Significance: N1 and P2 responses can be used to examine physiological correlates of human auditory perceptual learning. PMID:19028139
Gutschalk, Alexander; Rupp, André; Dykstra, Andrew R
Serially presented tones are sometimes segregated into two perceptually distinct streams. An ongoing debate is whether this basic streaming phenomenon reflects automatic processes or requires attention focused to the stimuli. Here, we examined the influence of focused attention on streaming-related activity in human auditory cortex using magnetoencephalography (MEG). Listeners were presented with a dichotic paradigm in which left-ear stimuli consisted of canonical streaming stimuli (ABA_ or ABAA) and right-ear stimuli consisted of a classical oddball paradigm. In phase one, listeners were instructed to attend the right-ear oddball sequence and detect rare deviants. In phase two, they were instructed to attend the left ear streaming stimulus and report whether they heard one or two streams. The frequency difference (ΔF) of the sequences was set such that the smallest and largest ΔF conditions generally induced one- and two-stream percepts, respectively. Two intermediate ΔF conditions were chosen to elicit bistable percepts (i.e., either one or two streams). Attention enhanced the peak-to-peak amplitude of the P1-N1 complex, but only for ambiguous ΔF conditions, consistent with the notion that automatic mechanisms for streaming tightly interact with attention and that the latter is of particular importance for ambiguous sound sequences.
Doolan, K J; Breslin, G; Hanna, D; Gallagher, A M
The incentive sensitisation model of obesity suggests that modification of the dopaminergic associated reward systems in the brain may result in increased awareness of food-related visual cues present in the current food environment. Having a heightened awareness of these visual food cues may impact on food choices and eating behaviours with those being most aware of or demonstrating greater attention to food-related stimuli potentially being at greater risk of overeating and subsequent weight gain. To date, research related to attentional responses to visual food cues has been both limited and conflicting. Such inconsistent findings may in part be explained by the use of different methodological approaches to measure attentional bias and the impact of other factors such as hunger levels, energy density of visual food cues and individual eating style traits that may influence visual attention to food-related cues outside of weight status alone. This review examines the various methodologies employed to measure attentional bias with a particular focus on the role that attentional processing of food-related visual cues may have in obesity. Based on the findings of this review, it appears that it may be too early to clarify the role visual attention to food-related cues may have in obesity. Results however highlight the importance of considering the most appropriate methodology to use when measuring attentional bias and the characteristics of the study populations targeted while interpreting results to date and in designing future studies.
Jollie, Ashley; Ivanoff, Jason; Webb, Nicole E; Jamieson, Andrew S
Predictive central cues generate location-based expectancies, voluntary shifts of attention, and facilitate target processing. Often, location-based expectancies and voluntary attention are confounded in cueing tasks. Here we vary the predictability of central cues to determine whether they can evoke the inhibition of target processing in three go/no-go experiments. In the first experiment, the central cue was uninformative and did not predict the target's location. Importantly, these cues did not seem to affect target processing. In the second experiment, the central cue indicated the most or the least likely location of the target. Surprisingly, both types of cues facilitated target processing at the cued location. In the third experiment, the central cue predicted the most likely location of a no-go target, but it did not provide relevant information pertaining to the location of the go target. Again, the central cue facilitated processing of the go target. These results suggest that efforts to strategically allocate inhibition may be thwarted by the paradoxical monitoring of the cued location. The current findings highlight the need to further explore the relationship between location-based expectancies and spatial attention in cueing tasks.
Hansen, Kirstin Anderson; Maxwell, Alyssa; Siebert, Ursula
In-air hearing in birds has been thoroughly investigated. Sound provides birds with auditory information for species and individual recognition from their complex vocalizations, as well as cues while foraging and for avoiding predators. Some 10% of existing species of birds obtain their food under the water surface. Whether some of these birds make use of acoustic cues while underwater is unknown. An interesting species in this respect is the great cormorant (Phalacrocorax carbo), being one of the most effective marine predators and relying on the aquatic environment for food year round. Here, its underwater hearing abilities were investigated using psychophysics, where the bird learned to detect the presence or absence of a tone while submerged. The greatest sensitivity was found at 2 kHz, with an underwater hearing threshold of 71 dB re 1 μPa rms. The great cormorant is better at hearing underwater…
Metrik, Jane; Aston, Elizabeth R; Kahler, Christopher W; Rohsenow, Damaris J; McGeary, John E; Knopik, Valerie S; MacKillop, James
Incentive salience is a multidimensional construct that includes craving, drug value relative to other reinforcers, and implicit motivation such as attentional bias to drug cues. Laboratory cue reactivity (CR) paradigms have been used to evaluate marijuana incentive salience with measures of craving, but not with behavioral economic measures of marijuana demand or implicit attentional processing tasks. This within-subjects study used a new CR paradigm to examine multiple dimensions of marijuana's incentive salience and to compare CR-induced increases in craving and demand. Frequent marijuana users (N=93, 34% female) underwent exposure to neutral cues then to lit marijuana cigarettes. Craving, marijuana demand via a marijuana purchase task, and heart rate were assessed after each cue set. A modified Stroop task with cannabis and control words was completed after the marijuana cues as a measure of attentional bias. Relative to neutral cues, marijuana cues significantly increased subjective craving and demand indices of intensity (i.e., drug consumed at $0) and Omax (i.e., peak drug expenditure). Elasticity significantly decreased following marijuana cues, reflecting sustained purchase despite price increases. Craving was correlated with demand indices (r's: 0.23-0.30). Marijuana users displayed significant attentional bias for cannabis-related words after marijuana cues. Cue-elicited increases in intensity were associated with greater attentional bias for marijuana words. Greater incentive salience indexed by subjective, behavioral economic, and implicit measures was observed after marijuana versus neutral cues, supporting multidimensional assessment. The study highlights the utility of a behavioral economic approach in detecting cue-elicited changes in marijuana incentive salience. Published by Elsevier Ireland Ltd.
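The demand indices reported above (intensity as consumption when the drug is free, Omax as peak expenditure) are standard behavioral-economic summaries of purchase-task data. A minimal sketch of how they can be computed from observed price-quantity responses; the function name and example values are illustrative, not taken from the study:

```python
def demand_indices(prices, quantities):
    """Observed demand indices from a hypothetical purchase task.

    prices: escalating unit prices, starting at 0 (free)
    quantities: units the participant reports purchasing at each price
    """
    intensity = quantities[0]  # consumption at zero price ($0)
    # Expenditure (price x quantity) at each price point
    expenditures = [p * q for p, q in zip(prices, quantities)]
    omax = max(expenditures)                 # peak expenditure (Omax)
    pmax = prices[expenditures.index(omax)]  # price at which Omax occurs
    return intensity, omax, pmax

# Illustrative data: purchases fall as price rises
intensity, omax, pmax = demand_indices([0, 1, 2, 4, 8], [10, 8, 6, 3, 1])
```

Elasticity, the third index mentioned, is usually obtained by fitting an exponential demand curve to the same data rather than read directly from the raw responses.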
Higgs, Suzanne; Rutters, Femke; Thomas, Jason M; Naish, Katherine; Humphreys, Glyn W
Attentional biases towards food cues may be linked to the development of obesity. The present study investigated the mechanisms underlying attentional biases to food cues by assessing the role of top-down influences, such as working memory (WM). We assessed whether attention in normal-weight, sated participants was drawn to food items specifically when that food item was held in WM. Twenty-three participants (15 f/8 m, age 23.4±5 years, BMI 23.5±4 kg/m²) took part in a laboratory-based study assessing reaction times to food and non-food stimuli. Participants were presented with an initial cue stimulus to either hold in WM or to merely attend to, and then searched for the target (a circle) in a two-item display. On valid trials the target was flanked by a picture matching the cue, on neutral trials the display did not contain a picture matching the cue, and on invalid trials the distractor (a square) was flanked by a picture matching the cue. Cues were food, cars or stationery items. We observed that, relative to the effects with non-food stimuli, food items in WM strongly affected attention when the memorised cue re-appeared in the search display. In particular there was an enhanced response on valid trials, when the re-appearance of the memorised cue coincided with the search target. There were no effects of cue category on attentional guidance when the cues were merely attended to but not held in WM. These data point towards food having a strong effect on top-down guidance of search from working memory, and suggest a mechanism whereby individuals who are preoccupied with thoughts of food, for example obese individuals, show facilitated detection of food cues in the environment. Copyright © 2012 Elsevier Ltd. All rights reserved.
Israel, Moran M; Jolicoeur, Pierre; Cohen, Asher
It is well established that processes of perception and action interact. A key question concerns the role of attention in the interaction between perception-action processes. We tested the hypothesis that spatial attention is shared by perception and action. We created a dual-task paradigm: In one task, spatial information is relevant for perception (spatial-input task) but not for action, and in a second task, spatial information is relevant for action (spatial-output task) but not for perception. We used endogenous pre-cueing, with two between-subjects conditions: In one condition the cue was predictive only for the target location in the spatial-input task; in a second condition the cue was predictive only for the location of the response in the spatial-output task. In both conditions, the cueing equally affected both tasks, regardless of the information conveyed by the cue. This finding directly supports the shared input-output attention hypothesis.
van Ede, Freek; de Lange, Floris P; Maris, Eric
We investigated whether symbolic endogenous attentional cues affect perceptual accuracy and reaction time (RT) via different cognitive and neural processes. We recorded magnetoencephalography in 19 humans while they performed a cued somatosensory discrimination task in which the cue-target interval was varied between 0 and 1000 ms. Comparing behavioral and neural measures, we show that (1) attentional cueing affects accuracy and RT with different time courses and (2) the time course of our neural measure (anticipatory suppression of neuronal oscillations in stimulus-receiving sensory cortex) only accounts for the accuracy time course. A model is proposed in which the effect on accuracy is explained by a single process (preparatory excitability increase in sensory cortex), whereas the effect on RT is explained by an additional process that is sensitive to cue-target compatibility (post-target comparison between expected and actual stimulus location). These data provide new insights into the mechanisms underlying behavioral consequences of attentional cueing.
Others' gaze and emotional facial expression are important cues for the process of attention orienting. Here, we investigated with magnetoencephalography (MEG) whether the combination of averted gaze and fearful expression may elicit a selectively early effect of attention orienting on the brain responses to targets. We used the direction of gaze of centrally presented fearful and happy faces as the spatial attention orienting cue in a Posner-like paradigm where the subjects had to detect a target checkerboard presented at gazed-at (valid trials) or non-gazed-at (invalid trials) locations of the screen. We showed that the combination of averted gaze and fearful expression resulted in a very early attention orienting effect in the form of additional parietal activity between 55 and 70 ms for the valid versus invalid targets following fearful gaze cues. No such effect was obtained for the targets following happy gaze cues. This early cue-target validity effect selective of fearful gaze cues involved the left superior parietal region and the left lateral middle occipital region. These findings provide the first evidence for an effect of attention orienting induced by fearful gaze in the time range of C1. In doing so, they demonstrate the selective impact of combined gaze and fearful expression cues in the process of attention orienting.
Field, Matt; Hogarth, Lee; Bleasdale, Daniel; Wright, Phoebe; Fernie, Gordon; Christiansen, Paul
Theoretical models suggest that attentional bias for alcohol-related cues develops because cues signal the availability of alcohol, and the expectancy elicited by alcohol cues is responsible for the maintenance of attentional bias among regular drinkers. We investigated the moderating role of alcohol expectancy on attentional bias for alcohol-related cues. Within-subjects experimental design. Psychology laboratories. Adult social drinkers (n=58). On a trial-by-trial basis, participants were informed of the probability (100%, 50%, 0%) that they would receive beer at the end of the trial before their eye movements towards alcohol-related and control cues were measured. Heavy social drinkers showed an attentional bias for alcohol-related cues regardless of alcohol expectancy. However, in light social drinkers, attentional bias was only seen on 100% probability trials, i.e. when alcohol was expected imminently. Attentional bias for alcohol-related cues is sensitive to the current expectancy of receiving alcohol in light social drinkers, but it occurs independently of the current level of alcohol expectancy in heavy drinkers. © 2011 The Authors, Addiction © 2011 Society for the Study of Addiction.
Gladwin, Thomas E.; ter Mors-Schulte, Mieke H. J.; Ridderinkhof, K. Richard; Wiers, Reinout W.
Automatic attentional engagement toward and disengagement from alcohol cues play a role in alcohol use and dependence. In the current study, social drinkers performed a spatial cueing task designed to evoke conflict between such automatic processes and task instructions, a potentially important task
Shashank Ghai (Institute for Sports Science, Leibniz University Hannover, Hannover, Germany); Ishan Ghai (School of Life Sciences, Jacobs University, Bremen, Germany); Alfred O. Effenberg (Institute for Sports Science, Leibniz University Hannover, Hannover, Germany). Abstract: Auditory entrainment can influence gait performance in movement disorders. The entrainment can incite neurophysiological and musculoskeletal changes to enhance motor execution. However, a consensus as to its effects based on gait in people with cerebral palsy is still warranted. A systematic review and meta-analysis were carried out to analyze the effects of rhythmic auditory cueing on spatiotemporal and kinematic parameters of gait in people with cerebral palsy. Systematic identification of published literature was performed adhering to Preferred Reporting Items for Systematic Reviews and Meta-Analyses and American Academy for Cerebral Palsy and Developmental Medicine guidelines, from inception until July 2017, on online databases: Web of Science, PEDro, EBSCO, Medline, Cochrane, Embase and ProQuest. Kinematic and spatiotemporal gait parameters were evaluated in a meta-analysis across studies. Of 547 records, nine studies involving 227 participants (108 children/119 adults) met our inclusion criteria. The qualitative review suggested beneficial effects of rhythmic auditory cueing on gait performance among all included studies. The meta-analysis revealed beneficial effects of rhythmic auditory cueing on gait dynamic index (Hedges' g=0.9), gait velocity (1.1), cadence (0.3), and stride length (0.5). This review for the first time suggests converging evidence toward the application of rhythmic auditory cueing to enhance gait performance and stability in people with cerebral palsy. This article details underlying neurophysiological mechanisms and use of cueing as an efficient home-based intervention. It bridges gaps in the literature, and suggests translational approaches on how rhythmic auditory cueing can be incorporated in rehabilitation approaches to…
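The effect sizes quoted in the meta-analysis above are Hedges' g, i.e. Cohen's d with a small-sample bias correction. A minimal sketch of the standard computation (function name and example numbers are illustrative, not from the review):

```python
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Hedges' g: standardized mean difference with small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp               # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # bias-correction factor J
    return d * j

# Two groups of 20, means 10 vs. 8, SD 2: d = 1.0, g slightly smaller
g = hedges_g(10, 8, 2, 2, 20, 20)
```

The correction factor J shrinks d toward zero, with the shrinkage vanishing as the combined sample size grows.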
Efrati, Adi; Gutfreund, Yoram
The auditory space map in the optic tectum (OT) (also known as superior colliculus in mammals) relies on the tuning of neurons to auditory localization cues that correspond to specific sound source locations. This study investigates the effects of early auditory experiences on the neural representation of binaural auditory localization cues. Young barn owls were raised in continuous omnidirectional broadband noise from before hearing onset to the age of ∼ 65 days. Data from these birds were compared with data from age-matched control owls and from normal adult owls (>200 days). In noise-reared owls, the tuning of tectal neurons for interaural level differences and interaural time differences was broader than in control owls. Moreover, in neurons from noise-reared owls, the interaural level differences tuning was biased towards sounds louder in the contralateral ear. A similar bias appeared, but to a much lesser extent, in age-matched control owls and was absent in adult owls. To follow the recovery process from noise exposure, we continued to survey the neural representations in the OT for an extended period of up to several months after removal of the noise. We report that all the noise-rearing effects tended to recover gradually following exposure to a normal acoustic environment. The results suggest that deprivation from experiencing normal acoustic localization cues disrupts the maturation of the auditory space map in the OT.
Daisy J Mechelmans
Compulsive sexual behaviour (CSB) is relatively common and has been associated with significant distress and psychosocial impairments. CSB has been conceptualized as either an impulse control disorder or a non-substance 'behavioural' addiction. Substance use disorders are commonly associated with attentional biases to drug cues, which are believed to reflect processes of incentive salience. Here we assess male CSB subjects compared to age-matched male healthy controls using a dot probe task to assess attentional bias to sexually explicit cues. We show that compared to healthy volunteers, CSB subjects have enhanced attentional bias to explicit cues but not neutral cues, particularly at early stimulus latencies. Our findings suggest enhanced attentional bias to explicit cues, possibly related to an early orienting attentional response. This finding dovetails with our recent observation that sexually explicit videos were associated with greater activity in a neural network similar to that observed in drug-cue-reactivity studies. Greater desire or wanting, rather than liking, was further associated with activity in this neural network. These studies together provide support for an incentive motivation theory of addiction underlying the aberrant response towards sexual cues in CSB.
Gandemer, Lennie; Parseihian, Gaetan; Kronland-Martinet, Richard; Bourdin, Christophe
It has long been suggested that sound plays a role in the postural control process. Few studies, however, have explored sound and posture interactions. The present paper focuses on the specific impact of audition on posture, seeking to determine the attributes of sound that may be useful for postural purposes. We investigated the postural sway of young, healthy blindfolded subjects in two experiments involving different static auditory environments. In the first experiment, we compared the effect on sway of a simple environment built from three static sound sources in two different rooms: a normal vs. an anechoic room. In the second experiment, the same auditory environment was enriched in various ways, including the ambisonics synthesis of an immersive environment, and subjects stood on two different surfaces: a foam vs. a normal surface. The results of both experiments suggest that the spatial cues provided by sound can be used to improve postural stability. The richer the auditory environment, the better this stabilization. We interpret these results by invoking the “spatial hearing map” theory: listeners build their own mental representation of their surrounding environment, which provides them with spatial landmarks that help them to better stabilize. PMID:28694770
Palmer, Kara K.; Matsuyama, Abigail L.; Irwin, J. Megan; Porter, Jared M.; Robinson, Leah E.
Background and purpose: Attentional focus cues have been shown to impact motor performance of adults and children. Specifically, an external focus of attention results in improved motor learning and performance as compared to adopting an internal focus of attention. The purpose of this study was to determine the effects of an internal and external…
Azarian, Bobby; Esser, Elizabeth G; Peterson, Matthew S
Previous work indicates that threatening facial expressions with averted eye gaze can act as a signal of imminent danger, enhancing attentional orienting in the gazed-at direction. However, this threat-related gaze-cueing effect is only present in individuals reporting high levels of anxiety. The present study used eye tracking to investigate whether additional directional social cues, such as averted angry and fearful human body postures, not only cue attention, but also the eyes. The data show that although body direction did not predict target location, anxious individuals made faster eye movements when fearful or angry postures were facing towards (congruent condition) rather than away (incongruent condition) from peripheral targets. Our results provide evidence for attentional cueing in response to threat-related directional body postures in those with anxiety. This suggests that for such individuals, attention is guided by threatening social stimuli in ways that can influence and bias eye movement behaviour.
Rerko, Laura; Souza, Alessandra S; Oberauer, Klaus
In working memory (WM) tasks, performance can be boosted by directing attention to one memory object: When a retro-cue in the retention interval indicates which object will be tested, responding is faster and more accurate (the retro-cue benefit). We tested whether the retro-cue benefit in WM depends on sustained attention to the cued object by inserting an attention-demanding interruption task between the retro-cue and the memory test. In the first experiment, the interruption task required participants to shift their visual attention away from the cued representation and to a visual classification task on colors. In the second and third experiments, the interruption task required participants to shift their focal attention within WM: Attention was directed away from the cued representation by probing another representation from the memory array prior to probing the cued object. The retro-cue benefit was not attenuated by shifts of perceptual attention or by shifts of attention within WM. We concluded that sustained attention is not needed to maintain the cued representation in a state of heightened accessibility.
An essential step in understanding the processes underlying the general mechanism of perceptual categorization is to identify which portions of a physical stimulation modulate the behavior of our perceptual system. More specifically, in the context of speech comprehension, it is still a major open challenge to understand which information is used to categorize a speech stimulus as one phoneme or another, the auditory primitives relevant for the categorical perception of speech being still unknown. Here we propose to adapt a technique relying on a Generalized Linear Model (GLM) with smoothness priors, already used in the visual domain for the estimation of so-called classification images, to auditory experiments. This statistical model offers a rigorous framework for dealing with non-Gaussian noise, as is often the case in the auditory modality, and limits the amount of noise in the estimated template by enforcing smoother solutions. By applying this technique to a specific two-alternative forced choice experiment between the stimuli ‘aba’ and ‘ada’ in noise with an adaptive SNR, we confirm that the second formant transition is a key cue for classifying phonemes into /b/ or /d/ in noise, and that its estimation by the auditory system is a relative measurement across spectral bands and in relation to the perceived height of the second formant in the preceding syllable. Through this example, we show how the GLM with smoothness priors approach can be applied to the identification of fine functional acoustic cues in speech perception. Finally, we discuss some assumptions of the model in the specific case of speech perception.
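To make the estimator described above concrete: a GLM with a logistic link relates each noisy stimulus to the listener's binary response, and a smoothness prior on the template weights penalizes curvature. The following is an assumed sketch, not the authors' implementation; the function name, hyperparameters, and optimizer are invented for illustration:

```python
import numpy as np

def classification_image(stimuli, responses, smooth_lambda=1.0,
                         lr=0.1, n_iter=500):
    """Estimate a perceptual template via penalized logistic regression.

    stimuli: (n_trials, n_features) noisy stimulus representations
    responses: (n_trials,) binary choices (0/1)
    smooth_lambda: weight of the smoothness prior
    """
    n, d = stimuli.shape
    # Second-difference operator: penalizing ||D w||^2 favors smooth templates
    D = np.diff(np.eye(d), n=2, axis=0)
    P = D.T @ D
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-stimuli @ w))  # logistic link
        # Gradient of log-likelihood minus smoothness penalty
        grad = stimuli.T @ (responses - p) / n - smooth_lambda * P @ w
        w += lr * grad
    return w
```

Gradient ascent here is only a placeholder; in practice the penalty weight would be chosen by cross-validation and a second-order solver would be used, but the structure (logistic GLM plus a quadratic smoothness prior) matches the approach the abstract names.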
Lund, Emily; Schuele, C Melanie
The purpose of this study was to compare types of maternal auditory-visual input about word referents available to children with cochlear implants, children with normal hearing matched for age, and children with normal hearing matched for vocabulary size. Although other works have considered the acoustic qualities of maternal input provided to children with cochlear implants, this study is the first to consider auditory-visual maternal input provided to children with cochlear implants. Participants included 30 mother-child dyads from three groups: children who wore cochlear implants (n = 10 dyads), children matched for chronological age (n = 10 dyads), and children matched for expressive vocabulary size (n = 10 dyads). All participants came from English-speaking families, with the families of children with hearing loss committed to developing listening and spoken language skills (not sign language). All mothers had normal hearing. Mother-child interactions were video recorded during mealtimes in the home. Each dyad participated in two mealtime observations. Maternal utterances were transcribed and coded for (a) nouns produced, (b) child-directed utterances, (c) nouns unknown to children per maternal report, and (d) auditory and visual cues provided about referents for unknown nouns. Auditory and visual cues were coded as either converging, diverging, or auditory-only. Mothers of children with cochlear implants provided percentages of converging and diverging cues that were similar to the percentages of mothers of children matched for chronological age. Mothers of children matched for vocabulary size, on the other hand, provided a higher percentage of converging auditory-visual cues and lower percentage of diverging cues than did mothers of children with cochlear implants. Groups did not differ in provision of auditory-only cues. The present study represents the first step toward identification of environmental input characteristics that may affect lexical learning
Stuart, Samuel; Lord, Sue; Galna, Brook; Rochester, Lynn
Gait impairment is a core feature of Parkinson's disease (PD) with implications for falls risk. Visual cues improve gait in PD, but the underlying mechanisms are unclear. Evidence suggests that attention and vision play an important role; however, the relative contribution from each is unclear. Measurement of visual exploration (specifically saccade frequency) during gait allows for real-time measurement of attention and vision. Understanding how visual cues influence visual exploration may allow inferences about the underlying response mechanisms, which could help to develop effective therapeutics. This study aimed to examine saccade frequency during gait in response to a visual cue in PD and older adults and investigate the roles of attention and vision in visual cue response in PD. A mobile eye-tracker measured saccade frequency during gait in 55 people with PD and 32 age-matched controls. Participants walked in a straight line with and without a visual cue (50 cm transverse lines) presented under single-task and dual-task (concurrent digit span recall) conditions. Saccade frequency was reduced when walking in PD compared to controls; however, visual cues ameliorated this saccadic deficit. Visual cues significantly increased saccade frequency in both PD and controls under both single-task and dual-task conditions. Attention rather than visual function was central to saccade frequency and gait response to visual cues in PD. In conclusion, this study highlights the impact of visual cues on visual exploration when walking and the important role of attention in PD. Understanding these complex features will help inform intervention development. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
The ability to select sound streams from background noise becomes challenging with age, even with normal peripheral auditory functioning. Reduced stream segregation ability has been reported in older compared to younger adults. However, the reason why there is a difference is still unknown. The current study investigated the hypothesis that automatic sound processing is impaired with aging, which then contributes to difficulty actively selecting subsets of sounds in noisy environments. We presented a simple intensity oddball sequence in various conditions with irrelevant background sounds while recording EEG. The ability to detect the oddball tones was dependent on the ability to automatically or actively segregate the sounds to frequency streams. Listeners were able to actively segregate sounds to perform the loudness detection task, but there was no indication of automatic segregation of background sounds while watching a movie. Thus, our results indicate impaired automatic processing in aging, which may explain more effortful listening and a greater tax on attentional systems when selecting sound streams in noisy environments.
de Jong, Ritske; Toffanin, Paolo; Harbers, Marten; Martens, Sander
Frequency tagging has often been used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were
This paper reviews the literature and reports on the current state of knowledge regarding the potential for managers to use visual (VC), auditory (AC), and olfactory (OC) cues to manage foraging behavior and spatial distribution of rangeland livestock. We present evidence that free-ranging livestock...
Feenstra, M. G.; Vogel, M.; Botterblom, M. H.; Joosten, R. N.; de Bruin, J. P.
We used bilateral microdialysis in the medial prefrontal cortex (PFC) of awake, freely moving rats to study aversive conditioning to an auditory cue in the controlled environment of the Skinner box. The presentation of the explicit conditioned stimuli (CS), previously associated with foot shocks,
Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier
In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims to decipher the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights into the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.
Tünnermann, Jan; Scharlau, Ingrid
Peripheral visual cues lead to large shifts in psychometric distributions of temporal-order judgments. In one view, such shifts are attributed to attention speeding up processing of the cued stimulus, so-called prior entry. However, sometimes these shifts are so large that it is unlikely that they are caused by attention alone. Here we tested the prevalent alternative explanation that the cue is sometimes confused with the target on a perceptual level, bolstering the shift of the psychometric function. We applied a novel model of cued temporal-order judgments, derived from Bundesen's Theory of Visual Attention. We found that cue-target confusions indeed contribute to shifting psychometric functions. However, cue-induced changes in the processing rates of the target stimuli play an important role, too. At smaller cueing intervals, the cue increased the processing speed of the target. At larger intervals, inhibition of return was predominant. Earlier studies of cued TOJs were insensitive to these effects because in psychometric distributions they are concealed by the conjoint effects of cue-target confusions and processing rate changes.
Khan, Aarlenne Z.; Heinen, Stephen J.; McPeek, Robert M.
Presenting a behaviorally irrelevant cue shortly before a target at the same location decreases the latencies of saccades to the target, a phenomenon known as exogenous attention facilitation. It remains unclear whether exogenous attention interacts with early, sensory stages or later, motor planning stages of saccade production. To distinguish between these alternatives, we used a saccadic adaptation paradigm to dissociate the location of the visual target from the saccade goal. 3 male and 4 female human subjects performed both control trials, in which saccades were made to one of two target eccentricities, and adaptation trials, in which the target was shifted from one location to the other during the saccade. This manipulation adapted saccades so that they eventually were directed to the shifted location. In both conditions, a behaviorally irrelevant cue was flashed 66.7 ms before target appearance at a randomly selected one of seven positions that included the two target locations. In control trials, saccade latencies were shortest when the cue was presented at the target location and increased with cue-target distance. In contrast, adapted saccade latencies were shortest when the cue was presented at the adapted saccade goal, and not at the visual target location. The dynamics of adapted saccades were also altered, consistent with prior adaptation studies, except when the cue was flashed at the saccade goal. Overall, the results suggest that attentional cueing facilitates saccade planning rather than visual processing of the target. PMID:20410101
Sonuga-Barke, Edmund J. S.; De Houwer, Jan; De Ruiter, Karen; Ajzenstzen, Michal; Holland, Sarah
Background: The selective attention of children with attention deficit/hyperactivity disorder (AD/HD) to briefly exposed delay-related cues was examined in two experiments using a dot-probe conditioning paradigm. Method: Colour cues were paired with negatively (i.e., imposition of delay) and positively valenced cues (i.e., escape from or avoidance…
It has been observed that time series of gait parameters (stride length (SL), stride time (ST) and stride speed (SS)) exhibit long-term persistence and fractal-like properties. Synchronizing steps with rhythmic auditory stimuli modifies the persistent fluctuation pattern to anti-persistence. Another nonlinear method estimates the degree of resilience of gait control to small perturbations, i.e. the local dynamic stability (LDS). The method makes use of the maximal Lyapunov exponent, which estimates how fast a nonlinear system embedded in a reconstructed state space (attractor) diverges after an infinitesimal perturbation. We propose to use an instrumented treadmill to simultaneously measure basic gait parameters (time series of SL, ST and SS, from which the statistical persistence among consecutive strides can be assessed) and the trajectory of the center of pressure (from which the LDS can be estimated). In 20 healthy participants, the response to rhythmic auditory cueing (RAC) of LDS and of statistical persistence (assessed with detrended fluctuation analysis (DFA)) was compared. By analyzing the divergence curves, we observed that long-term LDS (computed as the reverse of the average logarithmic rate of divergence between the 4th and the 10th strides downstream from nearest neighbors in the reconstructed attractor) was strongly enhanced (relative change +47%). That is likely the indication of a more dampened dynamics. The change in short-term LDS (divergence over one step) was smaller (+3%). DFA results (scaling exponents) confirmed an anti-persistent pattern in ST, SL and SS. Long-term LDS (but not short-term LDS) and scaling exponents exhibited a significant correlation between them (r=0.7). Both phenomena probably result from the more conscious/voluntary gait control that is required by RAC. We suggest that LDS and statistical persistence should be used to evaluate the efficiency of cueing therapy in patients with neurological gait disorders.
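The detrended fluctuation analysis mentioned in this abstract has a standard form: integrate the mean-centered series, detrend it within windows of increasing size, and read the scaling exponent off the log-log slope of fluctuation versus window size. A minimal sketch of that generic procedure (not the authors' code; the window sizes below are illustrative assumptions):

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis of a 1-D series.

    Returns the scaling exponent alpha: alpha < 0.5 suggests
    anti-persistence, alpha ~ 0.5 uncorrelated noise, alpha > 0.5
    long-term persistence.
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())  # integrated profile
    flucts = []
    for n in scales:
        n_seg = len(y) // n
        f2 = []
        for i in range(n_seg):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)  # linear detrend per window
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # Scaling exponent = slope of log F(n) vs. log n
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha
```

Applied to stride-time series, an exponent drifting below 0.5 under rhythmic auditory cueing would reflect the anti-persistent pattern the study reports.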
Marotta, Andrea; Martella, Diana; Maccari, Lisa; Sebastiani, Mara; Casagrande, Maria
Behaviour and neuroimaging studies have shown that poor vigilance (PV) due to sleep deprivation (SD) negatively affects exogenously cued selective attention. In the current study, we assessed the impact of PV due to both partial SD and night-time hours on reflexive attentional orienting triggered by central un-informative eye-gaze and arrow cues. Subjective mood and interference performance in an emotional Stroop task were also investigated. Twenty healthy participants performed spatial cueing tasks using a central directional arrow and eye-gaze as a cue to orient attention. The target was a word written in different coloured inks. The participant's task was to identify the colour of the ink while ignoring the semantic content of the word (with negative or neutral emotional valence). The experiment took place on 2 days. On the first day, each participant performed a 10-min training session of the spatial cueing task. On the second day, half of the participants performed the task once at 4:30 p.m. (BSL) and once at 6:30 a.m. (PV), whereas the other half performed the task in the reversed order. Results showed that mean reaction times on the spatial cueing tasks were worsened by PV, although the gaze paradigm was more resistant to this effect than the arrow paradigm. Moreover, PV negatively affected attentional orienting triggered by both central un-informative gaze and arrow cues. Finally, prolonged wakefulness affected self-reported mood but did not influence interference control in the emotional Stroop task.
Jongen, Ellen M M; Smulders, Fren T Y; Van der Heiden, Joep S H
Two spatial cueing experiments were conducted to examine the functional significance of lateralized ERP components after cue-onset and to discriminate components related to sensory cue aspects and components related to the direction of attention. In Experiment 1, a simple detection task was presented. In Experiment 2, attentional selection was augmented. Two unimodal visual cueing tasks were presented using nonlateralized line cues and lateralized arrow cues. Lateralized cue effects and modulation after stimulus onset were stronger in Experiment 2. An early posterior component was related to the physical shape of arrows. A posterior negativity (EDAN) may be related to the encoding of direction from arrow cues. An anterior negativity (ADAN) and a posterior positivity (LDAP) were related to the direction of attention. The ADAN was delayed when it was more difficult to derive cue meaning. Finally, the data suggested an overlap of the LDAP and the EDAN.
Passamonti, L. (Luca); M. Luijten (Maartje); Ziauddeen, H.; I. Coyle-Gilchrist (Ian); Rittman, T.; Brain, S.A.E.; Regenthal, R.; I.H.A. Franken (Ingmar); Sahakian, B.J.; Bullmore, E.T.; Robbins, T.W.; Ersche, K.D.
Rationale: Biased attention towards drug-related cues and reduced inhibitory control over the regulation of drug-intake characterize drug addiction. The noradrenaline system has been critically implicated in both attentional and response inhibitory processes and is directly affected by
Simone P.W. Haller
Social anxiety is associated with a bias in interpreting social cues; a cognitive bias that is also influenced by attentional deployment. This study contributes to our understanding of the possible attention mechanisms that shape cognitions relevant to social anxiety in this at-risk age group.
Batson, Glenna; Hugenschmidt, Christina E; Soriano, Christina T
Dance is a non-pharmacological intervention that helps maintain functional independence and quality of life in people with Parkinson's disease (PPD). Results from controlled studies on group-delivered dance for people with mild-to-moderate stage Parkinson's have shown statistically and clinically significant improvements in gait, balance, and psychosocial factors. Tested interventions include non-partnered dance forms (ballet and modern dance) and partnered (tango). In all of these dance forms, specific movement patterns initially are learned through repetition and performed in time to music. Once the basic steps are mastered, students may be encouraged to improvise on the learned steps as they perform them in rhythm with the music. Here, we summarize a method of teaching improvisational dance that advances previously reported benefits of dance for people with Parkinson's disease (PD). The method relies primarily on improvisational verbal auditory cueing with less emphasis on directed movement instruction. This method builds on the idea that daily living requires flexible, adaptive responses to real-life challenges. In PD, movement disorders not only limit mobility but also impair spontaneity of thought and action. Dance improvisation demands open and immediate interpretation of verbally delivered movement cues, potentially fostering the formation of spontaneous movement strategies. Here, we present an introduction to a proposed method, detailing its methodological specifics, and pointing to future directions. The viewpoint advances an embodied cognitive approach that has ecological validity in helping PPD meet the changing demands of daily living.
Zeamer, Charlotte; Fox Tree, Jean E.
Literature on auditory distraction has generally focused on the effects of particular kinds of sounds on attention to target stimuli. In support of extensive previous findings that have demonstrated the special role of language as an auditory distractor, we found that a concurrent speech stream impaired recall of a short lecture, especially for…
Eckstein, Miguel P; Pham, Binh T; Shimozaki, Steven S
Human performance during visual search typically improves when spatial cues indicate the possible target locations. In many instances, the performance improvement is quantitatively predicted by a Bayesian or quasi-Bayesian observer in which visual attention simply selects the information at the cued locations without changing the quality of processing or sensitivity and ignores the information at the uncued locations. Aside from the general good agreement between the effect of the cue on model and human performance, there has been little independent confirmation that humans are effectively selecting the relevant information. In this study, we used the classification image technique to assess the effectiveness of spatial cues in the attentional selection of relevant locations and suppression of irrelevant locations indicated by spatial cues. Observers searched for a bright target among dimmer distractors that might appear (with 50% probability) in one of eight locations in visual white noise. The possible target location was indicated using a 100% valid box cue or seven 100% invalid box cues in which the only potential target location was uncued. For both conditions, we found statistically significant perceptual templates shaped as differences of Gaussians at the relevant locations, with no perceptual templates at the irrelevant locations. We did not find statistically significant differences between the shapes of the inferred perceptual templates for the 100% valid and 100% invalid cue conditions. The results confirm the idea that during search visual attention allows the observer to effectively select relevant information and ignore irrelevant information. The results for the 100% invalid cues condition suggest that the selection process is not drawn automatically to the cue but can be under the observers' voluntary control.
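The classification image technique named in this abstract works by averaging the stimulus noise conditioned on the observer's responses: pixels that push responses toward "yes" emerge as a positive template. A toy one-dimensional simulation of that generic logic (the linear observer, Gaussian template, and trial counts below are illustrative assumptions, not the study's parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_pix = 4000, 64
# Hypothetical internal template: a Gaussian bump over 64 "pixels"
template = np.exp(-0.5 * ((np.arange(n_pix) - 32) / 4) ** 2)
signal = 0.5 * template  # target added on half of the trials

present = rng.random(n_trials) < 0.5
noise = rng.standard_normal((n_trials, n_pix))
stim = noise + np.where(present[:, None], signal, 0.0)

# Simulated linear observer: respond "yes" when the template match
# exceeds a criterion halfway between the two stimulus classes
resp_yes = stim @ template > (signal @ template) / 2

# Classification image: mean noise on "yes" trials minus "no" trials;
# for a linear observer this recovers the shape of its template
ci = noise[resp_yes].mean(axis=0) - noise[~resp_yes].mean(axis=0)
```

With enough trials the recovered `ci` correlates strongly with the observer's template, which is the sense in which the study could compare inferred perceptual templates across cue conditions.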
Boyd, Alan W; Whitmer, William M; Soraghan, John J; Akeroyd, Michael A
Hearing-aid wearers have reported sound source locations as being perceptually internalized (i.e., inside their head). The contribution of hearing-aid design to internalization has, however, received little attention. This experiment compared the sensitivity of hearing-impaired (HI) and normal-hearing listeners to externalization cues when listening with their own ears and simulated behind-the-ear hearing-aids in increasingly complex listening situations and reduced pinna cues. Participants rated the degree of externalization using a multiple-stimulus listening test for mixes of internalized and externalized speech stimuli presented over headphones. The results showed that HI listeners had a contracted perception of externalization correlated with high-frequency hearing loss. © 2012 Acoustical Society of America
Lochbuehler, Kirsten; Wileyto, E Paul; Tang, Kathy Z; Mercincavage, Melissa; Cappella, Joseph N; Strasser, Andrew A
The similarity of e-cigarettes to tobacco cigarettes with regard to shape and usage raises the question of whether e-cigarette cues have the same incentive motivational properties as tobacco cigarette cues. The objective of the present study was to examine whether e-cigarette cues capture and hold smokers' and former smokers' attention and whether the attentional focus is associated with subsequent craving for tobacco cigarettes. It was also examined whether device type (cigalike or mod) moderated this relationship. Participants (46 current daily smokers, 38 former smokers, 48 non-smokers) were randomly assigned to a device type condition in which their eye-movements were assessed while completing a visual probe task. Craving was assessed before and after the task. Smokers, but not former or non-smokers, maintained their gaze longer on e-cigarette than on neutral pictures (p = 0.004). No difference in dwell time was found between device types. None of the smoking status groups showed faster initial fixations or faster reaction times to e-cigarette compared with neutral cues. Baseline craving was associated with dwell time on e-cigarette cues (p = 0.004). Longer dwell time on e-cigarette cues was associated with more favorable attitudes towards e-cigarettes. These findings indicate that e-cigarette cues may contribute to craving for tobacco cigarettes and suggest the potential regulation of e-cigarette marketing.
Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun
Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback during ongoing vocalization. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Hepworth, Rebecca; Mogg, Karin; Brignell, Catherine; Bradley, Brendan P
Following negative reinforcement and affect-regulation models of dysfunctional appetitive motivation, this study examined the effect of negative mood on objective and subjective cognitive indices of motivation for food; i.e., attentional bias for food cues and self-reported hunger/urge to eat, respectively. The study extended previous research on the effect of mood on food motivation by using (i) an experimental mood manipulation, (ii) an established index of attentional bias from the visual-probe task and (iii) pictorial food cues, which have greater ecological validity than word stimuli. Young female adults (n=80) were randomly allocated to a neutral or negative mood induction procedure. Attentional biases were assessed at two cue exposure durations (500 and 2000ms). Results showed that negative mood increased both attentional bias for food cues and subjective appetite. Attentional bias and subjective appetite were positively inter-correlated, suggesting a common mechanism, i.e. activation of the food-reward system. Attentional bias was also associated with trait eating style, such as external and restrained eating. Thus, current mood and trait eating style each influenced motivation for food (as reflected by subjective appetite and attentional bias). Findings relate to models of cognitive mechanisms underlying normal and dysfunctional appetitive motivation and eating behaviour. 2009 Elsevier Ltd. All rights reserved.
Perona-Garcelán, Salvador; Carrascoso-López, Francisco; García-Montes, José M; Vallina-Fernández, Oscar; Pérez-Álvarez, Marino; Ductor-Recuerda, María Jesús; Salas-Azcona, Rosario; Cuevas-Yust, Carlos; Gómez-Gómez, María Teresa
The purpose of this work was to study the potentially mediating role of certain dissociative factors, such as depersonalization, between self-focused attention and auditory hallucinations. A total of 59 patients diagnosed with schizophrenic disorder completed a self-focused attention scale (M. F. Scheier & C. S. Carver, 1985), the Cambridge Depersonalization Scale (M. Sierra & G. E. Berrios, 2000), and the hallucination and delusion items on the Positive and Negative Syndrome Scale (S. R. Kay, L. A. Opler, & J. P. Lindenmayer, 1988). The results showed that self-focused attention correlated positively with auditory hallucinations, with delusions, and with depersonalization. It was also demonstrated that depersonalization has a mediating role between self-focused attention and auditory hallucinations but not delusions. In the discussion, the importance of dissociative processes in understanding the formation and maintenance of auditory hallucinations is suggested.
Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina
Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.
Selective auditory attention is essential for human listeners to be able to communicate in multi-source environments. Selective attention is known to modulate the neural representation of the auditory scene, boosting the representation of a target sound relative to the background, but the strength of this modulation, and the mechanisms contributing to it, are not well understood. Here, listeners performed a behavioral experiment demanding sustained, focused spatial auditory attention while we measured cortical responses using electroencephalography (EEG). We presented three concurrent melodic streams; listeners were asked to attend and analyze the melodic contour of one of the streams, randomly selected from trial to trial. In a control task, listeners heard the same sound mixtures, but performed the contour judgment task on a series of visual arrows, ignoring all auditory streams. We found that the cortical responses could be fit as a weighted sum of event-related potentials evoked by the stimulus onsets in the competing streams. The weighting of a given stream was roughly 10 dB higher when it was attended compared to when another auditory stream was attended; during the visual task, the auditory gains were intermediate. We then used a template-matching classification scheme to classify single-trial EEG results. We found that in all subjects, we could determine which stream the subject was attending significantly better than by chance. By directly quantifying the effect of selective attention on auditory cortical responses, these results reveal that focused auditory attention both suppresses the response to an unattended stream and enhances the response to an attended stream. The single-trial classification results add to the growing body of literature suggesting that auditory attentional modulation is sufficiently robust that it could be used as a control mechanism in brain-computer interfaces.
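Template-matching classification of the kind this abstract describes can be sketched generically: average the training trials for each attention condition to form templates, then assign each test trial to the template it correlates with best. The synthetic "ERP" waveforms, noise level, and Pearson-correlation metric below are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
erp_a = np.sin(2 * np.pi * 3 * t)  # stand-in response when attending stream A
erp_b = np.sin(2 * np.pi * 5 * t)  # stand-in response when attending stream B

def make_trials(erp, n):
    # Each simulated trial is the condition's waveform plus sensor noise
    return erp + 0.8 * rng.standard_normal((n, len(erp)))

train_a, train_b = make_trials(erp_a, 40), make_trials(erp_b, 40)
test = np.vstack([make_trials(erp_a, 20), make_trials(erp_b, 20)])
labels = np.array([0] * 20 + [1] * 20)

# Templates: mean training response per attention condition
tmpl = np.vstack([train_a.mean(axis=0), train_b.mean(axis=0)])

def classify(trial):
    # Label = index of the template with the higher Pearson correlation
    r = [np.corrcoef(trial, tm)[0, 1] for tm in tmpl]
    return int(r[1] > r[0])

pred = np.array([classify(tr) for tr in test])
accuracy = (pred == labels).mean()
```

Above-chance `accuracy` on held-out trials is the criterion by which the study could conclude that single-trial EEG reveals which stream a listener attended.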
Westbrook, Cecilia; Creswell, John David; Tabibnia, Golnaz; Julson, Erica; Kober, Hedy; Tindle, Hilary A.
An emerging body of research suggests that mindfulness-based interventions may be beneficial for smoking cessation and the treatment of other addictive disorders. One way that mindfulness may facilitate smoking cessation is through the reduction of craving to smoking cues. The present work considers whether mindful attention can reduce self-reported and neural markers of cue-induced craving in treatment seeking smokers. Forty-seven (n = 47) meditation-naïve treatment-seeking smokers (12-h abstinent from smoking) viewed and made ratings of smoking and neutral images while undergoing functional magnetic resonance imaging (fMRI). Participants were trained and instructed to view these images passively or with mindful attention. Results indicated that mindful attention reduced self-reported craving to smoking images, and reduced neural activity in a craving-related region of subgenual anterior cingulate cortex (sgACC). Moreover, a psychophysiological interaction analysis revealed that mindful attention reduced functional connectivity between sgACC and other craving-related regions compared to passively viewing smoking images, suggesting that mindfulness may decouple craving neurocircuitry when viewing smoking cues. These results provide an initial indication that mindful attention may describe a ‘bottom-up’ attention to one’s present moment experience in ways that can help reduce subjective and neural reactivity to smoking cues in smokers. PMID:22114078
Primates live in complex social groups and rely on social cues to direct their attention. For example, primates react faster to an unpredictable stimulus after seeing a conspecific looking in the direction of that stimulus. In the current study we tested the specificity of facial cues (gaze direction) for orienting attention and their interaction with other cues that are known to guide attention. In particular, we tested whether macaque monkeys only respond to gaze cues from conspecifics or if the effect generalizes across species. We found an attentional advantage of conspecific faces over human and cartoon faces. Because gaze cues are often conveyed by gesture, we also explored the effect of image motion (a simulated glance) on the orienting of attention in monkeys. We found that the simulated glance did not significantly enhance the speed of orienting for monkey face stimuli, but had a significant effect for images of human faces. Finally, because gaze cues presumably guide attention towards relevant or rewarding stimuli, we explored whether orienting of attention was modulated by reward predictiveness. When the cue predicted reward location, face and non-face cues were effective in speeding responses towards the cued location. This effect was strongest for conspecific faces. In sum, our results suggest that while conspecific gaze cues activate an intrinsic process that reflexively directs spatial attention, their effect is relatively small in comparison to other features including motion and reward predictiveness. It is possible that gaze cues are more important for decision-making and voluntary orienting than for reflexive orienting.
Strait, Dana L; Slater, Jessica; O'Connell, Samantha; Kraus, Nina
Selective attention decreases trial-to-trial variability in cortical auditory-evoked activity. This effect increases over the course of maturation, potentially reflecting the gradual development of selective attention and inhibitory control. Work in adults indicates that music training may alter the development of this neural response characteristic, especially over brain regions associated with executive control: in adult musicians, attention decreases variability in auditory-evoked responses recorded over prefrontal cortex to a greater extent than in nonmusicians. We aimed to determine whether this musician-associated effect emerges during childhood, when selective attention and inhibitory control are under development. We compared cortical auditory-evoked variability to attended and ignored speech streams in musicians and nonmusicians across three age groups: preschoolers, school-aged children and young adults. Results reveal that childhood music training is associated with reduced auditory-evoked response variability recorded over prefrontal cortex during selective auditory attention in school-aged child and adult musicians. Preschoolers, on the other hand, demonstrate no impact of selective attention on cortical response variability and no musician distinctions. This finding is consistent with the gradual emergence of attention during this period and may suggest no pre-existing differences in this attention-related cortical metric between children who undergo music training and those who do not. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
To determine the effect of reducing spatial uncertainty by attentional cueing on contrast sensitivity at a range of spatial locations and with different stimulus sizes. Six observers underwent perimetric testing with the Humphrey Visual Field Analyzer (HFA) full threshold paradigm, and the output thresholds were compared to conditions where stimulus location was verbally cued to the observer. We varied the number of points cued, the eccentric and spatial location, and stimulus size (Goldmann sizes I, III and V). Subsequently, four observers underwent laboratory-based psychophysical testing on a custom computer program using the Method of Constant Stimuli to determine frequency-of-seeing (FOS) curves with similar variables. We found that attentional cueing increased contrast sensitivity when measured using the HFA. We report a difference of approximately 2 dB with size I at peripheral and mid-peripheral testing locations. For size III, cueing had a greater effect for points presented in the periphery than in the mid-periphery. There was an exponential decay of the effect of cueing with increasing number of elements cued. Cueing a size V stimulus led to no change. FOS curves generated from laboratory-based psychophysical testing confirmed an increase in contrast detection sensitivity under the same conditions. We found that the FOS curve steepened when spatial uncertainty was reduced. We show that attentional cueing increases contrast sensitivity when using a size I or size III test stimulus on the HFA when up to 8 points are cued but not when a size V stimulus is cued. We show that this cueing also alters the slope of the FOS curve. This suggests that at least 8 points should be used to minimise potential attentional factors that may affect measurement of contrast sensitivity in the visual field.
Motivational objects attract attention due to their rewarding properties, but less is known about the role that top-down cognitive processes play in the attention paid to motivationally relevant objects and how this is affected by relevant behaviour traits. Here we assess how thinking about food affects attentional guidance to food items and how this is modulated by traits relating to dietary self-control. Participants completed two tasks in which they were presented with an initial cue (food or non-food) to either hold in working memory (memory task) or to merely attend to (priming task). Holding food items in working memory strongly affected attention when the memorized cue re-appeared in the search display. Tendency towards disinhibited eating was associated with greater attention to food versus non-food pictures in both the priming and working memory tasks, consistent with greater attention to food cues per se. Successful dieters, defined as those high in dietary restraint and low in tendency to disinhibition, showed reduced attention to food when holding food-related information in working memory. These data suggest a strong top-down effect of thinking about food on attention to food items and indicate that the suppression of food items in working memory could be a marker of dieting success.
Victorino, Kristen R.; Schwartz, Richard G.
Purpose: Children with specific language impairment (SLI) appear to demonstrate deficits in attention and its control. Selective attention involves the cognitive control of attention directed toward a relevant stimulus and simultaneous inhibition of attention toward irrelevant stimuli. The current study examined attention control during a…
de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier
Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.
Seiss, Ellen; Driver, Jon; Eimer, Martin
We used ERP measures to investigate how attentional filtering requirements affect preparatory attentional control and spatially selective visual processing. In a spatial cueing experiment, attentional filtering demands were manipulated by presenting task-relevant visual stimuli either in isolation (target-only task) or together with irrelevant adjacent distractors (target-plus-distractors task). ERPs were recorded in response to informative spatial precues, and in response to subsequent visual stimuli at attended and unattended locations. The preparatory ADAN component elicited during the cue-target interval was larger and more sustained in the target-plus-distractors task, reflecting the demand of stronger attentional filtering. By contrast, two other preparatory lateralised components (EDAN and LDAP) were unaffected by the attentional filtering demand. Similar enhancements of P1 and N1 components in response to the lateral imperative visual stimuli were observed at cued versus uncued locations, regardless of filtering demand, whereas later attention-related negativities beyond 200 ms post-stimulus were larger in the target-plus-distractors task. Our results indicate that the ADAN component is linked to preparatory top-down control processes involved in the attentional filtering of irrelevant distractors; such filtering also affects later attention-related negativities recorded after the onset of the imperative stimulus. ERPs can thus reveal effects of expected attentional filtering of irrelevant distractors on preparatory attentional control processes and spatially selective visual processing.
Wegrzyn, Martin; Herbert, Cornelia; Ethofer, Thomas; Flaisch, Tobias; Kissler, Johanna
Visually presented emotional words are processed preferentially and effects of emotional content are similar to those of explicit attention deployment in that both amplify visual processing. However, auditory processing of emotional words is less well characterized and interactions between emotional content and task-induced attention have not been fully understood. Here, we investigate auditory processing of emotional words, focussing on how auditory attention to positive and negative words impacts their cerebral processing. A functional magnetic resonance imaging (fMRI) study manipulating word valence and attention allocation was performed. Participants heard negative, positive and neutral words to which they either listened passively or attended by counting negative or positive words, respectively. Regardless of valence, active processing compared to passive listening increased activity in primary auditory cortex, left intraparietal sulcus, and right superior frontal gyrus (SFG). The attended valence elicited stronger activity in left inferior frontal gyrus (IFG) and left SFG, in line with these regions' role in semantic retrieval and evaluative processing. No evidence for valence-specific attentional modulation in auditory regions or distinct valence-specific regional activations (i.e., negative > positive or positive > negative) was obtained. Thus, allocation of auditory attention to positive and negative words can substantially increase their processing in higher-order language and evaluative brain areas without modulating early stages of auditory processing. Inferior and superior frontal brain structures mediate interactions between emotional content, attention, and working memory when prosodically neutral speech is processed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Posavac, Heidi D.; Sheridan, Susan M.; Posavac, Steven S.
Tests the efficacy of a cueing procedure for improving the impulse regulation of four boys with Attention Deficit Hyperactivity Disorder (ADHD) during social skills training. Behavioral data suggested that all subjects demonstrated positive changes in impulse regulation. Likewise, the treatment effects appeared to have produced positive effects on…
B.R. Bocanegra (Bruno); R. Zeelenberg (René)
In the present study, we demonstrated that the emotional significance of a spatial cue enhances the effect of covert attention on spatial and temporal resolution (i.e., our ability to discriminate small spatial details and fast temporal flicker). Our results indicated that fearful face…
de Koning, Bjorn B.; Tabbers, Huib K.; Rikers, Remy M. J. P.; Paas, Fred
This study investigated whether learners construct more accurate mental representations from animations when instructional explanations are provided via narration than when learners attempt to infer functional relations from the animation through self-explaining. Effects of attention guidance by means of cueing were also investigated. Psychology…
Möttönen, Riikka; van de Ven, Gido M; Watkins, Kate E
The earliest stages of cortical processing of speech sounds take place in the auditory cortex. Transcranial magnetic stimulation (TMS) studies have provided evidence that the human articulatory motor cortex also contributes to speech processing. For example, stimulation of the motor lip representation specifically influences discrimination of lip-articulated speech sounds. However, the timing of the neural mechanisms underlying these articulator-specific motor contributions to speech processing is unknown. Furthermore, it is unclear whether they depend on attention. Here, we used magnetoencephalography and TMS to investigate the effect of attention on specificity and timing of interactions between the auditory and motor cortex during processing of speech sounds. We found that TMS-induced disruption of the motor lip representation specifically modulated the early auditory-cortex responses to lip-articulated speech sounds when they were attended. These articulator-specific modulations were left-lateralized and remarkably early, occurring 60-100 ms after sound onset. When speech sounds were ignored, the effect of this motor disruption on auditory-cortex responses was nonspecific and bilateral, and it started later, 170 ms after sound onset. The findings indicate that articulatory motor cortex can contribute to auditory processing of speech sounds even in the absence of behavioral tasks and when the sounds are not in the focus of attention. Importantly, the findings also show that attention can selectively facilitate the interaction of the auditory cortex with specific articulator representations during speech processing.
Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P; Ahlfors, Seppo P; Huang, Samantha; Lin, Fa-Hsuan; Raij, Tommi; Sams, Mikko; Vasios, Christos E; Belliveau, John W
How can we concentrate on relevant sounds in noisy environments? A "gain model" suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A "tuning model" suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for "frequency tagging" of attention effects on maskers. Noise masking reduced early (50-150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50-150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise.
Vachon, François; Labonté, Katherine; Marsh, John E.
The occurrence of an unexpected, infrequent sound in an otherwise homogeneous auditory background tends to disrupt the ongoing cognitive task. This "deviation effect" is typically explained in terms of attentional capture whereby the deviant sound draws attention away from the focal activity, regardless of the nature of this activity.…
Murphy, Cristina Ferraz Borges; Zachi, Elaine Cristina; Roque, Daniela Tsubota; Ventura, Dora Selma Fix; Schochat, Eliane
To investigate the existence of correlations between the performance of children in auditory temporal tests (Frequency Pattern and Gaps in Noise--GIN) and IQ, attention, memory and age measurements. Fifteen typically developing children between 7 and 12 years of age with normal hearing participated in the study. Auditory temporal processing tests (GIN and Frequency Pattern), a memory test (Digit Span), attention tests (auditory and visual modality) and an intelligence test (RAVEN Progressive Matrices) were applied. A significant and positive correlation, considered good, was found between the Frequency Pattern test and age; no correlation was found between the GIN test and the variables tested. Auditory temporal skills thus seem to be influenced by different factors: while performance in temporal ordering appears to be influenced by maturational processes, performance in temporal resolution was not influenced by any of the aspects investigated.
Chermak, G D; Hall, J W; Musiek, F E
Children diagnosed with attention deficit hyperactivity disorder (ADHD) frequently present difficulties performing tasks that challenge the central auditory nervous system. The relationship between ADHD and central auditory processing disorder (CAPD) is examined from the perspectives of cognitive neuroscience, audiology, and neuropsychology. The accumulating evidence provides a basis for the overlapping clinical profiles yet differentiates CAPD and ADHD as clinically distinct entities. Common and distinctive management strategies are outlined.
Doolan, Katy J; Breslin, Gavin; Hanna, Donncha; Murphy, Kate; Gallagher, Alison M
Based on the theory of incentive sensitization, the aim of this study was to investigate differences in attentional processing of food-related visual cues between normal-weight and overweight/obese males and females. Twenty-six normal-weight (14M, 12F) and 26 overweight/obese (14M, 12F) adults completed a visual probe task and an eye-tracking paradigm. Reaction times and eye movements to food and control images were collected during both a fasted and fed condition in a counterbalanced design. Participants had greater visual attention towards high-energy-density food images compared to low-energy-density food images regardless of hunger condition. This was most pronounced in overweight/obese males, who had significantly greater maintained attention towards high-energy-density food images when compared with their normal-weight counterparts; however, no between-group differences were observed for female participants. High-energy-density food images appear to capture visual attention more readily than low-energy-density food images. Results also suggest the possibility of an altered visual food cue-associated reward system in overweight/obese males. Attentional processing of food cues may play a role in eating behaviors thus should be taken into consideration as part of an integrated approach to curbing obesity. © 2014 The Obesity Society.
Background and Aim: Sustained attention refers to the ability to maintain attention to target stimuli over a sustained period of time. This study was conducted to develop a Persian version of the sustained auditory attention capacity test and to study its results in normal children. Methods: To develop the Persian version of the sustained auditory attention capacity test, speech stimuli were used, as in the original version. The speech stimuli consisted of one hundred monosyllabic words, formed by randomly ordering and repeating the words of a 21-word monosyllabic list. The test was carried out at a comfortable hearing level using binaural, diotic presentation on 46 normal children of 7 to 11 years of age of both genders. Results: There was a significant effect of age on the average impulsiveness error score (p=0.004) and on the total score of the sustained auditory attention capacity test (p=0.005). No significant age difference was revealed for the average inattention error score or the attention reduction span index. Gender did not have a significant impact on any of the test indices. Conclusion: The results of this test in a group of normal-hearing children confirmed its ability to measure sustained auditory attention capacity through speech stimuli.
Kuhns, Anna B; Dombert, Pascasie L; Mengotti, Paola; Fink, Gereon R; Vossel, Simone
Predictions about upcoming events influence how we perceive and respond to our environment. There is increasing evidence that predictions may be generated based upon previous observations following Bayesian principles, but little is known about the underlying cortical mechanisms and their specificity for different cognitive subsystems. The present study aimed at identifying common and distinct neural signatures of predictive processing in the spatial attentional and motor intentional system. Twenty-three female and male healthy human volunteers performed two probabilistic cueing tasks with either spatial or motor cues while lying in the fMRI scanner. In these tasks, the percentage of cue validity changed unpredictably over time. Trialwise estimates of cue predictability were derived from a Bayesian observer model of behavioral responses. These estimates were included as parametric regressors for analyzing the BOLD time series. Parametric effects of cue predictability in valid and invalid trials were considered to reflect belief updating by precision-weighted prediction errors. The brain areas exhibiting predictability-dependent effects dissociated between the spatial attention and motor intention task, with the right temporoparietal cortex being involved during spatial attention and the left angular gyrus and anterior cingulate cortex during motor intention. Connectivity analyses revealed that all three areas showed predictability-dependent coupling with the right hippocampus. These results suggest that precision-weighted prediction errors of stimulus locations and motor responses are encoded in distinct brain regions, but that crosstalk with the hippocampus may be necessary to integrate new trialwise outcomes in both cognitive systems. SIGNIFICANCE STATEMENT The brain is able to infer the environment's statistical structure and responds strongly to expectancy violations. In the spatial attentional domain, it has been shown that parts of the attentional networks are…
Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto
A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residuals of attention on to-be-ignored oddballs due to the concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR is independent of attention.
Ikeda, Kazunari; Sekiguchi, Takahiro; Hayashi, Akiko
As determinants facilitating attention-related modulation of the auditory brainstem response (ABR), two experimental factors were examined: (i) auditory discrimination; and (ii) contralateral masking intensity. Tone pips at 80 dB sound pressure level were presented to the left ear via either single-tone exposures or oddball exposures, whereas white noise was delivered continuously to the right ear at variable intensities (none--80 dB sound pressure level). Participants each conducted two tasks during stimulation, either reading a book (ignoring task) or detecting target tones (attentive task). Task-related modulation within the ABR range was found only during oddball exposures at contralateral masking intensities greater than or equal to 60 dB. Attention-related modulation of ABR can thus be detected reliably during auditory discrimination under contralateral masking of sufficient intensity.
Thomas Edward Gladwin
Attention plays a central role in theories of alcohol dependence; however, its precise role in alcohol-related biases is not yet clear. In the current study, social drinkers performed a spatial cueing task designed to evoke conflict between automatic processes due to incentive salience and control exerted to follow task-related goals. Such conflict is a potentially important task feature from the perspective of dual-process models of addiction. Subjects received instructions either to direct their attention towards pictures of alcoholic beverages, and away from non-alcoholic beverages; or to direct their attention towards pictures of non-alcoholic beverages, and away from alcoholic beverages. A probe stimulus was likely to appear at the attended location, so that both spatial and non-spatial interference was possible. Activation in medial parietal cortex was found during Approach Alcohol versus Avoid Alcohol blocks. This region is associated with the, possibly automatic, shifting of attention between stimulus features, suggesting that subjects may have shifted attention away from certain features of alcoholic cues when attention had to be directed towards an upcoming stimulus at their location. Further, activation in voxels close to this region was negatively correlated with riskier drinking behavior. A tentative interpretation of the results is that risky drinking may be associated with a reduced tendency to shift attention away from potentially distracting task-irrelevant alcohol cues. The results suggest novel hypotheses and directions for future study, in particular the potential therapeutic use of training the ability to shift attention away from alcohol-related stimulus features.
Seither-Preisler, Annemarie; Parncutt, Richard; Schneider, Peter
Playing a musical instrument is associated with numerous neural processes that continuously modify the human brain and may facilitate characteristic auditory skills. In a longitudinal study, we investigated the auditory and neural plasticity of musical learning in 111 young children (aged 7-9 y) as a function of the intensity of instrumental practice and musical aptitude. Because of the frequent co-occurrence of central auditory processing disorders and attentional deficits, we also tested 21 children with attention deficit (hyperactivity) disorder [AD(H)D]. Magnetic resonance imaging and magnetoencephalography revealed enlarged Heschl's gyri and enhanced right-left hemispheric synchronization of the primary evoked response (P1) to harmonic complex sounds in children who spent more time practicing a musical instrument. The anatomical characteristics were positively correlated with frequency discrimination, reading, and spelling skills. Conversely, AD(H)D children showed reduced volumes of Heschl's gyri and enhanced volumes of the plana temporalia that were associated with a distinct bilateral P1 asynchrony. This may indicate a risk for central auditory processing disorders that are often associated with attentional and literacy problems. The longitudinal comparisons revealed a very high stability of auditory cortex morphology and gray matter volumes, suggesting that the combined anatomical and functional parameters are neural markers of musicality and attention deficits. Educational and clinical implications are considered. Copyright © 2014 the authors 0270-6474/14/3410937-13$15.00/0.
Kemps, Eva; Tiggemann, Marika; Stewart-Davis, Ebony
Two experiments investigated whether attentional bias modification can inoculate people to withstand exposure to real-world appetitive food cues, namely television advertisements for chocolate products. Using a modified dot probe task, undergraduate women were trained to direct their attention toward (attend) or away from (avoid) chocolate pictures. Experiment 1 (N = 178) consisted of one training session; Experiment 2 (N = 161) included 5 weekly sessions. Following training, participants viewed television advertisements of chocolate or control products. They then took part in a so-called taste test as a measure of chocolate consumption. Attentional bias for chocolate was measured before training and after viewing the advertisements, and in Experiment 2 also at 24-h and 1-week follow-up. In Experiment 2, but not Experiment 1, participants in the avoid condition showed a significant reduction in attentional bias for chocolate, regardless of whether they had been exposed to advertisements for chocolate or control products. However, this inoculation effect on attentional bias did not generalise to chocolate intake. Future research involving more extensive attentional re-training may be needed to ascertain whether the inoculation effect on attentional bias can extend to consumption, and thus help people withstand exposure to real-world palatable food cues. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lalani, Sanam J; Duffield, Tyler C; Trontel, Haley G; Bigler, Erin D; Abildskov, Tracy J; Froehlich, Alyson; Prigge, Molly B D; Travers, Brittany G; Anderson, Jeffrey S; Zielinski, Brandon A; Alexander, Andrew; Lange, Nicholas; Lainhart, Janet E
Studies have shown that individuals with autism spectrum disorder (ASD) tend to perform significantly below typically developing individuals on standardized measures of attention, even when controlling for IQ. The current study sought to examine within ASD whether anatomical correlates of attention performance differed between those with average to above-average IQ (AIQ group) and those with low-average to borderline ability (LIQ group) as well as in comparison to typically developing controls (TDC). Using automated volumetric analyses, we examined regional volume of classic attention areas including the superior frontal gyrus, anterior cingulate cortex, and precuneus in ASD AIQ (n = 38) and LIQ (n = 18) individuals along with 30 TDC. Auditory attention performance was assessed using subtests of the Test of Memory and Learning (TOMAL), compared among the groups, and then correlated with regional brain volumes. Analyses revealed group differences in attention. The three groups did not differ significantly on any auditory attention-related brain volumes; however, trends toward significant size-attention function interactions were observed. Negative correlations were found between the volume of the precuneus and auditory attention performance for the AIQ ASD group, indicating larger volume related to poorer performance. Implications for general attention functioning and dysfunctional neural connectivity in ASD are discussed.
Schwartz, Zachary P; David, Stephen V
Auditory selective attention is required for parsing crowded acoustic environments, but cortical systems mediating the influence of behavioral state on auditory perception are not well characterized. Previous neurophysiological studies suggest that attention produces a general enhancement of neural responses to important target sounds versus irrelevant distractors. However, behavioral studies suggest that in the presence of masking noise, attention provides a focal suppression of distractors that compete with targets. Here, we compared effects of attention on cortical responses to masking versus non-masking distractors, controlling for effects of listening effort and general task engagement. We recorded single-unit activity from primary auditory cortex (A1) of ferrets during behavior and found that selective attention decreased responses to distractors masking targets in the same spectral band, compared with spectrally distinct distractors. This suppression enhanced neural target detection thresholds, suggesting that limited attention resources serve to focally suppress responses to distractors that interfere with target detection. Changing effort by manipulating target salience consistently modulated spontaneous but not evoked activity. Task engagement and changing effort tended to affect the same neurons, while attention affected an independent population, suggesting that distinct feedback circuits mediate effects of attention and effort in A1. © The Author 2017. Published by Oxford University Press.
Cooney, Sarah; Dignam, Holly; Brady, Nuala
Determining where another person is attending is an important skill for social interaction that relies on various visual cues, including the turning direction of the head and body. This study reports a novel high-level visual aftereffect that addresses the important question of how these sources of information are combined in gauging social attention. We show that adapting to images of heads turned 25° to the right or left produces a perceptual bias in judging the turning direction of subsequently presented bodies. In contrast, little to no change in the judgment of head orientation occurs after adapting to extremely oriented bodies. The unidirectional nature of the aftereffect suggests that cues from the human body signaling social attention are combined in a hierarchical fashion and is consistent with evidence from single-cell recording studies in nonhuman primates showing that information about head orientation can override information about body posture when both are visible. PMID:26359866
David L Woods
BACKGROUND: While human auditory cortex is known to contain tonotopically organized auditory cortical fields (ACFs), little is known about how processing in these fields is modulated by other acoustic features or by attention. METHODOLOGY/PRINCIPAL FINDINGS: We used functional magnetic resonance imaging (fMRI) and population-based cortical surface analysis to characterize the tonotopic organization of human auditory cortex and analyze the influence of tone intensity, ear of delivery, scanner background noise, and intermodal selective attention on auditory cortex activations. Medial auditory cortex surrounding Heschl's gyrus showed large sensory (unattended) activations with two mirror-symmetric tonotopic fields similar to those observed in non-human primates. Sensory responses in medial regions had symmetrical distributions with respect to the left and right hemispheres, were enlarged for tones of increased intensity, and were enhanced when sparse image acquisition reduced scanner acoustic noise. Spatial distribution analysis suggested that changes in tone intensity shifted activation within isofrequency bands. Activations to monaural tones were enhanced over the hemisphere contralateral to stimulation, where they produced activations similar to those produced by binaural sounds. Lateral regions of auditory cortex showed small sensory responses that were larger in the right than left hemisphere, lacked tonotopic organization, and were uninfluenced by acoustic parameters. Sensory responses in both medial and lateral auditory cortex decreased in magnitude throughout stimulus blocks. Attention-related modulations (ARMs) were larger in lateral than medial regions of auditory cortex and appeared to arise primarily in belt and parabelt auditory fields. ARMs lacked tonotopic organization, were unaffected by acoustic parameters, and had distributions that were distinct from those of sensory responses. Unlike the gradual adaptation seen for sensory responses
Kumpik, Daniel P; Roberts, Helen E; King, Andrew J; Bizley, Jennifer K
The sound-induced flash illusion (SIFI) is a multisensory perceptual phenomenon in which the number of brief visual stimuli perceived by an observer is influenced by the number of concurrently presented sounds. While the strength of this illusion has been shown to be modulated by the temporal congruence of the stimuli from each modality, there is conflicting evidence regarding its dependence upon their spatial congruence. We addressed this question by examining SIFIs under conditions in which the spatial reliability of the visual stimuli was degraded and different sound localization cues were presented using either free-field or closed-field stimulation. The likelihood of reporting a SIFI varied with the spatial cue composition of the auditory stimulus and was highest when binaural cues were presented over headphones. SIFIs were more common for small flashes than for large flashes, and for small flashes at peripheral locations, subjects experienced a greater number of illusory fusion events than fission events. However, the SIFI was not dependent on the spatial proximity of the audiovisual stimuli, but was instead determined primarily by differences in subjects' underlying sensitivity across the visual field to the number of flashes presented. Our findings indicate that the influence of auditory stimulation on visual numerosity judgments can occur independently of the spatial relationship between the stimuli. © 2014 ARVO.
Murphy, Cristina F B; Pagan-Neves, Luciana O; Wertzner, Haydée F; Schochat, Eliane
Although research has demonstrated that children with specific language impairment (SLI) and reading disorder (RD) exhibit sustained attention deficits, no study has investigated sustained attention in children with speech sound disorder (SSD). Given the overlap of symptoms, such as phonological memory deficits, between these different language disorders (i.e., SLI, SSD and RD) and the relationships between working memory, attention and language processing, it is worthwhile to investigate whether deficits in sustained attention also occur in children with SSD. A total of 55 children (18 diagnosed with SSD (8.11 ± 1.231) and 37 typically developing children (8.76 ± 1.461)) were invited to participate in this study. Auditory and visual sustained-attention tasks were applied. Children with SSD performed worse on these tasks; they committed a greater number of auditory false alarms and exhibited a significant decline in performance over the course of the auditory detection task. The extent to which performance is related to auditory perceptual difficulties and probable working memory deficits is discussed. Further studies are needed to better understand the specific nature of these deficits and their clinical implications.
Garland, Eric L.; Froeliger, Brett; Passik, Steven D.; Howard, Matthew O.
Recurrent use of prescription opioid analgesics by chronic pain patients may result in opioid dependence, which involves implicit neurocognitive operations that organize and impel craving states and compulsive drug taking behavior. Prior studies have identified an attentional bias (AB) towards heroin among heroin dependent individuals. The aim of this study was to determine whether opioid-dependent chronic pain patients exhibit an AB towards prescription opioid-related cues. Opioid-dependent c...
Krueger, Konstanze; Flauger, Birgit; Farmer, Kate; Maros, Katalin
This study evaluates the horse (Equus caballus) use of human local enhancement cues and reaction to human attention when making feeding decisions. The superior performance of dogs in observing human states of attention suggests this ability evolved with domestication. However, some species show an improved ability to read human cues through socialization and training. We observed 60 horses approach a bucket with feed in a three-way object-choice task when confronted with (a) an unfamiliar or (b) a familiar person in 4 different situations: (1) squatting behind the bucket, facing the horse (2) standing behind the bucket, facing the horse (3) standing behind the bucket in a back-turned position, gazing away from the horse and (4) standing a few meters from the bucket in a distant, back-turned position, again gazing away from the horse. Additionally, postures 1 and 2 were tested both with the person looking permanently at the horse and with the person alternating their gaze between the horse and the bucket. When the person remained behind the correct bucket, it was chosen significantly above chance. However, when the test person was turned and distant from the buckets, the horses' performance deteriorated. In the turned person situations, the horses approached a familiar person and walked towards their focus of attention significantly more often than with an unfamiliar person. Additionally, in the squatting and standing person situations, some horses approached the person before approaching the correct bucket. This happened more with a familiar person. We therefore conclude that horses can use humans as a local enhancement cue independently of their body posture or gaze consistency when the persons remain close to the food source and that horses seem to orientate on the attention of familiar more than of unfamiliar persons. We suggest that socialization and training improve the ability of horses to read human cues. © Springer-Verlag 2010
Bratakos, M S; Reed, C M; Delhorne, L A; Denesvich, G
The objective of this study was to compare the effects of a single-band envelope cue as a supplement to speechreading of segmentals and sentences when presented through either the auditory or tactual modality. The supplementary signal, which consisted of a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz, was presented through a high-performance single-channel vibrator for tactual stimulation or through headphones for auditory stimulation. Normal-hearing subjects were trained and tested on the identification of a set of 16 medial vowels in /b/-V-/d/ context and a set of 24 initial consonants in C-/a/-C context under five conditions: speechreading alone (S), auditory supplement alone (A), tactual supplement alone (T), speechreading combined with the auditory supplement (S+A), and speechreading combined with the tactual supplement (S+T). Performance on various speech features was examined to determine the contribution of different features toward improvements under the aided conditions for each modality. Performance on the combined conditions (S+A and S+T) was compared with predictions generated from a quantitative model of multi-modal performance. To explore the relationship between benefits for segmentals and for connected speech within the same subjects, sentence reception was also examined for the three conditions of S, S+A, and S+T. For segmentals, performance generally followed the pattern of T < A < S < S+T < S+A. Significant improvements to speechreading were observed with both the tactual and auditory supplements for consonants (10 and 23 percentage-point improvements, respectively), but only with the auditory supplement for vowels (a 10 percentage-point improvement). The results of the feature analyses indicated that improvements to speechreading arose primarily from improved performance on the features low and tense for vowels and on the features voicing, nasality, and plosion for consonants. These
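The supplementary signal described above (a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz) can be sketched digitally. The following is an illustrative sketch only, assuming a 16-kHz sample rate and a Hilbert-transform envelope extractor; the study's actual (analog or hardware) implementation is not specified in the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

fs = 16000  # sample rate in Hz (assumed for illustration)

def envelope_cue(speech, fs=fs, band_center=500.0, carrier_hz=200.0):
    """Single-band envelope cue: amplitude-modulate a 200-Hz carrier
    with the envelope of an octave band of speech centered at 500 Hz."""
    # An octave band spans band_center/sqrt(2) .. band_center*sqrt(2)
    lo, hi = band_center / np.sqrt(2), band_center * np.sqrt(2)
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos, speech)
    env = np.abs(hilbert(band))  # Hilbert envelope of the filtered band
    t = np.arange(len(speech)) / fs
    return env * np.sin(2 * np.pi * carrier_hz * t)  # modulated carrier

# Toy "speech" input: one second of broadband noise
rng = np.random.default_rng(0)
cue = envelope_cue(rng.standard_normal(fs))
```

The resulting signal carries only the slow amplitude fluctuations of the 500-Hz speech band, which is what makes it deliverable through a single-channel vibrator or over headphones.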
Lee, Adrian K C; Larson, Eric; Maddox, Ross K; Shinn-Cunningham, Barbara G
Over the last four decades, a range of different neuroimaging tools have been used to study human auditory attention, spanning from classic event-related potential studies using electroencephalography to modern multimodal imaging approaches (e.g., combining anatomical information based on magnetic resonance imaging with magneto- and electroencephalography). This review begins by exploring the different strengths and limitations inherent to different neuroimaging methods, and then outlines some common behavioral paradigms that have been adopted to study auditory attention. We argue that in order to design a neuroimaging experiment that produces interpretable, unambiguous results, the experimenter must not only have a deep appreciation of the imaging technique employed, but also a sophisticated understanding of perception and behavior. Only with the proper caveats in mind can one begin to infer how the cortex supports a human in solving the "cocktail party" problem. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.
Trenado, Carlos; Haab, Lars; Strauss, Daniel J
Auditory evoked cortical potentials (AECP) are well established as a diagnostic tool in audiology and gain more and more impact in experimental neuropsychology, neuroscience, and psychiatry, e.g., for attention deficit disorder, schizophrenia, or for studying tinnitus decompensation. The modulation of AECP due to exogenous and endogenous attention plays a major role in many clinical applications and has been studied experimentally in neuropsychology. However, the relation of corticothalamic feedback dynamics to focal and non-focal attention and its large-scale effect reflected in AECPs is far from being understood. In this paper, we model neural correlates of auditory attention reflected in AECPs using corticothalamic feedback dynamics. We present a mapping of a recently developed multiscale model of evoked potentials to the hearing path and discuss for the first time its neurofunctionality in terms of corticothalamic feedback loops related to focal and non-focal attention. Our model reinforced recent experimental results related to online attention monitoring using AECPs with application as an objective tinnitus decompensation measure. It is concluded that our model presents a promising approach to gain a deeper understanding of the neurodynamics of auditory attention and might be used as an efficient forward model to reinforce hypotheses that are obtained from experimental paradigms involving AECPs.
Engell, Alva; Junghöfer, Markus; Stein, Alwina; Lau, Pia; Wunderlich, Robert; Wollbrink, Andreas; Pantev, Christo
Reduced neural processing of a tone is observed when it is presented after a sound whose spectral range closely frames the frequency of the tone. This observation might be explained by the mechanism of lateral inhibition (LI) due to inhibitory interneurons in the auditory system. So far, several characteristics of bottom-up influences on LI have been identified, while the influence of top-down processes such as directed attention on LI has not been investigated. Hence, the study at hand aims at investigating the modulatory effects of focused attention on LI in the human auditory cortex. In the magnetoencephalograph, we presented two types of masking sounds (white noise vs. white noise passed through a notch filter centered at a specific frequency), followed by a test tone with a frequency corresponding to the center-frequency of the notch filter. Simultaneously, subjects were presented with visual input on a screen. To modulate the focus of attention, subjects were instructed to concentrate either on the auditory input or the visual stimuli. More specifically, on one half of the trials, subjects were instructed to detect small deviations in loudness in the masking sounds, while on the other half of the trials subjects were asked to detect target stimuli on the screen. The results revealed a reduction in neural activation due to LI, which was larger during auditory compared to visual focused attention. Attentional modulations of LI were observed in two post-N1m time intervals. These findings underline the robustness of reduced neural activation due to LI in the auditory cortex and point towards the important role of attention in the modulation of this mechanism in more evaluative processing stages.
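The notch-filtered masker used in this paradigm (noise whose spectral range frames, but excludes, the test-tone frequency) is straightforward to approximate digitally. The following is a hedged sketch under assumed parameters (44.1-kHz sample rate, 4th-order Butterworth band-stop, 25% relative notch width), not the authors' actual stimulus-generation procedure.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100  # sample rate in Hz (assumed)

def notched_noise(duration_s, notch_hz, rel_width=0.25, fs=fs, seed=0):
    """White noise with a spectral notch centered on the test-tone
    frequency, approximated with a Butterworth band-stop filter."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(int(duration_s * fs))
    lo = notch_hz * (1 - rel_width / 2)  # lower notch edge
    hi = notch_hz * (1 + rel_width / 2)  # upper notch edge
    sos = butter(4, [lo, hi], btype="bandstop", fs=fs, output="sos")
    return sosfilt(sos, noise)

# Masker whose notch frames a 1-kHz test tone
masker = notched_noise(0.5, notch_hz=1000.0)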
Sandro Franceschini; Piergiorgio Trevisan; Luca Ronconi; Sara Bertoni; Susan Colmar; Kit Double; Andrea Facoetti; Simone Gori
.... In our study, we tested reading skills and phonological working memory, visuo-spatial attention, auditory, visual and audio-visual stimuli localization, and cross-sensory attentional shifting in two...
Sharma, Mridula; Dhamani, Imran; Leung, Johahn; Carlile, Simon
The aim of this study was to examine attention, memory, and auditory processing in children with reported listening difficulty in noise (LDN) despite having clinically normal hearing. Twenty-one children with LDN and 15 children with no listening concerns (controls) participated. The clinically normed auditory processing tests included the Frequency/Pitch Pattern Test (FPT; Musiek, 2002), the Dichotic Digits Test (Musiek, 1983), the Listening in Spatialized Noise-Sentences (LiSN-S) test (Dillon, Cameron, Glyde, Wilson, & Tomlin, 2012), gap detection in noise (Baker, Jayewardene, Sayle, & Saeed, 2008), and masking level difference (MLD; Wilson, Moncrieff, Townsend, & Pillion, 2003). Also included were research-based psychoacoustic tasks, such as auditory stream segregation, localization, sinusoidal amplitude modulation (SAM), and fine structure perception. All were also evaluated on attention and memory test batteries. The LDN group was significantly slower switching their auditory attention and had poorer inhibitory control. Additionally, the group mean results showed significantly poorer performance on FPT, MLD, 4-Hz SAM, and memory tests. Close inspection of the individual data revealed that only 5 participants (out of 21) in the LDN group showed significantly poor performance on FPT compared with clinical norms. Further testing revealed the frequency discrimination of these 5 children to be significantly impaired. Thus, the LDN group showed deficits in attention switching and inhibitory control, whereas only a subset of these participants demonstrated an additional frequency resolution deficit.
Reel, Leigh Ann; Hicks, Candace Bourland
Purpose: The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Method: Reception thresholds for English sentences were measured for 50…
Schwent, V. L.; Hillyard, S. A.; Galambos, R.
Enhancement of the auditory vertex potentials with selective attention to dichotically presented tone pips was found to be critically sensitive to the range of inter-stimulus intervals in use. Only at the shortest intervals was a clear-cut attention-related enhancement of the vertex potential observed for stimuli delivered to the attended ear.
Zimmermann, Jacqueline F.; Moscovitch, Morris; Alain, Claude
Long-term memory (LTM) has been shown to bias attention to a previously learned visual target location. Here, we examined whether memory-predicted spatial location can facilitate the detection of a faint pure tone target embedded in real world audio clips (e.g., soundtrack of a restaurant). During an initial familiarization task, participants…
Strait, Dana L; Kraus, Nina
Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and non-musicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not non-musicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians' neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development and maintenance of language-related skills, musical training may aid in the prevention, habilitation, and remediation of individuals with a wide range of attention-based language, listening and learning impairments.
Dana L Strait
Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker’s voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and nonmusicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not nonmusicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work from our laboratory documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development of language-related skills, musical training may aid in the prevention, habilitation and remediation of children with a wide range of attention-based language and learning impairments.
Griskova-Bulanova, Inga; Ruksenas, Osvaldas; Dapsys, Kastytis
To explore the modulation of the auditory steady-state response (ASSR) by experimental tasks differing in attentional focus and arousal level.
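ASSRs are commonly evoked by amplitude-modulated tones. A minimal sketch of such a stimulus, with illustrative parameter values (a 1-kHz carrier modulated at 40 Hz) that are conventional for ASSR work but not taken from this abstract:

```python
import numpy as np

fs = 44100  # sample rate in Hz (assumed)

def am_tone(duration_s, carrier_hz=1000.0, mod_hz=40.0, depth=1.0, fs=fs):
    """Sinusoidally amplitude-modulated tone of the kind typically used
    to evoke the auditory steady-state response (ASSR)."""
    t = np.arange(int(duration_s * fs)) / fs
    modulator = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return 0.5 * modulator * np.sin(2 * np.pi * carrier_hz * t)

stim = am_tone(1.0)  # one second of 40-Hz AM at a 1-kHz carrier
```

The steady-state response phase-locks to the modulation rate, which is why attentional and arousal manipulations are assessed via the amplitude and phase stability of the 40-Hz spectral component of the EEG.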
van Lutterveld, Remko; Oranje, Bob; Abramovic, Lucija
OBJECTIVE: Schizophrenia is associated with aberrant event-related potentials (ERPs) such as reductions in P300, processing negativity and mismatch negativity amplitudes. These deficits may be related to the propensity of schizophrenia patients to experience auditory verbal hallucinations (AVH). … found for mismatch negativity. CONCLUSION: Contrary to our expectations, non-psychotic individuals with AVH show increased rather than decreased psychophysiological measures of effortful attention compared to healthy controls, refuting a pivotal role of decreased effortful attention…
Putkinen, Vesa; Makkonen, Tommi; Eerola, Tuomas
Abstract Previous studies indicate that positive mood broadens the scope of visual attention, which can manifest as heightened distractibility. We used event-related potentials (ERP) to investigate whether music-induced positive mood has comparable effects on selective attention in the auditory domain. Subjects listened to experimenter-selected happy, neutral or sad instrumental music and afterwards participated in a dichotic listening task. Distractor sounds in the unattended channel elicite...
Sanders-Jackson, Ashley N.; Cappella, Joseph N.; Linebarger, Deborah L.; Piotrowski, Jessica Taylor; O'Keeffe, Moira; Strasser, Andrew A.
This study examines how addicted adult smokers attend visually to smoking-related public service announcements (PSAs). Smokers' onscreen visual fixation is an indicator of cognitive resources allocated to visual attention. Characteristic of individuals with addictive tendencies, smokers are expected to be appetitively activated by…
Haghighi, Marzieh; Moghadamfalahi, Mohammad; Akcakaya, Murat; Shinn-Cunningham, Barbara G; Erdogmus, Deniz
Recent findings indicate that brain interfaces have the potential to enable attention-guided auditory scene analysis and manipulation in applications such as hearing aids and augmented/virtual environments. Specifically, noninvasively acquired electroencephalography (EEG) signals have been demonstrated to carry some evidence regarding which of multiple synchronous speech waveforms the subject attends to. In this paper, we demonstrate that: 1) using data- and model-driven cross-correlation features yields competitive binary auditory attention classification results with at most 20 s of EEG from 16 channels, or even a single well-positioned channel; 2) a model calibrated using equal-energy speech waveforms competing for attention can perform well on estimating attention in closed-loop unbalanced-energy speech waveform situations, where the speech amplitudes are modulated by the estimated attention posterior probability distribution; 3) such a model performs even better if it is corrected (linearly, in this instance) based on the dependence of the EEG evidence on the speech weights in the mixture; and 4) calibrating a model based on population EEG can result in acceptable performance for new individuals/users; therefore, EEG-based auditory attention classifiers may generalize across individuals, leading to reduced or eliminated calibration time and effort.
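The core of the cross-correlation approach can be illustrated with a minimal sketch. Everything here is hypothetical and simplified (a crude rectify-and-smooth envelope, a single synthetic EEG channel, and a peak-correlation decision rule), not the paper's actual feature extraction or classifier: correlate the EEG with the envelope of each candidate speech stream over a range of lags and attribute attention to the stream with the stronger peak.

```python
import numpy as np

def envelope(x):
    # Crude amplitude envelope: rectify, then smooth with a moving average.
    return np.convolve(np.abs(x), np.ones(50) / 50, mode="same")

def attended_stream(eeg, speech_a, speech_b, max_lag=100):
    """Guess which of two speech waveforms the EEG tracks more strongly,
    using the peak absolute cross-correlation with each speech envelope
    over lags 0..max_lag (the EEG lags the stimulus)."""
    scores = []
    for speech in (speech_a, speech_b):
        env = envelope(speech)
        env = (env - env.mean()) / env.std()
        e = (eeg - eeg.mean()) / eeg.std()
        cc = [np.mean(e[lag:] * env[:len(env) - lag]) for lag in range(max_lag)]
        scores.append(max(abs(c) for c in cc))
    return "A" if scores[0] > scores[1] else "B"

# Toy demo: synthesize "EEG" as a delayed, noisy copy of stream A's envelope.
rng = np.random.default_rng(0)
a = rng.standard_normal(5000) * np.sin(np.linspace(0, 20, 5000)) ** 2
b = rng.standard_normal(5000) * np.cos(np.linspace(0, 17, 5000)) ** 2
eeg = np.roll(envelope(a), 30) + 0.5 * rng.standard_normal(5000)
print(attended_stream(eeg, a, b))
```

Since the synthetic EEG is built from stream A's envelope, the classifier should attribute attention to A; the real task is much harder because cortical tracking of the attended envelope is weak and distributed across channels.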
Full Text Available Background and Aim: Learning disability is a term referring to a group of disorders manifesting as listening, reading, writing, or mathematical problems. These children mostly have attention difficulties in the classroom that lead to many learning problems. In this study we aimed to compare the auditory attention of 7- to 9-year-old children with learning disability to an age-matched group without learning disability. Methods: Twenty-seven male students aged 7 to 9 years with learning disability and 27 age- and sex-matched normal controls were selected by non-probability simple sampling. To evaluate auditory selective and divided attention, Farsi versions of the speech-in-noise and dichotic digits tests were used, respectively. Results: Comparison of mean speech-in-noise scores for both ears of the 7- and 8-year-old students in the two groups indicated no significant difference (p>0.05). Mean scores of the 9-year-old controls were significantly higher than those of the cases only in the right ear (p=0.033). However, no significant difference was observed between mean dichotic digits scores for the right ear of 9-year-old students with and without learning disability (p>0.05). Moreover, mean scores of the 7- and 8-year-old students with learning disability were lower than those of their normal peers in the left ear (p>0.05). Conclusion: Selective auditory attention is not affected at an optimal signal-to-noise ratio, while divided attention seems to be affected by maturational delay of the auditory system or central auditory system disorders.
Scheerer, Nichole E; Tumber, Anupreet K; Jones, Jeffery A
Hearing one's own voice is important for regulating ongoing speech and for mapping speech sounds onto articulator movements. However, it is currently unknown whether attention mediates changes in the relationship between motor commands and their acoustic output, which are necessary as growth and aging inevitably cause changes to the vocal tract. In this study, participants produced vocalizations while they heard their vocal pitch persistently shifted downward one semitone in both single- and dual-task conditions. During the single-task condition, participants vocalized while passively viewing a visual stream. During the dual-task condition, participants vocalized while also monitoring a visual stream for target letters, forcing participants to divide their attention. Participants' vocal pitch was measured across each vocalization, to index the extent to which their ongoing vocalization was modified as a result of the deviant auditory feedback. Smaller compensatory responses were recorded during the dual-task condition, suggesting that divided attention interfered with the use of auditory feedback for the regulation of ongoing vocalizations. Participants' vocal pitch was also measured at the beginning of each vocalization, before auditory feedback was available, to assess the extent to which the deviant auditory feedback was used to modify subsequent speech motor commands. Smaller changes in vocal pitch at vocalization onset were recorded during the dual-task condition, suggesting that divided attention diminished sensorimotor learning. Together, the results of this study suggest that attention is required for the speech motor control system to make optimal use of auditory feedback for the regulation and planning of speech motor commands. Copyright © 2016 the American Physiological Society.
Ho, Ming-Chou; Chang, Catherine Fountain; Li, Ren-Hau; Tang, Tze-Chun
The betel nut (Areca catechu) is regarded by the World Health Organization as the fourth most prevalent human carcinogen. Our study aims to investigate whether habitual chewers show bias in their attention toward betel nut usage. In the current study, heavy and light betel nut chewers were instructed to respond to a probe presented immediately after either member of a pair consisting of an areca-related picture and a matched non-areca picture. The presentation durations of these pictures were manipulated to investigate attentional biases below the awareness threshold (17 ms), in initial orienting (200 ms), and in maintenance of attention (2,000 ms). Faster response to the probe replacing the areca-related picture, in comparison with the matched picture, indicated attentional bias. The results showed that neither group showed subliminal attentional biases. Further, heavy chewers, but not light chewers, exhibited supraliminal biases toward betel nut cues in initial orienting of attention and in maintained attention. Moreover, attentional bias scores at 2,000 ms were positively associated with betel nut craving and dependence. Implications of the current findings are thoroughly discussed in the article. PsycINFO Database Record (c) 2013 APA, all rights reserved.
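The bias score behind such dot-probe results is straightforward: mean reaction time when the probe replaces the neutral picture minus mean reaction time when it replaces the cue-related picture, with positive values indicating attention drawn to the cue. A minimal sketch with made-up data (the function name and RT values are illustrative, not from the study):

```python
from statistics import mean

def attentional_bias(rt_neutral_probe, rt_cue_probe):
    """Dot-probe bias score in ms: positive means faster responses to probes
    at the cue (e.g., areca-related) location, i.e., bias toward the cue."""
    return mean(rt_neutral_probe) - mean(rt_cue_probe)

# Toy data: a "heavy chewer" responding faster when the probe replaces the cue,
# and a "light chewer" showing essentially no difference.
heavy = attentional_bias([512, 498, 530], [470, 455, 480])   # 45.0 ms
light = attentional_bias([505, 500, 495], [503, 498, 500])   # about -0.3 ms
print(heavy, light)
```

In the study's design the same score is computed separately at each exposure duration (17, 200, and 2,000 ms) to separate subliminal, orienting, and maintenance components of the bias.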
Curtis, Ashley F; Turner, Gary R; Park, Norman W; Murtha, Susan J E
Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefitted equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age. Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.
Bareham, Corinne A; Georgieva, Stanimira D; Kamke, Marc R; Lloyd, David; Bekinschtein, Tristan A; Mattingley, Jason B
Selective attention is the process of directing limited capacity resources to behaviourally relevant stimuli while ignoring competing stimuli that are currently irrelevant. Studies in healthy human participants and in individuals with focal brain lesions have suggested that the right parietal cortex is crucial for resolving competition for attention. Following right-hemisphere damage, for example, patients may have difficulty reporting a brief, left-sided stimulus if it occurs with a competitor on the right, even though the same left stimulus is reported normally when it occurs alone. Such "extinction" of contralesional stimuli has been documented for all the major sense modalities, but it remains unclear whether its occurrence reflects involvement of one or more specific subregions of the temporo-parietal cortex. Here we employed repetitive transcranial magnetic stimulation (rTMS) over the right hemisphere to examine the effect of disruption of two candidate regions - the supramarginal gyrus (SMG) and the superior temporal gyrus (STG) - on auditory selective attention. Eighteen neurologically normal, right-handed participants performed an auditory task, in which they had to detect target digits presented within simultaneous dichotic streams of spoken distractor letters in the left and right channels, both before and after 20 min of 1 Hz rTMS over the SMG, STG or a somatosensory control site (S1). Across blocks, participants were asked to report on auditory streams in the left, right, or both channels, which yielded focused and divided attention conditions. Performance was unchanged for the two focused attention conditions, regardless of stimulation site, but was selectively impaired for contralateral left-sided targets in the divided attention condition following stimulation of the right SMG, but not the STG or S1. Our findings suggest a causal role for the right inferior parietal cortex in auditory selective attention. Copyright © 2017 Elsevier Ltd. All rights reserved.
Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.
Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults using a naturalistic dichotic listening paradigm, characterizing the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721
Mortimer, J; Krysztofiak, J; Custard, S; McKune, A J
The effect of sport stacking on auditory and visual attention in 32 Grade 3 children was examined using a randomised, cross-over design. Children were randomly assigned to a sport stacking (n=16) or arts/crafts group (n=16), with these activities performed over 3 wk. (12 30-min. sessions, 4 per week). This was followed by a 3-wk. wash-out period, after which the groups crossed over and the 3-wk. intervention was repeated, with the sport stacking group performing arts/crafts and the arts/crafts group performing sport stacking. Performance on the Integrated Visual and Auditory Continuous Performance Test, a measure of auditory and visual attention, was assessed before and after each of the 3-wk. interventions for each group. Comparisons indicated that sport stacking resulted in significant improvement in high-demand function and fine motor regulation, while it caused a significant reduction in low-demand function. Auditory and visual attention adaptations to sport stacking may be specific to the high-demand nature of the task.
Feniman, Mariza Ribeiro
Full Text Available Introduction: Attention is a neuropsychological function underlying all cognitive processes. Hearing impairment compromises a child's normal development, altering various auditory abilities, including attention. Objective: To compare children's performance on the Sustained Auditory Attention Ability Test (THAAS) across different forms of application (earphones vs. free field), gender, and application order. Method: Forty typically developing 7-year-old volunteers participated, divided into two groups of 20 children each (G1 and G2). In G1 the THAAS was administered first with earphones and then in free field; in G2 the order was reversed. The evaluation consisted of a specific questionnaire, auditory tests, and administration of the THAAS. Results: There was no significant difference with respect to gender. On the THAAS with earphones, G1 showed more inattention errors and a higher total score. On the THAAS in free field, G2 showed a significant difference in the vigilance decrement. Regarding form of application, G1 made more errors when earphones were used; G2 showed no difference. Conclusion: Application of the THAAS in free field proved feasible, and the normative values used for the conventional mode of evaluation can be adopted for it.
Elfenbein, Hillary Anger; Jang, Daisung; Sharma, Sudeep; Sanchez-Burks, Jeffrey
Emotional intelligence (EI) has captivated researchers and the public alike, but it has been challenging to establish its components as objective abilities. Self-report scales lack divergent validity from personality traits, and few ability tests have objectively correct answers. We adapt the Stroop task to introduce a new facet of EI called emotional attention regulation (EAR), which involves focusing emotion-related attention for the sake of information processing rather than for the sake of regulating one's own internal state. EAR includes 2 distinct components. First, tuning in to nonverbal cues involves identifying nonverbal cues while ignoring alternate content, that is, emotion recognition under conditions of distraction by competing stimuli. Second, tuning out of nonverbal cues involves ignoring nonverbal cues while identifying alternate content, that is, the ability to interrupt emotion recognition when needed to focus attention elsewhere. An auditory test of valence included positive and negative words spoken in positive and negative vocal tones. A visual test of approach-avoidance included green- and red-colored facial expressions depicting happiness and anger. The error rates for incongruent trials met the key criteria for establishing the validity of an EI test, in that the measure demonstrated test-retest reliability, convergent validity with other EI measures, divergent validity from factors such as general processing speed and, for the most part, personality, and predictive validity in this case for well-being. By demonstrating that facets of EI can be validly theorized and empirically assessed, results also speak to the validity of EI more generally. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Tallus, Jussi; Soveri, Anna; Hämäläinen, Heikki; Tuomainen, Jyrki; Laine, Matti
Facilitation of general cognitive capacities such as executive functions through training has stirred considerable research interest during the last decade. Recently we demonstrated that training of auditory attention with forced attention dichotic listening not only facilitated that performance but also generalized to an untrained attentional task. In the present study, 13 participants underwent a 4-week dichotic listening training programme with instructions to report syllables presented to the left ear (FL training group). Another group (n = 13) was trained using the non-forced instruction, asked to report whichever syllable they heard the best (NF training group). The study aimed to replicate our previous behavioural results, and to explore the neurophysiological correlates of training through event-related brain potentials (ERPs). We partially replicated our previous behavioural training effects, as the FL training group tended to show more allocation of auditory spatial attention to the left ear in a standard dichotic listening task. ERP measures showed diminished N1 and enhanced P2 responses to dichotic stimuli after training in both groups, interpreted as improvement in early perceptual processing of the stimuli. Additionally, enhanced anterior N2 amplitudes were found after training, with relatively larger changes in the FL training group in the forced-left condition, suggesting improved top-down control on the trained task. These results show that top-down cognitive training can modulate the left-right allocation of auditory spatial attention, accompanied by a change in an evoked brain potential related to cognitive control.
Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu
Neural representation in the auditory cortex is rapidly modulated by both top-down attention and bottom-up stimulus properties, in order to improve perception in a given context. Learning-induced, pre-attentive, map plasticity has also been studied in the anesthetized cortex; however, little attention has been paid to rapid, context-dependent modulation. We hypothesize that context-specific learning leads to pre-attentively modulated, multiplex representation in the auditory cortex. Here, we investigate map plasticity in the auditory cortices of anesthetized rats conditioned in a context-dependent manner, such that a conditioned stimulus (CS) of a 20-kHz tone and an unconditioned stimulus (US) of a mild electrical shock were associated only under a noisy auditory context, but not in silence. After the conditioning, although no distinct plasticity was found in the tonotopic map, tone-evoked responses were more noise-resistant than before conditioning. Yet, the conditioned group showed a reduced spread of activation to each tone with noise, but not with silence, associated with a sharpening of frequency tuning. The encoding accuracy index of neurons showed that conditioning deteriorated the accuracy of tone-frequency representations in the noisy condition at off-CS regions, but not at CS regions, suggesting that arbitrary tones around the frequency of the CS were more likely perceived as the CS in a specific context, where the CS was associated with the US. These results together demonstrate that learning-induced plasticity in the auditory cortex occurs in a context-dependent manner.
Full Text Available An experienced car mechanic can often deduce what's wrong with a car by carefully listening to the sound of the ailing engine, despite the presence of multiple sources of noise. Indeed, the ability to select task-relevant sounds for awareness, whilst ignoring irrelevant ones, constitutes one of the most fundamental of human faculties, but the underlying neural mechanisms have remained elusive. While most of the literature explains the neural basis of selective attention by means of an increase in neural gain, a number of papers propose enhancement in neural selectivity as an alternative or complementary mechanism. Here, to address the question of whether a pure gain increase alone can explain auditory selective attention in humans, we quantified auditory cortex frequency selectivity in 20 healthy subjects by masking 1000-Hz tones with a continuous noise masker with parametrically varying frequency notches around the tone frequency (i.e., a notched-noise masker). The task of the subjects was, in different conditions, to selectively attend to either occasionally occurring slight increments in tone frequency (1020 Hz) or tones of slightly longer duration, or to ignore the sounds. In line with previous studies, in the ignore condition, the global field power (GFP) of event-related brain responses at 100 ms from stimulus onset to the 1000-Hz tones was suppressed as a function of the narrowing of the notch width. During the selective attention conditions, the suppressant effect of the noise notch width on GFP was decreased, but as a function significantly different from the multiplicative one expected on the basis of a simple gain model of selective attention. Our results suggest that auditory selective attention in humans cannot be explained by a gain model, in which only the neural activity level is increased, but rather that selective attention additionally enhances auditory cortex frequency selectivity.
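The logic of the gain-model test lends itself to a small numerical sketch. Under a pure gain account, attended responses should equal the ignored responses scaled by one multiplicative constant at every notch width; systematic residuals around the best-fitting single gain would instead indicate a change in selectivity. The GFP values below are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical GFP (arbitrary units) vs. notch width, with the attend condition
# boosted most at narrow notches (the pattern a selectivity change produces).
notch_widths = np.array([0.0, 0.1, 0.2, 0.4, 0.8])   # fraction of tone frequency
gfp_ignore = np.array([0.4, 0.7, 1.0, 1.3, 1.5])
gfp_attend = np.array([0.9, 1.2, 1.4, 1.5, 1.6])

# Least-squares single gain g minimizing ||attend - g * ignore||^2.
g = (gfp_attend @ gfp_ignore) / (gfp_ignore @ gfp_ignore)
residual = gfp_attend - g * gfp_ignore
print(round(g, 3), np.round(residual, 3))
```

With these values the residuals run from clearly positive at narrow notches to negative at wide ones, which is exactly the kind of systematic departure from a single multiplicative scaling that argues against a pure gain model.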
Souza, Alessandra S; Oberauer, Klaus
The concept of attention has a prominent place in cognitive psychology. Attention can be directed not only to perceptual information, but also to information in working memory (WM). Evidence for an internal focus of attention has come from the retro-cue effect: Performance in tests of visual WM is improved when attention is guided to the test-relevant contents of WM ahead of testing them. The retro-cue paradigm has served as a test bed to empirically investigate the functions and limits of the focus of attention in WM. In this article, we review the growing body of (behavioral) studies on the retro-cue effect. We evaluate the degrees of experimental support for six hypotheses about what causes the retro-cue effect: (1) Attention protects representations from decay, (2) attention prioritizes the selected WM contents for comparison with a probe display, (3) attended representations are strengthened in WM, (4) not-attended representations are removed from WM, (5) a retro-cue to the retrieval target provides a head start for its retrieval before decision making, and (6) attention protects the selected representation from perceptual interference. The extant evidence provides support for the last four of these hypotheses.
Favrot, Sylvain Emmanuel; Buchholz, Jörg
A known challenge in sound field reproduction techniques such as high-order Ambisonics (HOA) is the reproduction of nearby sound sources. In order to reproduce such nearby sound sources, the near-field compensated (NFC) method with angular weighting windows (AWWs) has been previously proposed...... for HOA. Considering auditory distance perception, (low-frequency) interaural level differences (ILDs) represent the main auditory cue for nearby real sound sources outside the median plane. Simulations showed that these ILD cues can be reproduced with existing weighted NFC-HOA methods for frequencies above...
Kim, Duck O; Zahorik, Pavel; Carney, Laurel H; Bishop, Brian B; Kuwada, Shigeyuki
Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1 octave 4 kHz noise source sounds presented at distances of 35-200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined. Copyright © 2015 the authors 0270-6474/15/355360-13$15.00/0.
Fisher, Naomi; Lattimore, Paul; Malinowski, Peter
Excessive energy intake that contributes to overweight and obesity is arguably driven by the pleasure associated with the rewarding properties of energy-dense palatable foods. It is important to address the influence of external food cues in food-abundant societies, where people make over 200 food-related decisions each day. This study experimentally examines the protective effects of a mindful attention induction on appetitive measures, state craving and food intake following exposure to energy-dense foods. Forty females were randomly allocated to a standard food-cue exposure condition, in which attention is brought to the hedonic properties of food, or to food-cue exposure following a mindful attention induction. Appetitive reactions were measured before, immediately after, and 10 min after cue exposure, after which a plate of cookies was used as a surreptitious means of measuring food intake. Self-reported hunger remained unchanged and fullness significantly increased for the mindful attention group post-cue exposure, whereas hunger significantly increased for the standard attention group and fullness remained unchanged. There was no significant between-group difference in state craving post-cue exposure or 10 min later. Significantly more cookies were eaten by the standard attention group 10 min post-cue exposure, although no significant between-group differences in appetitive and craving measures were reported at that time. Our results point to a promising brief intervention strategy and highlight the importance of distinguishing mindful attention from attention per se. The results also demonstrate that mindful attention can influence food intake even when craving and hunger are experienced. Copyright © 2016 Elsevier Ltd. All rights reserved.
Full Text Available Previous research suggests that deficits in attention-emotion interaction are implicated in schizophrenia symptoms. Although disruption in auditory processing is crucial in the pathophysiology of schizophrenia, deficits in the interaction between emotional processing of auditorily presented language stimuli and auditory attention have not yet been clarified. To address this issue, the current study used a dichotic listening task to examine 22 patients with schizophrenia and 24 age-, sex-, parental socioeconomic background-, handedness-, dexterous ear-, and intelligence quotient-matched healthy controls. The participants completed a word recognition task on the attended side in which a word with emotionally valenced content (negative/positive/neutral) was presented to one ear and a different neutral word was presented to the other ear. Participants selectively attended to either ear. In the control subjects, presentation of negative but not positive word stimuli provoked a significantly prolonged reaction time compared with presentation of neutral word stimuli. This interference effect for negative words existed whether or not subjects directed attention to the negative words. The interference effect was significantly smaller in the patients with schizophrenia than in the healthy controls. Furthermore, the smaller interference effect was significantly correlated with severe positive symptoms and delusional behavior in the patients with schizophrenia. The present findings suggest that aberrant interaction between semantic processing of negative emotional content and auditory attention plays a role in the production of positive symptoms in schizophrenia.
Kolarik, Andrew; Cirstea, Silvia; Pardhan, Shahina
The study investigated how listeners used level and direct-to-reverberant ratio (D/R) cues to discriminate distances to virtual sound sources. Sentence pairs were presented at virtual distances in simulated rooms that were either reverberant or anechoic. Performance on the basis of level was generally better than performance based on D/R. Increasing room reverberation time improved performance based on the D/R cue such that the two cues provided equally effective information at further virtual source distances in highly reverberant environments. Orientation of the listener within the virtual room did not affect performance.
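In an idealized room the two cues behave quite differently with distance: for a point source, level falls 6 dB per doubling of distance, while D/R falls because direct energy scales as 1/r^2 and the diffuse reverberant energy is roughly constant. A small sketch of these textbook relationships (the critical distance r_c, where D/R crosses 0 dB, is an assumed value, not taken from the study):

```python
import math

def level_db(r, ref=1.0):
    """Free-field level of a point source in dB re: the level at distance
    ref, falling 6 dB per doubling of distance (inverse-square law)."""
    return -20 * math.log10(r / ref)

def dr_ratio_db(r, r_c=1.5):
    """Direct-to-reverberant ratio in dB for a diffuse reverberant field,
    where r_c is the critical distance at which D/R = 0 dB (assumed)."""
    return 20 * math.log10(r_c / r)

for r in (0.5, 1.0, 2.0, 4.0):
    print(f"r={r} m  level={level_db(r):+.1f} dB  D/R={dr_ratio_db(r):+.1f} dB")
```

Both cues decrease with distance at the same idealized rate, but in a real room only D/R is anchored to the absolute distance (via r_c), which is one reason listeners can use it even when overall presentation level is held constant, as it was in this study.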
Slater, Kyle D.; Marozeau, Jeremy
...... we test whether tactile cues can be used to segregate 2 interleaved melodies. Twelve musicians and 12 nonmusicians were asked to detect changes in a 4-note repeated melody interleaved with a random melody. In order to perform this task, the listener must be able to segregate the target melody from...... the random melody. Tactile cues were applied to the listener's fingers on half of the blocks. Results showed that tactile cues can significantly improve melodic segregation ability in both musician and nonmusician groups in challenging listening conditions. Overall, the musician group performance......
van Holst, Ruth J; Lemmens, Jeroen S; Valkenburg, Patti M; Peter, Jochen; Veltman, Dick J; Goudriaan, Anna E
The aim of this study was to examine whether behavioral tendencies commonly related to addictive behaviors are also related to problematic computer and video game playing in adolescents. The study of attentional bias and response inhibition, characteristic for addictive disorders, is relevant to the ongoing discussion on whether problematic gaming should be classified as an addictive disorder. We tested the relation between self-reported levels of problem gaming and two behavioral domains: attentional bias and response inhibition. Ninety-two male adolescents performed two attentional bias tasks (addiction-Stroop, dot-probe) and a behavioral inhibition task (go/no-go). Self-reported problem gaming was measured by the game addiction scale, based on the Diagnostic and Statistical Manual of Mental Disorders-fourth edition criteria for pathological gambling and time spent on computer and/or video games. Male adolescents with higher levels of self-reported problem gaming displayed signs of error-related attentional bias to game cues. Higher levels of problem gaming were also related to more errors on response inhibition, but only when game cues were presented. These findings are in line with the findings of attentional bias reported in clinically recognized addictive disorders, such as substance dependence and pathological gambling, and contribute to the discussion on the proposed concept of "Addiction and Related Disorders" (which may include non-substance-related addictive behaviors) in the Diagnostic and Statistical Manual of Mental Disorders-fourth edition. Copyright © 2012 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Berger, Itai; Nevo, Yoram
Attention deficit hyperactivity disorder (ADHD) is a childhood-onset disorder that is considered one of the most common neurobehavioral disorders. The symptoms of ADHD should be cast, not as static or fixed neurobehavioral deficits, but rather in terms of underlying developmental processes. Targeting attentional disorders early in life can bring about fundamental alterations in the pathogenesis of ADHD, and thus prevent or moderate the course of the disorder. The developmental approach can enable predictions concerning characteristics of ADHD that develop over time and inform us about multiple risk and protective factors that transact to impact its development, as well as the development of a broad range of associated co-morbid features. In this review, we describe the complex factors that predict and mediate the developmental course of ADHD, providing early cues for ADHD diagnosis and intervention in young children that will optimize outcome. Copyright © 2013 Wiley Periodicals, Inc.
Asmundson, Gordon J G; Carleton, R Nicholas; Ekong, Jane
Evidence supporting the notion that patients with chronic pain are characterized by attentional biases for sensory and affect pain words, and that such biases are mediated by fear of pain, is mixed. The present investigation was an attempt to replicate and extend initial findings obtained with the dot-probe task. Thirty patients with chronic headache and 19 healthy controls were tested using a dot-probe task including affect pain, sensory pain, and neutral words. Individual difference variables, including fear of pain measures, were assessed and considered in analyses. Selective attention was denoted using the bias index, congruency index, and incongruency index. There were no significant between-group differences or interactions between group and word type observed for any of the indices of selective attention. Across groups there was evidence for a significant association between anxiety sensitivity and the bias index for sensory pain words, and between affective description of current pain and the incongruency index for affect pain words. These results do not provide convincing evidence that patients with chronic headache selectively attend to affect or sensory pain cues when compared to healthy controls. The significant cross-groups associations between anxiety sensitivity and current pain description and indices of selective attention are consistent with the notion that attentional biases may be influenced by fear propensity and current concerns. Implications of the findings and future research directions are discussed.
Putkinen, Vesa; Makkonen, Tommi; Eerola, Tuomas
Previous studies indicate that positive mood broadens the scope of visual attention, which can manifest as heightened distractibility. We used event-related potentials (ERP) to investigate whether music-induced positive mood has comparable effects on selective attention in the auditory domain. Subjects listened to experimenter-selected happy, neutral or sad instrumental music and afterwards participated in a dichotic listening task. Distractor sounds in the unattended channel elicited responses related to early sound encoding (N1/MMN) and bottom-up attention capture (P3a) while target sounds in the attended channel elicited a response related to top-down-controlled processing of task-relevant stimuli (P3b). For the subjects in a happy mood, the N1/MMN responses to the distractor sounds were enlarged while the P3b elicited by the target sounds was diminished. Behaviorally, these subjects tended to show heightened error rates on target trials following the distractor sounds. Thus, the ERP and behavioral results indicate that the subjects in a happy mood allocated their attentional resources more diffusely across the attended and the to-be-ignored channels. Therefore, the current study extends previous research on the effects of mood on visual attention and indicates that even unfamiliar instrumental music can broaden the scope of auditory attention via its effects on mood. © The Author (2017). Published by Oxford University Press.
Full Text Available This study investigated links between lower-level visual attention processes and higher-level problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. The study produced two major findings. First, short duration visual cues can improve problem solving performance on a variety of insight physics problems, including transfer problems not sharing the surface features of the training problems, but instead sharing the underlying solution path. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem. Instead, the cueing effects were caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, these short duration visual cues when administered repeatedly over multiple training problems resulted in participants becoming more efficient at extracting the relevant information on the transfer problem, showing that such cues can improve the automaticity with which solvers extract relevant information from a problem. Both of these results converge on the conclusion that lower-order visual processes driven by attentional cues can influence higher-order cognitive processes.
Downer, Joshua D; Rapone, Brittany; Verhein, Jessica; O'Connor, Kevin N; Sutter, Mitchell L
Sensory environments often contain an overwhelming amount of information, with both relevant and irrelevant information competing for neural resources. Feature attention mediates this competition by selecting the sensory features needed to form a coherent percept. How attention affects the activity of populations of neurons to support this process is poorly understood because population coding is typically studied through simulations in which one sensory feature is encoded without competition. Therefore, to study the effects of feature attention on population-based neural coding, investigations must be extended to include stimuli with both relevant and irrelevant features. We measured noise correlations (rnoise) within small neural populations in primary auditory cortex while rhesus macaques performed a novel feature-selective attention task. We found that the effect of feature-selective attention on rnoise depended not only on the population tuning to the attended feature, but also on the tuning to the distractor feature. To attempt to explain how these observed effects might support enhanced perceptual performance, we propose an extension of a simple and influential model in which shifts in rnoise can simultaneously enhance the representation of the attended feature while suppressing the distractor. These findings present a novel mechanism by which attention modulates neural populations to support sensory processing in cluttered environments.SIGNIFICANCE STATEMENT Although feature-selective attention constitutes one of the building blocks of listening in natural environments, its neural bases remain obscure. To address this, we developed a novel auditory feature-selective attention task and measured noise correlations (rnoise) in rhesus macaque A1 during task performance. Unlike previous studies showing that the effect of attention on rnoise depends on population tuning to the attended feature, we show that the effect of attention depends on the tuning to the distractor feature as well.
Begault, Durand R.; Bittner, Rachel M.; Anderson, Mark R.
Auditory communication displays within the NextGen data link system may use multiple synthetic speech messages replacing traditional ATC and company communications. The design of an interface for selecting amongst multiple incoming messages can impact both performance (time to select, audit and release a message) and preference. Two design factors were evaluated: physical pressure-sensitive switches versus flat panel "virtual switches", and the presence or absence of auditory feedback from switch contact. Performance with stimuli using physical switches was 1.2 s faster than virtual switches (2.0 s vs. 3.2 s); auditory feedback provided a 0.54 s performance advantage (2.33 s vs. 2.87 s). There was no interaction between these variables. Preference data were highly correlated with performance.
Seiss, Ellen; Gherri, Elena; Eardley, Alison F; Eimer, Martin
Lateralized ERP components triggered during cued shifts of spatial attention (anterior directing attention negativity [ADAN], late directing attention positivity [LDAP]) have been observed during visual, auditory, and tactile attention tasks, suggesting that these components reflect supramodal attentional control processes. This interpretation has recently been called into question by the finding that the ADAN is absent in response to auditory attention cues. Here we demonstrate that ADAN and LDAP components are reliably elicited in a purely unimodal auditory attention task where auditory cues are followed by auditory imperative stimuli. The fact that the ADAN is not restricted to task contexts where visual or tactile stimuli are relevant is consistent with the hypothesis that this component is linked to supramodal attentional control.
Morris, David Jackson; Steinmetzger, Kurt; Tøndering, John
The modulation of auditory event-related potentials (ERP) by attention generally results in larger amplitudes when stimuli are attended. We measured the P1-N1-P2 acoustic change complex elicited with synthetic overt (second formant, F2 = 1000 Hz) and subtle (F2 = 100 Hz) diphthongs, while subjects… Multivariate analysis of ERP components from the rising F2 changes showed main effects of attention on P2 amplitude and latency, and N1-P2 amplitude. P2 amplitude decreased by 40% between the attend and ignore conditions, and by 60% between the attend and divert conditions. The effect of diphthong magnitude…
Thomas Hieronymus Bak
Full Text Available Recent studies, using predominantly visual tasks, indicate that early bilinguals outperform monolinguals on attention tests. It remains less clear whether such advantages extend to those bilinguals who have acquired their second language later in life. We examined this question in 38 monolingual and 60 bilingual university students. The bilingual group was further subdivided into early childhood, late childhood, and early adulthood bilinguals. The assessment consisted of five subtests from the clinically validated Test of Everyday Attention (TEA). Overall, bilinguals outperformed monolinguals on auditory attention tests, but not on visual search tasks. The latter observation suggests that the differences between bilinguals and monolinguals are specific and not due to a generally higher cognitive performance in bilinguals. Within the bilingual group, early childhood bilinguals showed a larger advantage on attention switching, late childhood/early adulthood bilinguals on selective attention. We conclude that the bilingual advantage extends into the auditory domain and is not confined to childhood bilinguals, although its scope might be slightly different in early and late bilinguals.
Bak, Thomas H; Vega-Mendoza, Mariana; Sorace, Antonella
Recent studies, using predominantly visual tasks, indicate that early bilinguals tend to outperform monolinguals on attention tests. It remains less clear whether such advantages extend to those bilinguals who have acquired their second language later in life. We examined this question in 38 monolingual and 60 bilingual university students. The bilingual group was further subdivided into early childhood (ECB), late childhood (LCB), and early adulthood bilinguals (EAB). The assessment consisted of five subtests from the clinically validated Test of Everyday Attention (TEA). Overall, bilinguals outperformed monolinguals on auditory attention tests, but not on visual search tasks. The latter observation suggests that the differences between bilinguals and monolinguals are specific and not due to a generally higher cognitive performance in bilinguals. Within the bilingual group, ECB showed a larger advantage on attention switching, LCB/EAB on selective attention. We conclude that the effects of bilingualism extend into the auditory domain and are not confined to childhood bilinguals, although their scope might be slightly different in early and late bilinguals.
Goodhew, Stephanie C; Kidd, Evan
Humans appear to rely on spatial mappings to describe and represent concepts. In particular, conceptual cueing refers to the effect whereby after reading or hearing a particular word, the location of observers' visual attention in space can be systematically shifted in a particular direction. For example, words such as "sun" and "happy" orient attention upwards, whereas words such as "basement" and "bitter" orient attention downwards. This area of research has garnered much interest, particularly within the embodied cognition framework, for its potential to enhance our understanding of the interaction between abstract cognitive processes such as language and basic visual processes such as attention and stimulus processing. To date, however, this area has relied on subjective classification criteria to determine whether words ought to be classified as having a meaning that implies "up" or "down." The present study, therefore, provides a set of 498 items that have each been systematically rated by over 90 participants, providing refined, continuous measures of the extent to which people associate given words with particular spatial dimensions. The resulting database provides an objective means to aid item-selection for future research in this area.
Faith M. Hanlon
Full Text Available Successful adaptive behavior relies on the ability to automatically (bottom-up) orient attention to different locations in the environment. This results in a biphasic pattern in which reaction times (RT) are faster for stimuli that occur in the same spatial location (valid) for the first few hundred milliseconds, which is termed facilitation. This is followed by faster RT for stimuli that appear in novel locations (invalid) after longer delays, termed inhibition of return. The neuronal areas and networks involved in the transition between states of facilitation and inhibition remain poorly understood, especially for auditory stimuli. Functional magnetic resonance imaging (fMRI) data were therefore collected in a large sample of healthy volunteers (N = 52) at four separate auditory stimulus onset asynchronies (SOAs; 200, 400, 600, and 800 ms). Behavioral results indicated that facilitation (valid RT < invalid RT) occurred at the 200 ms SOA, with inhibition of return (valid RT > invalid RT) present at the three longer SOAs. fMRI results showed several brain areas varying their activation as a function of SOA, including bilateral superior temporal gyrus, anterior thalamus, cuneus, dorsal anterior cingulate gyrus, and right ventrolateral prefrontal cortex (VLPFC)/anterior insula. Right VLPFC was active during a behavioral state of facilitation, and its activation (invalid − valid trials) further correlated with behavioral reorienting at the 200 ms delay. These results suggest that right VLPFC plays a critical role when auditory attention must be quickly deployed or redeployed, demanding heightened cognitive and inhibitory control. In contrast to previous work, the ventral and dorsal frontoparietal attention networks were both active during valid and invalid trials across SOAs. These results suggest that the dorsal and ventral networks may not be as specialized during bottom-up auditory orienting as has been previously reported during visual orienting.
Hanlon, Faith M; Dodd, Andrew B; Ling, Josef M; Bustillo, Juan R; Abbott, Christopher C; Mayer, Andrew R
Successful adaptive behavior relies on the ability to automatically (bottom-up) orient attention to different locations in the environment. This results in a biphasic pattern in which reaction times (RT) are faster for stimuli that occur in the same spatial location (valid) for the first few hundred milliseconds, which is termed facilitation. This is followed by faster RT for stimuli that appear in novel locations (invalid) after longer delays, termed inhibition of return. The neuronal areas and networks involved in the transition between states of facilitation and inhibition remain poorly understood, especially for auditory stimuli. Functional magnetic resonance imaging (fMRI) data were therefore collected in a large sample of healthy volunteers (N = 52) at four separate auditory stimulus onset asynchronies (SOAs; 200, 400, 600, and 800 ms). Behavioral results indicated that facilitation (valid RT < invalid RT) occurred at the 200 ms SOA, with inhibition of return (valid RT > invalid RT) present at the three longer SOAs. fMRI results showed several brain areas varying their activation as a function of SOA, including bilateral superior temporal gyrus, anterior thalamus, cuneus, dorsal anterior cingulate gyrus, and right ventrolateral prefrontal cortex (VLPFC)/anterior insula. Right VLPFC was active during a behavioral state of facilitation, and its activation (invalid − valid trials) further correlated with behavioral reorienting at the 200 ms delay. These results suggest that right VLPFC plays a critical role when auditory attention must be quickly deployed or redeployed, demanding heightened cognitive and inhibitory control. In contrast to previous work, the ventral and dorsal frontoparietal attention networks were both active during valid and invalid trials across SOAs. These results suggest that the dorsal and ventral networks may not be as specialized during bottom-up auditory orienting as has been previously reported during visual orienting.
Zink, Rob; Hunyadi, Borbála; Van Huffel, Sabine; De Vos, Maarten
Objective. In the past few years there has been a growing interest in studying brain functioning in natural, real-life situations. Mobile EEG makes it possible to study the brain in real unconstrained environments, but it faces the intrinsic challenge that it is impossible to disentangle whether observed changes in brain activity are due to the increased cognitive demands of the complex natural environment or due to the physical involvement. In this work we aim to disentangle the influence of cognitive demands and distractions that arise from such outdoor unconstrained recordings. Approach. We evaluate the ERP and single-trial characteristics of a three-class auditory oddball paradigm recorded in outdoor scenarios while pedaling on a fixed bike or biking freely around. In addition, we also carefully evaluate the trial-specific motion artifacts through independent gyro measurements and control for muscle artifacts. Main results. A decrease in P300 amplitude was observed in the free biking condition as compared to the fixed bike conditions. Above-chance P300 single-trial classification in highly dynamic real-life environments while biking outdoors was achieved. Certain significant artifact patterns were identified in the free biking condition, but neither these nor the increase in movement (as derived from continuous gyro measurements) can fully explain the differences in classification accuracy and P300 waveform. The increased cognitive load in real-life scenarios is shown to play a major role in the observed differences. Significance. Our findings suggest that auditory oddball results measured in natural real-life scenarios are influenced mainly by increased cognitive load due to being in an unconstrained environment.
This study examines whether nonverbal visual and/or auditory channels are more effective in detecting foreign-language anxiety. Recent research suggests that language teachers are often able to successfully decode the nonverbal behaviors indicative of foreign-language anxiety; however, relatively little is known about whether visual and/or…
Devore, Sasha; Ihlefeld, Antje; Hancock, Kenneth; Shinn-Cunningham, Barbara; Delgutte, Bertrand
In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener's ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments.
Muller-Gass, Alexandra; Macdonald, Margaret; Schröger, Erich; Sculthorpe, Lauren; Campbell, Kenneth
The P3a is an event-related potential (ERP) component believed to reflect an attention-switch to task-irrelevant stimuli or stimulus information. The present study concerns the automaticity of the processes underlying the auditory P3a. More specifically, we investigated whether the auditory P3a is an attention-independent component, that is, whether it can still be elicited under highly-focused selective attention to a different (visual) channel. Furthermore, we examined whether the auditory P3a can be modulated by the demands of the visual diversion task. Subjects performed a continuous visual tracking task that varied in difficulty, based on the number of objects to-be-tracked. Task-irrelevant auditory stimuli were presented at very rapid and random rates concurrently to the visual task. The auditory sequence included rare increments (+10 dB) and decrements (-20 dB) in intensity relative to the frequently-presented standard stimulus. Importantly, the auditory deviant stimuli elicited a significant P3a during the most difficult visual task, when conditions were optimised to prevent attentional slippage to the auditory channel. This finding suggests that the elicitation of the auditory P3a does not require available central capacity, and confirms the automatic nature of the processes underlying this ERP component. Moreover, the difficulty of the visual task did not modulate either the mismatch negativity (MMN) or the P3a but did have an effect on a late (350-400 ms) negativity, an ERP deflection perhaps related to a subsequent evaluation of the auditory change. Together, these results imply that the auditory P3a could reflect a strongly-automatic process, one that does not require and is not modulated by attention.
Lemos, Isabel Cristina Cavalcanti; Feniman, Mariza Ribeiro
Cleft lip and palate (CLP) is a risk indicator for middle ear alterations, which may impair the development of auditory abilities such as attention, an ability essential for learning new skills and for oral and written communication...
Full Text Available Objectives: The amplitude of the auditory steady-state response (ASSR) is enhanced in tinnitus. As ASSR amplitude is also enhanced by attention, the effect of tinnitus on ASSR amplitude could be interpreted as an effect of attention mediated by tinnitus. As attention effects on the N1 are significantly larger than those on the ASSR, if the effect of tinnitus on ASSR amplitude were due to attention, there should be similar amplitude enhancement effects in tinnitus for the N1 component of the auditory evoked response. Methods: MEG recordings of auditory evoked responses which were previously examined for the ASSR (Diesch et al., 2010) were analysed with respect to the N1m component. Like the ASSR previously, the N1m was analysed in the source domain (source space projection). Stimuli were amplitude-modulated tones with one of three carrier frequencies: a frequency matching the tinnitus frequency (or a surrogate frequency 1½ octaves above the audiometric edge frequency in controls), the audiometric edge frequency, and a frequency below the audiometric edge. Results: In the earlier ASSR study (Diesch et al., 2010), the ASSR amplitude in tinnitus patients, but not in controls, was significantly larger in the (surrogate) tinnitus condition than in the edge condition. In the present study, both tinnitus patients and healthy controls show an N1m-amplitude profile identical to the one of ASSR amplitudes in healthy controls. N1m amplitudes elicited by tonal frequencies located at the audiometric edge and at the (surrogate) tinnitus frequency are smaller than N1m amplitudes elicited by sub-edge tones and do not differ from each other. Conclusions: There is no N1-amplitude enhancement effect in tinnitus. The enhancement effect of tinnitus on ASSR amplitude cannot be accounted for in terms of attention induced by tinnitus.
Simpson, M.I.G.; Barnes, G.R.; Johnson, S.R.; Hillebrand, A.; Singh, K.D.; Green, G.G.R.
Speech contains complex amplitude modulations that have envelopes with multiple temporal cues. The processing of these complex envelopes is not well explained by the classical models of amplitude modulation processing. This may be because the evidence for the models typically comes from the use of…
Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Campodonico, Francesca; Oliva, Doretta
This study was an effort to extend the evaluation of orientation technology for promoting independent indoor traveling in persons with multiple disabilities. Two participants (adults) were included, who were to travel to activity destinations within occupational settings. The orientation system involved (a) cueing sources only at the destinations…
Lambert, Anthony J; Wilkie, Jaimie; Greenwood, Andrea; Ryckman, Nathan; Sciberras-Lim, Evatte; Booker, Laura-Jane; Tahara-Eckl, Lenore
To what extent are shifts of attention driven by encoding of visual-spatial landmarks, associated with useful locations, or by encoding of environmental cues that act as symbolic representations, providing information about where to look next? In Experiment 1 we found that when cues were presented with a long exposure time (300 ms) attention shifts were driven by the symbolic identity of cue stimuli, independently of their visual-spatial (landmark) features; but when cues were exposed very briefly, (66 ms), attention shifts were independent of symbolic information, and were driven instead by visual landmark features. This unexpected finding was interpreted in terms of the transient and sustained response characteristics of the M-cell and P-cell inputs to the dorsal and ventral visual streams, respectively, and informed our theoretical proposal that attentional effects elicited by visual-spatial landmarks may be driven by dorsal stream ("where pathway") encoding; while attentional effects driven by the symbolic identity of cues may be driven by ventral stream ("what pathway") encoding. Detailed predictions derived from this proposal, and based on distinct physiological properties of the 2 visual streams were tested and confirmed in Experiments 2-6. Our results suggest that a 2-process view of attention shifting can be integrated with dual-stream models of vision. According to this unified theory: (a) Landmarks associated with visually useful locations elicit rapid, nonconscious shifts of attention, via nonsemantic, dorsal visual stream encoding of their features and spatial relationships; (b) Slower, endogenous shifts of attention are elicited by ventral visual stream encoding of symbolic-semantic information. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Ikeda, Kazunari; Hayashi, Akiko; Sekiguchi, Takahiro; Era, Shukichi
It is known that in humans the attention effect at the auditory periphery is difficult to identify with electrophysiological measures such as the auditory brainstem response (ABR), whereas the centrifugal effect has been detected by measuring otoacoustic emissions. This research developed a measure responsive to the shift of human scalp potentials within a brief post-stimulus period (13 ms), the displacement percentage, and applied it in an experiment to retrieve the peripheral attention effect. In the present experimental paradigm, tone pips were presented to the left ear while the other ear was masked by white noise. Twelve participants each completed two conditions, either ignoring or attending to the tone pips. Relative to the averaged scalp potentials in the ignoring condition, a shift of the potentials was found within the early component range during the attentive condition, and the displacement percentage then revealed a significant magnitude difference between the two conditions. These results suggest that, using a measure representing the potential shift itself, the peripheral effect of attention can be detected from human scalp potentials.
van Leeuwen, Tessa M; Hagoort, Peter; Händel, Barbara F
Grapheme-color synesthetes perceive color when reading letters or digits. We investigated oscillatory brain signals of synesthetes vs. controls using magnetoencephalography. Brain oscillations specifically in the alpha band (∼10 Hz) have two interesting features: alpha has been linked to inhibitory processes and can act as a marker for attention. The possible role of reduced inhibition as an underlying cause of synesthesia, as well as the precise role of attention in synesthesia, is widely discussed. To assess alpha power effects due to synesthesia, synesthetes as well as matched controls viewed synesthesia-inducing graphemes, colored control graphemes, and non-colored control graphemes while brain activity was recorded. Subjects had to report a color change at the end of each trial, which allowed us to assess the strength of synesthesia in each synesthete. Since color (synesthetic or real) might allocate attention, we also included an attentional cue in our paradigm which could direct covert attention. In controls the attentional cue always caused a lateralization of alpha power with a contralateral decrease and ipsilateral alpha increase over occipital sensors. In synesthetes, however, the influence of the cue was overruled by color: independent of the attentional cue, alpha power decreased contralateral to the color (synesthetic or real). This indicates that in synesthetes color guides attention. This was confirmed by reaction time effects due to color, i.e. faster RTs for the color side independent of the cue. Finally, the stronger the observed color-dependent alpha lateralization, the stronger was the manifestation of synesthesia as measured by congruency effects of synesthetic colors on RTs. Behavioral and imaging results indicate that color induces a location-specific, automatic shift of attention towards color in synesthetes but not in controls. We hypothesize that this mechanism can facilitate coupling of grapheme and color during the development of synesthesia.
Batson, Glenna; Hugenschmidt, Christina E.; Soriano, Christina T.
Dance is a non-pharmacological intervention that helps maintain functional independence and quality of life in people with Parkinson’s disease (PPD). Results from controlled studies on group-delivered dance for people with mild-to-moderate stage Parkinson’s have shown statistically and clinically significant improvements in gait, balance, and psychosocial factors. Tested interventions include non-partnered dance forms (ballet and modern dance) and partnered (tango). In all of these dance forms, specific movement patterns initially are learned through repetition and performed in time-to-music. Once the basic steps are mastered, students may be encouraged to improvise on the learned steps as they perform them in rhythm with the music. Here, we summarize a method of teaching improvisational dance that advances previously reported benefits of dance for people with Parkinson’s disease (PD). The method relies primarily on improvisational verbal auditory cueing with less emphasis on directed movement instruction. This method builds on the idea that daily living requires flexible, adaptive responses to real-life challenges. In PD, movement disorders not only limit mobility but also impair spontaneity of thought and action. Dance improvisation demands open and immediate interpretation of verbally delivered movement cues, potentially fostering the formation of spontaneous movement strategies. Here, we present an introduction to a proposed method, detailing its methodological specifics, and pointing to future directions. The viewpoint advances an embodied cognitive approach that has eco-validity in helping PPD meet the changing demands of daily living.
Zhang, Yu-Xuan; Barry, Johanna G; Moore, David R; Amitay, Sygal
Attention modulates auditory perception, but there are currently no simple tests that specifically quantify this modulation. To fill the gap, we developed a new, easy-to-use test of attention in listening (TAIL) based on reaction time. On each trial, two clearly audible tones were presented sequentially, either at the same or different ears. The frequency of the tones was also either the same or different (by at least two critical bands). When the task required same/different frequency judgments, presentation at the same ear significantly speeded responses and reduced errors. A same/different ear (location) judgment was likewise facilitated by keeping tone frequency constant. Perception was thus influenced by involuntary orienting of attention along the task-irrelevant dimension. When information in the two stimulus dimensions was congruent (same-frequency same-ear, or different-frequency different-ear), response was faster and more accurate than when it was incongruent (same-frequency different-ear, or different-frequency same-ear), suggesting the involvement of executive control to resolve conflicts. In total, the TAIL yielded five independent outcome measures: (1) baseline reaction time, indicating information processing efficiency, (2) involuntary orienting of attention to frequency and (3) location, and (4) conflict resolution for frequency and (5) location. Processing efficiency and conflict resolution accounted for up to 45% of individual variances in the low- and high-threshold variants of three psychoacoustic tasks assessing temporal and spectral processing. Involuntary orienting of attention to the irrelevant dimension did not correlate with perceptual performance on these tasks. Given that TAIL measures are unlikely to be limited by perceptual sensitivity, we suggest that the correlations reflect modulation of perceptual performance by attention. The TAIL thus has the power to identify and separate the contributions of different components of attention.
Röer, Jan P; Körner, Ulrike; Buchner, Axel; Bell, Raoul
It is well established that task-irrelevant, to-be-ignored speech adversely affects serial short-term memory (STM) for visually presented items compared with a quiet control condition. However, there is an ongoing debate about whether the semantic content of the speech has the capacity to capture attention and to disrupt memory performance. In the present article, we tested whether taboo words are more difficult to ignore than neutral words. Taboo words or neutral words were presented as (a) steady state sequences in which the same distractor word was repeated, (b) changing state sequences in which different distractor words were presented, and (c) auditory deviant sequences in which a single distractor word deviated from a sequence of repeated words. Experiments 1 and 2 showed that taboo words disrupted performance more than neutral words. This taboo effect did not habituate and it did not differ between individuals with high and low working memory capacity. In Experiments 3 and 4, in which only a single deviant taboo word was presented, no taboo effect was obtained. These results do not support the idea that the processing of the auditory distractors' semantic content is the result of occasional attention switches to the auditory modality. Instead, the overall pattern of results is more in line with a functional view of auditory distraction, according to which the to-be-ignored modality is routinely monitored for potentially important stimuli (e.g., self-relevant or threatening information), the detection of which draws processing resources away from the primary task.
Purpose: to compare the frequency of disfluencies and speech rate in spontaneous speech and reading in adults with and without stuttering under non-altered and delayed auditory feedback (NAF, DAF). Methods: participants were 30 adults: 15 with stuttering (Research Group - RG) and 15 without stuttering (Control Group - CG). The procedures were: audiological assessment and speech fluency evaluation in two listening conditions, normal and delayed auditory feedback (100 milliseconds delay, produced by Fono Tools software). Results: the DAF caused a significant improvement in the fluency of spontaneous speech in the RG when compared to speech under NAF. The effect of DAF was different in the CG, because it increased the common disfluencies and the total of disfluencies in spontaneous speech and reading, besides showing an increase in the frequency of stuttering-like disfluencies in reading. The intergroup analysis showed significant differences in the two speech tasks for the two listening conditions in the frequency of stuttering-like disfluencies and in the total of disfluencies, and in the syllable- and word-per-minute rates under NAF. Conclusion: the results demonstrated that delayed auditory feedback promoted fluency in the spontaneous speech of adults who stutter, without interfering with their speech rate. In non-stuttering adults, an increase occurred in the number of common disfluencies and total of disfluencies, as well as a reduction of speech rate in spontaneous speech and reading.
Garland, Eric L.; Howard, Matthew O.
Background Some chronic pain patients receiving long-term opioid analgesic pharmacotherapy are at risk for misusing opioids. Like other addictive behaviors, risk of opioid misuse may be signaled by an attentional bias (AB) towards drug-related cues. The purpose of this study was to examine opioid AB as a potential predictor of opioid misuse among chronic pain patients following behavioral treatment. Methods Chronic pain patients taking long-term opioid analgesics (N = 47) completed a dot probe task designed to assess opioid AB, as well as self-report measures of opioid misuse and pain severity, and then participated in behavioral treatment. Regression analyses examined opioid AB and cue-elicited craving as predictors of opioid misuse at 3-months posttreatment follow-up. Results Patients who scored high on a measure of opioid misuse risk following treatment exhibited significantly greater opioid AB scores than patients at low risk for opioid misuse. Opioid AB for 200 ms cues and cue-elicited craving significantly predicted opioid misuse risk 20 weeks later, even after controlling for pre-treatment opioid dependence diagnosis, opioid misuse, and pain severity (Model R2 = .50). Conclusion Biased initial attentional orienting to prescription opioid cues and cue-elicited craving may reliably signal future opioid misuse risk following treatment. These measures may therefore provide potential prognostic indicators of treatment outcome.
Sugimoto, Fumie; Katayama, Jun'ichi
Previous studies using a three-stimulus oddball task have shown that the amplitude of P3a elicited by distractor stimuli increases when perceptual discrimination between standard and target stimuli becomes difficult. This means that attentional capture by the distractor stimuli is enhanced along with an increase in task difficulty. So far, the increase of P3a has been reported when standard, target, and distractor stimuli were presented within one sensory modality (i.e., visual or auditory). In the present study, we further investigated whether or not the increase of P3a can also be observed when the distractor stimuli are presented in a different modality from the standard and target stimuli. Twelve participants performed a three-stimulus oddball task in which they were required to discriminate between visual standard and target stimuli. As the distractor stimuli, either another visual stimulus or an auditory stimulus was presented in separate blocks. Visual distractor stimuli elicited P3a, and its amplitude increased when visual standard/target discrimination was difficult, replicating previous findings. Auditory distractor stimuli elicited P3a, and importantly, its amplitude also increased when visual standard/target discrimination was difficult. This result means that attentional capture by distractor stimuli can be enhanced even when the distractor stimuli are presented in a different modality from the standard and target stimuli. Possible mechanisms and implications are discussed in terms of the relative saliency of distractor stimuli, influences of temporal/spatial attention, and the load involved in a task.
Zeamer, Charlotte; Fox Tree, Jean E
Literature on auditory distraction has generally focused on the effects of particular kinds of sounds on attention to target stimuli. In support of extensive previous findings that have demonstrated the special role of language as an auditory distractor, we found that a concurrent speech stream impaired recall of a short lecture, especially for verbatim language. But impaired recall effects were also found with a variety of nonlinguistic noises, suggesting that neither type of noise nor amplitude and duration of noise are adequate predictors of distraction. Rather, distraction occurred when it was difficult for a listener to process sounds and assemble coherent, differentiable streams of input, one task-salient and attended and the other task-irrelevant and inhibited. In 3 experiments, the effects of auditory distractors during a short spoken lecture were tested. Participants recalled details of the lecture and also reported their opinions of the sound quality. Our findings suggest that distractors that are difficult to designate as either task related or environment related (and therefore irrelevant) draw cognitive processing resources away from a target speech stream during a listening task, impairing recall.
Dowd, Emma Wu; Mitroff, Stephen R
Many factors influence visual search, including how much targets stand out (i.e., their visual salience) and whether they are currently relevant (i.e., Are they in working memory?). Although these are two known influences on search performance, it is unclear how they interact to guide attention. The present study explored this interplay by having participants hold an item in memory for a subsequent test while simultaneously conducting a multiple-target visual search. Importantly, the memory item could match one or neither of two targets from the search. In Experiment 1, when the memory item did not match either target, participants found a high-salience target first, demonstrating a baseline salience effect. This effect was exaggerated when a high-salience target was in working memory and completely reversed when a low-salience target was in memory, demonstrating a powerful influence of working memory guidance. Experiment 2 amplified the salience effect by including very high-salience, "pop-out"-like targets. Yet this salience effect was still attenuated when the memory item matched a less salient target. Experiment 3 confirmed these were memory-based effects and not priming. Collectively, these findings illustrate the influential role of working memory in guiding visual attention, even in the face of competing bottom-up salience cues.
Hendrikse, J J; Cachia, R L; Kothe, E J; McPhie, S; Skouteris, H; Hayden, M J
Obesity rates have increased dramatically in recent decades, and obesity has proven difficult to treat. An attentional bias towards food cues may be implicated in the aetiology of obesity and influence cravings and food consumption. This review systematically investigated whether attentional biases to food cues exist in overweight/obese compared with healthy weight individuals. Electronic databases were searched for relevant papers from inception to October 2014. Only studies reporting food-related attentional bias between either overweight (body mass index [BMI] 25.0-29.9 kg m-2) or obese (BMI ≥ 30) participants and healthy weight participants (BMI 18.5-24.9) were included. The findings of 19 studies were reported in this review. Results of the literature are suggestive of differences in attentional bias, with all but four studies supporting the notion of enhanced reactivity to food stimuli in overweight individuals and individuals with obesity. This support for attentional bias was observed primarily in studies that employed psychophysiological techniques (i.e. electroencephalogram, eye-tracking and functional magnetic resonance imaging). Despite the heterogeneous methodology within the featured studies, all measures of attentional bias demonstrated altered cue-reactivity in individuals with obesity. Considering the theorized implications of attentional biases on obesity pathology, researchers are encouraged to replicate flagship studies to strengthen these inferences.
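The BMI bands used as inclusion criteria above are simple numeric thresholds. As a minimal sketch (the function name and the "underweight" catch-all are our illustrative assumptions, not part of the review):

```python
def weight_category(bmi: float) -> str:
    """Map BMI (kg/m^2) to the bands used as inclusion criteria in the
    review: healthy 18.5-24.9, overweight 25.0-29.9, obese >= 30."""
    if bmi >= 30.0:
        return "obese"
    if bmi >= 25.0:
        return "overweight"
    if bmi >= 18.5:
        return "healthy"
    return "underweight"  # below the review's inclusion range (assumed label)

print(weight_category(27.4))  # -> overweight
```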
Fisher, Derek J; Labelle, Alain; Knott, Verner J
In line with emerging research strategies focusing on specific symptoms rather than global syndromes in psychiatric disorders, we examined the functional neural correlates of auditory verbal hallucinations (AHs) in schizophrenia. Recent neuroimaging and behavioural evidence suggest altered early cognitive processes may be seen in patients with AH as a result of limited processing resources. The P3a subcomponent of the P300, an event-related potential (ERP) index of early attention switching, was assessed in 12 hallucinating patients (HP), 12 non-hallucinating patients (NP) and 12 healthy controls (HC) within a passive two-tone auditory oddball paradigm using vowel phonemes. P3a amplitudes and latencies were measured in response to across-phoneme changes. Following P3a acquisition, patients indicated the duration, intensity and clarity of their auditory hallucinations during recording. Hallucinating patients exhibited smaller P3a amplitudes than non-hallucinating patients and healthy controls. In HPs, P3a amplitude was negatively correlated with AH trait scores. These findings suggest that AHs are associated with impaired processing of speech as evidenced by altered P3a amplitudes to vowel phonemes. This finding may be due to limited cognitive resources available for incoming external stimuli due to a usurping of finite resources by AHs. The P3a may be a useful non-invasive tool for probing relationships between hallucinatory and neural states within schizophrenia and the manner in which auditory processing is altered in these afflicted patients.
Christiansen, Simon Krogholt
The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent …
Shapley, Kathy; Carrell, Thomas
One of the earliest explanations for good speech intelligibility in poor listening situations was context [Miller et al., J. Exp. Psychol. 41 (1951)]. Context presumably allows listeners to group and predict speech appropriately and is known as a top-down listening strategy. Amplitude comodulation is another mechanism that has been shown to improve sentence intelligibility. Amplitude comodulation provides acoustic grouping information without changing the linguistic content of the desired signal [Carrell and Opie, Percept. Psychophys. 52 (1992); Hu and Wang, Proceedings of ICASSP-02 (2002)] and is considered a bottom-up process. The present experiment investigated how amplitude comodulation and semantic information combined to improve speech intelligibility. Sentences with high- and low-predictability word sequences [Boothroyd and Nittrouer, J. Acoust. Soc. Am. 84 (1988)] were constructed in two different formats: time-varying sinusoidal sentences (TVS) and reduced-channel sentences (RC). The stimuli were chosen because they minimally represent the traditionally defined speech cues and therefore emphasized the importance of the high-level context effects and low-level acoustic grouping cues. Results indicated that semantic information did not influence intelligibility levels of TVS and RC sentences. In addition amplitude modulation aided listeners' intelligibility scores in the TVS condition but hindered listeners' intelligibility scores in the RC condition.
Bradley, Brendan P; Garner, Matthew; Hudson, Laura; Mogg, Karin
According to recent models of addiction, negative affect plays an important role in maintaining drug dependence. The study investigated the effect of negative mood on attentional biases for smoking-related cues and smoking urge in cigarette smokers. Eye movements to smoking-related and control pictures, and manual response times to probes, were recorded during a visual probe task. Smoking urges and mood were assessed by self-report measures. Negative affect was manipulated experimentally as a within-participants independent variable; that is, each participant received negative and neutral mood induction procedures, in counterbalanced order in separate sessions, before the attentional task. There were two groups of participants: smokers and nonsmokers. Smokers showed (i) a greater tendency to shift gaze initially towards smoking-related cues, and (ii) greater urge to smoke when they were in negative mood compared with neutral mood. Manual response time data suggested that smokers showed a greater tendency than nonsmokers to maintain attention on smoking-related cues, irrespective of mood. The results offer partial support for the view that negative mood increases selective attention to drug cues, and urge to smoke, in smokers. The findings are discussed in relation to an affective processing model of negative reinforcement in drug dependence.
Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel
Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
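The optical variable τ described above, the ratio of instantaneous optical size to its instantaneous rate of change, can be sketched in a few lines. This is an illustrative reconstruction under the standard small-angle approximation, not code from the study; the function and variable names are ours:

```python
def estimate_ttc_tau(theta: float, theta_dot: float) -> float:
    """Estimate time to contact (seconds) via tau = theta / theta_dot,
    where theta is the instantaneous optical angle (rad) subtended by the
    approaching object and theta_dot is its rate of change (rad/s).
    For an object of size s at distance d closing at speed v,
    theta ~ s/d and theta_dot ~ s*v/d**2, so tau ~ d/v, the true TTC."""
    if theta_dot <= 0:
        raise ValueError("tau is defined only for an approaching (looming) object")
    return theta / theta_dot

# A vehicle 2 m wide, 20 m away, approaching at 10 m/s:
theta = 2 / 20               # 0.1 rad (small-angle approximation)
theta_dot = 2 * 10 / 20**2   # 0.05 rad/s
print(estimate_ttc_tau(theta, theta_dot))  # -> 2.0, i.e. 20 m / 10 m/s
```

The same ratio applies to sound intensity in the auditory case; the heuristic cues the study contrasts (final optical size, final sound pressure level) would instead rely only on the last sampled value rather than this ratio.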
A goal of the SKILLS project is to develop virtual reality (VR)-based training simulators for different application domains, one of which is juggling. Within this context, the value of multimodal VR environments for skill acquisition is investigated. In this study, we investigated whether it was necessary to render the sounds of virtual balls hitting virtual hands within the juggling training simulator. First, we recorded sounds at the jugglers’ ears and found the sound of balls hitting hands to be audible. Second, we asked 24 jugglers to juggle under normal conditions (Audible) or while listening to pink noise intended to mask the juggling sounds (Inaudible). We found that although the jugglers themselves reported no difference in their juggling across these two conditions, external juggling experts rated rhythmic stability worse in the Inaudible condition than in the Audible condition. This result suggests that auditory information should be rendered in the VR juggling training simulator.
Fasoli, Fabio; Maass, Anne; Paladino, Maria Paola; Sulpizio, Simone
The growing body of literature on the recognition of sexual orientation from voice ("auditory gaydar") is silent on the cognitive and social consequences of having a gay-/lesbian- versus heterosexual-sounding voice. We investigated this issue in four studies (overall N = 276), conducted in Italian language, in which heterosexual listeners were exposed to single-sentence voice samples of gay/lesbian and heterosexual speakers. In all four studies, listeners were found to make gender-typical inferences about traits and preferences of heterosexual speakers, but gender-atypical inferences about those of gay or lesbian speakers. Behavioral intention measures showed that listeners considered lesbian and gay speakers as less suitable for a leadership position, and male (but not female) listeners took distance from gay speakers. Together, this research demonstrates that having a gay/lesbian rather than heterosexual-sounding voice has tangible consequences for stereotyping and discrimination.
Batterink, Laura J; Creery, Jessica D; Paller, Ken A
Slow oscillations during slow-wave sleep (SWS) may facilitate memory consolidation by regulating interactions between hippocampal and cortical networks. Slow oscillations appear as high-amplitude, synchronized EEG activity, corresponding to upstates of neuronal depolarization and downstates of hyperpolarization. Memory reactivations occur spontaneously during SWS, and can also be induced by presenting learning-related cues associated with a prior learning episode during sleep. This technique, targeted memory reactivation (TMR), selectively enhances memory consolidation. Given that memory reactivation is thought to occur preferentially during the slow-oscillation upstate, we hypothesized that TMR stimulation effects would depend on the phase of the slow oscillation. Participants learned arbitrary spatial locations for objects that were each paired with a characteristic sound (eg, cat-meow). Then, during SWS periods of an afternoon nap, one-half of the sounds were presented at low intensity. When object location memory was subsequently tested, recall accuracy was significantly better for those objects cued during sleep. We report here for the first time that this memory benefit was predicted by slow-wave phase at the time of stimulation. For cued objects, location memories were categorized according to amount of forgetting from pre- to post-nap. Conditions of high versus low forgetting corresponded to stimulation timing at different slow-oscillation phases, suggesting that learning-related stimuli were more likely to be processed and trigger memory reactivation when they occurred at the optimal phase of a slow oscillation. These findings provide insight into mechanisms of memory reactivation during sleep, supporting the idea that reactivation is most likely during cortical upstates. Slow-wave sleep (SWS) is characterized by synchronized neural activity alternating between active upstates and quiet downstates. The slow-oscillation upstates are thought to provide a
Kleiman, Tali; Trope, Yaacov; Amodio, David M
Self-control in one's food choices often depends on the regulation of attention toward healthy choices and away from temptations. We tested whether selective attention to food cues can be modulated by a newly developed proactive self-control mechanism, control readiness, whereby control activated in one domain can facilitate control in another domain. In two studies, we elicited the activation of control using a color-naming Stroop task and tested its effect on attention to food cues in a subsequent, unrelated task. We found that control readiness modulates both overt attention, which involves shifts in eye gaze (Study 1), and covert attention, which involves shifts in mental attention without shifts in eye gaze (Study 2). We further demonstrated that individuals for whom tempting food cues signal a self-control problem (operationalized by relatively higher BMI) were especially likely to benefit from control readiness. We discuss the theoretical contributions of the control readiness model and the implications of our findings for enhancing proactive self-control to overcome temptation in food choices.
Spielmann, Mona Isabel; Schröger, Erich; Kotz, Sonja A; Bendixen, Alexandra
Sounds emitted by different sources arrive at our ears as a mixture that must be disentangled before meaningful information can be retrieved. It is still a matter of debate whether this decomposition happens automatically or requires the listener's attention. These opposite positions partly stem from different methodological approaches to the problem. We propose an integrative approach that combines the logic of previous measurements targeting either auditory stream segregation (interpreting a mixture as coming from two separate sources) or integration (interpreting a mixture as originating from only one source). By means of combined behavioral and event-related potential (ERP) measures, our paradigm has the potential to measure stream segregation and integration at the same time, providing the opportunity to obtain positive evidence of either one. This reduces the reliance on zero findings (i.e., the occurrence of stream integration in a given condition can be demonstrated directly, rather than indirectly based on the absence of empirical evidence for stream segregation, and vice versa). With this two-way approach, we systematically manipulate attention devoted to the auditory stimuli (by varying their task relevance) and to their underlying structure (by delivering perceptual tasks that require segregated or integrated percepts). ERP results based on the mismatch negativity (MMN) show no evidence for a modulation of stream integration by attention, while stream segregation results were less clear due to overlapping attention-related components in the MMN latency range. We suggest future studies combining the proposed two-way approach with some improvements in the ERP measurement of sequential stream segregation.
Hill, N. J.; Schölkopf, B.
We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.
Biesmans, Wouter; Vanthornhout, Jonas; Wouters, Jan; Moonen, Marc; Francart, Tom; Bertrand, Alexander
Recent research has shown that it is possible to detect which of two simultaneous speakers a person is attending to, using brain recordings and the temporal envelope of the separate speech signals. However, a wide range of possible methods for extracting this speech envelope exists. This paper assesses the effect of different envelope extraction methods with varying degrees of auditory modelling on the performance of auditory attention detection (AAD), and more specifically on the detection accuracy. It is found that sub-band envelope extraction with proper power-law compression yields the best performance, and that the use of several more detailed auditory models does not yield a further improvement in performance.
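The sub-band, power-law-compressed envelope described above can be sketched as follows. This is a minimal illustration, not the paper's exact auditory model: the band edges, filter order, and 0.6 compression exponent are assumptions chosen for demonstration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def subband_envelope(speech, fs, band_edges=None, power=0.6):
    """Sub-band envelope with power-law compression (illustrative sketch).

    band_edges and the 0.6 exponent are assumed values, not the
    parameters used in the cited study.
    """
    if band_edges is None:
        # log-spaced band edges between 150 Hz and 4 kHz (assumed)
        band_edges = np.logspace(np.log10(150), np.log10(4000), 9)
    envelope = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        # magnitude envelope of each band, compressed, then summed
        envelope += np.abs(hilbert(band)) ** power
    return envelope

fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 440 * t)  # toy signal standing in for speech
env = subband_envelope(speech, fs)
```

In an AAD pipeline, `env` would then be correlated with an EEG-reconstructed envelope for each competing speaker, and the speaker with the higher correlation taken as attended.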
Sinex, D.G. (Boys Town National Research Hospital, Omaha, NE, United States)
Acoustic cues to the identity of consonants such as d and t vary according to contextual factors such as the position of the consonant within a syllable. However, investigations of the neural coding of consonants have almost always used stimuli in which the consonant occurs in the syllable-initial position. The present experiments examined the peripheral neural representation of spectral and temporal cues that can distinguish between stop consonants d and t in syllable-final position. Stimulus sets consisting of the syllables hid, hit, hud, and hut were recorded by three different talkers. During the consonant closure interval, the spectrum of d was characterized by the presence of a low-frequency voice bar. Most neurons' responses were characterized by discharge rate decreases at the beginning of the closure interval and by rate increases that marked the release of the consonant closure. Exceptions were seen in the responses of neurons with characteristic frequencies (CFs) below approximately 0.7 kHz to syllables ending in d. These neurons responded to the voice bar with discharge rates that could approach the rates elicited by the vowel. The latencies of prominent discharge rate changes were measured for all neurons and used to compute the length of the encoded closure interval. The encoded interval was clearly longer for syllables ending in t than in d. The encoded interval increased with CF for both consonants, but more rapidly for t. Differences in the encoded closure interval were small for syllables with different vowels or syllables produced by different talkers.
Martin, Thomas J.; Grigg, Amanda; Kim, Susy A.; Ririe, Douglas G.; Eisenach, James C.
Background The 5 choice serial reaction time task (5CSRTT) is commonly used to assess attention in rodents. We sought to develop a variant of the 5CSRTT that would speed training to objective success criteria, and to test whether this variant could determine attention capability in each subject. New Method Fisher 344 rats were trained to perform a variant of the 5CSRTT in which the duration of visual cue presentation (cue duration) was titrated between trials based upon performance. The cue duration was decreased when the subject made a correct response, or increased with incorrect responses or omissions. Additionally, test day challenges were provided, consisting of lengthening the intertrial interval and including a visual distracting stimulus. Results Rats readily titrated the cue duration to less than 1 sec in 25 training sessions or fewer (mean ± SEM, 22.9 ± 0.7), and the median cue duration (MCD) was calculated as a measure of attention threshold. Increasing the intertrial interval increased premature responses, decreased the number of trials completed, and increased the MCD. Decreasing the intertrial interval and the time allotted for consuming the food reward demonstrated that a minimum of 3.5 sec is required for rats to consume two food pellets and successfully attend to the next trial. Visual distraction in the form of a 3 Hz flashing light increased the MCD and both premature and time-out responses. Comparison with existing method The titration variant of the 5CSRTT is a useful method that dynamically measures attention threshold across a wide range of subject performance, and significantly decreases the time required for training. Task challenges produce similar effects in the titration method as reported for the classical procedure. Conclusions The titration 5CSRTT method is an efficient training procedure for assessing attention and can be utilized to assess the limit in performance ability across subjects and various schedule manipulations.
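The between-trial titration rule described above (shorten the cue after a correct response, lengthen it after an error or omission, then take the median cue duration as the threshold) can be sketched as a simple adaptive staircase. The start value, step factor, and floor below are illustrative assumptions, not the study's parameters.

```python
import statistics

def titrate_cue_duration(respond, start=30.0, step=0.75, floor=0.5, n_trials=200):
    """Between-trial titration of cue duration (illustrative sketch).

    `respond` maps a cue duration in seconds to True (correct response)
    or False (incorrect response or omission). Start value, step factor,
    and floor are assumed, not the paper's settings.
    """
    durations = []
    d = start
    for _ in range(n_trials):
        durations.append(d)
        if respond(d):
            d = max(floor, d * step)  # correct: shorten the cue
        else:
            d = min(start, d / step)  # error/omission: lengthen the cue
    return durations

# toy deterministic subject whose attention threshold is a 1 s cue
subject = lambda d: d >= 1.0
durations = titrate_cue_duration(subject)
mcd = statistics.median(durations)  # median cue duration (MCD) as threshold
```

With this toy subject, the staircase descends from the long starting cue and then oscillates around the 1 s threshold, so the MCD settles near it.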
Franceschini, Sandro; Trevisan, Piergiorgio; Ronconi, Luca; Bertoni, Sara; Colmar, Susan; Double, Kit; Facoetti, Andrea; Gori, Simone
Dyslexia is characterized by difficulties in learning to read and there is some evidence that action video games (AVG), without any direct phonological or orthographic stimulation, improve reading efficiency in Italian children with dyslexia. However, the cognitive mechanism underlying this improvement and the extent to which the benefits of AVG training would generalize to deep English orthography, remain two critical questions. During reading acquisition, children have to integrate written letters with speech sounds, rapidly shifting their attention from visual to auditory modality. In our study, we tested reading skills and phonological working memory, visuo-spatial attention, auditory, visual and audio-visual stimuli localization, and cross-sensory attentional shifting in two matched groups of English-speaking children with dyslexia before and after they played AVG or non-action video games. The speed of words recognition and phonological decoding increased after playing AVG, but not non-action video games. Furthermore, focused visuo-spatial attention and visual-to-auditory attentional shifting also improved only after AVG training. This unconventional reading remediation program also increased phonological short-term memory and phoneme blending skills. Our report shows that an enhancement of visuo-spatial attention and phonological working memory, and an acceleration of visual-to-auditory attentional shifting can directly translate into better reading in English-speaking children with dyslexia.
Background Parkinson's disease is a progressive neurological disorder resulting from a degeneration of dopamine-producing cells in the substantia nigra. Clinical symptoms typically affect gait pattern and motor performance. Evidence suggests that individual auditory cueing devices may be used effectively for the management of gait and freezing in people with Parkinson's disease. The primary aim of the randomised controlled trial is to evaluate the effect of an individual auditory cueing device on freezing and gait speed in people with Parkinson's disease. Methods A prospective multi-centre randomised crossover design trial will be conducted. Forty-seven subjects will be randomised into either Group A or Group B, each with a control and intervention phase. Baseline measurements will be recorded using the Freezing of Gait Questionnaire as the primary outcome measure and three secondary outcome measures: the 10 m Walk Test, the Timed "Up & Go" Test, and the Modified Falls Efficacy Scale. Assessments are taken three times over a 3-week period. A follow-up assessment will be completed after three months. A secondary aim of the study is to evaluate the impact of such a device on the quality of life of people with Parkinson's disease using a qualitative methodology. Conclusion The Apple iPod-Shuffle™ and similar devices provide a cost-effective and innovative platform for integration of individual auditory cueing devices into clinical, social and home environments and are shown to have an immediate effect on gait, with improvements in walking speed, stride length and freezing. It is evident that individual auditory cueing devices are of benefit to people with Parkinson's disease, and the aim of this randomised controlled trial is to maximise the benefits by allowing the individual to use devices in both a clinical and social setting, with minimal disruption to their daily routine. Trial registration The protocol for this study is registered
The interplay between top-down attention, bottom-up attention, and consciousness is frequently tested in altered states of consciousness, including transitions between stages of sleep and sedation, and in pathological disorders of consciousness (the vegetative and minimally conscious states; VS and MCS). One of the most widely used tasks to assess cognitive processing in this context is the auditory oddball paradigm, where an infrequent change in a sequence of sounds elicits, in awake subjects, a characteristic EEG event-related potential (ERP) called the mismatch negativity (MMN), followed by the classic P300 wave. The latter is further separable into the slightly earlier, anterior P3a and the later, posterior P3b, linked to bottom-up and top-down attention, respectively. We discuss here the putative dissociations between attention and awareness in disorders of consciousness, sedation, and sleep, bearing in mind the recently emerging evidence from healthy volunteers and patients. These findings highlight the neurophysiological and cognitive parallels (and differences) across these three distinct variations in levels of consciousness, and inform the theoretical framework for interpreting the role of attention therein.
Lanzetta-Valdo, Bianca Pinheiro; Oliveira, Giselle Alves de; Ferreira, Jane Tagarro Correa; Palacios, Ester Miyuki Nakamura
Introduction Children with Attention Deficit Hyperactivity Disorder (ADHD) can present Auditory Processing (AP) disorder. Objective The study examined AP in ADHD children compared with non-ADHD children, and before and after 3 and 6 months of methylphenidate (MPH) treatment in the ADHD children. Methods Drug-naive children diagnosed with ADHD combined subtype, aged 7 to 11 years, recruited from public and private outpatient services or public and private schools, and age- and gender-matched non-ADHD children, participated in an open, non-randomized study from February 2013 to December 2013. They were submitted to a behavioral battery of AP tests comprising Speech with white Noise (SN), Dichotic Digits (DD), and Pitch Pattern Sequence (PPS), and were compared with non-ADHD children. They were followed for 3 and 6 months of MPH treatment (0.5 mg/kg/day). Results ADHD children presented a larger number of errors in the DD (p < 0.01), and fewer correct responses in the PPS (p < 0.0001) and SN (p < 0.05) tests when compared with non-ADHD children. Treatment with MPH, especially over 6 months, significantly decreased the mean errors in the DD (p < 0.01) and increased the correct responses in the PPS (p < 0.001) and SN (p < 0.01) tests when compared with performance before MPH treatment. Conclusions ADHD children show inefficient AP on the selected behavioral auditory battery, suggesting impairments in auditory closure, binaural integration, and temporal ordering. Treatment with MPH gradually improved these deficiencies and completely reversed them, reaching a performance similar to that of non-ADHD children at 6 months of treatment.
Background The speech signal contains both information about phonological features such as place of articulation and non-phonological features such as speaker identity. These are different aspects of the 'what'-processing stream (speaker vs. speech content), and here we show that they can be further segregated, as they may occur in parallel but within different neural substrates. Subjects listened to two different vowels, each spoken by two different speakers. During one block, they were asked to identify a given vowel irrespective of the speaker (phonological categorization), while during the other block the speaker had to be identified irrespective of the vowel (speaker categorization). Auditory evoked fields were recorded using 148-channel magnetoencephalography (MEG), and magnetic source imaging was obtained for 17 subjects. Results During phonological categorization, a vowel-dependent difference of N100m source location perpendicular to the main tonotopic gradient replicated previous findings. In speaker categorization, the relative mapping of vowels remained unchanged but sources were shifted towards more posterior and more superior locations. Conclusions These results imply that the N100m reflects the extraction of abstract invariants from the speech signal. This part of the processing is accomplished in auditory areas anterior to AI, which are part of the auditory 'what' system. This network seems to include spatially separable modules for identifying the phonological information and for associating it with a particular speaker that are activated in synchrony but within different regions, suggesting that the 'what' processing can be more adequately modeled by a stream of parallel stages. The relative activation of the parallel processing stages can be modulated by attentional or task demands.
Liebel, Spencer W; Nelson, Jason M
We investigated auditory and visual working memory functioning in college students with attention-deficit/hyperactivity disorder, learning disabilities, and clinical controls. We examined the role attention-deficit/hyperactivity disorder subtype status played in working memory functioning, as well as the unique influence that both domains of working memory have on reading and math abilities. A sample of 268 individuals seeking postsecondary education comprised four groups: 110 had an attention-deficit/hyperactivity disorder diagnosis only, 72 had a learning disability diagnosis only, 35 had comorbid attention-deficit/hyperactivity disorder and learning disability diagnoses, and 60 individuals without either of these disorders formed a clinical control group. Participants underwent a comprehensive neuropsychological evaluation, and licensed psychologists employed a multi-informant, multi-method approach in obtaining diagnoses. In the attention-deficit/hyperactivity disorder only group, there was no difference between auditory and visual working memory functioning, t(100) = -1.57, p = .12. In the learning disability group, however, auditory working memory functioning was significantly weaker than visual working memory, t(71) = -6.19. Within the attention-deficit/hyperactivity disorder only group, there were no auditory or visual working memory differences between participants with a predominantly inattentive type and those with a combined type diagnosis. Visual working memory did not incrementally contribute to the prediction of academic achievement skills. Individuals with attention-deficit/hyperactivity disorder did not demonstrate significant working memory differences compared with clinical controls. Individuals with a learning disability demonstrated weaker auditory working memory than individuals in either the attention-deficit/hyperactivity disorder or clinical control groups.
Bekhtereva, Valeria; Craddock, Matt; Müller, Matthias M
Emotionally arousing stimuli are known to rapidly draw the brain's processing resources, even when they are task-irrelevant. The steady-state visual evoked potential (SSVEP) response, a neural response to a flickering stimulus which effectively allows measurement of the processing resources devoted to that stimulus, has been used to examine this process of attentional shifting. Previous studies have used a task in which participants detected periods of coherent motion in flickering random dot kinematograms (RDKs) which generate an SSVEP, and found that task-irrelevant emotional stimuli withdraw more attentional resources from the task-relevant RDKs than task-irrelevant neutral stimuli. However, it is not clear whether the emotion-related differences in the SSVEP response are conditional on higher-level extraction of emotional cues as indexed by well-known event-related potential (ERP) components (N170; early posterior negativity, EPN), or if affective bias in competition for visual attention resources is a consequence of a time-invariant shifting process. In the present study, we used two different types of emotional distractors - IAPS pictures and facial expressions - for which emotional cue extraction occurs at different speeds, being typically earlier for faces (at ~170 ms, as indexed by the N170) than for IAPS images (~220-280 ms, EPN). We found that emotional modulation of attentional resources as measured by the SSVEP occurred earlier for faces (around 180 ms) than for IAPS pictures (around 550 ms), in both cases after the extraction of emotional cues as indexed by visual ERP components. This is consistent with emotion-related re-allocation of attentional resources occurring after emotional cue extraction rather than being linked to a time-fixed shifting process.
Soskey, Laura N; Allen, Paul D; Bennetto, Loisa
One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to the increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416.
Michalowski, Jaroslaw M; Pané-Farré, Christiane A; Löw, Andreas; Hamm, Alfons O
This study systematically investigated the sensitivity of the phobic attention system by measuring event-related potentials (ERPs) in spider-phobic and non-phobic volunteers in a context where spider and neutral pictures were presented (phobic threat condition) and in contexts where no phobic but unpleasant and neutral, or only neutral, pictures were displayed (phobia-irrelevant conditions). In a between-group study, participants were assigned to phobia-irrelevant conditions either before or after the exposure to spider pictures (pre-exposure vs post-exposure participants). Additionally, each picture was preceded by a fixation cross presented in one of three different colors that were informative about the category of the upcoming picture. In the phobic threat condition, spider-phobic participants showed a larger P1 than controls for all pictures and signal cues. Moreover, individuals with spider phobia who were sensitized by the exposure to phobic stimuli (i.e., post-exposure participants) responded with an increased P1 also in phobia-irrelevant conditions. In contrast, no group differences between spider-phobic and non-phobic individuals were observed in the P1 amplitudes during viewing of phobia-irrelevant stimuli in the pre-exposure group. In addition, cues signaling neutral pictures elicited decreased stimulus-preceding negativity (SPN) compared with cues signaling emotional pictures. Moreover, emotional pictures and cues signaling emotional pictures evoked larger early posterior negativity (EPN) and late positive potential (LPP) than neutral stimuli. Spider phobics showed greater selective attention effects than controls for phobia-relevant pictures (increased EPN and LPP) and cues (increased LPP and SPN). The increased sensitization of the attention system observed in spider-phobic individuals might facilitate fear conditioning and promote generalization of fear, playing an important role in the maintenance of anxiety disorders.
Oldoni, Damiano; De Coensel, Bert; Boes, Michiel; Rademaker, Michaël; De Baets, Bernard; Van Renterghem, Timothy; Botteldooren, Dick
Urban soundscape design involves creating outdoor spaces that are pleasing to the ear. One way to achieve this goal is to add or accentuate sounds that most users of the space consider desirable, such that the desired sounds mask undesired sounds, or at least distract attention away from them. To remove the need for a listening panel to assess the effectiveness of such soundscape measures, interest in new models and techniques is growing. In this paper, a model of auditory attention to environmental sound is presented, which balances computational complexity and biological plausibility. Once the model is trained for a particular location, it classifies the sounds that are present in the soundscape and simulates how a typical listener would switch attention over time between different sounds. The model provides an acoustic summary, giving the soundscape designer a quick overview of the typical sounds at a particular location, and allows assessment of the perceptual effect of introducing additional sounds.
It has been shown that healthy aging affects the ability to focus attention on a given task and to ignore distractors. Here, we asked whether long-term physical activity is associated with lower susceptibility to distraction of auditory attention, and how physically active and inactive seniors may differ regarding subcomponents of auditory attention. An auditory duration discrimination task was employed, and involuntary attentional shifts to task-irrelevant rare frequency deviations and subsequent reorientation were studied by analysis of behavioral data and event-related potential measures. The frequency deviations impaired performance more in physically inactive than in active seniors. This was accompanied by a stronger frontal positivity (P3a) and increased activation of the anterior cingulate cortex, suggesting a stronger involuntary shift of attention towards task-irrelevant stimulus features in inactive compared to active seniors. These results indicate a positive relationship between physical fitness and attentional control in the elderly, presumably due to more focused attentional resources and enhanced inhibition of irrelevant stimulus features.
Ouchi, Yoshitaka; Meguro, Kenichi; Akanuma, Kyoko; Kato, Yuriko; Yamaguchi, Satoshi
Background. Alzheimer's disease (AD) patients have a poor response to the voices of caregivers. After administration of donepezil, caregivers often find that patients respond more frequently, whereas they had previously pretended to be "deaf." We investigated whether auditory selective attention is associated with the response to donepezil. Methods. The subjects were 40 AD patients, 20 elderly healthy controls (HCs), and 15 young HCs. Pure tone audiometry was conducted, and an original Auditory Selective Attention (ASA) test was performed together with a MoCA vigilance test. The AD group was reassessed after donepezil treatment for 3 months. Results. The hearing level of the AD group was the same as that of the elderly HC group. However, ASA test scores were decreased in the AD group and were correlated with the vigilance test scores. Donepezil responders (MMSE 3+) also showed improvement on the ASA test. At baseline, the responders had higher vigilance and lower ASA test scores. Conclusion. Contrary to the common view, AD patients had a similar level of hearing ability to the healthy elderly. Auditory attention was impaired in AD patients, which suggests that unnecessary sounds should be avoided in nursing homes. Auditory selective attention is associated with the response to donepezil in AD. PMID: 26161001
Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane
The simultaneous auditory processing skills of 17 dyslexic children and 17 skilled readers were measured using a dichotic listening task. Results showed that the dyslexic children exhibited difficulties reporting syllabic material when presented simultaneously. As a measure of simultaneous visual processing, visual attention span skills were…
Foster, Nicholas E. V.; Ouimet, Tia; Tryfon, Ana; Doyle-Thomas, Krissy; Anagnostou, Evdokia; Hyde, Krista L.
In vision, typically-developing (TD) individuals perceive "global" (whole) before "local" (detailed) features, whereas individuals with autism spectrum disorder (ASD) exhibit a local bias. However, auditory global-local distinctions are less clear in ASD, particularly in terms of age and attention effects. To these aims, here…
/ although it could be equivalent to promoting lips uttering /ada/. Our findings suggest that at higher-level processing stages, auditory cues do interact with the perceptual decision and with the dominance mechanism involved during visual rivalry. These results are discussed in terms of individual differences in audio-visual integration for speech perception. We propose a descriptive model based on known characteristics of binocular rivalry, which accounts for most of these findings. In this model, top-down attentional control (volition) is modulated by lower-level audio-visual matching.
Treder, M. S.; Purwins, H.; Miklody, D.; Sturm, I.; Blankertz, B.
Objective. Polyphonic music (music consisting of several instruments playing in parallel) is an intuitive way of embedding multiple information streams. The different instruments in a musical piece form concurrent information streams that seamlessly integrate into a coherent and hedonistically appealing entity. Here, we explore polyphonic music as a novel stimulation approach for use in a brain-computer interface. Approach. In a multi-streamed oddball experiment, we had participants shift selective attention to one out of three different instruments in music audio clips. Each instrument formed an oddball stream with its own specific standard stimuli (a repetitive musical pattern) and oddballs (deviating musical pattern). Main results. Contrasting attended versus unattended instruments, ERP analysis shows subject- and instrument-specific responses including P300 and early auditory components. The attended instrument can be classified offline with a mean accuracy of 91% across 11 participants. Significance. This is a proof of concept that attention paid to a particular instrument in polyphonic music can be inferred from ongoing EEG, a finding that is potentially relevant for both brain-computer interface and music research.
Green, Jessica J; McDonald, John J
We conducted two audiovisual experiments to determine whether event-related potential (ERP) components elicited by attention-directing cues reflect supramodal attentional control. Symbolic visual cues were used to direct attention prior to auditory targets in Experiment 1, and symbolic auditory cues were used to direct attention prior to visual targets in Experiment 2. Different patterns of cue ERPs were found in the two experiments. A frontal negativity called the ADAN was absent in Experiment 2, which indicates that this component does not reflect supramodal attentional control. A posterior positivity called the LDAP was observed in both experiments but was focused more posteriorly over the occipital scalp in Experiment 2. This component appears to reflect multiple processes, including visual processes involved in location marking and target preparation as well as supramodal processes involved in attentional control.
Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan
Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around the left superior frontal sulcus and frontal eye field, both of which are known to be involved in the processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both subcomponents are potential correlates of the allocation of spatial attention to the target under cocktail-party conditions.
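The N2ac contrast described above (average ERP for left targets minus right targets, with a peak around 500 ms) can be sketched on simulated data. Everything here is illustrative: the sampling rate, trial counts, and Gaussian-noise "ERPs" stand in for real recordings at an anterior contralateral electrode pair.

```python
import numpy as np

fs = 500  # Hz, assumed sampling rate
times = np.arange(-0.2, 1.0, 1 / fs)  # epoch from -200 ms to 1 s
rng = np.random.default_rng(0)

# simulated single-trial epochs (trials x time) for the two conditions;
# real data would come from the same electrode(s) in both conditions
erp_left = rng.normal(0.0, 1.0, (50, times.size))
erp_right = rng.normal(0.0, 1.0, (50, times.size))

# N2ac-style contrast: condition-average difference waveform,
# targets left minus targets right
n2ac = erp_left.mean(axis=0) - erp_right.mean(axis=0)

# peak latency within a 400-600 ms window, bracketing the ~500 ms peak
win = (times >= 0.4) & (times <= 0.6)
peak_latency = times[win][np.argmax(np.abs(n2ac[win]))]
```

With real EEG, the same difference waveform would be computed per subject and the peak amplitude correlated with localization performance.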
Yoo, Won-Gyu; Park, Se-Yeon
The etiology of neck and back discomfort is highly associated with abnormal static postures, such as forward head posture and flexed relaxed posture; such postures are regarded as risk factors for work-related musculoskeletal disorders. Although various ergonomic chairs and devices have been developed for computer workers, there are few reports of software that can alert users to their posture or work hours. The purpose of the present study was to investigate differences in the kinematics of the neck and trunk segments, as well as in muscular activation, between conditions with and without posture-related auditory cueing. Twelve male computer workers were recruited. The posture-related auditory cueing (PAC) program used a media file that generated a postural correction cue at intervals of 300 seconds. Surface electromyography was used to measure the activity of the erector spinae and upper trapezius. Kinematic data were obtained using an ultrasonic three-dimensional movement analysis system. The results showed that mean trunk flexion and forward head angles were significantly reduced with PAC. The muscular activity of the erector spinae and upper trapezius was significantly higher with PAC than without it. Our findings suggest that software providing PACs is an ergonomic device with positive effects for preventing habitual poor posture and potential for widespread practical use.
Bellis, Teri James; Billiet, Cassie; Ross, Jody
Cacace and McFarland (2005) have suggested that the addition of cross-modal analogs will improve the diagnostic specificity of (C)APD (central auditory processing disorder) by ensuring that deficits observed are due to the auditory nature of the stimulus and not to supra-modal or other confounds. Others (e.g., Musiek et al, 2005) have expressed concern about the use of such analogs in diagnosing (C)APD given the uncertainty as to the degree to which cross-modal measures truly are analogous, and emphasize the nonmodularity of the CANS (central auditory nervous system) and its function, which precludes modality specificity of (C)APD. To date, no studies have examined the clinical utility of cross-modal (e.g., visual) analogs of central auditory tests in the differential diagnosis of (C)APD. This study investigated performance of children diagnosed with (C)APD, children diagnosed with ADHD (attention deficit hyperactivity disorder), and typically developing children on three diagnostic tests of central auditory function and their corresponding visual analogs. The study sought to determine whether deficits observed in the (C)APD group were restricted to the auditory modality and the degree to which the addition of visual analogs aids in the ability to differentiate among groups. An experimental repeated measures design was employed. Participants consisted of three groups of right-handed children (normal control, n=10; ADHD, n=10; (C)APD, n=7) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of disorders unrelated to their primary diagnosis. Participants in Groups 2 and 3 met current diagnostic criteria for ADHD and (C)APD. Visual analogs of three tests in common clinical use for the diagnosis of (C)APD were used (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; and Duration Patterns [Pinheiro and Musiek, 1985]). Participants underwent two 1 hr test sessions.
Wykowska, Agnieszka; Hommel, Bernhard; Schubö, Anna
In line with the Theory of Event Coding (Hommel et al., 2001a), action planning has been shown to affect perceptual processing - an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Memelink and Hommel, 2012), whose functional role is to provide information for open parameters of online action adjustment (Hommel, 2010). The aim of this study was to test whether different types of action representations induce intentional weighting to various degrees. To meet this aim, we introduced a paradigm in which participants performed a visual search task while preparing to grasp or to point. The to-be performed movement was signaled either by a picture of a required action or a word cue. We reasoned that picture cues might trigger a more concrete action representation that would be more likely to activate the intentional weighting of perceptual dimensions that provide information for online action control. In contrast, word cues were expected to trigger a more abstract action representation that would be less likely to induce intentional weighting. In two experiments, preparing for an action facilitated the processing of targets in an unrelated search task if they differed from distractors on a dimension that provided information for online action control. As predicted, however, this effect was observed only if action preparation was signaled by picture cues but not if it was signaled by word cues. We conclude that picture cues are more efficient than word cues in activating the intentional weighting of perceptual dimensions, presumably by specifying not only invariant characteristics of the planned action but also the dimensions of action-specific parameters.
BACKGROUND: In predictive spatial cueing studies, reaction times (RTs) are shorter for targets appearing at cued locations (valid trials) than at other locations (invalid trials). An increase in the amplitude of early P1 and/or N1 event-related potential (ERP) components is also present for items appearing at cued locations, reflecting early attentional sensory gain control mechanisms. However, it is still unknown at which stage in the processing stream these early amplitude effects are translated into latency effects. METHODOLOGY/PRINCIPAL FINDINGS: Here, we measured the latency of two ERP components, the N2pc and the sustained posterior contralateral negativity (SPCN), to evaluate whether visual selection (as indexed by the N2pc) and visual short-term memory processes (as indexed by the SPCN) are delayed in invalid trials compared to valid trials. The P1 was larger contralateral to the cued side, indicating that attention was deployed to the cued location prior to target onset. Despite these early amplitude effects, the N2pc onset latency was unaffected by cue validity, indicating an express, quasi-instantaneous re-engagement of attention in invalid trials. In contrast, latency effects were observed for the SPCN, and these were correlated with the RT effect. CONCLUSIONS/SIGNIFICANCE: The results show that latency differences that could explain the RT cueing effects must occur after the visual selection processes giving rise to the N2pc, but at or before transfer into visual short-term memory, as reflected by the SPCN, at least in discrimination tasks in which the target is presented concurrently with at least one distractor. Given that the SPCN has previously been associated with conscious report, these results further show that entry into consciousness is delayed following invalid cues.
Charlotte Elisabeth Wittekind
Using variants of the emotional Stroop task (EST), a large number of studies have demonstrated attentional biases in individuals with PTSD across different types of trauma. However, the specificity and robustness of the emotional Stroop effect in PTSD have been questioned recently. In particular, the paradigm cannot disentangle the underlying cognitive mechanisms. Transgenerational studies provide evidence that the consequences of trauma are not limited to the traumatized people but extend to close relatives, especially the children. To further investigate attentional biases in PTSD and to shed light on the underlying cognitive mechanism(s), a spatial-cueing paradigm with pictures of different emotional valence (neutral, anxiety, depression, trauma) was administered to individuals displaced as children during World War II with (n = 22) and without PTSD (n = 26), as well as to nontraumatized controls (n = 22). To assess whether parental PTSD is associated with biased information processing in children, one adult offspring of each participant was also included in the study. PTSD was not associated with attentional biases for trauma-related stimuli. There was no evidence for a transgenerational transmission of biased information processing. However, when samples were regrouped based on current depression, a reduced inhibition of return (IOR) effect emerged for depression-related cues. IOR refers to the phenomenon that, with longer intervals between cue and target, the validity effect is reversed: uncued locations are associated with shorter, and cued locations with longer, RTs. The results diverge from EST studies and demonstrate that findings on attentional biases yield equivocal results across different paradigms. Attentional biases for trauma-related material may only appear for verbal but not for visual stimuli in an elderly population with childhood trauma with PTSD. Future studies should more closely investigate whether findings from younger trauma populations also manifest in older
Yang, Ming-Tao; Hsu, Chun-Hsien; Yeh, Pei-Wen; Lee, Wang-Tso; Liang, Jao-Shwann; Fu, Wen-Mei; Lee, Chia-Ying
Inattention (IA) has been a major problem in children with attention deficit/hyperactivity disorder (ADHD), accounting for their behavioral and cognitive dysfunctions. However, there are at least three processing steps underlying attentional control for auditory change detection, namely pre-attentive change detection, involuntary attention orienting, and attention reorienting for further evaluation. This study aimed to examine whether children with ADHD would show deficits in any of these subcomponents by using mismatch negativity (MMN), P3a, and late discriminative negativity (LDN) as event-related potential (ERP) markers, under the passive auditory oddball paradigm. Two types of stimuli (pure tones and Mandarin lexical tones) were used to examine if the deficits were general across linguistic and non-linguistic domains. Participants included 15 native Mandarin-speaking children with ADHD and 16 age-matched controls (across groups, age ranged between 6 and 15 years). Two passive auditory oddball paradigms (lexical tones and pure tones) were applied. The pure tone oddball paradigm included a standard stimulus (1000 Hz, 80%) and two deviant stimuli (1015 and 1090 Hz, 10% each). The Mandarin lexical tone oddball paradigm's standard stimulus was /yi3/ (80%) and two deviant stimuli were /yi1/ and /yi2/ (10% each). The results showed no MMN difference, but did show attenuated P3a and enhanced LDN to the large deviants for both pure and lexical tone changes in the ADHD group. Correlation analysis showed that children with higher ADHD tendency, as indexed by parents' and teachers' ratings of ADHD symptoms, showed less positive P3a amplitudes when responding to large lexical tone deviants. Thus, children with ADHD showed impaired auditory change detection for both pure tones and lexical tones, in both involuntary attention switching and attention reorienting for further evaluation. These ERP markers may therefore be used for the evaluation of anti-ADHD drugs that aim to alleviate these deficits.
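A passive oddball sequence of the kind described (one frequent standard at 80%, two rare deviants at 10% each) can be generated with a short script. This is an illustrative sketch: the `min_gap` constraint and function name are assumptions, since the study does not specify its randomization rules.

```python
import random

def oddball_sequence(n_trials, standard, deviants, p_deviant=0.10,
                     min_gap=2, seed=0):
    """Generate a passive-oddball stimulus sequence.

    standard: label of the frequent stimulus.
    deviants: list of rare stimulus labels, each with probability p_deviant.
    min_gap:  minimum number of standards between two deviants (a common
              constraint so that deviance is defined against a local context).
    """
    rng = random.Random(seed)
    seq, since_deviant = [], min_gap
    for _ in range(n_trials):
        if since_deviant >= min_gap and rng.random() < len(deviants) * p_deviant:
            seq.append(rng.choice(deviants))  # draw one of the deviants
            since_deviant = 0
        else:
            seq.append(standard)
            since_deviant += 1
    return seq
```

For the pure-tone paradigm above, a call might look like `oddball_sequence(500, "1000Hz", ["1015Hz", "1090Hz"])`.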
Bertels, Julie; Kolinsky, Régine; Pietrons, Elise; Morais, José
Using an auditory adaptation of the emotional and taboo Stroop tasks, the authors compared the effects of negative and taboo spoken words in mixed and blocked designs. Both types of words elicited carryover effects with mixed presentations and interference with blocked presentations, suggesting similar long-lasting attentional effects. Both were also relatively resilient to the long-lasting influence of the preceding emotional word. Hence, contrary to what has been assumed (Schmidt & Saari, 2007), negative and taboo words do not seem to differ in terms of the temporal dynamics of the interdimensional shifting, at least in the auditory modality. PsycINFO Database Record (c) 2011 APA, all rights reserved.
Choi, Wonjae; Lee, GyuChang; Lee, Seungwon
To investigate the effect of a cognitive-motor dual-task using auditory cues on the balance of patients with chronic stroke. Randomized controlled trial. Inpatient rehabilitation center. Thirty-seven individuals with chronic stroke. The participants were randomly allocated to the dual-task group (n=19) and the single-task group (n=18). The dual-task group performed a cognitive-motor dual-task in which they carried a circular ring from side to side according to a random auditory cue during treadmill walking. The single-task group walked on a treadmill only. All subjects completed 15 min per session, three times per week, for four weeks, with conventional rehabilitation five times per week over the four weeks. Before and after the intervention, both static and dynamic balance were measured with a force platform and the Timed Up and Go (TUG) test. The dual-task group showed significant improvement in all variables compared to the single-task group, except for anteroposterior (AP) sway velocity with eyes open and TUG at follow-up: mediolateral (ML) sway velocity with eyes open (dual-task group vs. single-task group: 2.11 mm/s vs. 0.38 mm/s), ML sway velocity with eyes closed (2.91 mm/s vs. 1.35 mm/s), AP sway velocity with eyes closed (4.84 mm/s vs. 3.12 mm/s). After the intervention, all variables showed significant improvement in the dual-task group compared to baseline. The study results suggest that the performance of a cognitive-motor dual-task using auditory cues may influence balance improvements in chronic stroke patients. © The Author(s) 2014.
Altmann, Christian F; Ueda, Ryuhei; Bucher, Benoit; Furukawa, Shigeto; Ono, Kentaro; Kashino, Makio; Mima, Tatsuya; Fukuyama, Hidenao
Interaural time (ITD) and level differences (ILD) constitute the two main cues for sound localization in the horizontal plane. Despite extensive research in animal models and humans, the mechanism of how these two cues are integrated into a unified percept is still far from clear. In this study, our aim was to test with human electroencephalography (EEG) whether integration of dynamic ITD and ILD cues is reflected in the so-called motion-onset response (MOR), an evoked potential elicited by moving sound sources. To this end, ITD and ILD trajectories were determined individually by cue trading psychophysics. We then measured EEG while subjects were presented with either static click-trains or click-trains that contained a dynamic portion at the end. The dynamic part was created by combining ITD with ILD either congruently to elicit the percept of a right/leftward moving sound, or incongruently to elicit the percept of a static sound. In two experiments that differed in the method to derive individual dynamic cue trading stimuli, we observed an MOR with at least a change-N1 (cN1) component for both the congruent and incongruent conditions at about 160-190 ms after motion-onset. A significant change-P2 (cP2) component for both the congruent and incongruent ITD/ILD combination was found only in the second experiment peaking at about 250 ms after motion onset. In sum, this study shows that a sound which - by a combination of counter-balanced ITD and ILD cues - induces a static percept can still elicit a motion-onset response, indicative of independent ITD and ILD processing at the level of the MOR - a component that has been proposed to be, at least partly, generated in non-primary auditory cortex. Copyright © 2017 Elsevier Inc. All rights reserved.
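Rendering a mono signal with a chosen ITD and ILD, as in the congruent and incongruent cue combinations above, amounts to delaying and attenuating one ear's copy. A minimal sketch follows, with a sign convention chosen purely for illustration (positive values lateralize the percept to the right); the function name and conventions are assumptions, not the study's stimulus code.

```python
import numpy as np

def apply_itd_ild(mono, fs, itd_s=0.0, ild_db=0.0):
    """Render a mono signal binaurally with a given ITD and ILD.

    Positive itd_s delays the left ear (lateralizing to the right);
    positive ild_db attenuates the left ear by that many dB.
    Returns (left, right) sample arrays.
    """
    delay = int(round(abs(itd_s) * fs))        # ITD in whole samples
    left, right = mono.copy(), mono.copy()
    if itd_s > 0:
        left = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    elif itd_s < 0:
        right = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    gain = 10 ** (-ild_db / 20)                # dB -> linear amplitude
    return left * gain, right
```

A congruent stimulus combines ITD and ILD of the same sign; an incongruent (cue-traded) one pairs opposite signs so the cues cancel perceptually.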
Paavilainen, Petri; Illi, Janne; Moisseinen, Nella; Niinisalo, Maija; Ojala, Karita; Reinikainen, Johanna; Vainio, Lari
The task-irrelevant spatial location of a cue stimulus affects the processing of a subsequent target. This "Posner effect" has been explained by an exogenous attention shift to the spatial location of the cue, improving perceptual processing of the target. We studied whether the left/right location of task-irrelevant and uninformative tones produces cueing effects on the processing of visual targets. Tones were presented randomly from the left or right. In the first condition, the subsequent visual target, requiring a response with either the left or right hand, was presented peripherally to the left or right. In the second condition, the target was a centrally presented left/right-pointing arrow indicating the response hand. In the third condition, the tone and the central arrow were presented simultaneously. Data were recorded on compatible (the tone location and the response hand were the same) and incompatible trials. Reaction times were longer on incompatible than on compatible trials. The results of the second and third conditions are difficult to explain with the attention-shift model emphasizing improved perceptual processing in the cued location, as the central target did not require any location-based processing. Consequently, as an alternative explanation, they suggest response priming of the hand corresponding to the spatial location of the tone. Simultaneous lateralized readiness potential (LRP) recordings were consistent with the behavioral data, with the tone cues eliciting, on incompatible trials, fast preparation for the incorrect response and, on compatible trials, preparation for the correct response. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Whitmer, William M.; Brown, Christopher A.; Dye, Raymond H.; Jurcin, Noah F.
The cocktail-party paradigm was applied to nonspeech signals. Pure-tone stimuli were used in an extension of the Franssen effect, an auditory illusion wherein the location of a slow-onset signal is perceived to be the same as a simultaneous, contralateral sudden-onset signal. Listeners heard simultaneous sudden-onset (transient) and contralateral slow-onset (steady-state) tones in a reverberant environment with a second delayed transient from a third azimuthal location. Results showed that the Franssen effect was either maintained or "reset," but not reduced. The ongoing steady-state tone was perceived either at the initial-transient and then delayed-transient location, or at the initial-transient location only. None of the listeners showed a location bias to the delayed transient tone when its frequency differed from the initial signal frequency or was replaced with noise. In additional conditions based on the "false Haas effect," consonant-vowel pairs representing transient and steady-state signals were segregated contralaterally in a reverberant space. Results showed no resemblance to the Franssen effect. In general, results indicated that the role of attention is fundamental to the localization of an ongoing stimulus. [Work supported by NIH.]
Matsuda, Ayasa; Hara, Keiko; Watanabe, Satsuki; Matsuura, Masato; Ohta, Katsuya; Matsushima, Eisuke
Absolute pitch (AP) refers to the ability to identify the pitch of sound without reference. To clarify the neurophysiological characteristics of AP, we compared mismatch negativity (MMN) elicited by scale and non-scale notes between AP possessors and non-AP individuals. Eight individuals who were able to identify pitch with perfect accuracy were defined as AP possessors. Eighteen participants who failed to achieve perfect accuracy were included in the non-AP group. We presented participants with two tone pairs, in a scale condition and a non-scale condition. The frequency ratios of the two pairs were the same. MMN over the frontal region in the non-scale condition was larger in the AP group than the non-AP group. In contrast, no such difference was observed between the two groups in the scale condition. The results suggest that pre-attentive processing of non-scale note sounds in the auditory cortex is a salient neurophysiological characteristic of AP. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Elling, Ludger; Steinberg, Christian; Bröckelmann, Ann-Kathrin; Dobel, Christian; Bölte, Jens; Junghofer, Markus
Background: Acute stress is a stereotypical but multimodal response to a present or imminent challenge overcharging an organism. Among the different branches of this multimodal response, the consequences of glucocorticoid secretion have been extensively investigated, mostly in connection with long-term memory (LTM). However, stress responses comprise other endocrine signaling and altered neuronal activity wholly independent of pituitary regulation. To date, knowledge of the impact of such "paracorticoidal" stress responses on higher cognitive functions is scarce. We investigated the impact of an ecological stressor on the ability to direct selective attention using event-related potentials in humans. Based on research in rodents, we assumed that a stress-induced imbalance of catecholaminergic transmission would impair this ability. Methodology/Principal Findings: The stressor consisted of a single cold pressor test. Auditory negative difference (Nd) and mismatch negativity (MMN) were recorded in a tonal dichotic listening task. A time series of such tasks confirmed an increased distractibility occurring 4-7 minutes after onset of the stressor, as reflected by an attenuated Nd. Salivary cortisol began to rise 8-11 minutes after onset, when no further modulations in the event-related potentials (ERPs) occurred, thus precluding a causal relationship. This effect may be attributed to a stress-induced activation of mesofrontal dopaminergic projections. It may also be attributed to an activation of noradrenergic projections. Known characteristics of the modulation of ERPs by different stress-related ligands were used for further disambiguation of causality. The conjunction of an attenuated Nd and an increased MMN might be interpreted as indicating a dopaminergic influence. The selective effect on the late portion of the Nd provides another tentative clue for this. Conclusions/Significance: Prior studies have deliberately tracked the adrenocortical influence on cognition
Binaural interaction in the auditory brainstem response (ABR) represents the discrepancy between the binaural waveform and the sum of monaural ones. A typical ABR binaural interaction in humans is a reduction of the binaural amplitude compared to the monaural sum at the wave-V latency, i.e., the DN1 component. It has been considered that the DN1 is mainly elicited by high frequency components of stimuli whereas some studies have shown the contribution of low-to-middle frequency components to the DN1. To examine this issue, the present study compared the ABR binaural interaction elicited by tone pips (1 kHz, 10-ms duration) with the one by clicks (a rectangular wave, 0.1-ms duration) presented at 80 dB peak equivalent SPL and a fixed stimulus onset interval (180 ms). The DN1 due to tone pips was vulnerable compared to the click-evoked DN1. The pip-evoked DN1 was significantly detected under auditory attention whereas it failed to reach significance under visual attention. The click-evoked DN1 was robustly present for the two attention conditions. The current results might confirm the high frequency sound contribution to the DN1 elicitation. Copyright © 2015 Elsevier B.V. All rights reserved.
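The binaural interaction described above is simply the binaural waveform minus the sum of the two monaural waveforms, with the DN1 read off as the most negative value of that difference near the wave-V latency. A minimal sketch, where the latency window bounds are placeholders rather than values from the study:

```python
import numpy as np

def binaural_interaction(abr_binaural, abr_left, abr_right):
    """Binaural interaction component (BIC) of the ABR.

    BIC = binaural response minus the sum of the two monaural responses,
    all sampled on the same time base.
    """
    return abr_binaural - (abr_left + abr_right)

def dn1_amplitude(bic, times, tmin, tmax):
    """DN1: the most negative value of the BIC inside the wave-V window."""
    mask = (times >= tmin) & (times <= tmax)
    return float(bic[mask].min())
```

A negative DN1 value captures the reported effect: the binaural amplitude falls short of the monaural sum at the wave-V latency.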
Schönwald, Liane I; Müller, Matthias M
In our previous studies on competition for attentional processing resources in early visual cortex between a foreground task and distracting emotional background images we found that emotional background images withdraw attentional resources from the foreground task after about 400 ms. Costs in behavioral data and a significant reduction of the steady state visual evoked potential (SSVEP) amplitude that was elicited by the foreground task lasted for several hundred milliseconds. We speculated that the differential effect in SSVEP amplitudes is preceded by the extraction of the emotional cue. Event related potential (ERP) studies to emotional and neutral complex images identified an early posterior negativity (EPN) as a robust neural signature of emotional cue extraction. The late positive potential (LPP) was related to in-depth processing of the emotional image. We extracted ERPs that were evoked by the onset of background images concurrently with the SSVEP that was elicited by the foreground task. Emotional compared to neutral background pictures evoked a more negative EPN at about 190 ms and a more positive LPP at about 700 ms after image onset. SSVEP amplitudes became significantly smaller with emotional background images after about 400 ms lasting for several hundred ms. Interestingly, we found no significant correlations between the three components, indicating that they act independently. Source localizations resulted in nonoverlapping cortical generators. Results suggest a cascade of perceptual processes: Extraction of the emotional cue preceded biasing of attentional resources away from the foreground task towards the emotional image for an evaluation of the picture content. Copyright © 2013 Wiley Periodicals, Inc.
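An SSVEP amplitude of the kind analyzed above is usually read out as the spectral amplitude at the flicker (tagging) frequency of the foreground task. A minimal sketch, assuming a single-channel epoch whose driving frequency falls on an exact FFT bin; the function name is illustrative.

```python
import numpy as np

def ssvep_amplitude(signal, fs, f_drive):
    """Amplitude of the steady-state response at the driving frequency.

    Uses the FFT of the demeaned epoch; the SSVEP appears as a narrow
    spectral peak at the stimulus flicker frequency f_drive (Hz).
    """
    sig = signal - signal.mean()               # remove DC offset
    spec = np.fft.rfft(sig)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_drive))     # nearest bin to f_drive
    return 2.0 * np.abs(spec[k]) / len(sig)    # scale to waveform amplitude
```

Comparing this amplitude between emotional- and neutral-background trials, in a sliding window, is one common way to quantify the withdrawal of resources from the foreground task over time.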
Visual Attention to Alcohol Cues and Responsible Drinking Statements Within Alcohol Advertisements and Public Health Campaigns: Relationships With Drinking Intentions and Alcohol Consumption in the Laboratory
Both alcohol advertising and public health campaigns increase alcohol consumption in the short term, and this may be attributable to attentional capture by alcohol-related cues in both types of media. The present studies investigated the association between (a) visual attention to alcohol cues and responsible drinking statements in alcohol advertising and public health campaigns, and (b) next-week drinking intentions (Study 1) and drinking behavior in the lab (Study 2). In Study 1, 90 male participants viewed 1 of 3 TV alcohol adverts (conventional advert; advert that emphasized responsible drinking; or public health campaign; between-subjects manipulation) while their visual attention to alcohol cues and responsible drinking statements was recorded, before reporting their drinking intentions. Study 2 used a within-subjects design in which 62 participants (27% male) viewed alcohol and soda advertisements while their attention to alcohol/soda cues and responsible drinking statements was recorded, before completing a bogus taste test with different alcoholic and nonalcoholic drinks. In both studies, alcohol cues attracted more attention than responsible drinking statements, except when viewing a public health TV campaign. Attention to responsible drinking statements was not associated with intentions to drink alcohol over the next week (Study 1) or alcohol consumption in the lab (Study 2). However, attention to alcohol portrayal cues within alcohol advertisements was associated with ad lib alcohol consumption in Study 2, although attention to other types of alcohol cues (brand logos, glassware, and packaging) was not associated. Future studies should investigate how responsible drinking statements might be improved to attract more attention. PMID:28493753
Background: The procognitive actions of the nicotinic acetylcholine receptor (nAChR) agonist nicotine are believed, in part, to motivate the excessive cigarette smoking in schizophrenia, a disorder associated with deficits in multiple cognitive domains, including low-level auditory sensory processes and higher-order attention-dependent operations. Objectives: As N-methyl-D-aspartate receptor (NMDAR) hypofunction has been shown to contribute to these cognitive impairments, the primary aims of this healthy-volunteer study were: (a) to shed light on the separate and interactive roles of nAChR and NMDAR systems in the modulation of auditory sensory memory (and sustained attention), as indexed by the auditory event-related brain potential (ERP) mismatch negativity (MMN), and (b) to examine how these effects are moderated by a predisposition to auditory hallucinations/delusions (HD). Methods: In a randomized, double-blind, placebo-controlled design involving a low intravenous dose of ketamine (.04 mg/kg) and a 4 mg dose of nicotine gum, MMN and performance on a rapid visual information processing (RVIP) task of sustained attention were examined in 24 healthy controls psychometrically stratified as being lower (L-HD, n = 12) or higher (H-HD) in HD propensity. Results: Ketamine significantly slowed MMN, and reduced MMN in H-HD, with amplitude attenuation being blocked by the co-administration of nicotine. Nicotine significantly enhanced response speed (reaction time) and accuracy (increased % hits and d′) and reduced false alarms on the RVIP, with improved performance accuracy being prevented when nicotine was administered with ketamine. Both % hits and d′, as well as reaction time, were poorer in H-HD (vs. L-HD), and while hit rate and d′ were increased by nicotine in H-HD, reaction time was slowed by ketamine in L-HD. Conclusions: Nicotine alleviated ketamine-induced sensory memory impairments and improved attention, particularly in individuals prone to HD.
Jayakar, Reema; King, Tricia Z; Morris, Robin; Na, Sabrina
We examined the nature of verbal memory deficits and the possible hippocampal underpinnings in long-term adult survivors of childhood brain tumor. 35 survivors (M = 24.10 ± 4.93 years at testing; 54% female), on average 15 years post-diagnosis, and 59 typically developing adults (M = 22.40 ± 4.35 years, 54% female) participated. Automated FMRIB Software Library (FSL) tools were used to measure hippocampal, putamen, and whole-brain volumes. The California Verbal Learning Test-Second Edition (CVLT-II) was used to assess verbal memory. Hippocampal, F(1, 91) = 4.06, ηp² = .04; putamen, F(1, 91) = 11.18, ηp² = .11; and whole-brain, F(1, 92) = 18.51, ηp² = .17, volumes were significantly lower for survivors than controls. Memory indices of auditory attention list span (Trial 1: F(1, 92) = 12.70, η² = .12) and final list learning (Trial 5: F(1, 92) = 6.01, η² = .06) were also significantly lower for survivors. Memory differences between survivors and controls are largely contingent upon auditory attention list span. Only hippocampal volume is associated with the auditory attention list span component of verbal memory. These findings are particularly robust for survivors treated with radiation. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Torppa, Ritva; Faulkner, Andrew; Huotilainen, Minna; Järvikivi, Juhani; Lipsanen, Jari; Laasonen, Marja; Vainio, Martti
To study prosodic perception in early-implanted children in relation to auditory discrimination, auditory working memory, and exposure to music. Word and sentence stress perception, discrimination of fundamental frequency (F0), intensity and duration, and forward digit span were measured twice over approximately 16 months. Musical activities were assessed by questionnaire. Twenty-one early-implanted and age-matched normal-hearing (NH) children (4-13 years). Children with cochlear implants (CIs) exposed to music performed better than others in stress perception and F0 discrimination. Only this subgroup of implanted children improved with age in word stress perception and intensity discrimination, and improved over time in digit span. Prosodic perception, F0 discrimination, and forward digit span in implanted children exposed to music were equivalent to the NH group, but other implanted children performed more poorly. For children with CIs, word stress perception was linked to digit span and intensity discrimination; sentence stress perception was additionally linked to F0 discrimination. Prosodic perception in children with CIs is linked to auditory working memory and aspects of auditory discrimination. Engagement in music was linked to better performance across a range of measures, suggesting that music is a valuable tool in the rehabilitation of implanted children.
Boettcher, Johanna; Leek, Linda; Matson, Lisa; Holmes, Emily A.; Browning, Michael; MacLeod, Colin; Andersson, Gerhard; Carlbring, Per
Biases in attention processes are thought to play a crucial role in the aetiology and maintenance of Social Anxiety Disorder (SAD). The goal of the present study was to examine the efficacy of a programme intended to train attention towards positive cues and a programme intended to train attention towards negative cues. In a randomised, controlled, double-blind design, the impact of these two training conditions on both selective attention and social anxiety was compared to that of a control training condition. A modified dot probe task was used, and delivered via the internet. A total of 129 individuals, diagnosed with SAD, were randomly assigned to one of these three conditions and took part in a 14-day programme with daily training/control sessions. Participants in all three groups did not on average display an attentional bias prior to the training. Critically, results on change in attention bias implied that significantly differential change in selective attention to threat was not detected across the three conditions. However, symptoms of social anxiety reduced significantly from pre-assessment to follow-up assessment in all three conditions (within-group d = 0.63–1.24), with the procedure intended to train attention towards threat cues producing, relative to the control condition, a significantly greater reduction of social fears. There were no significant differences in social anxiety outcome between the training condition intended to induce attentional bias towards positive cues and the control condition. To our knowledge, this is the first RCT in which a condition intended to induce attention bias to negative cues yielded greater emotional benefits than a control condition. Intriguingly, changes in symptoms are unlikely to operate by the mechanism of change in attention processes, since there was no change detected in bias per se. Implications of this finding for future research on attention bias modification in social anxiety are discussed. Trial Registration Clinical
M. Littel (Marianne); I.H.A. Franken (Ingmar)
Abstract Substance use disorders are characterized by cognitive processing biases, such as automatically detecting and orienting attention towards drug-related stimuli. However, it is unclear how, when and what kind of attention (i.e. implicit, explicit) interacts with the processing of
van Lutterveld, Remko; Oranje, Bob; Abramovic, Lucija
OBJECTIVE: Schizophrenia is associated with aberrant event-related potentials (ERPs) such as reductions in P300, processing negativity and mismatch negativity amplitudes. These deficits may be related to the propensity of schizophrenia patients to experience auditory verbal hallucinations (AVH...
Strick, Madelijn; Holland, Rob W; Van Baaren, Rick; Van Knippenberg, Ad
The humor effect refers to a robust finding in memory research that humorous information is easily recalled, at the expense of recall of nonhumorous information that was encoded in close temporal proximity. Previous research suggests that memory retrieval processes underlie this effect. That is, free recall is biased toward humorous information, which interferes with the retrieval of nonhumorous information. The present research tested an additional explanation that has not been specifically addressed before: Humor receives enhanced attention during information encoding, which decreases attention for context information. Participants observed humorous, nonhumorous positive, and nonhumorous neutral texts paired with novel consumer brands, while their eye movements were recorded using eye-tracker technology. The results confirmed that humor receives prolonged attention relative to both positive and neutral nonhumorous information. This enhanced attention correlated with impaired brand recognition.
Brailean, Ana Maria; Koster, Ernst H W; Hoorelbeke, Kristof; De Raedt, Rudi
Research indicates that individuals at risk for depression are characterized by high sensitivity to loss and reduced sensitivity to reward. Moreover, it has been shown that attentional bias plays an important role in depression vulnerability. The current study aimed to examine the interplay between these risk factors for depression by examining the development of attentional bias toward reward and loss signals in dysphoric participants (individuals with elevated levels of depressive symptoms). Shapes were conditioned to reward and loss and subsequently presented in a dot probe task in a sample of dysphoric and nondysphoric participants. Nondysphoric individuals oriented towards reward-related signals, whereas dysphoric individuals failed to develop a reward-related attentional bias. This attentional effect was observed in the absence of group differences in motivational factors. No group differences were found in attentional bias for loss-related signals, despite the fact that dysphoric individuals performed worse in response to losing. The current sample is not clinical, so generalization to clinical depression is not warranted. We argue that impaired early attentional processing of rewards is an important cognitive risk factor for anhedonic symptoms in persons with dysphoria. Copyright © 2014 Elsevier Ltd. All rights reserved.
Geliebter, Allan; Benson, Leora; Pantazatos, Spiro P; Hirsch, Joy; Carnell, Susan
Obese individuals show altered neural responses to high-calorie food cues. Individuals with binge eating [BE], who exhibit heightened impulsivity and emotionality, may show a related but distinct pattern of irregular neural responses. However, few neuroimaging studies have compared BE and non-BE groups. To examine neural responses to food cues in BE, 10 women with BE and 10 women without BE (non-BE) who were matched for obesity (5 obese and 5 lean in each group) underwent fMRI scanning during presentation of visual (picture) and auditory (spoken word) cues representing high energy density (ED) foods, low-ED foods, and non-foods. We then compared regional brain activation in BE vs. non-BE groups for high-ED vs. low-ED foods. To explore differences in functional connectivity, we also compared psychophysiologic interactions [PPI] with dorsal anterior cingulate cortex [dACC] for BE vs. non-BE groups. Region of interest (ROI) analyses revealed that the BE group showed more activation than the non-BE group in the dACC, with no activation differences in the striatum or orbitofrontal cortex [OFC]. Exploratory PPI analyses revealed a trend towards greater functional connectivity with dACC in the insula, cerebellum, and supramarginal gyrus in the BE vs. non-BE group. Our results suggest that women with BE show hyper-responsivity in the dACC as well as increased coupling with other brain regions when presented with high-ED cues. These differences are independent of body weight, and appear to be associated with the BE phenotype. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bleichner, Martin G.; Mirkovic, Bojana; Debener, Stefan
Objective. This study presents a direct comparison of a classical EEG cap setup with a new around-the-ear electrode array (cEEGrid) to gain a better understanding of the potential of ear-centered EEG. Approach. Concurrent EEG was recorded from a classical scalp EEG cap and two cEEGrids that were placed around the left and the right ear. Twenty participants performed a spatial auditory attention task in which three sound streams were presented simultaneously. The sound streams were three seconds long and differed in the direction of origin (front, left, right) and the number of beats (3, 4, 5 respectively), as well as in timbre and pitch. The participants had to attend to either the left or the right sound stream. Main results. We found clear attention-modulated ERP effects reflecting the attended sound stream for both electrode setups, which agreed in morphology and effect size. A single-trial template matching classification showed that the direction of attention could be decoded significantly above chance (50%) for at least 16 out of 20 participants for both systems. The comparably high classification results of the single-trial analysis underline the quality of the signal recorded with the cEEGrids. Significance. These findings are further evidence for the feasibility of around-the-ear EEG recordings and demonstrate that well described ERPs can be measured. We conclude that concealed behind-the-ear EEG recordings can be an alternative to classical cap EEG acquisition for auditory attention monitoring.
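The single-trial template matching classification mentioned in this abstract can be sketched roughly as follows. The abstract does not specify the classifier's details, so the array shapes, the class-average templates, and the use of Pearson correlation as the similarity measure are all assumptions for illustration:

```python
import numpy as np

def template_match_classify(train_trials, train_labels, test_trial):
    """Classify one EEG trial by correlating it with class-average templates.

    train_trials: array (n_trials, n_channels, n_samples) of training epochs
    train_labels: array (n_trials,), e.g. 0 = 'attend left', 1 = 'attend right'
    test_trial:   array (n_channels, n_samples), the single trial to decode
    """
    best_label, best_score = None, -np.inf
    for label in np.unique(train_labels):
        # Template = average ERP over all training trials of this class
        template = train_trials[train_labels == label].mean(axis=0)
        # Similarity = Pearson correlation between flattened trial and template
        r = np.corrcoef(test_trial.ravel(), template.ravel())[0, 1]
        if r > best_score:
            best_label, best_score = label, r
    # Predict the class whose template correlates best with the trial
    return best_label
```

In practice such a scheme would be evaluated with leave-one-trial-out cross-validation per participant, which is consistent with the per-participant decoding accuracies the study reports, though the exact procedure is not described in the abstract.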
The cochlear implant (CI) provides a sensation of hearing for deaf-born children. However, many CI children show poor language outcomes, which may be related to the deficiency of CIs in delivering pitch. This thesis studies the development of those neural processes and behavioural skills linked to the perception of pitch which may play a role in language acquisition. We measured with event-related brain potentials (ERPs) the neural discrimination of and attention shift to changes in music, th...
Amy F. Teten
Full Text Available This study compared the effectiveness of auditory and visual redirections in facilitating topic coherence for persons with Dementia of Alzheimer's Type (DAT). Five persons with moderate-stage DAT engaged in conversation with the first author. Three topics related to activities of daily living (recreational activities, food, and grooming) were broached. Each topic was presented three times to each participant: once as a baseline condition, once with auditory redirection to topic, and once with visual redirection to topic. Transcripts of the interactions were scored for overall coherence. Condition was a significant factor in that the DAT participants exhibited better topic maintenance under the visual and auditory conditions than at baseline. In general, the performance of the participants was not affected by topic, except for significantly higher overall coherence ratings for the visually redirected interactions dealing with the topic of food.
Scott A Stone
Full Text Available Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting visually salient events and augmenting them into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future successes are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.
Maruff, P; Yucel, M; Danckert, J; Stuart, G; Currie, J
On the covert orienting of visual attention task (COVAT), responses to targets appearing at the location indicated by a non-predictive spatial cue are faster than responses to targets appearing at uncued locations when stimulus onset asynchrony (SOA) is less than approximately 200 ms. For longer SOAs, this pattern reverses and RTs to targets appearing at uncued locations become faster than RTs to targets appearing at the cued location. This facilitation followed by inhibition has been termed the biphasic effect of non-predictive peripheral spatial cues. Currently, there is debate about whether these two processes are independent. This issue was addressed in a series of experiments in which the temporal overlap between the peripheral cue and target was manipulated at both short and long SOAs. Results showed that facilitation was present only when the SOA was short and there was temporal overlap between cue and target. Conversely, inhibition occurred only when the SOA was long and there was no temporal overlap between cue and target. The biphasic effect, with an early facilitation followed by a later inhibition, occurred only when the cue duration was fixed such that there was temporal overlap between the cue and target at short but not long SOAs. In a final experiment, the duration of targets, the temporal overlap between cue and target, and the SOA were manipulated factorially. The results showed that facilitation occurred only when the SOA was short, there was temporal overlap between cue and target, and the target remained visible until the subject responded. These results suggest that the facilitation and inhibition found on COVATs that use non-informative peripheral cues are independent processes, and their presence and magnitude are related to the temporal properties of cues and targets.
Doornwaard, Suzan; van den Eijnden, Regina; Johnson, Adam; ter Bogt, Tom
This study examined whether exposure to sexualized media influences the subconscious process of attention allocation to subsequently encountered stimuli. One hundred twenty-three participants (61 females) between 18-23 years (M age = 19.99 years) watched a 3-minute video clip containing either
Full Text Available Attentional capture is usually stronger for task-relevant than irrelevant stimuli, whereas irrelevant stimuli can trigger equal or even stronger inhibition than relevant stimuli. Capture and inhibition, however, are typically assessed in separate trials, leaving it open whether inhibition of irrelevant stimuli is a consequence of preceding attentional capture by the same stimuli or whether inhibition is the only response to these stimuli. Here, we tested the relationship between capture and inhibition in a setup allowing estimates of capture and inhibition based on the very same trials. We recorded saccadic inhibition after relevant and irrelevant stimuli. At the same time, we recorded the N2pc, an event-related potential reflecting initial capture of attention. We found attentional capture not only for relevant but, importantly, also for irrelevant stimuli, although the N2pc was stronger for relevant than irrelevant stimuli. In addition, inhibition of saccades was the same for relevant and irrelevant stimuli. We conclude with a discussion of the mechanisms that are responsible for these effects.
van Holst, Ruth J.; Lemmens, Jeroen S.; Valkenburg, Patti M.; Peter, Jochen; Veltman, Dick J.; Goudriaan, Anna E.
Purpose: The aim of this study was to examine whether behavioral tendencies commonly related to addictive behaviors are also related to problematic computer and video game playing in adolescents. The study of attentional bias and response inhibition, characteristic for addictive disorders, is
Cook, Michelle; Visser, Ryan
Multimedia presentations that combine visual and verbal information are widely used for instructional purposes. While the design of the text-graphic relationship is difficult, several design strategies with the potential to reduce cognitive load have been identified in the literature. The purpose of this study is to examine how split-attention,…
Nougier, Vincent; And Others
The development of visual orienting to a cued target on the part of practicing and nonpracticing tennis players aged 13, 16, and 25 years was examined. Results indicated that practicers were not faster than nonpracticers in processing visual information and that subjects of all ages oriented attention voluntarily to cued locations. (LB)
Loeber, Sabine; Grosshans, Martin; Herpertz, Stephan; Kiefer, Falk; Herpertz, Sabine C
Overeating, weight gain and obesity are considered a major health problem in Western societies. At present, an impairment of response inhibition and a biased salience attribution to food-associated stimuli are considered important factors associated with weight gain. However, recent findings suggest that the association between impaired response inhibition and salience attribution and weight gain might be modulated by other factors. Thus, hunger might cause food-associated cues to be perceived as more salient and rewarding and might be associated with an impairment of response inhibition. However, at present, little is known about how hunger interacts with these processes. Thus, the aim of the present study was to investigate whether hunger modulates response inhibition and attention allocation towards food-associated stimuli in normal-weight controls. A go/nogo task with food-associated and control words and a visual dot-probe task with food-associated and control pictures were administered to 48 normal-weight participants (mean age 24.5 years, range 19-40; mean BMI 21.6, range 18.5-25.4). Hunger was assessed in two ways: via a self-report measure of hunger and a measurement of the blood glucose level. Our results indicated that self-reported hunger affected behavioral response inhibition in the go/nogo task. Thus, hungry participants committed significantly more commission errors when food-associated stimuli served as distractors than when control stimuli were the distractors. This effect was not observed in sated participants. In addition, we found that self-reported hunger was associated with a lower number of omission errors in response to food-associated stimuli, indicating a higher salience of these stimuli. Low blood glucose level was not associated with an impairment of response inhibition. However, our results indicated that the blood glucose level was associated with an attentional bias towards food-associated cues in the visual dot-probe task.
Mondelli, Maria Fernanda Capoani Garcia
Full Text Available Introduction: Cognitive and neurophysiological mechanisms are necessary to process and decode acoustic stimulation. Hearing is influenced by higher-level cognitive factors such as memory, attention, and learning. The sensory deprivation caused by conductive hearing loss, frequent in the population with cleft lip and palate, can affect many cognitive functions, among them attention, and can also harm school, linguistic, and interpersonal performance. Objective: To verify the perception of parents of children with cleft lip and palate regarding their children's auditory attention. Method: Retrospective study of children with any type of cleft lip and palate, without any associated genetic syndrome, whose parents answered a questionnaire about auditory attention skills. Results: 44 children were male and 26 female; 35.71% of the answers were affirmative for hearing loss and 71.43% for otologic infections. Conclusion: Most of the interviewed parents reported at least one of the attention-related behaviors contained in the questionnaire, indicating that the presence of cleft lip and palate can be related to difficulties in auditory attention.
Full Text Available The auditory efferent system is a neural network that originates in the auditory cortex and projects to the cochlear receptor through olivocochlear (OC) neurons. Medial OC neurons make cholinergic synapses with outer hair cells (OHCs) through nicotinic receptors constituted by α9 and α10 subunits. One of the physiological functions of the α9 nicotinic receptor subunit (α9-nAChR) is the suppression of auditory distractors during selective attention to visual stimuli. In a recent study we demonstrated that the behavioral performance of α9 nicotinic receptor knock-out (KO) mice is altered during selective attention to visual stimuli with auditory distractors, since they made fewer correct responses and more omissions than wild-type (WT) mice. As the inhibition of behavioral responses to irrelevant stimuli is an important mechanism of selective attention, behavioral errors are relevant measures that can reflect altered inhibitory control. Errors produced during a cued attention task can be classified as premature, target, and perseverative errors. Perseverative responses can be considered an inability to inhibit the repetition of an action already planned, while premature responses can be considered an index of the ability to wait or withhold an action. Here, we studied premature, target, and perseverative errors during a visual attention task with auditory distractors in WT and KO mice. We found that α9-KO mice make fewer perseverative errors, with longer latencies, than WT mice in the presence of auditory distractors. In addition, although we found no significant difference in the number of target errors between genotypes, KO mice made more short-latency target errors than WT mice during the presentation of auditory distractors. The fewer perseverative errors made by α9-KO mice could be explained by a reduced motivation for reward and an increased impulsivity during decision making with auditory distraction in KO mice.
Full Text Available Objective: Neural hypo-sensitivity to cues predicting positive reinforcement has been observed in ADHD using the Monetary Incentive Delay (MID) task. Here we report the first study using an electrophysiological analogue of this task to distinguish between (i) cue-related anticipation of reinforcement and downstream effects on (ii) target engagement and (iii) performance in a clinical sample of adolescents with ADHD and controls. Methods: Thirty-one controls and 32 adolescents with ADHD aged 10–16 years performed the electrophysiological MID (e-MID) task, in which preparatory cues signal whether a response to an upcoming target will be reinforced or not, under three conditions: positive reinforcement, negative reinforcement (response cost), and no consequence (neutral). We extracted values for cue-related potentials known to be both associated with response preparation and modulated by reinforcement (Cue P3 and Cue CNV) and for target-related potentials (target P3), and compared these between ADHD and controls. Results: ADHD and controls did not differ on cue-related components on neutral trials. Against expectation, adolescents with ADHD displayed Cue P3 and Cue CNV reinforcement-related enhancement (versus neutral trials) compared to controls. ADHD individuals displayed smaller target P3 amplitudes and slower and more variable performance, but these effects were not modulated by reinforcement contingencies. When age, IQ, and conduct problems were controlled, effects were marginally significant, but the pattern of results did not change. Discussion: ADHD was associated with hypersensitivity to positive (and marginally negative) reinforcement reflected in components often thought to be associated with response preparation; however, these did not translate into improved attention to targets. In the case of ADHD, upregulated CNV may be a specific marker of hyper-arousal rather than an enhancement of anticipatory attention to upcoming targets.
Solé Puig, Maria; Pérez Zapata, Laura; Aznar-Casanova, J Antonio; Supèr, Hans
Covert spatial attention produces biases in perceptual and neural responses in the absence of overt orienting movements. The neural mechanism that gives rise to these effects is poorly understood. Here we report the relation between fixational eye movements, namely eye vergence, and covert attention. Visual stimuli modulate the angle of eye vergence as a function of their ability to capture attention. This illustrates the relation between eye vergence and bottom-up attention. In visual and auditory cue/no-cue paradigms, the angle of vergence is greater in the cue condition than in the no-cue condition. This shows a top-down attention component. In conclusion, observations reveal a close link between covert attention and modulation in eye vergence during eye fixation. Our study suggests a basis for the use of eye vergence as a tool for measuring attention and may provide new insights into attention and perceptual disorders.
Devore, Sasha; Ihlefeld, Antje; Hancock, Kenneth; Shinn-Cunningham, Barbara; Delgutte, Bertrand
In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener’s ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sens...
Josef J. Bless
Full Text Available Emerging evidence of the validity of collecting data in natural settings using smartphone applications has opened new possibilities for psychological assessment, treatment, and research. In this study we explored the feasibility and effectiveness of using a mobile application for self-supervised training of auditory attention. In addition, we investigated the neural underpinnings of the training procedure with functional magnetic resonance imaging (fMRI), as well as possible transfer effects to untrained cognitive interference tasks. Subjects in the training group performed the training task on an iPod touch twice a day (morning/evening) for three weeks; subjects in the control group received no training, but were tested at the same time interval as the training group. Behavioral responses were measured before and after the training period in both groups, together with measures of task-related neural activations by fMRI. The results showed an expected performance increase after training that corresponded to activation decreases in brain regions associated with selective auditory processing (left posterior temporal gyrus) and executive functions (right middle frontal gyrus), indicating more efficient processing in task-related neural networks after training. Our study suggests that cognitive training delivered via mobile applications is feasible and improves the ability to focus attention, with corresponding effects on neural plasticity. Future research should focus on the clinical benefits of mobile cognitive training. Limitations of the study are discussed, including reduced experimental control and a lack of transfer effects.
Brewster, Ryan C; King, Tricia Z; Burns, Thomas G; Drossner, David M; Mahle, William T
White matter disruptions have been identified in individuals with congenital heart disease (CHD). However, no specific theory-driven relationships between microstructural white matter disruptions and cognition have been established in CHD. We conducted a two-part study. First, we identified significant differences in fractional anisotropy (FA) of emerging adults with CHD using Tract-Based Spatial Statistics (TBSS). TBSS analyses between 22 participants with CHD and 18 demographically similar controls identified five regions of normal-appearing white matter with significantly lower FA in CHD, and two with higher FA. Next, two regions of lower FA in CHD were selected to examine theory-driven differential relationships with cognition: voxels along the left uncinate fasciculus (UF; a tract theorized to contribute to verbal memory) and voxels along the right middle cerebellar peduncle (MCP; a tract previously linked to attention). In CHD, a significant positive correlation between UF FA and memory was found, r(20) = .42, p = .049 (uncorrected). There was no correlation between UF and auditory attention span. A positive correlation between MCP FA and auditory attention span was found, r(20) = .47, p = .027 (uncorrected). There was no correlation between MCP and memory. In controls, no significant relationships were identified. These results are consistent with previous literature demonstrating lower FA in younger CHD samples, and provide novel evidence for disrupted white matter integrity in emerging adults with CHD. Furthermore, a correlational double dissociation established distinct white matter circuitry (UF and MCP) and differential cognitive correlates (memory and attention span, respectively) in young adults with CHD.
Groom, Madeleine J.; Kochhar, Puja; Hamilton, Antonia; Liddle, Elizabeth B.; Simeou, Marina; Hollis, Chris
This study investigated the neurobiological basis of comorbidity between autism spectrum disorder (ASD) and attention deficit/hyperactivity disorder (ADHD). We compared children with ASD, ADHD or ADHD+ASD and typically developing controls (CTRL) on behavioural and electrophysiological correlates of gaze cue and face processing. We measured effects…
Beeghly, Marjorie; Rose-Jacobs, Ruth; Martin, Brett M; Cabral, Howard J; Heeren, Timothy C; Frank, Deborah A
Neuropsychological processes such as attention and memory contribute to children's higher-level cognitive and language functioning and predict academic achievement. The goal of this analysis was to evaluate whether level of intrauterine cocaine exposure (IUCE) alters multiple aspects of preadolescents' neuropsychological functioning assessed using a single age-referenced instrument, the NEPSY: A Developmental Neuropsychological Assessment (NEPSY) (Korkman et al., 1998), after controlling for relevant covariates. Participants included 137 term 9.5-year-old children from low-income urban backgrounds (51% male, 90% African American/Caribbean) from an ongoing prospective longitudinal study. Level of IUCE was assessed in the newborn period using infant meconium and maternal report. 52% of the children had IUCE (65% with lighter IUCE, and 35% with heavier IUCE), and 48% were unexposed. Infants with Fetal Alcohol Syndrome, HIV seropositivity, or intrauterine exposure to illicit substances other than cocaine and marijuana were excluded. At the 9.5-year follow-up visit, trained examiners masked to IUCE and background variables evaluated children's neuropsychological functioning using the NEPSY. The association between level of IUCE and NEPSY outcomes was evaluated in a series of linear regressions controlling for intrauterine exposure to other substances and relevant child, caregiver, and demographic variables. Results indicated that level of IUCE was associated with lower scores on the Auditory Attention and Narrative Memory tasks, both of which require auditory information processing and sustained attention for successful performance. However, results did not follow the expected ordinal, dose-dependent pattern. Children's neuropsychological test scores were also altered by a variety of other biological and psychosocial factors. Copyright © 2014 Elsevier Inc. All rights reserved.
Wang, Hongyan; Zhang, Gaoyan; Liu, Baolin
Semantic priming is an important research topic in the field of cognitive neuroscience. Previous studies have shown that the uni-modal semantic priming effect can be modulated by attention. However, the influence of attention on cross-modal semantic priming is unclear. To investigate this issue, the present study combined a cross-modal semantic priming paradigm with an auditory spatial attention paradigm, presenting the visual pictures as the prime stimuli and the semantically related or unrelated sounds as the target stimuli. Event-related potentials results showed that when the target sound was attended to, the N400 effect was evoked. The N400 effect was also observed when the target sound was not attended to, demonstrating that the cross-modal semantic priming effect persists even though the target stimulus is not focused on. Further analyses revealed that the N400 effect evoked by the unattended sound was significantly lower than the effect evoked by the attended sound. This contrast provides new evidence that the cross-modal semantic priming effect can be modulated by attention.
Saupe, Katja; Koelsch, Stefan; Rübsamen, Rudolf
To investigate the influence of spatial information in auditory scene analysis, polyphonic music (three parts in different timbres) was composed and presented in free field. Each part contained large falling interval jumps in the melody and the task of subjects was to detect these events in one part ("target part") while ignoring the other parts. All parts were either presented from the same location (0 degrees; overlap condition) or from different locations (-28 degrees, 0 degrees, and 28 degrees or -56 degrees, 0 degrees, and 56 degrees in the azimuthal plane), with the target part being presented either at 0 degrees or at one of the right-sided locations. Results showed that spatial separation of 28 degrees was sufficient for a significant improvement in target detection (i.e., in the detection of large interval jumps) compared to the overlap condition, irrespective of the position (frontal or right) of the target part. A larger spatial separation of the parts resulted in further improvements only if the target part was lateralized. These data support the notion of improvement in the suppression of interfering signals with spatial sound source separation. Additionally, the data show that the position of the relevant sound source influences auditory performance.
Klemen, Jane; Buchel, Christian; Buhler, Mira; Menz, Mareike M.; Rose, Michael
Attentional interference between tasks performed in parallel is known to have strong and often undesired effects. As yet, however, the mechanisms by which interference operates remain elusive. A better knowledge of these processes may facilitate our understanding of the effects of attention on human performance and the debilitating consequences…
Radford, Craig A; Montgomery, John C; Caiger, Paul; Higgs, Dennis M
The auditory evoked potential technique has been used for the past 30 years to evaluate the hearing ability of fish. The resulting audiograms are typically presented in terms of sound pressure (dB re. 1 μPa), with the particle motion (dB re. 1 m s⁻²) component largely ignored until recently. When audiograms have been presented in terms of particle acceleration, one of two approaches has been used for stimulus characterisation: measuring the pressure gradient between two hydrophones or using accelerometers. With rare exceptions these values are presented from experiments using a speaker as the stimulus, thus making it impossible to truly separate the contribution of direct particle motion and pressure detection in the response. Here, we compared the particle acceleration and pressure auditory thresholds of three species of fish with differing hearing specialisations: goldfish (Carassius auratus, Weberian ossicles), bigeye (Pempheris adspersus, ligamentous hearing specialisation) and a third species with no swim bladder, the common triplefin (Forsterygion lapillum), using three different methods of determining particle acceleration. In terms of particle acceleration, all three fish species have similar hearing thresholds, but when expressed as pressure thresholds goldfish are the most sensitive, followed by bigeye, with triplefin the least sensitive. It is suggested here that all fish have a similar ability to detect the particle motion component of the sound field and it is their ability to transduce the pressure component of the sound field to the inner ear via ancillary hearing structures that provides the differences in hearing ability. Therefore, care is needed in stimuli presentation and measurement when determining hearing ability of fish and when interpreting comparative hearing abilities between species.
Qiao, Zhengxue; Yang, Aiying; Qiu, Xiaohui; Yang, Xiuxian; Zhang, Congpei; Zhu, Xiongzhao; He, Jincai; Wang, Lin; Bai, Bing; Sun, Hailian; Zhao, Lun; Yang, Yanjie
Gender differences in rates of major depressive disorder (MDD) are well established, but gender differences in cognitive function have been little studied. Auditory mismatch negativity (MMN) was used to investigate gender differences in pre-attentive information processing in first episode MDD. In the deviant-standard reverse oddball paradigm, duration auditory MMN was obtained in 30 patients (15 males) and 30 age-/education-matched controls. Over frontal-central areas, mean amplitude of increment MMN (to a 150-ms deviant tone) was smaller in female than male patients; there was no sex difference in decrement MMN (to a 50-ms deviant tone). Neither increment nor decrement MMN differed between female and male patients over temporal areas. Frontal-central MMN and temporal MMN did not differ between male and female controls in any condition. Over frontal-central areas, mean amplitude of increment MMN was smaller in female patients than female controls; there was no difference in decrement MMN. Neither increment nor decrement MMN differed between female patients and female controls over temporal areas. Frontal-central MMN and temporal MMN did not differ between male patients and male controls. Mean amplitude of increment MMN in female patients did not correlate with symptoms, suggesting this sex-specific deficit is a trait- not a state-dependent phenomenon. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Lineweaver, Tara T; Kercood, Suneeta; O'Keeffe, Nicole B; O'Brien, Kathleen M; Massey, Eric J; Campbell, Samantha J; Pierce, Jenna N
Two studies addressed how young adult college students with attention deficit hyperactivity disorder (ADHD) (n = 44) compare to their nonaffected peers (n = 42) on tests of auditory and visual-spatial working memory (WM), are vulnerable to auditory and visual distractions, and are affected by a simple intervention. Students with ADHD demonstrated worse auditory WM than did controls. A near significant trend indicated that auditory distractions interfered with the visual WM of both groups and that, whereas controls were also vulnerable to visual distractions, visual distractions improved visual WM in the ADHD group. The intervention was ineffective. Limited correlations emerged between self-reported ADHD symptoms and objective test performances; students with ADHD who perceived themselves as more symptomatic often had better WM and were less vulnerable to distractions than their ADHD peers.
Schwent, V. L.; Hillyard, S. A.; Galambos, R.
A randomized sequence of tone bursts was delivered to subjects at short inter-stimulus intervals, with the tones originating from one of three spatially and frequency specific channels. The subject's task was to count the tones in one of the three channels at a time, ignoring the other two, and press a button after each tenth tone. In different conditions, tones were given at high and low intensities and with or without a background white noise to mask the tones. The N1 component of the auditory vertex potential was found to be larger in response to attended-channel tones relative to unattended tones. This selective enhancement of N1 was minimal for loud tones presented without noise and increased markedly at the lower tone intensity and in the added-noise conditions.
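The counting task described above can be sketched as a simple simulation; the channel labels, sequence length, and uniform channel probabilities are illustrative assumptions, not parameters taken from the study:

```python
import random

def make_tone_sequence(n=300, channels=("left_low", "center_mid", "right_high"), seed=0):
    """Randomized sequence of tone bursts, each drawn from one of three
    spatially/frequency-specific channels (labels are illustrative)."""
    rng = random.Random(seed)
    return [rng.choice(channels) for _ in range(n)]

def presses_required(seq, attended):
    """Count attended-channel tones; a button press is due after every tenth one."""
    presses, count = 0, 0
    for tone in seq:
        if tone == attended:
            count += 1
            if count % 10 == 0:
                presses += 1
    return presses

seq = make_tone_sequence()
print(presses_required(seq, "left_low"))
```

Because tones arrive rapidly and only the attended channel is counted, the task forces sustained selective attention to a single spatial/frequency channel, which is what makes the N1 comparison between attended and unattended tones meaningful.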
Volosin, Márta; Gaál, Zsófia Anna; Horváth, János
The present study investigated how fast younger and older adults recovered from a distracted attentional state induced by rare, unpredictable sound events. The attentional state was characterized by the auditory N1 event-related potential (ERP), which is enhanced for sound events in the focus of attention. Younger (19-26 years) and older (62-74 years) adults listened to continuous tones containing rare pitch changes (glides) and short gaps. Glides and gaps could be separated by 150 ms, 250 ms, 650 ms, or longer intervals, and the task was gap detection while ignoring glides. With longer glide-gap separations, similar N1 enhancements were observable in both groups, suggesting that the duration of the distracted sensory state was not affected by aging. Older adults, however, responded more slowly at short glide-gap separations, which indicated that distraction at subsequent levels of processing may nonetheless have more impact in older than in younger adults. Copyright © 2017 Elsevier B.V. All rights reserved.
Coch, Donna; Sanders, Lisa D; Neville, Helen J
In a dichotic listening paradigm, event-related potentials (ERPs) were recorded to linguistic and nonlinguistic probe stimuli embedded in 2 different narrative contexts as they were either attended or unattended. In adults, the typical N1 attention effect was observed for both types of probes: Probes superimposed on the attended narrative elicited an enhanced negativity compared to the same probes when unattended. Overall, this sustained attention effect was greater over medial and left lateral sites, but was more posteriorly distributed and of longer duration for linguistic as compared to nonlinguistic probes. In contrast, in 6- to 8-year-old children the ERPs were morphologically dissimilar to those elicited in adults and children displayed a greater positivity to both types of probe stimuli when embedded in the attended as compared to the unattended narrative. Although both adults and children showed attention effects beginning at about 100 msec, only adults displayed left-lateralized attention effects and a distinct, posterior distribution for linguistic probes. These results suggest that the attentional networks indexed by this task continue to develop beyond the age of 8 years.
Treder, Matthias S.; Purwins, Hendrik; Miklody, Daniel
Here, we explore polyphonic music as a novel stimulation approach for future use in a brain-computer interface. In a musical oddball experiment, we had participants shift selective attention to one out of three different instruments in music audio clips, with each instrument occasionally playing one … 11 participants. This is a proof of concept that attention paid to a particular instrument in polyphonic music can be inferred from ongoing EEG, a finding that is potentially relevant for both brain-computer interface and music research.
Marchegiani, Letizia; Fafoutis, Xenofon
We are interested in the distribution of top-down attention in noisy environments, in which the listening capability is challenged by rock music playing in the background. We conducted behavioral experiments in which the subjects were asked to focus their attention on a narrative and detect … a specific word, while the voice of the narrator was masked by rock songs that were alternating in the background. Our study considers several types of songs and investigates how their distinct features affect the ability to segregate sounds. Additionally, we examine the effect of the subjects' familiarity … to the music.
Li, Shu-Chen; Passow, Susanne; Nietfeld, Wilfried; Schröder, Julia; Bertram, Lars; Heekeren, Hauke R; Lindenberger, Ulman
Using a specific variant of the dichotic listening paradigm, we studied the influence of dopamine on attentional modulation of auditory perception by assessing effects of allelic variation of a single-nucleotide polymorphism (SNP) rs907094 in the DARPP-32 gene (dopamine and adenosine 3', 5'-monophosphate-regulated phosphoprotein, 32 kilodaltons; also known as PPP1R1B) on behavior and cortical evoked potentials. A frequent DARPP-32 haplotype that includes the A allele of this SNP is associated with higher mRNA expression of DARPP-32 protein isoforms, striatal dopamine receptor function, and frontal-striatal connectivity. As we hypothesized, behaviorally the A homozygotes were more flexible in selectively attending to auditory inputs than G carriers. Moreover, this genotype also affected auditory evoked cortical potentials that reflect early sensory and late attentional processes. Specifically, analyses of event-related potentials (ERPs) revealed that amplitudes of an early component of sensory selection (N1) and a late component (N450) reflecting attentional deployment for conflict resolution were larger in A homozygotes than in G carriers. Taken together, our data lend support for dopamine's role in modulating auditory attention both during the early sensory selection and late conflict resolution stages. Copyright © 2013 Elsevier Ltd. All rights reserved.
Emmert, Stacey; Kercood, Suneeta; Grskovic, Janice A.
Using a single-subject alternating treatments reversal design, the effects of three conditions, tactile stimulation, auditory stimulation, and choice of the two, were compared on the math story problem solving of elementary students with attention problems. Students attempted and solved slightly more problems and engaged in fewer off-task…
Elena V Orekhova
The extended phenotype of autism spectrum disorders (ASD) includes a combination of arousal regulation problems, sensory modulation difficulties, and attention re-orienting deficit. A slow and inefficient re-orienting to stimuli that appear outside of the attended sensory stream is thought to be especially detrimental for social functioning. Event-related potentials (ERPs) and magnetic fields (ERFs) may help to reveal which processing stages underlying the brain response to unattended but salient sensory events are affected in individuals with ASD. Previous research focusing on two sequential stages of the brain response, automatic detection of physical changes in the auditory stream, indexed by mismatch negativity (MMN), and evaluation of stimulus novelty, indexed by the P3a component, found in individuals with ASD either increased, decreased, or normal processing of deviance and novelty. The review examines these apparently conflicting results, notes gaps in previous findings, and suggests a potentially unifying hypothesis relating the dampened responses to unattended sensory events to a deficit in the rapid arousal process. Specifically, ‘sensory gating’ studies focused on pre-attentive arousal consistently demonstrated that the brain response to unattended and temporally novel sound in ASD is already affected at around 100 ms after stimulus onset. We hypothesize that abnormalities in nicotinic cholinergic arousal pathways, previously reported in individuals with ASD, may contribute to these ERP/ERF aberrations and result in an attention re-orienting deficit. Such cholinergic dysfunction may be present in individuals with ASD early in life and can influence both sensory processing and attention re-orienting behavior. Identification of early neurophysiological biomarkers for cholinergic deficit would help to detect infants at risk who could potentially benefit from particular types of therapies or interventions.
Burg, E. van der; Olivers, C.N.L.; Bronkhorst, A.W.; Koelewijn, T.; Theeuwes, J.
When two visual items must be selected within half a second from a serially presented stream, performance on the second item is often relatively poor (an attentional blink occurs); however, when the first item is presented auditorily, the blink usually disappears. We …
Lin, Gaven; Carlile, Simon
Following a multi-talker conversation relies on the ability to rapidly and efficiently shift the focus of spatial attention from one talker to another. The current study investigated the listening costs associated with shifts in spatial attention during conversational turn-taking in 16 normally-hearing listeners using a novel sentence recall task. Three pairs of syntactically fixed but semantically unpredictable matrix sentences, recorded from a single male talker, were presented concurrently through an array of three loudspeakers (directly ahead and +/-30° azimuth). Subjects attended to one spatial location, cued by a tone, and followed the target conversation from one sentence to the next using the call-sign at the beginning of each sentence. Subjects were required to report the last three words of each sentence (speech recall task) or answer multiple choice questions related to the target material (speech comprehension task). The reading span test, attention network test, and trail making test were also administered to assess working memory, attentional control, and executive function. There was a 10.7 ± 1.3% decrease in word recall, a pronounced primacy effect, and a rise in masker confusion errors and word omissions when the target switched location between sentences. Switching costs were independent of the location, direction, and angular size of the spatial shift but did appear to be load dependent and only significant for complex questions requiring multiple cognitive operations. Reading span scores were positively correlated with total words recalled, and negatively correlated with switching costs and word omissions. Task switching speed (Trail-B time) was also significantly correlated with recall accuracy. Overall, this study highlights (i) the listening costs associated with shifts in spatial attention and (ii) the important role of working memory in maintaining goal relevant information and extracting meaning from dynamic multi-talker conversations.
Jacobson, Mark W; Delis, Dean C; Bondi, Mark W; Salmon, David P
Some studies of elderly individuals with the ApoE-e4 genotype noted subtle deficits on tests of attention such as the WAIS-R Digit Span subtest, but these findings have not been consistently reported. One possible explanation for the inconsistent results could be the presence of subgroups of e4+ individuals with asymmetric cognitive profiles (i.e., significant discrepancies between verbal and visuospatial skills). Comparing genotype groups with individual, modality-specific tests might obscure subtle differences between verbal and visuospatial attention in these asymmetric subgroups. In this study, we administered the WAIS-R Digit Span and WMS-R Visual Memory Span subtests to 21 nondemented elderly e4+ individuals and 21 elderly e4- individuals matched on age, education, and overall cognitive ability. We hypothesized that (a) the e4+ group would show a higher incidence of asymmetric cognitive profiles when comparing Digit Span/Visual Memory Span performance relative to the e4- group; and (b) an analysis of individual test performance would fail to reveal differences between the two subject groups. Although the groups' performances were comparable on the individual attention span tests, the e4+ group showed a significantly larger discrepancy between digit span and spatial span scores compared to the e4- group. These findings suggest that contrast measures of modality-specific attentional skills may be more sensitive to subtle group differences in at-risk groups, even when the groups do not differ on individual comparisons of standardized test means. The increased discrepancy between verbal and visuospatial attention may reflect the presence of "subgroups" within the ApoE-e4 group that are qualitatively similar to asymmetric subgroups commonly associated with the earliest stages of AD.
Maess, Burkhard; Jacobsen, Thomas; Schröger, Erich; Friederici, Angela D
Changes in the pitch of repetitive sounds elicit the mismatch negativity (MMN) of the event-related brain potential (ERP). There exist two alternative accounts for this index of automatic change detection: (1) A sensorial, non-comparator account according to which ERPs in oddball sequences are affected by differential refractory states of frequency-specific afferent cortical neurons. (2) A cognitive, comparator account stating that MMN reflects the outcome of a memory comparison between a neuronal model of the frequently presented standard sound with the sensory memory representation of the changed sound. Using a condition controlling for refractoriness effects, the two contributions to MMN can be disentangled. The present study used whole-head MEG to further elucidate the sensorial and cognitive contributions to frequency MMN. Results replicated ERP findings that MMN to pitch change is a compound of the activity of a sensorial, non-comparator mechanism and a cognitive, comparator mechanism which could be separated in time. The sensorial part of frequency MMN consisting of spatially dipolar patterns was maximal in the late N1 range (105-125 ms), while the cognitive part peaked in the late MMN-range (170-200 ms). Spatial principal component analyses revealed that the early part of the traditionally measured MMN (deviant minus standard) is mainly due to the sensorial mechanism while the later mainly due to the cognitive mechanism. Inverse modeling revealed sources for both MMN contributions in the gyrus temporales transversus, bilaterally. These MEG results suggest temporally distinct but spatially overlapping activities of non-comparator-based and comparator-based mechanisms of automatic frequency change detection in auditory cortex.
Yoncheva, Yuliya N.; Maurer, Urs; Zevin, Jason D.; McCandliss, Bruce D.
ERP responses to spoken words are sensitive to both rhyming effects and effects of associated spelling patterns. Are such effects automatically elicited by spoken words or dependent on selectively attending to phonology? To address this question, ERP responses to spoken word pairs were investigated under two equally demanding listening tasks that directed selective attention either to sub-syllabic phonology (i.e., rhyme judgments) or to melodies embedded within the words. ERPs elicited when participants selectively attended to phonology demonstrated a rhyming effect that was concurrent with online stimulus encoding and an orthographic effect that emerged later. ERP responses to the same stimuli presented under melodic focus, however, showed no evidence of sensitivity to rhyme or spelling patterns. Results reveal limitations to the automaticity of such ERP effects, suggesting that rhyme effects may depend, at least to some degree, on allocation of attention to phonology, which may in turn activate task-incidental orthographic information. PMID:23395712
McLaughlin, Paula M; Anderson, Nicole D; Rich, Jill B; Chertkow, Howard; Murtha, Susan J E
Subtle deficits in visual selective attention have been found in amnestic mild cognitive impairment (aMCI). However, few studies have explored performance on visual search paradigms or the Simon task, which are known to be sensitive to disease severity in Alzheimer's patients. Furthermore, there is limited research investigating how deficiencies can be ameliorated with exogenous support (auditory cues). Sixteen individuals with aMCI and 14 control participants completed 3 experimental tasks that varied in demand and cue availability: visual search-alerting, visual search-orienting, and Simon task. Visual selective attention was influenced by aMCI, auditory cues, and task characteristics. Visual search abilities were relatively consistent across groups. The aMCI participants were impaired on the Simon task when working memory was required, but conflict resolution was similar to controls. Spatially informative orienting cues improved response times, whereas spatially neutral alerting cues did not influence performance. Finally, spatially informative auditory cues benefited the aMCI group more than controls in the visual search task, specifically at the largest array size where orienting demands were greatest. These findings suggest that individuals with aMCI have working memory deficits and subtle deficiencies in orienting attention and rely on exogenous information to guide attention. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: email@example.com.
It has been claimed that stimuli sharing the color of the nogo-target are suppressed because of the strong incentive to not process the nogo-target, but we failed to replicate this finding. Participants searched for a color singleton in the target display and indicated its shape when it was in the go color. If the color singleton in the target display was in the nogo color, they had to withhold the response. The target display was preceded by a cue display that also contained a color singleton (the cue). The cue was either in the color of the go or nogo target, or it was in an unrelated, neutral color. With cues in the go color, reaction times (RTs) were shorter when the cue appeared at the same location as the target compared to when it appeared at a different location. Also, electrophysiological recordings showed that an index of attentional selection, the N2pc, was elicited by go cues. Surprisingly, we failed to replicate cueing costs for cues in the nogo color that were originally reported by Anderson and Folk (2012). Consistently, we also failed to find an electrophysiological index of attentional suppression (the PD) for cues in the nogo color. Further, fronto-central ERPs to the cue display showed the same negativity for nogo and neutral stimuli relative to go stimuli, which is at odds with response inhibition and conflict monitoring accounts of the Nogo-N2. Thus, the modified cueing paradigm employed here provides little evidence that features associated with nogo-targets are suppressed at the level of attention or response selection. Rather, nogo-stimuli are efficiently ignored and attention is focused on features that require a response.
Basil, Michael D.
Investigates whether selective attention to a particular television modality resulted in different levels of attention to the visual and auditory modalities. Finds that subjects were able to focus on a particular message channel but that reactions to cues were faster when the audio channel contained the most information and when viewers focused on…
Glumm, Monica M; Kehring, Kathy L; White, Timothy L
This laboratory experiment examined the effects of paired sensory cues that indicate the location of targets on target acquisition performance, the recall of information presented in concurrent visual...
In recent years, a body of research that regards the scientific study of magic performances as a promising method of investigating psychological phenomena in an ecologically valid setting has emerged. Seemingly contradictory findings concerning the ability of social cues to strengthen a magic trick’s effectiveness have been published. In this experiment, an effort was made to disentangle the unique influence of different social and physical triggers of attentional misdirection on observers’ overt and covert attention. The ability of 120 participants to detect the mechanism of a cups-and-balls trick was assessed, and their visual fixations were recorded using an eye-tracker while they were watching the routine. All the investigated techniques of misdirection, including sole usage of social cues, were shown to increase the probability of missing the trick mechanism. Depending on the technique of misdirection used, very different gaze patterns were observed. A combination of social and physical techniques of misdirection influenced participants’ overt attention most effectively.
Tavakoli, Paniz; Campbell, Kenneth
A rarely occurring and highly relevant auditory stimulus occurring outside the current focus of attention can cause a switching of attention. Such attention capture is often studied in oddball paradigms consisting of a frequently occurring "standard" stimulus which is occasionally changed to form a "deviant". The deviant may result in the capturing of attention. An auditory ERP, the P3a, is often associated with this process. Collecting a sufficient amount of data is, however, very time-consuming. A multi-feature "optimal" paradigm has been proposed, but it is not known whether it is appropriate for the study of attention capture. An optimal paradigm was run in which 6 different rare deviants (p=.08) were separated by a standard stimulus (p=.50), and the results were compared to those obtained when 4 oddball paradigms were also run. A large P3a was elicited by some of the deviants in the optimal paradigm but not by others. However, very similar results were observed when the separate oddball paradigms were run. The present study indicates that the optimal paradigm provides a very time-saving method to study attention capture and the P3a. Copyright © 2016 Elsevier B.V. All rights reserved.
Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael
Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called "cocktail-party" problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.
Stevens, Courtney; Fanning, Jessica; Coch, Donna; Sanders, Lisa; Neville, Helen
Recent proposals suggest that some interventions designed to improve language skills might also target or train selective attention. The present study examined whether six weeks of high-intensity (100 min/day) training with a computerized intervention program designed to improve language skills would also influence neural mechanisms of selective auditory attention previously shown to be deficient in children with specific language impairment (SLI). Twenty children received computerized training, including 8 children diagnosed with SLI and 12 children with typically developing language. An additional 13 children with typically developing language received no specialized training (NoTx control group) but were tested and retested after a comparable time period to control for maturational and test-retest effects. Before and after training (or a comparable delay period for the NoTx control group), children completed standardized language assessments and an event-related brain potential (ERP) measure of selective auditory attention. Relative to the NoTx control group, children receiving training showed increases in standardized measures of receptive language. In addition, children receiving training showed larger increases in the effects of attention on neural processing following training relative to the NoTx control group. The enhanced effect of attention on neural processing represented a large effect size (Cohen's d=0.8), and was specific to changes in signal enhancement of attended stimuli. These findings indicate that the neural mechanisms of selective auditory attention, previously shown to be deficient in children with SLI, can be remediated through training and can accompany improvements on standardized measures of language.
Gherri, Elena; Driver, Jon; Eimer, Martin
To investigate whether saccade preparation can modulate processing of auditory stimuli in a spatially-specific fashion, ERPs were recorded for a Saccade task, in which the direction of a prepared saccade was cued, prior to an imperative auditory stimulus indicating whether to execute or withhold that saccade. For comparison, we also ran a conventional Covert Attention task, where the same cue now indicated the direction for a covert endogenous attentional shift prior to an auditory target-nontarget discrimination. Lateralised components previously observed during cued shifts of attention (ADAN, LDAP) did not differ significantly across tasks, indicating commonalities between auditory spatial attention and oculomotor control. Moreover, in both tasks, spatially-specific modulation of auditory processing was subsequently found, with enhanced negativity for lateral auditory nontarget stimuli at cued versus uncued locations. This modulation started earlier and was more pronounced for the Covert Attention task, but was also reliably present in the Saccade task, demonstrating that the effects of covert saccade preparation on auditory processing can be similar to effects of endogenous covert attentional orienting, albeit smaller. These findings provide new evidence for similarities but also some differences between oculomotor preparation and shifts of endogenous spatial attention. They also show that saccade preparation can affect not just vision, but also sensory processing of auditory events.
Kuhn, Gustav; Teszka, Robert; Tenaw, Natalia; Kingstone, Alan
People's attention is oriented towards faces, but the extent to which these social attention effects are under top-down control is less clear. Our first aim was to measure and compare, in real life and in the lab, people's top-down control over overt and covert shifts in reflexive social attention to the face of another. We employed a magic trick in which the magician used social cues (i.e. asking a question whilst establishing eye contact) to misdirect attention towards his face, thus preventing participants from noticing a visible colour change to a playing card. Our results show that overall people spend more time looking at the magician's face when he is seen on video than in reality. Additionally, although most participants looked at the magician's face when misdirected, this tendency to look at the face was modulated by instruction (i.e., "keep your attention on the cards"), and therefore by top-down control. Moreover, while the card's colour change was fully visible, the majority of participants failed to notice the change, and critically, change detection (our measure of covert attention) was not affected by where people looked (overt attention). We conclude that there is a tendency to shift overt and covert attention reflexively to faces, but that people exert more top-down control over the overt shift in attention. These findings are discussed within a new framework that focuses on the role of eye movements as an attentional process as well as a form of non-verbal communication. Copyright © 2015 Elsevier B.V. All rights reserved.
The effect of the presentation of two different auditory pitches (high and low) on manual line-bisection performance was studied to investigate the relationship between the space and magnitude representations underlying motor acts. Participants were asked to mark the midpoint of a given line with a pen while they were listening to a pitch via headphones. In healthy participants, the effect of the presentation order (blocked or alternating) of the auditory stimuli was tested (Exp. 1). The results showed no biasing effect of pitch in blocked-order presentation, whereas the alternating presentation modulated line bisection. Lower pitch produced leftward or downward bisection biases whereas higher pitch produced rightward or upward biases, suggesting that visuomotor processing can be spatially modulated by irrelevant auditory cues. In Exp. 2, the effect of such alternating stimulation on line bisection was tested in right-brain-damaged patients with and without unilateral neglect. Similar biasing effects caused by auditory cues were observed, although white-noise presentation also affected the patients' performance. Additionally, the effect of pitch difference was larger for the neglect patient than for the no-neglect patient as well as for healthy participants. The neglect patient's bisection performance gradually improved during the experiment and the improvement was maintained even after one week. It is therefore concluded that auditory cues, characterized by both pitch difference and dynamic alternation, influence spatial representations. The larger biasing effect seen in the neglect patient compared to the no-neglect patient and healthy participants suggests that auditory cues can modulate the direction of the attentional bias that is characteristic of neglect patients. Thus, the alternating presentation of auditory cues could be used as rehabilitation for neglect patients. The space-pitch associations are discussed in terms of a
Pinedo, Carlos; Young, Laurence; Esken, Robert
..., and the development and evaluation of the NDFR symbology for on/off-boresight viewing. The localized auditory research includes looking at the benefits of augmenting the Terrain Collision Avoidance System (TCAS...
Thompson, Sarah K.; Carlyon, Robert P.; Cusack, Rhodri
Three experiments studied auditory streaming using sequences of alternating "ABA" triplets, where "A" and "B" were 50-ms tones differing in frequency by Δf semitones and separated by 75-ms gaps. Experiment 1 showed that detection of a short increase in the gap between a B tone and the preceding A tone, imposed on one ABA triplet, was better…
Dawes, Piers; Bishop, Dorothy
Background: Auditory Processing Disorder (APD) does not feature in mainstream diagnostic classifications such as the "Diagnostic and Statistical Manual of Mental Disorders, 4th Edition" (DSM-IV), but is frequently diagnosed in the United States, Australia and New Zealand, and is becoming more frequently diagnosed in the United Kingdom. Aims: To…
Miles, Eleanor; Poliakoff, Ellen; Brown, Richard J
Peripheral cues are thought to facilitate responses to stimuli presented at the same location because they lead to exogenous attention shifts. Facilitation has been observed in numerous studies of visual and auditory attention, but there have been only four demonstrations of tactile facilitation, all in studies with potential confounds. Three studies used a spatial (finger versus thumb) discrimination task, where the cue could have provided a spatial framework that might have assisted the discrimination of subsequent targets presented on the same side as the cue. The final study circumvented this problem by using a non-spatial discrimination; however, the cues were informative and interspersed with visual cues which may have affected the attentional effects observed. In the current study, therefore, we used a non-spatial tactile frequency discrimination task following a non-informative tactile white noise cue. When the target was presented 150 ms after the cue, we observed faster discrimination responses to targets presented on the same side compared to the opposite side as the cue; by 1000 ms, responses were significantly faster to targets presented on the opposite side to the cue. Thus, we demonstrated that tactile attentional facilitation can be observed in a non-spatial discrimination task, under unimodal conditions and with entirely non-predictive cues. Furthermore, we provide the first demonstration of significant tactile facilitation and tactile inhibition of return within a single experiment.
Ryan, Joseph J; Kreiner, David S; Chapman, Marla D; Stark-Wroblewski, Kim
We investigated the ability of virtual reality (VR) cue exposure to trigger a desire for alcohol among binge-drinking students. Fifteen binge-drinking college students and eight students who were nonbingers were immersed into a neutral-cue environment or room (underwater scenes), followed by four alcohol-cue rooms (bar, party, kitchen, argument), followed by a repeat of the neutral room. The virtual rooms were computer generated via head-mounted visual displays with associated auditory and olfactory stimuli. In each room, participants reported their subjective cravings for alcohol, the amount of attention given to the sight and smell of alcohol, and how much they were thinking of drinking. A 2 x 6 (type of drinker by VR room) repeated measures ANOVA was conducted on the responses to each question. After alcohol exposure, binge drinkers reported significantly higher cravings for and thoughts of alcohol than nonbinge drinkers, whereas differences between the groups following the neutral rooms were not significant.
Söderlund, Göran B W; Jobs, Elisabeth Nilsson
The most common neuropsychiatric condition in children is attention deficit hyperactivity disorder (ADHD), affecting ∼6-9% of the population. ADHD is distinguished by inattention and hyperactive, impulsive behaviors, as well as poor performance in various cognitive tasks, often leading to failures at school. Sensory and perceptual dysfunctions have also been noticed. Prior research has mainly focused on limitations in executive functioning, where differences are often explained by deficits in prefrontal cortex activation. Less notice has been given to sensory perception and subcortical functioning in ADHD. Recent research has shown that children with an ADHD diagnosis have a deviant auditory brainstem response compared to healthy controls. The aim of the present study was to investigate whether the speech recognition threshold differs between attentive children and children with ADHD symptoms in two environmental sound conditions, with and without external noise. Previous research has shown that children with attention deficits can benefit from white noise exposure during cognitive tasks, and here we investigate whether this noise benefit is present during an auditory perceptual task. For this purpose we used a modified Hagerman's speech recognition test in which children with and without attention deficits performed a binaural speech recognition task to assess the speech recognition threshold in no-noise and noise (65 dB) conditions. Results showed that the inattentive group displayed a higher speech recognition threshold than typically developed children and that the difference in speech recognition threshold disappeared when exposed to noise at suprathreshold level. From this we conclude that inattention can partly be explained by sensory-perceptual limitations that can possibly be ameliorated through noise exposure.
Karla Maria Ibraim da Freiria Elias
OBJECTIVE: To verify auditory selective attention in children with stroke. METHODS: Dichotic tests of binaural separation (non-verbal and consonant-vowel) and binaural integration (digits and the Staggered Spondaic Words Test, SSW) were applied to 13 children (7 boys), aged 7 to 16 years, with unilateral stroke confirmed by neurological examination and neuroimaging. RESULTS: Attention performance showed significant differences in comparison to the control group in both kinds of test. In the non-verbal test, identification via the ear opposite the lesion was diminished in the free recall stage and, in the following stages, a difficulty in directing attention was detected. In the consonant-vowel test, a modification in perceptual asymmetry and difficulty in focusing in the attended stages was found. In the digits and SSW tests, ipsilateral, contralateral and bilateral deficits were detected, depending on the characteristics of the lesions and the demands of the task. CONCLUSION: Stroke caused auditory attention deficits when dealing with simultaneous sources of auditory information.
Bidet-Caulet, Aurélie; Bottemanne, Laure; Fonteneau, Clara; Giard, Marie-Hélène; Bertrand, Olivier
Attention improves the processing of specific information while other stimuli are disregarded. A good balance between bottom-up (attentional capture by unexpected salient stimuli) and top-down (selection of relevant information) mechanisms is crucial for being both task-efficient and aware of our environment. Only a few studies have explored how an isolated, unexpected, task-irrelevant stimulus outside the attention focus can disturb the top-down attention mechanisms necessary for good performance of the ongoing task, and how these top-down mechanisms can modulate the bottom-up mechanisms of attentional capture triggered by an unexpected event. We recorded scalp electroencephalography in 18 young adults performing a new paradigm measuring distractibility and assessing both bottom-up and top-down attention mechanisms at the same time. Increasing task load in top-down attention was found to reduce early processing of the distracting sound, but neither the bottom-up attentional capture mechanisms nor the behavioral distraction cost in reaction time. Moreover, the impact of bottom-up attentional capture by distracting sounds on target processing was revealed as a delayed latency of the N100 sensory response to target sounds, mirroring increased reaction times. These results provide crucial information on how bottom-up and top-down mechanisms dynamically interact and compete in the human brain, i.e. on the precarious balance between voluntary attention and distraction.
Previous emotion recognition studies have suggested an age-related decline in the recognition of facial expressions of emotion. However, these studies often lack ecological validity and do not consider the multiple interacting sensory stimuli that are critical to real-world emotion recognition. In the current study, emotion recognition in everyday life was considered to comprise the interaction between facial expressions, accompanied by an auditory expression and embedded in a situational c...
Schwent, V. L.; Hillyard, S. A.; Galambos, R.
The effects of varying the rate of delivery of dichotic tone pip stimuli on selective attention, measured by evoked-potential amplitudes and signal detectability scores, were studied. The subjects attended to one channel (ear) of tones, ignored the other, and pressed a button whenever occasional targets - tones of a slightly higher pitch - were detected in the attended ear. Under separate conditions, randomized interstimulus intervals were short, medium, and long. Another study compared the effects of attention on the N1 component of the auditory evoked potential for tone pips presented alone and when white noise was added to make the tones barely above detectability threshold in a three-channel listening task. Major conclusions are that (1) N1 is enlarged to stimuli in an attended channel only in the short interstimulus interval condition (averaging 350 msec), (2) N1 and P3 are related to different modes of selective attention, and (3) attention selectivity in a multichannel listening task is greater when tones are faint and/or difficult to detect.
Joel S Cavallo
Environmental stimuli repeatedly paired with drugs of abuse can elicit conditioned responses that are thought to promote future drug seeking. We recently showed that healthy volunteers acquired conditioned responses to auditory and visual stimuli after just two pairings with methamphetamine (MA; 20 mg, oral). This study extended these findings by systematically varying the number of drug-stimulus pairings. We expected that more pairings would result in stronger conditioning. Three groups of healthy adults were randomly assigned to receive 1, 2 or 4 pairings (Groups P1, P2 and P4; Ns = 13, 16, 16, respectively) of an auditory-visual stimulus with MA, and another stimulus with placebo (PBO). Drug-cue pairings were administered in an alternating, counterbalanced order, under double-blind conditions, during 4-hr sessions. MA produced prototypic subjective effects (mood, ratings of drug effects) and alterations in physiology (heart rate, blood pressure). Although subjects did not exhibit increased behavioral preference for, or emotional reactivity to, the MA-paired cue after conditioning, they did exhibit an increase in attentional bias (initial gaze toward the drug-paired stimulus). Further, subjects who had four pairings reported "liking" the MA-paired cue more than the PBO cue after conditioning. Thus, the number of drug-stimulus pairings, varying from one to four, had only modest effects on the strength of conditioned responses. Further studies investigating the parameters under which drug conditioning occurs will help to identify risk factors for developing drug abuse, and provide new treatment strategies.
Stroganova, Tatiana A; Kozunov, Vladimir V; Posikera, Irina N; Galuta, Ilia A; Gratchev, Vitaliy V; Orekhova, Elena V
Auditory sensory modulation difficulties and problems with automatic re-orienting to sound are well documented in autism spectrum disorders (ASD). Abnormal preattentive arousal processes may contribute to these deficits. In this study, we investigated components of the cortical auditory evoked potential (CAEP) reflecting preattentive arousal in children with ASD and typically developing (TD) children aged 3-8 years. Pairs of clicks ('S1' and 'S2') separated by a 1 sec S1-S2 interstimulus interval (ISI) and much longer (8-10 sec) S1-S1 ISIs were presented monaurally to either the left or right ear. In TD children, the P50, P100 and N1c CAEP components were strongly influenced by temporal novelty of clicks and were much greater in response to the S1 than the S2 click. Irrespective of the stimulation side, the 'tangential' P100 component was rightward lateralized in TD children, whereas the 'radial' N1c component had higher amplitude contralaterally to the stimulated ear. Compared to the TD children, children with ASD demonstrated 1) reduced amplitude of the P100 component under the condition of temporal novelty (S1) and 2) an attenuated P100 repetition suppression effect. The abnormalities were lateralized and depended on the presentation side. They were evident in the case of the left but not the right ear stimulation. The P100 abnormalities in ASD correlated with the degree of developmental delay and with the severity of auditory sensory modulation difficulties observed in early life. The results suggest that some rightward-lateralized brain networks that are crucially important for arousal and attention re-orienting are compromised in children with ASD and that this deficit contributes to sensory modulation difficulties and possibly even other behavioral deficits in ASD.
Gherri, Elena; Eimer, Martin
The ability to drive safely is disrupted by cell phone conversations, and this has been attributed to a diversion of attention from the visual environment. We employed behavioral and ERP measures to study whether the attentive processing of spoken messages is, in itself, sufficient to produce visual-attentional deficits. Participants searched for visual targets defined by a unique feature (Experiment 1) or feature conjunction (Experiment 2), and simultaneously listened to narrated text passages that had to be recalled later (encoding condition), or heard backward-played speech sounds that could be ignored (control condition). Responses to targets were slower in the encoding condition, and ERPs revealed that the visual processing of search arrays and the attentional selection of target stimuli were less efficient in the encoding relative to the control condition. Results demonstrate that the attentional processing of visual information is impaired when concurrent spoken messages are encoded and maintained, in line with cross-modal links in selective attention, but inconsistent with the view that attentional resources are modality-specific. The distraction of visual attention by active listening could contribute to the adverse effects of cell phone use on driving performance.
Thiessen, Erik D.; Saffran, Jenny R.
Three experiments explored infants' attention to conflicting cues at different ages. Found when stress and statistical cues indicated different word boundaries, 9-month-olds used syllable stress as a cue to segmentation while ignoring statistical cues. Seven-month-olds attended more to statistical cues than to stress cues. Results suggested…
Voicikas, Aleksandras; Niciute, Ieva; Ruksenas, Osvaldas; Griskova-Bulanova, Inga
Auditory steady-state responses (ASSRs) are used to test the ability of local cortical networks to generate gamma-frequency activity in patients with psychiatric disorders. For the effective use of ASSRs in research and clinical applications, it is necessary to find a comfortable stimulation type and to know how ASSRs are modulated by the tasks given to the subjects during the recording session. We aimed to evaluate the suitability of flutter amplitude-modulated tone (FAM) stimulation for generation of ASSRs: subjective pleasantness of FAMs and attentional effects on FAM-elicited 40 Hz ASSRs were assessed. Commonly used click stimulation served for comparison. FAMs produced ASSRs that were stable over the variety of tasks: they were not modulated by attentional demands during the task, whereas responses to clicks were reduced and less synchronized during distraction. FAM stimuli were rated as less unpleasant and less arousing than click stimuli, thus being more pleasant to the subjects. Our findings suggest that FAM stimulation might be more suitable in conditions where attention is difficult to control, e.g., in clinical settings. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
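The two stimulus types compared here can be sketched numerically. The following is a minimal illustration, not the study's actual stimuli: the 1 kHz carrier, full modulation depth, 1 s duration, and 1 ms click width are assumed values.

```python
import numpy as np

def am_tone(carrier_hz=1000.0, mod_hz=40.0, depth=1.0, dur_s=1.0, fs=44100):
    """Sinusoidally amplitude-modulated tone for 40 Hz ASSR stimulation."""
    t = np.arange(int(dur_s * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    signal = envelope * np.sin(2 * np.pi * carrier_hz * t)
    return signal / np.max(np.abs(signal))  # normalize to [-1, 1]

def click_train(rate_hz=40.0, click_ms=1.0, dur_s=1.0, fs=44100):
    """Periodic click train at the same 40 Hz repetition rate, for comparison."""
    signal = np.zeros(int(dur_s * fs))
    click_len = int(click_ms / 1000 * fs)
    for onset in np.arange(0.0, dur_s, 1.0 / rate_hz):
        i = int(onset * fs)
        signal[i:i + click_len] = 1.0
    return signal
```

Both stimuli carry periodicity at 40 Hz and thus drive a 40 Hz steady-state response; the AM tone does so with a smooth, narrowband envelope rather than broadband transients.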
Söderlund, Göran B W; Björk, Christer; Gustafsson, Peik
Recent research has shown that acoustic white noise (80 dB) can improve task performance in people with attention deficits and/or Attention Deficit Hyperactivity Disorder (ADHD). This is attributed to the phenomenon of stochastic resonance, in which a certain amount of noise can improve performance in a brain that is not working at its optimum. We compare here the effect of noise exposure with the effect of stimulant medication on cognitive task performance in ADHD. The aim of the present study was to compare the effects of auditory noise exposure with stimulant medication for ADHD children on a cognitive test battery. A group of typically developed children (TDC) took the same tests as a comparison. Twenty children with ADHD of combined or inattentive subtypes and twenty TDC matched for age and gender performed three different tests (word recall, spanboard, and n-back tasks) during exposure to white noise (80 dB) and in a silent condition. The ADHD children were tested with and without central stimulant medication. In the spanboard and word recall tasks, but not in the 2-back task, white noise exposure led to significant improvements for both non-medicated and medicated ADHD children. No significant effects of medication were found on any of the three tasks. This pilot study shows that exposure to white noise resulted in a task improvement that was larger than that obtained with stimulant medication, thus opening up the possibility of using auditory noise as an alternative, non-pharmacological treatment of cognitive ADHD symptoms.
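The stochastic-resonance account invoked above can be illustrated with a toy threshold-detector simulation (purely illustrative; the signal level, threshold, and noise levels below are arbitrary and not taken from the study). A subthreshold signal that a hard threshold never registers in silence is pushed over the threshold on many trials once moderate noise is added; a full stochastic-resonance curve would also track false alarms, which eventually swamp the benefit at high noise levels.

```python
import numpy as np

def detections(noise_sd, n_trials=2000, signal=0.8, threshold=1.0, seed=0):
    """Fraction of trials on which a subthreshold signal crosses the
    detector threshold after zero-mean Gaussian noise is added."""
    rng = np.random.default_rng(seed)
    samples = signal + rng.normal(0.0, noise_sd, n_trials)
    return np.mean(samples > threshold)

# Without noise the signal (0.8) never crosses the threshold (1.0);
# moderate noise lifts it over the threshold on a sizable share of trials.
silent = detections(noise_sd=0.0)
noisy = detections(noise_sd=0.3)
```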
Morey, Candice Coker; Cowan, Nelson; Morey, Richard D.; Rouder, Jeffery N.
Prominent roles for general attention resources are posited in many models of working memory, but the manner in which these can be allocated differs between models or is not sufficiently specified. We varied the payoffs for correct responses in two temporally-overlapping recognition tasks, a visual
Purpose: "The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition" notes that attention-deficit/hyperactivity disorder (ADHD) diagnosed in childhood will persist into adulthood among at least some individuals. There is a paucity of evidence, however, regarding whether other difficulties that often accompany childhood…
Fiedler, Lorenz; Wöstmann, Malte; Graversen, Carina; Brandmeyer, Alex; Lunner, Thomas; Obleser, Jonas
Objective. Conventional, multi-channel scalp electroencephalography (EEG) allows the identification of the attended speaker in concurrent-listening (‘cocktail party’) scenarios. This implies that EEG might provide valuable information to complement hearing aids with some form of EEG and to install a level of neuro-feedback. Approach. To investigate whether a listener’s attentional focus can be detected from single-channel hearing-aid-compatible EEG configurations, we recorded EEG from three electrodes inside the ear canal (‘in-Ear-EEG’) and additionally from 64 electrodes on the scalp. In two different, concurrent listening tasks, participants (n = 7) were fitted with individualized in-Ear-EEG pieces and were either asked to attend to one of two dichotically-presented, concurrent tone streams or to one of two diotically-presented, concurrent audiobooks. A forward encoding model was trained to predict the EEG response at single EEG channels. Main results. Each individual participant’s attentional focus could be detected from single-channel EEG responses recorded from short-distance configurations consisting only of a single in-Ear-EEG electrode and an adjacent scalp-EEG electrode. The differences in neural responses to attended and ignored stimuli were consistent in morphology (i.e. polarity and latency of components) across subjects. Significance. In sum, our findings show that the EEG response from a single-channel, hearing-aid-compatible configuration provides valuable information to identify a listener’s focus of attention.
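The forward encoding model used in this line of work maps the stimulus (e.g., its amplitude envelope) onto the EEG response at a single channel via a temporal response function (TRF) estimated over a range of time lags. A minimal ridge-regression sketch on synthetic data follows; the study's actual lag window, regularization, and preprocessing are not specified here, so all parameters are illustrative.

```python
import numpy as np

def lagged_design(stimulus, n_lags):
    """Design matrix whose columns hold the stimulus at lags 0..n_lags-1."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]
    return X

def fit_trf(stimulus, eeg, n_lags=32, ridge=1.0):
    """Estimate a temporal response function by ridge regression."""
    X = lagged_design(stimulus, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ eeg)

# Synthetic check: EEG simulated as the stimulus convolved with a known
# kernel plus noise; the fitted TRF should recover that kernel.
rng = np.random.default_rng(1)
stim = rng.normal(size=5000)
kernel = np.exp(-np.arange(32) / 8.0) * np.sin(np.arange(32) / 3.0)
eeg = np.convolve(stim, kernel)[:5000] + 0.1 * rng.normal(size=5000)
trf = fit_trf(stim, eeg)
```

In the attention-decoding setting, one such model is fit per attended stream, and the focus of attention is inferred from which model better predicts the recorded EEG.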
Perception and action are coupled via bidirectional relationships between sensory and motor systems. Motor systems influence sensory areas by imparting a feedforward influence on sensory processing termed "motor efference copy" (MEC). MEC is suggested to occur in humans because speech preparation and production modulate neural measures of auditory cortical activity. However, it is not known if MEC can affect auditory perception. We tested the hypothesis that during speech preparation auditory thresholds will increase relative to a control condition, and that the increase would be most evident for frequencies that match the upcoming vocal response. Participants performed trials in a speech condition that contained a visual cue indicating a vocal response to prepare (one of two frequencies), followed by a go signal to speak. To determine threshold shifts, voice-matched or -mismatched pure tones were presented at one of three time points between the cue and target. The control condition was the same except the visual cues did not specify a response and subjects did not speak. For each participant, we measured f0 thresholds in isolation from the task in order to establish baselines. Results indicated that auditory thresholds were highest during speech preparation, relative to baselines and a non-speech control condition, especially at suprathreshold levels. Thresholds for tones that matched the frequency of planned responses gradually increased over time, but sharply declined for the mismatched tones shortly before targets. Findings support the hypothesis that MEC influences auditory perception by modulating thresholds during speech preparation, with some specificity relative to the planned response. The threshold increase in tasks vs. baseline may reflect attentional demands of the tasks.
Lee, Adrian K C; Larson, Eric; Maddox, Ross K
Magneto- and electroencephalography (MEG/EEG) are neuroimaging techniques that provide a high temporal resolution particularly suitable to investigate the cortical networks involved in dynamical perceptual and cognitive tasks, such as attending to different sounds in a cocktail party. Many past studies have employed data recorded at the sensor level only, i.e., the magnetic fields or the electric potentials recorded outside and on the scalp, and have usually focused on activity that is time-locked to the stimulus presentation. This type of event-related field / potential analysis is particularly useful when there are only a small number of distinct dipolar patterns that can be isolated and identified in space and time. Alternatively, by utilizing anatomical information, these distinct field patterns can be localized as current sources on the cortex. However, for a more sustained response that may not be time-locked to a specific stimulus (e.g., in preparation for listening to one of the two simultaneously presented spoken digits based on the cued auditory feature) or may be distributed across multiple spatial locations unknown a priori, the recruitment of a distributed cortical network may not be adequately captured by using a limited number of focal sources. Here, we describe a procedure that employs individual anatomical MRI data to establish a relationship between the sensor information and the dipole activation on the cortex through the use of minimum-norm estimates (MNE). This inverse imaging approach provides us a tool for distributed source analysis. For illustrative purposes, we will describe all procedures using FreeSurfer and MNE software, both freely available. We will summarize the MRI sequences and analysis steps required to produce a forward model that enables us to relate the expected field pattern caused by the dipoles distributed on the cortex onto the M/EEG sensors. Next, we will step through the necessary processes that facilitate us in denoising
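The minimum-norm estimate (MNE) described here resolves the underdetermined inverse problem by picking, among the source configurations that explain the sensor data, the one with the smallest norm: s_hat = G^T (G G^T + lambda*I)^(-1) y, where G is the forward (gain) matrix relating cortical dipoles to sensors. Below is a bare-bones numerical sketch on a toy forward model; real pipelines, such as the FreeSurfer/MNE software mentioned in the abstract, additionally incorporate a noise covariance, source orientations, and depth weighting, none of which is shown.

```python
import numpy as np

def minimum_norm(G, y, lam=1e-2):
    """Minimum-norm source estimate: s_hat = G.T @ inv(G @ G.T + lam*I) @ y."""
    n_sensors = G.shape[0]
    return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), y)

# Toy example: 10 sensors observing 50 candidate cortical sources.
rng = np.random.default_rng(2)
G = rng.normal(size=(10, 50))   # forward/gain matrix (sensors x sources)
s_true = np.zeros(50)
s_true[7] = 1.0                 # a single active source
y = G @ s_true                  # noiseless sensor data
s_hat = minimum_norm(G, y)      # distributed estimate across all sources
```

The estimate reproduces the sensor data while having no larger a norm than the true source pattern, which is the defining property of this regularized inverse.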
Dundon, Neil M; Dockree, Suvi P; Buckley, Vanessa; Merriman, Niamh; Carton, Mary; Clarke, Sarah; Roche, Richard A P; Lalor, Edmund C; Robertson, Ian H; Dockree, Paul M
Patients who suffer traumatic brain injury frequently report difficulty concentrating on tasks and completing routine activities in noisy and distracting environments. Such impairments can have long-term negative psychosocial consequences. A cognitive control function that may underlie this impairment is the capacity to select a goal-relevant signal for further processing while safeguarding it from irrelevant noise. A paradigmatic investigation of this problem was undertaken using a dichotic listening task (study 1) in which comprehension of a stream of speech to one ear was measured in the context of increasing interference from a second stream of irrelevant speech to the other ear. Controls showed an initial decline in performance in the presence of competing speech but thereafter showed adaptation to increasing audibility of irrelevant speech, even at the highest levels of noise. By contrast, patients showed linear decline in performance with increasing noise. Subsequently attempts were made to ameliorate this deficit (study 2) using a cognitive training procedure based on attention process training (APT) that included graded exposure to irrelevant noise over the course of training. Patients were assigned to adaptive and non-adaptive training schedules or to a no-training control group. Results showed that both types of training drove improvements in the dichotic listening and in naturalistic tasks of performance in noise. Improvements were also seen on measures of selective attention in the visual domain suggesting transfer of training. We also observed augmentation of event-related potentials (ERPs) linked to target processing (P3b) but no change in ERPs evoked by distractor stimuli (P3a) suggesting that training heightened tuning of target signals, as opposed to gating irrelevant noise. No changes in any of the above measures were observed in a no-training control group. Together these findings present an ecologically valid approach to measure selective
It has been repeatedly shown that the auditory N1 is enhanced for sounds presented at an attended time point. The present study investigated the underlying mechanisms using a temporal cuing paradigm. In each trial, an auditory cue indicated at which time point a second sound could be relevant for response selection. Crucially, in addition to temporal attention, two physical sound features with known effects on the sensory N1 were manipulated: location and intensity. Positive evidence for conjoint effects of attention and location or attention and intensity would corroborate the notion that the sensory N1 was modulated by temporal attention, thus supporting a gain mechanism. However, the N1 effect of temporal attention was not similarly lateralized as the sensory N1, and, moreover, it was independent of sound intensity. Thus, the present results do not provide compelling evidence that temporal attention involves an increase in sensory gain. Copyright © 2012 Society for Psychophysiological Research.
Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.
Auditory Discrimination of Lexical Stress Patterns in Hearing-Impaired Infants with Cochlear Implants Compared with Normal Hearing: Influence of Acoustic Cues and Listening Experience to the Ambient Language.
Segal, Osnat; Houston, Derek; Kishon-Rabin, Liat
To assess discrimination of lexical stress patterns in infants with cochlear implants (CI) compared with infants with normal hearing (NH). While criteria for cochlear implantation have expanded to infants as young as 6 months, little is known regarding infants' processing of suprasegmental-prosodic cues, which are known to be important for the first stages of language acquisition. Lexical stress is an example of such a cue, which, in hearing infants, has been shown to assist in segmenting words from fluent speech and in distinguishing between words that differ only in their stress pattern. To date, however, there are no data on the ability of infants with CIs to perceive lexical stress. Such information will provide insight into the speech characteristics that are available to these infants in their first steps of language acquisition. This is of particular interest given the known limitations that the CI device has in transmitting speech information that is mediated by changes in fundamental frequency. Two groups of infants participated in this study. The first group included 20 profoundly hearing-impaired infants with CI, 12 to 33 months old, implanted under the age of 2.5 years (median age of implantation = 14.5 months), with 1 to 6 months of CI use (mean = 2.7 months) and no known additional problems. The second group of infants included 48 NH infants, 11 to 14 months old, with normal development and no known risk factors for developmental delays. Infants were tested on their ability to discriminate between nonsense words that differed in their stress pattern only (/dóti/ versus /dotí/ and /dotí/ versus /dóti/) using the visual habituation procedure. The measure for discrimination was the change in looking time between the last habituation trial (e.g., /dóti/) and the novel trial (e.g., /dotí/). (1) Infants with CI showed discrimination between lexical stress patterns with only limited auditory experience with their implant device, (2) discrimination of stress
Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples
Pilling, Michael; Barrett, Doug J K
We investigated how dimension-based attention influences visual short-term memory (VSTM). This was done through examining the effects of cueing a feature dimension in two perceptual comparison tasks (change detection and sameness detection). In both tasks, a memory array and a test array consisting of a number of colored shapes were presented successively, interleaved by a blank interstimulus interval (ISI). In Experiment 1 (change detection), the critical event was a feature change in one item across the memory and test arrays. In Experiment 2 (sameness detection), the critical event was the absence of a feature change in one item across the two arrays. Auditory cues indicated the feature dimension (color or shape) of the critical event with 80% validity; the cues were presented either prior to the memory array, during the ISI, or simultaneously with the test array. In Experiment 1, the cue validity influenced sensitivity only when the cue was given at the earliest position; in Experiment 2, the cue validity influenced sensitivity at all three cue positions. We attributed the greater effectiveness of top-down guidance by cues in the sameness detection task to the more active nature of the comparison process required to detect sameness events (Hyun, Woodman, Vogel, Hollingworth, & Luck, Journal of Experimental Psychology: Human Perception and Performance, 35, 1140-1160, 2009).
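Sensitivity in change- and sameness-detection tasks of this kind is typically quantified with d' from signal detection theory, computed from hit and false-alarm rates. A standard computation is sketched below; the log-linear correction for extreme rates is a common convention, not necessarily the one used in this study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' with a log-linear correction so that hit and
    false-alarm rates of exactly 0 or 1 remain computable."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)
```

For example, an observer with 45 hits, 5 misses, 10 false alarms, and 40 correct rejections is clearly more sensitive than one responding near chance, and d' reflects that ordering.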
Wegmann, Elisa; Brand, Matthias; Snagowski, Jan; Schiebener, Johannes
In everyday life people have to attend to, react to, or inhibit reactions to visual and acoustic cues. These abilities are frequently measured with Go/NoGo tasks using visual stimuli. However, these abilities have rarely been examined with auditory cues. The aims of our study (N = 106) are to develop an auditory Go/NoGo paradigm and to describe brain-healthy participants' performance. We tested convergent validity of the auditory Go/NoGo paradigm by analyzing the correlations with other neuropsychological tasks assessing attentional control and executive functions. We also analyzed the ecological validity of the task by examining correlations of self-reported impulsivity. In the first step we found that the participants are able to differentiate correctly among several sounds and also to appropriately react or inhibit a certain reaction most of the times. Convergent validity was suggested by correlations between the auditory Go/NoGo paradigm and the Color Word Interference Test, Trail Making Test, and Modified Card Sorting Test. We did not find correlations with self-reported impulsivity. Overall, the auditory Go/NoGo paradigm may be used to assess attention and inhibition in the context of auditory stimuli. Future studies may adapt the auditory Go/NoGo paradigm with specific acoustic stimuli (e.g., sound of opening a bottle) in order to address cognitive biases in particular disorders (e.g., alcohol dependence).
The conference's topics include auditory exploration of data via sonification and audification; real-time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting auditory display creation; data handling for auditory display systems; and applications of auditory display.
The phenomenon of crossmodal dynamic visual capture occurs when the direction of motion of a visual cue causes a weakening or reversal of the perceived direction of motion of a concurrently presented auditory stimulus. It is known that there is a perceptual bias towards looming compared to receding stimuli, and faster bimodal reaction times have recently been observed for looming cues compared to receding cues (Cappe et al., 2009). The current studies aimed to test whether visual looming cues are associated with greater dynamic capture of auditory motion in depth compared to receding signals. Participants judged the direction of an auditory motion cue presented with a visual looming cue (expanding disk), a visual receding cue (contracting disk), or visual stationary cue (static disk). Visual cues were presented either simultaneously with the auditory cue, or after 500 ms. We found increased levels of interference with looming visual cues compared to receding visual cues, compared to asynchronous presentation or stationary visual cues. The results could not be explained by the weaker subjective strength of the receding auditory stimulus, as in Experiment 2 the looming and receding auditory cues were matched for perceived strength. These results show that dynamic visual capture of auditory motion in the depth plane is modulated by an adaptive bias for looming compared to receding visual cues.
Berginström, Nils; Johansson, Jonas; Nordström, Peter; Nordström, Anna
Our objective was to present normative data from 70-year-olds on the Integrated Visual and Auditory Continuous Performance Test (IVA), a computerized measure of attention and response control. 640 participants (330 men and 310 women), all aged 70 years, completed the IVA, as well as the Mini-Mental State Examination and the Geriatric Depression Scale. Data were stratified by education and gender. Education differences were found in 11 of 22 IVA scales. Minor gender differences were found in six scales for the high-education group, and two scales for the low-education group. Comparisons of healthy participants and participants with stroke, myocardial infarction, or diabetes showed only minor differences. Correlations among IVA scales were strong (all r > .34, p < .001), and those with the widely used Mini-Mental State Examination were weaker (all r < .21, p < .05). Skewed distributions of normative data from primary IVA scales measuring response inhibition (Prudence) and inattention (Vigilance) represent a weakness of this test. This study provides IVA norms for 70-year-olds stratified by education and gender, increasing the usability of this instrument when testing persons near this age. The data presented here show some major differences from original IVA norms, and explanations for these differences are discussed. Explanations include the broad age-range used in the original IVA norms (66-99 years of age) and the passage of 15 years since the original norms were collected.
Wolfe, David E; Noguchi, Laura K
The purpose of this study was to examine the use of music to sustain attention of young children during conditions of auditory distractions. Kindergarten students (N=76) were randomly assigned to one of four conditions/groups: (a) spoken story with no distraction, (b) spoken story with distraction, (c) musical story with no distraction, or (d) musical story with distraction. Participants were asked to listen to the story and to identify specific "actions" and "animals" that were presented (i.e., spoken or sung) within the story. A tally of correct responses (child pointed to correct actions/animals at appropriate times) was recorded during the listening task. Observations of participants' behaviors while listening were also made by the experimenter using narrative recording procedures. A one-way ANOVA was computed to assess the difference in mean scores across the four experimental conditions. Significant results were found. Further analysis employing a Tukey post hoc/multiple comparisons test revealed significant differences between the spoken story with distraction condition and the musical story with distraction condition. These statistical results, along with the observations of listening behaviors, were discussed in terms of providing suggestions for future research and in lending support to the use of music with young children to improve vigilance within educational and clinical settings.
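The analysis described above, a one-way ANOVA across the four conditions followed by Tukey post hoc comparisons, can be reproduced with standard tools. The scores below are synthetic and the group means arbitrary; they are not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(3)
# Four conditions: spoken/no-distraction, spoken/distraction,
# musical/no-distraction, musical/distraction (synthetic scores, n=19 each).
groups = [rng.normal(loc, 2.0, size=19) for loc in (10, 6, 10, 9)]

f_stat, p_value = f_oneway(*groups)   # omnibus test across conditions
posthoc = tukey_hsd(*groups)          # pairwise Tukey HSD comparisons
```

`posthoc.pvalue` holds the matrix of pairwise p-values; the entry for groups 0 and 1 tests the spoken story with versus without distraction under these synthetic means.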
Sustained auditory attention ability in children with cleft lip and palate and phonological disorders.
Tâmyne Ferreira Duarte de Moraes
Full Text Available PURPOSE: To verify the ability of sustained auditory attention in children with cleft lip and palate and phonological disorder, comparing their performance with that of children with cleft lip and palate and no phonological disorder. METHODS: Seventeen children aged 6 to 11 years, with repaired unilateral complete cleft lip and palate and no auditory complaints or hearing alterations, were divided into two groups: GI (with phonological disorder) and GII (without phonological disorder). Audiometry and tympanometry were performed to detect hearing alterations. Phonological assessment used the Teste de Linguagem Infantil and the Consciência Fonológica: Instrumento de Avaliação Sequencial. Sustained auditory attention was assessed with the Sustained Auditory Attention Ability Test. RESULTS: Of the seven children with phonological disorder (41%), two (29%) showed altered results on the Sustained Auditory Attention Ability Test. There was no difference between children with cleft lip and palate with and without phonological disorder in the test results. CONCLUSION: Sustained auditory attention ability in children with cleft lip and palate and phonological disorder does not differ from that of children with cleft lip and palate without phonological disorder.
Noles, Nicholaus S.; Gelman, Susan A.
The goal of the present study is to evaluate the claim that young children display preferences for auditory stimuli over visual stimuli. This study is motivated by concerns that the visual stimuli employed in prior studies were considerably more complex and less distinctive than the competing auditory stimuli, resulting in an illusory preference for auditory cues. Across three experiments, preschool children and adults were trained to use paired audio-visual cues to predict the location of a target. At test, the cues were switched so that auditory cues indicated one location and visual cues indicated the opposite location. In contrast to prior studies, preschool age children did not exhibit auditory dominance. Instead, children and adults flexibly shifted their preferences as a function of the degree of contrast within each modality (with high contrast leading to greater use). PMID:22513210
Hecht, Marcus; Thiemann, Ulf; Freitag, Christine M; Bender, Stephan
Post-perceptual cues can enhance visual short term memory encoding even after the offset of the visual stimulus. However, both the mechanisms by which the sensory stimulus characteristics are buffered as well as the mechanisms by which post-perceptual selective attention enhances short term memory encoding remain unclear. We analyzed late post-perceptual event-related potentials (ERPs) in visual change detection tasks (100ms stimulus duration) by high-resolution ERP analysis to elucidate these mechanisms. The effects of early and late auditory post-cues (300ms or 850ms after visual stimulus onset) as well as the effects of a visual interference stimulus were examined in 27 healthy right-handed adults. Focusing attention with post-perceptual cues at both latencies significantly improved memory performance, i.e. sensory stimulus characteristics were available for up to 850ms after stimulus presentation. Passive watching of the visual stimuli without auditory cue presentation evoked a slow negative wave (N700) over occipito-temporal visual areas. N700 was strongly reduced by a visual interference stimulus which impeded memory maintenance. In contrast, contralateral delay activity (CDA) still developed in this condition after the application of auditory post-cues and was thereby dissociated from N700. CDA and N700 seem to represent two different processes involved in short term memory encoding. While N700 could reflect visual post processing by automatic attention attraction, CDA may reflect the top-down process of searching selectively for the required information through post-perceptual attention. Copyright © 2015 Elsevier Inc. All rights reserved.
Schreuder, Martijn; Rost, Thomas; Tangermann, Michael
Representing an intuitive spelling interface for brain-computer interfaces (BCI) in the auditory domain is not straightforward. In consequence, all existing approaches based on event-related potentials (ERP) rely at least partially on a visual representation of the interface. This online study introduces an auditory spelling interface that eliminates the necessity for such a visualization. In up to two sessions, a group of healthy subjects (N = 21) was asked to use a text entry application, utilizing the spatial cues of the AMUSE paradigm (Auditory Multi-class Spatial ERP). The speller relies on the auditory sense both for stimulation and the core feedback. Without prior BCI experience, 76% of the participants were able to write a full sentence during the first session. By exploiting the advantages of a newly introduced dynamic stopping method, a maximum writing speed of 1.41 char/min (7.55 bits/min) could be reached during the second session (average: 0.94 char/min, 5.26 bits/min). For the first time, the presented work shows that an auditory BCI can reach performances similar to state-of-the-art visual BCIs based on covert attention. These results represent an important step toward a purely auditory BCI.
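Writing-speed figures like those above (char/min alongside bits/min) are commonly derived from the Wolpaw information transfer rate. As a hedged illustration only (the exact ITR definition and class count used in this study are assumptions, not taken from the abstract), a minimal sketch:

```python
import math

def itr_bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Wolpaw information transfer rate in bits per selection:
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    """
    n, p = n_classes, accuracy
    if p >= 1.0:
        return math.log2(n)  # perfect accuracy yields the full log2(N) bits
    if p <= 0.0:
        return 0.0
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_minute(n_classes: int, accuracy: float,
                        selections_per_minute: float) -> float:
    """Scale bits/selection by the selection rate to get bits/min."""
    return itr_bits_per_selection(n_classes, accuracy) * selections_per_minute

# Hypothetical example: a 6-class auditory paradigm at 90% accuracy,
# making 3 selections per minute (illustrative numbers, not the study's).
print(round(itr_bits_per_minute(6, 0.90, 3.0), 2))  # → 5.65
```

Note that chance-level accuracy (P = 1/N) drives the rate toward zero, which is why dynamic stopping methods that trade fewer stimulus repetitions against accuracy can raise the effective bits/min.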
Language acquisition in infants is driven by on-going neural plasticity that is acutely sensitive to environmental acoustic cues. Recent studies showed that attention-based experience with non-linguistic, temporally-modulated auditory stimuli sharpens cortical responses. A previous ERP study from this laboratory showed that interactive auditory experience via behavior-based feedback (AEx) over a 6-week period, from 4 to 7 months of age, confers a processing advantage compared to passive auditory exposure (PEx) or maturation alone (Naïve Control, NC). Here, we provide a follow-up investigation of the underlying neural oscillatory patterns in these three groups. In AEx infants, Standard stimuli with invariant frequency (STD) elicited greater Theta-band (4–6 Hz) activity in Right Auditory Cortex (RAC), as compared to NC infants, and Deviant stimuli with rapid frequency change (DEV) elicited larger responses in Left Auditory Cortex (LAC). PEx and NC counterparts showed less-mature bilateral patterns. AEx infants also displayed stronger Gamma (33–37 Hz) activity in the LAC during DEV discrimination, compared to NCs, while NC and PEx groups demonstrated bilateral activity in this band, if at all. This suggests that interactive acoustic experience with non-linguistic stimuli can promote a distinct, robust and precise cortical pattern during rapid auditory processing, perhaps reflecting mechanisms that support fine-tuning of early acoustic mapping.
Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David
Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218
This book presents state-of-the-art computational attention models that have been successfully tested in diverse application areas and can build the foundation for artificial systems to efficiently explore, analyze, and understand natural scenes. It gives a comprehensive overview of the most recent computational attention models for processing visual and acoustic input. It covers the biological background of visual and auditory attention, as well as bottom-up and top-down attentional mechanisms, and discusses various applications. In the first part, new approaches for bottom-up visual and acoustic saliency models are presented and applied to the task of audio-visual scene exploration by a robot. In the second part, the influence of top-down cues on attention modeling is investigated.
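At the core of most bottom-up saliency models described in such work is a center-surround contrast operation: a location is salient to the extent that it differs from its local context. The following is a deliberately minimal 1-D toy sketch of that idea (not any specific published model):

```python
def center_surround_saliency(signal, surround=3):
    """Toy bottom-up saliency: |center - local surround mean| per sample.

    `surround` is the half-width of the neighbourhood used as context.
    Real models repeat this across feature channels and spatial scales.
    """
    saliency = []
    for i, x in enumerate(signal):
        lo, hi = max(0, i - surround), min(len(signal), i + surround + 1)
        neighbours = [signal[j] for j in range(lo, hi) if j != i]
        local_mean = sum(neighbours) / len(neighbours)
        saliency.append(abs(x - local_mean))
    return saliency

# A flat signal with one odd-one-out element: the deviant pops out.
sal = center_surround_saliency([1, 1, 1, 5, 1, 1, 1])
print(sal.index(max(sal)))  # → 3
```

Full models such as those surveyed in the book combine many such feature maps (intensity, colour, orientation, or spectral channels for audio) into a single saliency map that top-down cues can then re-weight.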
Wegen, E. van; Goede, C. de; Lim, I.; Rietberg, M.B.; Nieuwboer, A.; Willems, A.; Jones, D.; Rochester, L.; Hetherington, V.; Berendse, H.W.; Zijlmans, J.C.M.; Wolters, E.; Kwakkel, G.
BACKGROUND AND AIMS: Gait and gait-related activities in patients with Parkinson's disease (PD) can be improved with rhythmic auditory cueing (e.g. a metronome). In the context of a large European study, a portable prototype cueing device was developed to provide an alternative for rhythmic auditory cueing.
Arjona, Antonio; Escudero, Miguel; Gómez, Carlos M
The neural bases of the inter-trial validity/invalidity sequential effects in a visuo-auditory modified version of the Central Cue Posner's Paradigm (CCPP) are analyzed by means of the Early Directing Attention Negativity (EDAN), the Contingent Negative Variation (CNV) and the Lateralized Readiness Potential (LRP). ERP results indicated an increase in CNV and LRP in trials preceded by valid trials compared to trials preceded by invalid trials. The CNV and LRP pattern would be highly related to the behavioral pattern of lower RTs and a higher number of anticipations in trials preceded by valid trials with respect to trials preceded by invalid trials. This effect was not preceded by a modulation of the EDAN as a result of the previous trial condition. The results suggest that there is a trial-by-trial dynamic modulation of the attentional system as a function of the validity assigned to the cue, in which conditional probabilities between cue and target are continuously updated.
Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.
Boucheix, Jean-Michel; Lowe, Richard K.
Two experiments used eye tracking to investigate a novel cueing approach for directing learner attention to low salience, high relevance aspects of a complex animation. In the first experiment, comprehension of a piano mechanism animation containing spreading-colour cues was compared with comprehension obtained with arrow cues or no cues. Eye…
When insects communicate by sound, or use acoustic cues to escape predators or to detect prey or hosts, they must in most cases localize the sound in order to perform adaptive behavioral responses. In the case of particle-velocity receivers such as the antennae of mosquitoes, directionality is no problem because such receivers are inherently directional. Insects equipped with bilateral pairs of tympanate ears could in principle make use of binaural cues for sound localization, like all other animals with two ears. However, their small size is a major problem for creating sufficiently large binaural cues, with respect both to interaural time differences (ITDs, because interaural distances are so small) and to interaural intensity differences (IIDs), since the ratio of body size to the wavelength of sound is rather unfavorable for diffractive effects. In my review, I will cover these biophysical aspects of directional hearing only briefly. Instead, I will focus on aspects of directional hearing that have received relatively little attention previously: the evolution of a pressure difference receiver, 3D-hearing, directional hearing outdoors, and directional hearing for auditory scene analysis.
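The scale problem described above is easy to quantify. Under the simplest free-field approximation (ITD = d·sin(θ)/c, ignoring diffraction around the body, which this abstract notes is itself negligible for small animals), the maximum available ITD shrinks linearly with interaural distance:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def itd_seconds(interaural_distance_m: float, azimuth_deg: float) -> float:
    """Free-field ITD approximation: path difference d*sin(theta) over c.

    A simplification: it ignores diffraction around the head/body,
    which adds to the path difference in larger animals.
    """
    return (interaural_distance_m
            * math.sin(math.radians(azimuth_deg))
            / SPEED_OF_SOUND)

# Maximum ITD (source at 90 degrees) for a ~1 cm insect vs a ~18 cm
# human-scale interaural distance (both values illustrative).
print(round(itd_seconds(0.01, 90) * 1e6, 1))  # microseconds → 29.2
print(round(itd_seconds(0.18, 90) * 1e6, 1))  # microseconds → 524.8
```

A few tens of microseconds at best, versus hundreds for a human-sized head, is precisely why tympanate insects evolved alternatives such as the pressure difference receiver discussed in the review.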
Arjona, Antonio; Escudero, Miguel; Gómez, Carlos M
The neural bases of the so-called Spatial Cueing Effect in a visuo-auditory version of the Central Cue Posner's Paradigm (CCPP) are analyzed by means of behavioral patterns (Reaction Times and Errors) and Event-Related Potentials (ERPs), namely the Contingent Negative Variation (CNV), N1, P2a, P2p, P3a, P3b and Negative Slow Wave (NSW). The present version consisted of three types of trial blocks with different validity/invalidity proportions: 50% valid - 50% invalid trials, 68% valid - 32% invalid trials and 86% valid - 14% invalid trials. Thus, ERPs can be analyzed as the proportion of valid trials per block increases. Behavioral (Reaction Times and Incorrect responses) and ERP (lateralized component of CNV, P2a, P3b and NSW) results showed a spatial cueing effect as the proportion of valid trials per block increased. Results suggest a brain activity modulation related to sensory-motor attention and working memory updating, in order to adapt to external unpredictable contingencies. Copyright © 2016 Elsevier B.V. All rights reserved.
Simon, Jonathan Z
Auditory objects, like their visual counterparts, are perceptually defined constructs, but nevertheless must arise from underlying neural circuitry. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects listening to complex auditory scenes, we review studies that demonstrate that auditory objects are indeed neurally represented in auditory cortex. The studies use neural responses obtained from different experiments in which subjects selectively listen to one of two competing auditory streams embedded in a variety of auditory scenes. The auditory streams overlap spatially and often spectrally. In particular, the studies demonstrate that selective attentional gain does not act globally on the entire auditory scene, but rather acts differentially on the separate auditory streams. This stream-based attentional gain is then used as a tool to individually analyze the different neural representations of the competing auditory streams. The neural representation of the attended stream, located in posterior auditory cortex, dominates the neural responses. Critically, when the intensities of the attended and background streams are separately varied over a wide intensity range, the neural representation of the attended speech adapts only to the intensity of that speaker, irrespective of the intensity of the background speaker. This demonstrates object-level intensity gain control in addition to the above object-level selective attentional gain. Overall, these results indicate that concurrently streaming auditory objects, even if spectrally overlapping and not resolvable at the auditory periphery, are individually neurally encoded in auditory cortex, as separate objects. Copyright © 2014 Elsevier B.V. All rights reserved.
Slevc, L Robert; Shell, Alison R
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.
Even though auditory stimuli do not directly convey information related to visual stimuli, they often improve visual detection and identification performance. Auditory stimuli often alter visual perception depending on the reliability of the sensory input, with visual and auditory information reciprocally compensating for ambiguity in the other sensory domain. Perceptual processing is characterized by hemispheric asymmetry. While the left hemisphere is more involved in linguistic processing, the right hemisphere dominates spatial processing. In this context, we hypothesized that an auditory facilitation effect would be observed in the right visual field for the target identification task, and a similar effect would be observed in the left visual field for the target localization task. In the present study, we conducted target identification and localization tasks using a dual-stream rapid serial visual presentation. When two targets are embedded in a rapid serial visual presentation stream, the target detection or discrimination performance for the second target is generally lower than for the first target; this deficit is well known as the attentional blink. Our results indicate that auditory stimuli improved target identification performance for the second target within the stream when visual stimuli were presented in the right, but not the left, visual field. In contrast, auditory stimuli improved second-target localization performance when visual stimuli were presented in the left visual field. An auditory facilitation effect was observed in perceptual processing, depending on the hemispheric specialization. Our results demonstrate a dissociation between the lateral visual hemifield in which a stimulus is projected and the kind of visual judgment that may benefit from the presentation of an auditory cue.
Bigliassi, Marcelo; Karageorghis, Costas I; Nowicky, Alexander V; Wright, Michael J; Orgs, Guido
Highly demanding cognitive-motor tasks can be negatively influenced by the presence of auditory stimuli. The human brain attempts to partially suppress the processing of potential distractors in order that motor tasks can be completed successfully. The present study sought to further understand the attentional neural systems that activate in response to potential distractors during the execution of movements. Nineteen participants (9 women and 10 men) were administered isometric ankle-dorsiflexion tasks for 10 s at a light intensity. Electroencephalography was used to assess the electrical activity in the brain, and a music excerpt was used to distract participants. Three conditions were administered: auditory distraction during the execution of movement (auditory distraction; AD), movement execution in the absence of auditory distraction (control; CO), and auditory distraction in the absence of movement (stimulus-only; SO). AD was compared with SO to identify the mechanisms underlying the attentional processing associated with attentional shifts from internal association (task-related) to external (task-unrelated) sensory cues. The results of the present study indicated that the EMG amplitude was not compromised when the auditory stimulus was administered. Accordingly, EEG activity was upregulated at 0.368 s in AD when compared to SO. Source reconstruction analysis indicated that right and central parietal regions of the cortex activated at 0.368 s in order to reduce the processing of task-irrelevant stimuli during the execution of movements. The brain mechanisms that underlie the control of potential distractors during exercise were possibly associated with the activity of the frontoparietal network.
Hancock, P.A.; Mercado, J.E.; Merlo, J.; Erp, J.B.F. van
The present experiment tested 60 individuals on a multiple screen, visual target detection task. Using a within-participant design, individuals received no-cue augmentation, an augmenting tactile cue alone, an augmenting auditory cue alone or both of the latter augmentations in combination. Results
Németh, Renáta; Háden, Gábor P; Török, Miklós; Winkler, István
By measuring event-related brain potentials (ERPs), the authors tested the sensitivity of the newborn auditory cortex to sound lateralization and to the most common cues of horizontal sound localization. Sixty-eight healthy full-term newborn infants were presented with auditory oddball sequences composed of frequent and rare noise segments in four experimental conditions. The authors tested in them the detection of deviations in the primary cues of sound lateralization (interaural time and level difference) and in actual sound source location (free-field and monaural sound presentation). ERP correlates of deviance detection were measured in two time windows. Deviations in both primary sound localization cues and the ear of stimulation elicited a significant ERP difference in the early (90 to 140 msec) time window. Deviance in actual sound source location (the free-field condition) elicited a significant response in the late (290 to 340 msec) time window. The early differential response may indicate the detection of a change in the respective auditory features. The authors suggest that the late differential response, which was only elicited by actual sound source location deviation, reflects the detection of location deviance integrating the various cues of sound source location. Although the results suggest that all of the tested binaural cues are processed by the neonatal auditory cortex, utilizing these cues for locating sound sources may require maturation and learning.
The role of body orientation in the orienting and allocation of social attention was examined using an adapted Simon paradigm. Participants categorized the facial expression of forward-facing, computer-generated human figures by pressing one of two response keys, each located left or right of the observers' body midline, while the orientation of the stimulus figure's body (trunk, arms, and legs), which was the task-irrelevant feature of interest, was manipulated (oriented towards the left or right visual hemifield) with respect to the spatial location of the required response. We found that when the orientation of the body was compatible with the required response location, responses were slower relative to when body orientation was incompatible with the response location. This reverse compatibility effect suggests that body orientation is automatically processed into a directional spatial code, but that this code is based on an integration of head and body orientation within an allocentric-based frame of reference. Moreover, we argue that this code may be derived from the motion information implied in the image of a figure when head and body orientation are incongruent. Our results have implications for understanding the nature of the information that affects the allocation of attention for social orienting.
Reches, Amit; Gutfreund, Yoram
A common visual pathway in all amniotes is the tectofugal pathway connecting the optic tectum with the forebrain. The tectofugal pathway has been suggested to be involved in tasks such as orienting and attention, tasks that may benefit from integrating information across senses. Nevertheless, previous research has characterized the tectofugal pathway as strictly visual. Here we recorded from two stations along the tectofugal pathway of the barn owl: the thalamic nucleus rotundus (nRt) and the forebrain entopallium (E). We report that neurons in E and nRt respond to auditory stimuli as well as to visual stimuli. Visual tuning to the horizontal position of the stimulus and auditory tuning to the corresponding spatial cue (interaural time difference) were generally broad, covering a large portion of the contralateral space. Responses to spatiotemporally coinciding multisensory stimuli were mostly enhanced above the responses to the single modality stimuli, whereas spatially misaligned stimuli were not. Results from inactivation experiments suggest that the auditory responses in E are of tectal origin. These findings support the notion that the tectofugal pathway is involved in multisensory processing. In addition, the findings suggest that the ascending auditory information to the forebrain is not as bottlenecked through the auditory thalamus as previously thought.
Kaganoff, Eili; Bordnick, Patrick S.; Carter, Brian Lee
Cue reactivity assessments have been widely used to assess craving and attention to cues among cigarette smokers. Cue reactivity has the potential to offer insights into treatment decisions; however, the use of cue reactivity in treatment studies has been limited. This study assessed the feasibility of using a virtual reality-based cue reactivity…
Jamet, Eric; Fernandez, Jonathan
The present study investigated whether learning how to use a web service with an interactive tutorial can be enhanced by cueing. We expected the attentional guidance provided by visual cues to facilitate the selection of information in static screen displays that corresponded to spoken explanations. Unlike most previous studies in this area, we…
Accurate auditory localization relies on neural computations based on spatial cues present in the sound waves at each ear. The values of these cues depend on the size, shape, and separation of the two ears and can therefore vary from one individual to another. As with other perceptual skills, the neural circuits involved in spatial hearing are shaped by experience during development and retain some capacity for plasticity in later life. However, the factors that enable and promote plasticity of auditory localization in the adult brain are unknown. Here we show that mature ferrets can rapidly relearn to localize sounds after having their spatial cues altered by reversibly occluding one ear, but only if they are trained to use these cues in a behaviorally relevant task, with greater and more rapid improvement occurring with more frequent training. We also found that auditory adaptation is possible in the absence of vision or error feedback. Finally, we show that this process involves a shift in sensitivity away from the abnormal auditory spatial cues to other cues that are less affected by the earplug. The mature auditory system is therefore capable of adapting to abnormal spatial information by reweighting different localization cues. These results suggest that training should facilitate acclimatization to hearing aids in the hearing impaired.
Nordahl, Rolf; Lecuyer, Anatole; Serafin, Stefania
This chapter presents an array of results on the perception of ground surfaces via multiple sensory modalities, with special attention to non-visual perceptual cues, notably those arising from audition and haptics, as well as interactions between them. It also reviews approaches to combining...
D'Imperio, Daniela; Scandola, Michele; Gobbetto, Valeria; Bulgarelli, Cristina; Salgarello, Matteo; Avesani, Renato; Moro, Valentina
Cross-modal interactions improve the processing of external stimuli, particularly when an isolated sensory modality is impaired. When information from different modalities is integrated, object recognition is facilitated probably as a result of bottom-up and top-down processes. The aim of this study was to investigate the potential effects of cross-modal stimulation in a case of simultanagnosia. We report a detailed analysis of clinical symptoms and an 18F-fluorodeoxyglucose (FDG) brain positron emission tomography/computed tomography (PET/CT) study of a patient affected by Balint's syndrome, a rare and invasive visual-spatial disorder following bilateral parieto-occipital lesions. An experiment was conducted to investigate the effects of visual and nonvisual cues on performance in tasks involving the recognition of overlapping pictures. Four modalities of sensory cues were used: visual, tactile, olfactory, and auditory. Data from neuropsychological tests showed the presence of ocular apraxia, optic ataxia, and simultanagnosia. The results of the experiment indicate a positive effect of the cues on the recognition of overlapping pictures, not only in the identification of the congruent valid-cued stimulus (target) but also in the identification of the other, noncued stimuli. All the sensory modalities analyzed (except the auditory stimulus) were efficacious in terms of increasing visual recognition. Cross-modal integration improved the patient's ability to recognize overlapping figures. However, while in the visual unimodal modality both bottom-up (priming, familiarity effect, disengagement of attention) and top-down processes (mental representation and short-term memory, the endogenous orientation of attention) are involved, in the cross-modal integration it is semantic representations that mainly activate visual recognition processes. These results are potentially useful for the design of rehabilitation training for attentional and visual-perceptual deficits.
Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
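The idea of combining cues "on the basis of their distributional statistics" has a standard closed-form baseline that the GMM simulations build on: maximum-likelihood (inverse-variance-weighted) cue combination, in which each cue is weighted by its reliability. The sketch below shows that baseline only, not the authors' GMM implementation:

```python
def combine_cues(estimates, variances):
    """Maximum-likelihood cue combination via inverse-variance weighting.

    Each cue contributes in proportion to its reliability (1/variance);
    the combined variance is lower than that of any single cue.
    """
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined_estimate = sum(w * e for w, e in zip(weights, estimates))
    combined_variance = 1.0 / total
    return combined_estimate, combined_variance, weights

# Hypothetical example: a noisy auditory cue and a more reliable visual
# cue giving conflicting estimates; the percept is pulled toward vision.
est, var, w = combine_cues([10.0, 12.0], [4.0, 1.0])
print(round(est, 2), round(var, 2))  # → 11.6 0.8
```

The developmental question the paper addresses is how a learner acquires those weights in the first place; in the GMM framing, the category structure and cue variances are estimated jointly from the input distribution rather than assumed.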
Georg F Meyer
Flight simulators that provide visual, auditory, and kinematic (physical motion) cues are increasingly used for pilot training. We have previously shown that kinematic cues, but not auditory cues, representing aircraft motion improve target tracking performance for novice 'pilots' in a simulated flying task (Meyer et al., IMRF 2010). Here we explore the effect of learning on task performance. Our subjects were first tested on a target tracking task in a helicopter flight simulation. They were then trained in a simulator-simulator, which provided full audio and simplified visuals but no kinematic signals, to test whether learning of auditory cues is possible. After training we evaluated flight performance in the full simulator again. We show that after 2 hours of training, auditory cues are used by our participants as efficiently as kinematic cues to improve target tracking performance. The performance improvement relative to a condition where no audio signals are presented is robust even if the sound environment used during training is replaced by a very different audio signal that is modulated in amplitude and pitch in the same way as the training signal. This shows that training is not signal specific but that our participants learn to extract transferable information on sound pitch and amplitude to improve their flying performance.
The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear moulds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what constitutes the teacher signal for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear moulds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear moulds demonstrates that a relatively short period of training involving sensory-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide a spatial code but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.
Chris P Maguire
Environments vary stochastically, and animals need to behave in ways that best fit the conditions in which they find themselves. The social environment is particularly variable, and responding appropriately to it can be vital for an animal's success. However, cues of social environment are not always reliable, and animals may need to balance accuracy against the risk of failing to respond if local conditions or interfering signals prevent them detecting a cue. Recent work has shown that many male Drosophila fruit flies respond to the presence of rival males, and that these responses increase their success in acquiring mates and fathering offspring. In Drosophila melanogaster males detect rivals using auditory, tactile and olfactory cues. However, males fail to respond to rivals if any two of these senses are not functioning: a single cue is not enough to produce a response. Here we examined cue use in the detection of rival males in a distantly related Drosophila species, D. pseudoobscura, where auditory, olfactory, tactile and visual cues were manipulated to assess the importance of each sensory cue singly and in combination. In contrast to D. melanogaster, male D. pseudoobscura require intact olfactory and tactile cues to respond to rivals. Visual cues were not important for detecting rival D. pseudoobscura, while results on auditory cues appeared puzzling. This difference in cue use in two species in the same genus suggests that cue use is evolutionarily labile, and may evolve in response to ecological or life history differences between species.
Bouak, Fethi; Kline, Julianne; Cheung, Bob
Tactile cueing has been explored primarily for the detection of linear motion such as vertical, longitudinal, and lateral translation in the laboratory and in flight. The usefulness of tactile cues in detecting roll and pitch motion has not been fully investigated. Twelve subjects (21-56 yr) were exposed to controlled pitch and roll motion generated by a motion platform with and without tactile cueing. The tactile system consists of a torso vest with 24 electromechanical tactors, plus a tactor on each shoulder and one under each thigh harness. While devoid of visual and auditory cues, each subject performed three tasks: 1) indicate motion perception without tactile cues (C1); 2) return to vertical from an offset angle (C2); and 3) maintain straight and level while the platform was continuously in motion (C3). Our results indicated that in the absence of visual and auditory cues, subjects reported that the tactile system was useful in the execution of C2 and C3 maneuvers. Specifically, the presence of tactile cues had a significant impact on accuracy, duration, and perceived workload. In addition, tactile cueing also increased the accuracy of returning to neutral from an offset position and of maintaining the neutral position while the platform was in continuous motion. Tactile cueing appears to be effective in detecting roll and pitch motion and has the potential to reduce the workload and risks of high-stress and time-sensitive air operations.
Ivkovic, Vladimir; Fisher, Stanley; Paloski, William H
Visual and auditory cueing improve functional performance in Parkinson's disease (PD) patients. However, audiovisual processing shares many cognitive resources used for attention-dependent tasks such as communication, spatial orientation, and balance. Conversely, tactile cues (TC) may be processed faster, with minimal attentional demand, and may be a more efficient means of modulating motor-cognitive performance. In this study we aimed to investigate the efficacy and limitations of TC for modulating simple (heel tapping) and more complex (walking) motor tasks (1) over a range of cueing intervals and (2) with/without a secondary motor task (holding a tray with cups of water). Ten PD patients (71 ± 9 years) and 10 healthy controls (69 ± 7 years) participated in the study. TCs were delivered through a smartphone attached to subjects' dominant arm and were controlled by a custom-developed Android application. PD patients and healthy controls were able to use TC to modulate heel tapping (F(3.8,1866.1) = 1008.1, p … usage for movement modulation and motor-cognitive integration in PD patients. The smartphone TC application was validated as a user-friendly movement modulation aid. Copyright © 2015 Elsevier Ltd. All rights reserved.
Attention capture by potentially relevant environmental stimuli is critical for human survival, yet it varies considerably among individuals. A large series of studies has suggested that attention capture may depend on the cognitive balance between maintenance and manipulation of mental representations and the flexible switch between goal-directed representations and potentially relevant stimuli outside the focus of attention; a balance that seems modulated by a prefronto-striatal dopamine pathway. Here, we examined inter-individual differences in the cognitive control of attention by studying the effects of two single nucleotide polymorphisms regulating dopamine at the prefrontal cortex and the striatum (i.e., COMT Val108/158Met and ANKK1/DRD2 TaqIA) on stimulus-driven attention capture. Healthy adult participants (N = 40) were assigned to different groups according to the combination of the polymorphisms COMT Val108/158Met and ANKK1/DRD2 TaqIA, and were instructed to perform a well-established distraction protocol. Performance in individuals with a balance between prefrontal dopamine availability and striatal receptor density was slowed by the occurrence of unexpected distracting events, while those with a rather unbalanced dopamine activity were able to maintain task performance with no time delay, yet at the expense of slightly lower accuracy. This advantage, associated with their distinct genetic profiles, was paralleled by an electrophysiological mechanism of phase-resetting of gamma neural oscillations to the novel, distracting events. Taken together, the current results suggest that the epistatic interaction between COMT Val108/158Met and ANKK1/DRD2 TaqIA genetic polymorphisms lies at the basis of stimulus-driven attention capture.
Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens
…this is due to incongruent auditory cues between the recording and playback room during sound reproduction, or to an expectation effect from the visual impression of the room. This study investigated the influence of a priori acoustic and visual knowledge of the playback room on sound externalization … between recording and playback room was found to be detrimental to virtual sound externalization. The auditory modality governed externalization in terms of perceived distance when cues from the recording and playback room were incongruent, whereby the auditory impression of the room was more critical … the more reverberant the listening environment was. While the visual impression of the playback room did not affect perceived distance, visual cues helped resolve localization ambiguities and improved compactness perception. …
Elbert, Sarah; Dijkstra, Arie
Persuasive health information can be presented through an auditory channel. Curiously enough, the effect of voice cues in health persuasion has hardly been studied. Research concerning visual persuasive messages showed that self-affirmation results in a more open-minded reaction to threatening …
Barry, Ryan A; Graf Estes, Katharine; Rivera, Susan M
Previous research has shown that infants can learn from social cues. But is a social cue more effective at directing learning than a non-social cue? This study investigated whether 9-month-old infants (N = 55) could learn a visual statistical regularity in the presence of a distracting visual sequence when attention was directed by either a social cue (a person) or a non-social cue (a rectangle). The results show that both social and non-social cues can guide infants' attention to a visual shape sequence (and away from a distracting sequence). The social cue more effectively directed attention than the non-social cue during the familiarization phase, but the social cue did not result in significantly stronger learning than the non-social cue. The findings suggest that domain general attention mechanisms allow for the comparable learning seen in both conditions.
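The visual statistical regularity at stake in studies like this one is often described in terms of transitional probabilities between adjacent items. The sketch below computes them for a toy sequence; the sequence, shape labels, and function name are illustrative and not taken from the study.

```python
from collections import Counter

def transitional_probabilities(seq):
    """P(next | current) for each adjacent pair in a sequence."""
    pair_counts = Counter(zip(seq, seq[1:]))
    first_counts = Counter(seq[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# 'A'..'D' stand for shapes; the pair A-B recurs as a unit, so the
# within-pair transition A->B is more predictable than B->A or B->C.
seq = list("ABABCDAB")
tp = transitional_probabilities(seq)
```

Here `tp[('A', 'B')]` comes out higher than the cross-pair transitions, which is the kind of statistical structure a learner can exploit to segment the stream into units.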
Isbell, Elif; Wray, Amanda Hampton; Neville, Helen J.
Selective attention, the ability to enhance the processing of particular input while suppressing the information from other concurrent sources, has been postulated to be a foundational skill for learning and academic achievement. The neural mechanisms of this foundational ability are both vulnerable and enhanceable in children from lower…
Higgins, Nathan C.; Storace, Douglas A.; Escabí, Monty A.
Accurate orientation to sound under challenging conditions requires auditory cortex, but it is unclear how spatial attributes of the auditory scene are represented at this level. Current organization schemes follow a functional division whereby dorsal and ventral auditory cortices specialize to encode spatial and object features of a sound source, respectively. However, few studies have examined spatial cue sensitivities in ventral cortices to support or reject such schemes. Here, Fourier optical imaging was used to quantify best frequency responses and corresponding gradient organization in primary (A1), anterior, posterior, ventral (VAF), and suprarhinal (SRAF) auditory fields of the rat. Spike rate sensitivities to binaural interaural level difference (ILD) and average binaural level cues were probed in A1 and two ventral cortices, VAF and SRAF. Continuous distributions of best ILDs and ILD tuning metrics were observed in all cortices, suggesting this horizontal position cue is well covered. VAF and caudal SRAF in the right cerebral hemisphere responded maximally to midline horizontal position cues, whereas A1 and rostral SRAF responded maximally to ILD cues favoring more eccentric positions in the contralateral sound hemifield. SRAF had the highest incidence of binaural facilitation for ILD cues corresponding to midline positions, supporting current theories that auditory cortices have specialized and hierarchical functional organization. PMID:20980610
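The interaural level difference cue probed in this study is simply the ratio of signal levels at the two ears, expressed in decibels. A minimal sketch (function name and signal values are illustrative, not from the paper):

```python
import numpy as np

def interaural_level_difference(left, right):
    """ILD in dB: positive when the left-ear signal is more intense."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(left) / rms(right))

# A source off to the left: the head shadow attenuates the right-ear
# signal (an amplitude factor of 0.5 corresponds to about a 6 dB ILD).
t = np.linspace(0.0, 0.1, 4410)
left = np.sin(2.0 * np.pi * 1000.0 * t)
right = 0.5 * left
ild = interaural_level_difference(left, right)
```

An ILD near 0 dB indicates a midline source, while larger magnitudes favor more eccentric positions in the corresponding hemifield, which is the axis along which the VAF and SRAF tuning differences above were measured.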
Theeuwes, J.; van der Burg, E.
In the present study, participants searched for an odd-man-out target within the shape dimension (either a diamond or a circle) while a colour distractor singleton could be present. In some conditions, the identity of the target singleton for the upcoming trial was cued in advance by either a word …
Philosophy 2014 UNIFORMED SERVICES UNIVERSITY, SCHOOL OF MEDICINE GRADUATE PROGRAMS Graduate Education Office (A 1045), 4301 Jones Bridge Road…