Salo, S; Lang, A H; Aaltonen, O; Lertola, K; Kärki, T
A cortical cognitive auditory evoked potential, mismatch negativity (MMN), reflects automatic discrimination and echoic memory functions of the auditory system. For this study, we examined whether this potential is dependent on the stimulus intensity. The MMN potentials were recorded from 10 subjects with normal hearing using a sine tone of 1000 Hz as the standard stimulus and a sine tone of 1141 Hz as the deviant stimulus, with probabilities of 90% and 10%, respectively. The intensities were 40, 50, 60, 70, and 80 dB HL for both standard and deviant stimuli in separate blocks. Stimulus intensity had a statistically significant effect on the mean amplitude, rise time parameter, and onset latency of the MMN. Automatic auditory discrimination seems to be dependent on the sound pressure level of the stimuli.
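The oddball sequence described above (90% standards at 1000 Hz, 10% deviants at 1141 Hz) can be sketched in a few lines. This is a minimal illustration, not the authors' presentation software; the block length and the convention that deviants never occur back-to-back are assumptions not stated in the abstract.

```python
import numpy as np

# Sketch of one oddball block: 1000 Hz standards (90%), 1141 Hz deviants (10%).
# Trial count and the no-repeat constraint on deviants are assumed.
STANDARD_HZ, DEVIANT_HZ = 1000, 1141

def make_oddball_block(n_trials=500, p_deviant=0.10, seed=0):
    """Return a list of tone frequencies with ~10% deviants, never two in a row."""
    rng = np.random.default_rng(seed)
    freqs, prev_deviant = [], False
    for _ in range(n_trials):
        is_deviant = (not prev_deviant) and rng.random() < p_deviant
        freqs.append(DEVIANT_HZ if is_deviant else STANDARD_HZ)
        prev_deviant = is_deviant
    return freqs

block = make_oddball_block()
```

In the study, such blocks would be repeated at each intensity (40-80 dB HL); note that the no-repeat constraint makes the realized deviant rate slightly below the nominal 10%.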
Dinsmoor, James A.
In his effort to distinguish operant from respondent conditioning, Skinner stressed the lack of an eliciting stimulus and rejected the prevailing stereotype of Pavlovian “stimulus-response” psychology. But control by antecedent stimuli, whether classified as conditional or discriminative, is ubiquitous in the natural setting. With both respondent and operant behavior, symmetrical gradients of generalization along unrelated dimensions may be obtained following differential reinforcement in the...
Robinson, Christopher W.; Sloutsky, Vladimir M.
Two experiments examined the effects of multimodal presentation and stimulus familiarity on auditory and visual processing. In Experiment 1, 10-month-olds were habituated to either an auditory stimulus, a visual stimulus, or an auditory-visual multimodal stimulus. Processing time was assessed during the habituation phase, and discrimination of…
Ali, M. R.; Amir, T.
Investigated the relationship between critical flicker fusion (CFF) thresholds and five personality characteristics (alienation, social nonconformity, discomfort, expression, and defensiveness) under three auditory stimulus conditions (quiet, noise, meaningful verbal stimuli). Results from 60 college students revealed that auditory stimulation and…
Tremblay, Kelly L; Shahin, Antoine J; Picton, Terence; Ross, Bernhard
Auditory training alters neural activity in humans, but it is unknown if these alterations are specific to the trained cue. The objective of this study was to determine if enhanced cortical activity was specific to the trained voice-onset-time (VOT) stimuli 'mba' and 'ba', or whether it generalized to the control stimulus 'a' that did not contain the trained cue. Thirteen adults were trained to identify a 10 ms VOT cue that differentiated the two experimental stimuli. We recorded event-related potentials (ERPs) evoked by three different speech sounds, 'ba', 'mba' and 'a', before and after six days of VOT training. The P2 wave increased in amplitude after training for both control and experimental stimuli, but the effects differed between stimulus conditions. Whereas the effects of training on P2 amplitude were greatest in the left hemisphere for the trained stimuli, enhanced P2 activity was seen in both hemispheres for the control stimulus. In addition, subjects with enhanced pre-training N1 amplitudes were more responsive to training and showed the most perceptual improvement. Both stimulus-specific and general effects of training can be measured in humans. An individual's pre-training N1 response might predict their capacity for improvement. N1 and P2 responses can be used to examine physiological correlates of human auditory perceptual learning.
Tani, Keisuke; Jono, Yasutomo; Nomura, Yoshifumi; Chujo, Yuta; Hiraoka, Koichi
This study investigated the effect of a monaural auditory stimulus on hand selection when reaching. Healthy right-handed participants were asked to reach to a visual target and were free to use either the right or left hand. A visual target appeared at one of 11 positions in the visual field between -25 and 25 degrees of the horizontal visual angle. An auditory stimulus was given either in the left or right ear 100 ms after the presentation of the visual target, or no auditory stimulus was given. An auditory stimulus in the right ear increased right hand selection, and one in the left ear slightly increased left hand selection when reaching to a target around the midline of the visual field. The horizontal visual angle at which the probabilities of right hand selection and left hand selection were equal shifted leftward when an auditory stimulus was given in the right ear, but did not shift in either direction when an auditory stimulus was given in the left ear. The right-ear-dominant auditory stimulus effect on hand selection indicates hemispheric asymmetry of cortical activity for hand selection.
Donohue, Sarah E; Appelbaum, Lawrence G; Park, Christina J; Roberts, Kenneth C; Woldorff, Marty G
Cross-modal processing depends strongly on the compatibility between different sensory inputs, on the relative timing of their arrival at brain processing components, and on how attention is allocated. In this behavioral study, we employed a cross-modal audio-visual Stroop task in which we manipulated the within-trial stimulus-onset-asynchronies (SOAs) of the stimulus-component inputs, the grouping of the SOAs (blocked vs. random), the attended modality (auditory or visual), and the congruency of the Stroop color-word stimuli (congruent, incongruent, neutral) to assess how these factors interact within a multisensory context. One main result was that visual distractors produced larger incongruency effects on auditory targets than vice versa. Moreover, as revealed by both overall shorter response times (RTs) and relative shifts in the psychometric incongruency-effect functions, visual-information processing was faster and produced stronger and longer-lasting incongruency effects than did auditory. When attending to either modality, stimulus incongruency from the other modality interacted with SOA, yielding larger effects when the irrelevant distractor occurred prior to the attended target, but no interaction with SOA grouping. Finally, relative to neutral stimuli, and across the wide range of SOAs employed, congruency led to substantially more behavioral facilitation than incongruency led to interference, in contrast to findings that within-modality stimulus-compatibility effects tend to be more evenly split between facilitation and interference. In sum, the present findings reveal several key characteristics of how we process the stimulus compatibility of cross-modal sensory inputs, reflecting stimulus processing patterns that are critical for successfully navigating our complex multisensory world.
Pineda, Gustavo; Atehortúa, Angélica; Iregui, Marcela; García-Arteaga, Juan D.; Romero, Eduardo
External auditory cues stimulate motor-related areas of the brain, activating motor pathways parallel to the basal ganglia circuits and providing a temporal pattern for gait. In effect, patients may re-learn motor skills mediated by compensatory neuroplasticity mechanisms. However, long-term functional gains depend on the nature of the pathology, follow-up is usually limited, and reinforcement by healthcare professionals is crucial. To cope with these challenges, several studies and device implementations provide auditory or visual stimulation to improve the Parkinsonian gait pattern, inside and outside clinical scenarios. The current work presents a semiautomated strategy for spatio-temporal feature extraction to study the relations between auditory temporal stimulation and the spatio-temporal gait response. A protocol for auditory stimulation was built to evaluate how well the strategy integrates into clinical practice. The method was evaluated in a cross-sectional measurement with an exploratory group of people with Parkinson's disease (n = 12, in stages 1, 2 and 3) and control subjects (n = 6). The results showed a strong linear relation between auditory stimulation and cadence response in control subjects (R = 0.98 +/- 0.008) and PD subjects in stage 2 (R = 0.95 +/- 0.03) and stage 3 (R = 0.89 +/- 0.05). Normalized step length showed a variable response between low and high gait velocity (0.2 < R < 0.97). The correlation between normalized mean velocity and stimulus was strong in PD stage 2 (R > 0.96), PD stage 3 (R > 0.84) and controls (R > 0.91) for all experimental conditions. Among participants, the largest variation from baseline was found in PD subjects in stage 3 (53.61 +/- 39.2 steps/min, 0.12 +/- 0.06 in step length and 0.33 +/- 0.16 in mean velocity); in this group these values were higher than their own baseline. These variations are related to the direct effect of metronome frequency on cadence and velocity. The variation of step length involves different regulation strategies and
Campolattaro, Matthew M.; Halverson, Hunter E.; Freeman, John H.
The neural pathways that convey conditioned stimulus (CS) information to the cerebellum during eyeblink conditioning have not been fully delineated. It is well established that pontine mossy fiber inputs to the cerebellum convey CS-related stimulation for different sensory modalities (e.g., auditory, visual, tactile). Less is known about the…
Schwent, V. L.; Hillyard, S. A.; Galambos, R.
Enhancement of the auditory vertex potentials with selective attention to dichotically presented tone pips was found to be critically sensitive to the range of inter-stimulus intervals in use. Only at the shortest intervals was a clear-cut enhancement of this component observed for stimuli delivered to the attended ear.
Salisbury, Dean F
Deviations from repetitive auditory stimuli evoke a mismatch negativity (MMN). Counter-intuitively, omissions of repetitive stimuli do not. Violations of patterns reflecting complex rules also evoke MMN. To detect a MMN to missing stimuli, we developed an auditory gestalt task using one stimulus. Groups of 6 pips (50 msec duration, 330 msec stimulus onset asynchrony (SOA), 400 trials) were presented with an inter-trial interval (ITI) of 750 msec while subjects (n=16) watched a silent video. Occasional deviant groups had missing 4th or 6th tones (50 trials each). Missing stimuli evoked a MMN through violation of a gestalt grouping rule. Homogeneous stimulus streams appear to weight omissions differently than strongly patterned streams do.
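The timing of the gestalt paradigm above (groups of six pips at 330 ms SOA, 750 ms ITI, deviant groups omitting the 4th or 6th pip) is concrete enough to sketch as a stimulus schedule. This is an illustrative reconstruction, not the authors' presentation code; the random ordering of deviant groups is assumed.

```python
import numpy as np

# Sketch of the 'missing stimulus' gestalt paradigm: 400 groups of six pips,
# 330 ms SOA within a group, 750 ms between groups; 50 groups omit the 4th
# pip and 50 omit the 6th. Trial ordering is assumed random.
SOA, ITI, TONES_PER_GROUP = 0.330, 0.750, 6

def group_onsets(group_start, missing=None):
    """Onset times (s) of the pips in one group; 'missing' is 1-based."""
    return [group_start + i * SOA
            for i in range(TONES_PER_GROUP)
            if missing is None or i + 1 != missing]

def build_sequence(n_standard=300, n_miss4=50, n_miss6=50, seed=0):
    """Pip onset times for a whole run of standard and deviant groups."""
    rng = np.random.default_rng(seed)
    kinds = [None] * n_standard + [4] * n_miss4 + [6] * n_miss6
    rng.shuffle(kinds)
    onsets, start = [], 0.0
    for k in kinds:
        onsets.extend(group_onsets(start, missing=k))
        start += (TONES_PER_GROUP - 1) * SOA + ITI  # start of next group
    return onsets

seq = build_sequence()
```

A deviant group keeps the same temporal grid as a standard group, so the omitted pip leaves a silent gap at a fully predictable moment, which is what makes the omission detectable as a pattern violation.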
Kayser, Christoph; Wilson, Caroline; Safaai, Houman; Sakata, Shuzo; Panzeri, Stefano
The phase of low-frequency network activity in the auditory cortex captures changes in neural excitability, entrains to the temporal structure of natural sounds, and correlates with the perceptual performance in acoustic tasks. Although these observations suggest a causal link between network rhythms and perception, it remains unknown how precisely they affect the processes by which neural populations encode sounds. We addressed this question by analyzing neural responses in the auditory cortex of anesthetized rats using stimulus-response models. These models included a parametric dependence on the phase of local field potential rhythms in both stimulus-unrelated background activity and the stimulus-response transfer function. We found that phase-dependent models better reproduced the observed responses than static models, during both stimulation with a series of natural sounds and epochs of silence. This was attributable to two factors: (1) phase-dependent variations in background firing (most prominent for delta; 1-4 Hz); and (2) modulations of response gain that rhythmically amplify and attenuate the responses at specific phases of the rhythm (prominent for frequencies between 2 and 12 Hz). These results provide a quantitative characterization of how slow auditory cortical rhythms shape sound encoding and suggest a differential contribution of network activity at different timescales. In addition, they highlight a putative mechanism that may implement the selective amplification of appropriately timed sound tokens relative to the phase of rhythmic auditory cortex activity. Copyright © 2015 Kayser et al.
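The two factors identified above (phase-dependent background firing and phase-dependent response gain) can be captured in a toy version of a phase-dependent stimulus-response model. This is my sketch of the general functional form, with invented parameters; it is not the authors' model-fitting code.

```python
import numpy as np

# Toy phase-dependent stimulus-response model: both the background rate b(phi)
# and the stimulus gain g(phi) depend on the phase phi of a slow LFP rhythm.
# A static model is the special case b1 = g1 = 0.
def phase_dependent_rate(stim_drive, phi, b0=5.0, b1=2.0, g0=1.0, g1=0.5):
    background = b0 + b1 * np.cos(phi)   # phase-modulated background firing
    gain = g0 + g1 * np.cos(phi)         # phase-modulated response gain
    return np.maximum(background + gain * stim_drive, 0.0)  # rate >= 0

t = np.linspace(0.0, 2.0, 2000)
phi = 2 * np.pi * 2.0 * t                         # 2 Hz (delta-band) phase
stim = 10.0 * np.sin(2 * np.pi * 8.0 * t) ** 2    # arbitrary stimulus drive
rates = phase_dependent_rate(stim, phi)
```

The same stimulus drive thus yields an amplified response at one phase of the rhythm and an attenuated one half a cycle later, which is the gain-modulation effect the abstract describes.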
Deacon, D; Nousak, J M; Pilotti, M; Ritter, W; Yang, C M
The effects of global and feature-specific probabilities of auditory stimuli were manipulated to determine their effects on the mismatch negativity (MMN) of the human event-related potential. The question of interest was whether the automatic comparison of stimuli indexed by the MMN was performed on representations of individual stimulus features or on gestalt representations of their combined attributes. The design of the study was such that both feature and gestalt representations could have been available to the comparator mechanism generating the MMN. The data were consistent with the interpretation that the MMN was generated following an analysis of stimulus features.
Javitt, D C; Steinschneider, M; Schroeder, C E; Vaughan, H G; Arezzo, J C
Mismatch negativity (MMN) is a cognitive, auditory event-related potential (AEP) that reflects preattentive detection of stimulus deviance and indexes the operation of the auditory sensory ('echoic') memory system. MMN is elicited most commonly in an auditory oddball paradigm in which a sequence of repetitive standard stimuli is interrupted infrequently and unexpectedly by a physically deviant 'oddball' stimulus. Electro- and magnetoencephalographic dipole mapping studies have localized the generators of MMN to supratemporal auditory cortex in the vicinity of Heschl's gyrus, but have not determined the degree to which MMN reflects activation within primary auditory cortex (AI) itself. The present study, using moveable multichannel electrodes inserted acutely into the superior temporal plane, demonstrates a significant contribution of AI to scalp-recorded MMN in the monkey, as reflected by greater response of AI to loud or soft clicks presented as deviants than to the same stimuli presented as repetitive standards. The MMN-like activity was localized primarily to supragranular laminae within AI. Thus, standard and deviant stimuli elicited similar degrees of initial, thalamocortical excitation. In contrast, responses within supragranular cortex were significantly larger to deviant stimuli than to standards. No MMN-like activity was detected in a limited number of passes that penetrated anterior and medial to AI. AI plays a well established role in the decoding of the acoustic properties of individual stimuli. The present study demonstrates that primary auditory cortex also plays an important role in processing the relationships between stimuli, and thus participates in cognitive, as well as purely sensory, processing of auditory information.
Roberts, T P; Ferrari, P; Stufflebeam, S M; Poeppel, D
This review will focus on investigations of the auditory evoked neuromagnetic field component, the M100, detectable in the magnetoencephalogram recorded during presentation of auditory stimuli, approximately 100 milliseconds after stimulus onset. In particular, the dependence of M100 latency on attributes of the stimulus, such as intensity, pitch and timbre, will be discussed, along with evidence relating M100 latency observations to perceptual features of the stimuli. Comparison with investigation of the analogous electrical potential component, the N1, will be made. Parametric development of stimuli from pure tones through complex tones to speech elements will be traced, allowing the influence of spectral pitch, virtual pitch and perceptual categorization to be delineated and suggesting implications for the role of such latency observations in the study of speech processing. The final section will deal with potential clinical applications offered by M100 latency measurements, as objective indices of normal and abnormal cortical processing.
Salisbury, Dean F
Deviations from repetitive auditory stimuli evoke a mismatch negativity (MMN). Counterintuitively, omissions of repetitive stimuli do not. Violations of patterns reflecting complex rules also evoke MMN. To detect a MMN to missing stimuli, we developed an auditory gestalt task using one stimulus. Groups of six pips (50 ms duration, 330 ms stimulus onset asynchrony [SOA], 400 trials) were presented with an intertrial interval (ITI) of 750 ms while subjects (n=16) watched a silent video. Occasional deviant groups had missing 4th or 6th tones (50 trials each). Missing stimuli evoked a MMN through violation of a gestalt grouping rule. Patterned stimuli appear more sensitive to omissions and ITI than homogeneous streams. Copyright © 2012 Society for Psychophysiological Research.
Sundberg, Mark L.
The importance of the intraverbal relation is missed in most theories of language. Skinner (1957) attributes this to traditional semantic theories of meaning that focus on the nonverbal referents of words and neglect verbal stimuli as separate sources of control for linguistic behavior. An analysis of verbal stimulus control is presented, along…
Brino, Ana Leda F.; Barros, Romariz S.; Galvao, Ol; Garotti, M.; Da Cruz, Ilara R. N.; Santos, Jose R.; Dube, William V.; McIlvane, William J.
This paper reports use of sample stimulus control shaping procedures to teach arbitrary matching-to-sample to 2 capuchin monkeys ("Cebus apella"). The procedures started with identity matching-to-sample. During shaping, stimulus features of the sample were altered gradually, rendering samples and comparisons increasingly physically dissimilar. The…
Park, Seoung Hoon; Kim, Seonjin; Kwon, MinHyuk; Christou, Evangelos A
Visual and auditory information are critical for perception and enhance the ability of an individual to respond accurately to a stimulus. However, it is unknown whether visual and auditory information contribute differentially to identifying the direction and rotational motion of a stimulus. The purpose of this study was to determine the ability of an individual to accurately predict the direction and rotational motion of the stimulus based on visual and auditory information. In this study, we recruited 9 expert table-tennis players and used the table-tennis service as our experimental model. Participants watched recorded services with different levels of visual and auditory information. The goal was to anticipate the direction of the service (left or right) and the rotational motion of the service (topspin, sidespin, or cut). We recorded their responses and quantified the following outcomes: (i) directional accuracy and (ii) rotational motion accuracy. Response accuracy was the number of accurate predictions relative to the total number of trials. The ability of the participants to predict the direction of the service accurately increased with additional visual information but not with auditory information. In contrast, the ability of the participants to predict the rotational motion of the service accurately increased with the addition of auditory information to visual information but not with additional visual information alone. In conclusion, these findings demonstrate that visual information enhances the ability of an individual to accurately predict the direction of the stimulus, whereas additional auditory information enhances the ability of an individual to accurately predict the rotational motion of the stimulus.
van Laarhoven, Thijs; Stekelenburg, Jeroen J; Vroomen, Jean
A rare omission of a sound that is predictable by anticipatory visual information induces an early negative omission response (oN1) in the EEG during the period of silence where the sound was expected. It was previously suggested that the oN1 was primarily driven by the identity of the anticipated sound. Here, we examined the role of temporal prediction in conjunction with identity prediction of the anticipated sound in the evocation of the auditory oN1. With incongruent audiovisual stimuli (a video of a handclap that is consistently combined with the sound of a car horn) we demonstrate in Experiment 1 that a natural match in identity between the visual and auditory stimulus is not required for inducing the oN1, and that the perceptual system can adapt predictions to unnatural stimulus events. In Experiment 2 we varied either the auditory onset (relative to the visual onset) or the identity of the sound across trials in order to hamper temporal and identity predictions. Relative to the natural stimulus with correct auditory timing and matching audiovisual identity, the oN1 was abolished when either the timing or the identity of the sound could not be predicted reliably from the video. Our study demonstrates the flexibility of the perceptual system in predictive processing (Experiment 1) and also shows that precise predictions of timing and content are both essential elements for inducing an oN1 (Experiment 2). Copyright © 2017 Elsevier B.V. All rights reserved.
Rudolph, Erica D; Ells, Emma M L; Campbell, Debra J; Abriel, Shelagh C; Tibbo, Philip G; Salisbury, Dean F; Fisher, Derek J
The mismatch negativity (MMN) is an EEG-derived event-related potential (ERP) elicited by any violation of a predicted auditory 'rule', regardless of whether one is attending to the stimuli, and is thought to reflect updating of the stimulus context. Chronic schizophrenia patients exhibit robust MMN deficits, while MMN reduction in first-episode and early phase psychosis is significantly less consistent. Traditional two-tone "oddball" MMN measures of sensory information processing may be considered too simple for use in early phase psychosis, in which pathology has not progressed fully, and a paradigm that probes higher-order processes may be more appropriate for elucidating auditory change detection deficits. This study investigated whether MMN deficits could be detected in early phase psychosis (EP) patients using an abstract 'missing stimulus' pattern paradigm (Salisbury, 2012). The stimuli were 400 groups of six tones (1000 Hz, 50 ms duration, 330 ms stimulus onset asynchrony), which were presented with an inter-trial interval of 750 ms. Occasionally a group contained a deviant, meaning that it was missing either the 4th or 6th tone (50 trials each). EEG recordings of 13 EP patients (≤5-year duration of illness) and 15 healthy controls (HC) were collected. Patients and controls did not significantly differ on age or years of education. Analyses of MMN amplitudes elicited by missing stimuli revealed amplitude reductions in EP patients, suggesting that these deficits are present very early in the progression of the illness. While there were no correlations between MMN measures and measures such as duration of illness, medication dosage or age, MMN amplitude reductions were correlated with positive symptomatology (i.e. auditory hallucinations). These findings suggest that MMNs elicited by the 'missing stimulus' paradigm are impaired in psychosis patients early in the progression of illness and that previously reported MMN-indexed deficits related to auditory
Fear is one of the most potent emotional experiences and is an adaptive component of the response to potentially threatening stimuli. On the other hand, too much or inappropriate fear accounts for many common psychiatric problems. Cumulative evidence suggests that the amygdala plays a central role in the acquisition, storage and expression of fear memory. Here, we developed an inducible striatal neuron ablation system in transgenic mice. The ablation of striatal neurons in the adult brain hardly affected auditory fear learning under the standard condition, in agreement with previous studies. When conditioned with a low-intensity unconditioned stimulus, however, the formation of long-term fear memory, but not short-term memory, was impaired in striatal neuron-ablated mice. Consistently, the ablation of striatal neurons 24 h after conditioning with the low-intensity unconditioned stimulus, when the long-term fear memory was formed, diminished the retention of the long-term memory. Our results reveal a novel form of auditory fear memory that depends on striatal neurons at low unconditioned stimulus intensity.
BACKGROUND: Patients with cervical dystonia (CD) present with impaired performance of voluntary neck movements, which are usually slow and limited. We hypothesized that such abnormality could involve defective preparation for task execution. Therefore, we examined motor preparation in CD patients using the StartReact method. In this test, a startling auditory stimulus (SAS) is delivered unexpectedly at the time of the imperative signal (IS) in a reaction time task to cause faster execution of the prepared motor programme. We expected that CD patients would show an abnormal StartReact phenomenon. METHODS: Fifteen CD patients and 15 age-matched control subjects (CS) were asked to perform a rotational movement (RM) to either side as quickly as possible immediately after IS perception (a low-intensity electrical stimulus to the II finger). In randomly interspersed test trials (25%), a 130 dB SAS was delivered simultaneously with the IS. We recorded RMs in the horizontal plane with a high-speed video camera (2.38 ms per frame) in synchronization with the IS. The RM kinematic parameters (latency, velocity, duration and amplitude) were analyzed using video-editing software and a screen protractor. Patients were asked to rate the difficulty of their RMs on a numerical rating scale. RESULTS: In control trials, CD patients executed slower RMs (repeated measures ANOVA, p < 10(-5)) and reached a smaller final head position angle relative to the midline (p < 0.05) than CS. In test trials, SAS improved all RMs in both groups (p < 10(-14)). In addition, patients were more likely to reach beyond their baseline RM than CS (χ2, p < 0.001) and rated their performance better than in control trials (t-test, p < 0.01). CONCLUSION: We found improvement of kinematic parameters and subjective perception of motor performance in CD patients with StartReact testing. Our results suggest that CD patients reach an adequate level of motor preparation before task execution.
Miller, Lee M; Recanzone, Gregg H
The auditory cortex is critical for perceiving a sound's location. However, there is no topographic representation of acoustic space, and individual auditory cortical neurons are often broadly tuned to stimulus location. It thus remains unclear how acoustic space is represented in the mammalian cerebral cortex and how it could contribute to sound localization. This report tests whether the firing rates of populations of neurons in different auditory cortical fields in the macaque monkey carry sufficient information to account for horizontal sound localization ability. We applied an optimal neural decoding technique, based on maximum likelihood estimation, to populations of neurons from 6 different cortical fields encompassing core and belt areas. We found that the firing rate of neurons in the caudolateral area contain enough information to account for sound localization ability, but neurons in other tested core and belt cortical areas do not. These results provide a detailed and plausible population model of how acoustic space could be represented in the primate cerebral cortex and support a dual stream processing model of auditory cortical processing.
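The population decoding approach described above can be illustrated with a minimal maximum-likelihood decoder over Poisson-spiking model neurons. The Gaussian tuning curves, widths, and neuron counts below are invented for illustration; the original analysis applied this kind of decoder to recorded cortical responses, not to a synthetic population.

```python
import numpy as np

# Minimal maximum-likelihood decoder for sound azimuth from a population of
# broadly tuned, Poisson-spiking model neurons (illustrative parameters).
azimuths = np.linspace(-90, 90, 37)    # candidate source locations (deg)
prefs = np.linspace(-90, 90, 25)       # preferred azimuths of the population

def tuning(pref, az, base=2.0, amp=20.0, width=40.0):
    """Broadly tuned mean firing rate (spikes/s) of one model neuron."""
    return base + amp * np.exp(-0.5 * ((az - pref) / width) ** 2)

mean_rates = np.array([[tuning(p, a) for a in azimuths] for p in prefs])

def ml_decode(spike_counts, dur=1.0):
    """Azimuth maximizing the Poisson log likelihood sum_i [n_i log(lam_i) - lam_i]."""
    lam = mean_rates * dur
    loglik = (spike_counts[:, None] * np.log(lam) - lam).sum(axis=0)
    return azimuths[np.argmax(loglik)]

rng = np.random.default_rng(1)
true_az = 30.0
counts = rng.poisson([tuning(p, true_az) for p in prefs])
estimate = ml_decode(counts)
```

Even though each neuron is broadly tuned, pooling the log likelihoods across the population recovers the source location; this is the sense in which firing rates of a cortical field can "carry sufficient information" for localization.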
Hearing losses during infancy and childhood have many negative long-term effects on the child's life and productivity. The earlier hearing loss is detected, the earlier medical intervention can begin and the greater the benefit of remediation will be. In this research a PC-based audiometer was designed and, currently, the audiometer prototype is in its final development steps. It is based on the auditory brainstem response (ABR) method. Chirp stimuli, instead of traditional click stimuli, are used to evoke the ABR signal. The stimulus is designed to synchronize the hair cell movement as it spreads out over the cochlea. In addition to utilizing the available hardware (PC and PCI board), the efforts were confined to designing and implementing a hardware prototype and developing a software package that enables the system to behave as an ABR audiometer. By using such a method and a chirp stimulus, it is expected to be possible to detect (sensorineural) hearing impairment in the first few days of life and to conduct hearing tests at a low stimulus frequency. Currently, the intended chirp stimulus has been successfully generated and the implemented module is able to amplify a signal (on the order of the ABR signal) to a recordable level. Moreover, an NI-DAQ data acquisition board has been chosen to implement the PC-prototype interface.
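A rising-frequency chirp of the kind mentioned above can be generated in a few lines. The sweep below is a generic exponential chirp with illustrative parameters; clinically used ABR chirps are derived from cochlear travelling-wave delay models rather than a simple exponential sweep.

```python
import numpy as np

# Generic exponential chirp: instantaneous frequency f(t) = f0 * exp(rate * t)
# rises from f0 to f1 over the stimulus duration (illustrative parameters).
fs = 44100                                 # sample rate (Hz)
dur = 0.010                                # 10 ms stimulus
t = np.arange(int(fs * dur)) / fs
f0, f1 = 100.0, 10000.0                    # start and end frequencies (Hz)
rate = np.log(f1 / f0) / dur               # exponential sweep rate (1/s)
phase = 2 * np.pi * f0 * (np.exp(rate * t) - 1) / rate  # integral of f(t)
chirp = np.sin(phase)
```

Because low frequencies are presented first, such a stimulus compensates for the longer travel time of low-frequency energy along the cochlea, which is the synchronization idea the abstract describes.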
Fogerty, Daniel; Humes, Larry E; Busey, Thomas A
Age-related temporal-processing declines of rapidly presented sequences may involve contributions of sensory memory. This study investigated recall for rapidly presented auditory (vowel) and visual (letter) sequences presented at six different stimulus onset asynchronies (SOA) that spanned threshold SOAs for sequence identification. Younger, middle-aged, and older adults participated in all tasks. Results were investigated at both equivalent performance levels (i.e., SOA threshold) and at identical physical stimulus values (i.e., SOAs). For four-item sequences, results demonstrated best performance for the first and last items in the auditory sequences, but only the first item for visual sequences. For two-item sequences, adults identified the second vowel or letter significantly better than the first. Overall, when temporal-order performance was equated for each individual by testing at SOA thresholds, recall accuracy for each position across the age groups was highly similar. These results suggest that modality-specific processing declines of older adults primarily determine temporal-order performance for rapid sequences. However, there is some evidence for a second amodal processing decline in older adults related to early sensory memory for final items in a sequence. This selective deficit was observed particularly for longer sequence lengths and was not accounted for by temporal masking.
Salisbury, Dean F; McCathern, Alexis G
The simple mismatch negativity (MMN) to tones deviating physically (in pitch, loudness, duration, etc.) from repeated standard tones is robustly reduced in schizophrenia. Although generally interpreted to reflect memory or cognitive processes, simple MMN likely contains some activity from non-adapted sensory cells, clouding what process is affected in schizophrenia. Research in healthy participants has demonstrated that MMN can be elicited by deviations from abstract auditory patterns and complex rules that do not cause sensory adaptation. Whether persons with schizophrenia show abnormalities in the complex MMN is unknown. Fourteen schizophrenia participants and 16 matched healthy controls underwent EEG recording while listening to 400 groups of 6 tones 330 ms apart, separated by 800 ms. Occasional deviant groups were missing the 4th or 6th tone (50 groups each). Healthy participants generated a robust response to a missing but expected tone. The schizophrenia group was significantly impaired in activating the missing stimulus MMN, generating no significant activity at all. Schizophrenia affects the ability of "primitive sensory intelligence" and pre-attentive perceptual mechanisms to form implicit groups in the auditory environment. Importantly, this deficit must relate to abnormalities in abstract complex pattern analysis rather than sensory problems in the disorder. The results indicate a deficit in parsing of the complex auditory scene which likely impacts negatively on successful social navigation in schizophrenia. Knowledge of the location and circuit architecture underlying the true novelty-related MMN and its pathophysiology in schizophrenia will help target future interventions.
Howell, Tiffani J; Conduit, Russell; Toukhsati, Samia; Bennett, Pauleen
Dog cognition research tends to rely on behavioural response, which can be confounded by obedience or motivation, as the primary means of indexing dog cognitive abilities. A physiological method of measuring dog cognitive processing would be instructive and could complement behavioural response. Electroencephalogram (EEG) has been used in humans to study stimulus processing, which results in waveforms called event-related potentials (ERPs). One ERP component, mismatch negativity (MMN), is a negative deflection approximately 160-200 ms after stimulus onset, which may be related to change detection from echoic sensory memory. We adapted a minimally invasive technique to record MMN in dogs. Dogs were exposed to an auditory oddball paradigm in which deviant tones (10% probability) were pseudo-randomly interspersed throughout an 8 min sequence of standard tones (90% probability). A significant difference in MMN ERP amplitude was observed after the deviant tone in comparison to the standard tone, t(5) = -2.98, p = 0.03. This difference, attributed to discrimination of an unexpected stimulus in a series of expected stimuli, was not observed when both tones occurred 50% of the time, t(1) = -0.82, p > 0.05. Dogs showed no evidence of pain or distress at any point. We believe this is the first illustration of MMN in a group of dogs and anticipate that this technique may provide valuable insights in cognitive tasks such as object discrimination. Copyright © 2011 Elsevier B.V. All rights reserved.
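The 90/10 oddball paradigm described above can be sketched as a pseudo-random sequence generator. This is an illustrative reconstruction, not the authors' stimulus code; the minimum spacing between deviants is an assumed constraint (common in oddball designs, but not stated in the abstract).

```python
import random

def oddball_sequence(n_tones, p_deviant=0.10, min_gap=2, seed=0):
    """Pseudo-randomly intersperse deviant tones ('D') among standards
    ('S'), enforcing at least min_gap standards between successive
    deviants. The spacing rule is an assumption for illustration."""
    rng = random.Random(seed)
    n_dev = round(n_tones * p_deviant)
    seq = ['S'] * n_tones
    placed = []
    candidates = list(range(n_tones))
    rng.shuffle(candidates)
    for idx in candidates:
        if len(placed) == n_dev:
            break
        # accept this slot only if it is far enough from every deviant
        if all(abs(idx - p) > min_gap for p in placed):
            seq[idx] = 'D'
            placed.append(idx)
    return seq

seq = oddball_sequence(400)  # e.g. 400 tones, 40 deviants
```

A 50/50 control block (as in the dogs' second condition) would simply use `p_deviant=0.5` with `min_gap=0`.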
Kotilahti, Kalle; Nissila, Ilkka; Makela, Riikka; Noponen, Tommi; Lipiainen, Lauri; Gavrielides, Nasia; Kajava, Timo; Huotilainen, Minna; Fellman, Vineta; Merilainen, Pekka; Katila, Toivo
We have used near-infrared spectroscopy (NIRS) to study hemodynamic auditory evoked responses in 7 full-term neonates. Measurements were made simultaneously above both auditory cortices to study the distribution of speech and music processing between hemispheres using a 16-channel frequency-domain instrument. The stimulation consisted of 5-second samples of music and speech with a 25-second silent interval. In response to stimulation, a significant increase in the concentration of oxygenated hemoglobin ([HbO2]) was detected in 6 out of 7 subjects. The strongest responses in [HbO2] were seen near the measurement location above the ear on both hemispheres. The mean latency of the maximum responses was 9.42 ± 1.51 s. On the left hemisphere (LH), the maximum amplitude of the average [HbO2] response was 0.76 ± 0.38 μM (mean ± SD) to the music stimuli and 1.00 ± 0.45 μM to the speech stimuli. On the right hemisphere (RH), the maximum amplitude of the average [HbO2] response was 1.29 ± 0.85 μM to the music stimuli and 1.23 ± 0.93 μM to the speech stimuli. The results indicate that auditory information is processed on both auditory cortices, but the LH is more specialized for processing speech than music. No significant differences in the locations and the latencies of the maximum responses relative to the stimulus type were found.
Yarden, Tohar S
Stimulus-specific adaptation (SSA) occurs when neurons decrease their responses to frequently presented (standard) stimuli but not, or not as much, to other, rare (deviant) stimuli. SSA is present in all mammalian species in which it has been tested as well as in birds. SSA confers short-term memory to neuronal responses, and may lie upstream of the generation of mismatch negativity (MMN), an important human event-related potential. Previously published models of SSA mostly rely on synaptic depression of the feedforward, thalamocortical input. Here we study SSA in a recurrent neural network model of primary auditory cortex. When the recurrent, intracortical synapses display synaptic depression, the network generates population spikes (PSs). SSA occurs in this network when deviants elicit a PS but standards do not, and we demarcate the regions in parameter space that allow SSA. While SSA based on PSs does not require feedforward depression, we identify feedforward depression as a mechanism for expanding the range of parameters that support SSA. We provide predictions for experiments that could help differentiate between SSA due to synaptic depression of feedforward connections and SSA due to synaptic depression of recurrent connections. Similar to experimental data, the magnitude of SSA in the model depends on the frequency difference between deviant and standard, probability of the deviant, inter-stimulus interval and input amplitude. In contrast to models based on feedforward depression, our model shows true deviance sensitivity as found in experiments.
Wasmuht, Dante F; Pena, Jose L; Gutfreund, Yoram
Whether the auditory and visual systems use a similar coding strategy to represent motion direction is an open question. We investigated this question in the barn owl's optic tectum (OT), testing stimulus-specific adaptation (SSA) to the direction of motion. SSA, the reduction of the response to a repetitive stimulus that does not generalize to other stimuli, has been well established in OT neurons. SSA suggests a separate representation of the adapted stimulus in upstream pathways. So far, only SSA to static stimuli has been studied in the OT. Here, we examined adaptation to moving auditory and visual stimuli. SSA to motion direction was examined using repeated presentations of moving stimuli, occasionally switching motion to the opposite direction. Acoustic motion was either mimicked by varying binaural spatial cues or implemented in free field using a speaker array. While OT neurons displayed SSA to motion direction in visual space, neither stimulation paradigm elicited significant SSA to auditory motion direction. These findings show a qualitative difference in how auditory and visual motion is processed in the OT and support the existence of dedicated circuitry for representing motion direction in the early stages of visual but not the auditory system. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Staib, Matthias; Bach, Dominik R
Learning to predict threat depends on amygdala plasticity and does not require auditory cortex (ACX) when threat predictors (conditioned stimuli, CS) are simple sine tones. However, ACX is required in rodents to learn from some naturally occurring CS. Yet, the precise function of ACX, and whether it differs for different CS types, is unknown. Here, we address how ACX encodes threat predictions during human fear conditioning using functional magnetic resonance imaging (fMRI) with multivariate pattern analysis. As in previous rodent work, CS+ and CS- were defined either by direction of frequency modulation (complex) or by frequency of pure tones (simple). In an instructed non-reinforcement context, different sets of simple and complex sounds were always presented without reinforcement (neutral sounds, NS). Threat encoding was measured by separation of fMRI response patterns induced by CS+/CS-, or similar NS1/NS2 pairs. We found that fMRI patterns in Heschl's gyrus encoded threat prediction over and above encoding the physical stimulus features also present in NS, i.e. CS+/CS- could be separated better than NS1/NS2. This was the case both for simple and complex CS. Furthermore, cross-prediction demonstrated that threat representations were similar for simple and complex CS, and thus unlikely to emerge from stimulus-specific top-down, or learning-induced, receptive field plasticity. Searchlight analysis across the entire ACX demonstrated further threat representations in a region including BA22 and BA42. However, in this region, patterns were distinct for simple and complex sounds, and could thus potentially arise from receptive field plasticity. Strikingly, across participants, individual size of Heschl's gyrus predicted strength of fear learning for complex sounds. Overall, our findings suggest that ACX represents threat predictions, and that Heschl's gyrus contains a threat representation that is invariant across physical stimulus categories.
There is increasing interest in multisensory influences upon sensory-specific judgements, such as when auditory stimuli affect visual perception. Here we studied whether the duration of an auditory event can objectively affect the perceived duration of a co-occurring visual event. On each trial, participants were presented with a pair of successive flashes and had to judge whether the first or second was longer. Two beeps were presented with the flashes. The order of short and long stimuli could be the same across audition and vision (audiovisual congruent) or reversed, so that the longer flash was accompanied by the shorter beep and vice versa (audiovisual incongruent); or the two beeps could have the same duration as each other. Beeps and flashes could onset synchronously or asynchronously. In a further control experiment, the beep durations were much longer (tripled) than the flashes. Results showed that visual duration-discrimination sensitivity (d') was significantly higher for congruent (and significantly lower for incongruent) audiovisual synchronous combinations, relative to the visual-only presentation. This effect was abolished when auditory and visual stimuli were presented asynchronously, or when sound durations tripled those of flashes. We conclude that the temporal properties of co-occurring auditory stimuli influence the perceived duration of visual stimuli and that this can reflect genuine changes in visual sensitivity rather than mere response bias.
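The sensitivity measure reported here comes from signal detection theory. A minimal sketch follows, assuming the standard 2AFC transformation; the study's exact d' computation is not given in the abstract, so these functions are illustrative.

```python
import math
from statistics import NormalDist

def dprime_yes_no(hit_rate, fa_rate, eps=1e-4):
    """Yes/no sensitivity: d' = z(H) - z(F), with rates clamped away
    from 0 and 1 so the inverse normal CDF stays finite."""
    z = NormalDist().inv_cdf
    clamp = lambda p: min(max(p, eps), 1 - eps)
    return z(clamp(hit_rate)) - z(clamp(fa_rate))

def dprime_2afc(p_correct, eps=1e-4):
    """Two-alternative forced-choice sensitivity: d' = sqrt(2) * z(Pc),
    where Pc is the proportion of correct 'which was longer' judgements."""
    z = NormalDist().inv_cdf
    return math.sqrt(2) * z(min(max(p_correct, eps), 1 - eps))
```

On this scale, chance performance (Pc = 0.5) gives d' = 0, so a congruent-beep gain and an incongruent-beep loss show up directly as d' shifts rather than as response bias.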
Smulders, Tom V; Jarvis, Erich D
Repeated exposure to an auditory stimulus leads to habituation of the electrophysiological and immediate-early-gene (IEG) expression response in the auditory system. A novel auditory stimulus reinstates this response in a form of dishabituation. This has been interpreted as the start of new memory formation for this novel stimulus. Changes in the location of an otherwise identical auditory stimulus can also dishabituate the IEG expression response. This has been interpreted as an integration of stimulus identity and stimulus location into a single auditory object, encoded in the firing patterns of the auditory system. In this study, we further tested this hypothesis. Using chronic multi-electrode arrays to record multi-unit activity from the auditory system of awake and behaving zebra finches, we found that habituation occurs to repeated exposure to the same song and dishabituation with a novel song, similar to that described in head-fixed, restrained animals. A large proportion of recording sites also showed dishabituation when the same auditory stimulus was moved to a novel location. However, when the song was randomly moved among 8 interleaved locations, habituation occurred independently of the continuous changes in location. In contrast, when 8 different auditory stimuli were interleaved all from the same location, a separate habituation occurred to each stimulus. This result suggests that neuronal memories of the acoustic identity and spatial location are different, and that allocentric location of a stimulus is not encoded as part of the memory for an auditory object, while its acoustic properties are. We speculate that, instead, the dishabituation that occurs with a change from a stable location of a sound is due to the unexpectedness of the location change, and might be due to different underlying mechanisms than the dishabituation and separate habituations to different acoustic stimuli. Copyright © 2013 Elsevier Inc. All rights reserved.
Araneda, Rodrigo; De Volder, Anne G; Deggouj, Naïma; Philippot, Pierre; Heeren, Alexandre; Lacroix, Emilie; Decat, Monique; Rombaux, Philippe; Renier, Laurent
Tinnitus is the perception of a sound in the absence of an external stimulus. Currently, the pathophysiology of tinnitus is not fully understood, but recent studies indicate that alterations in the brain involve non-auditory areas, including the prefrontal cortex. Here, we hypothesize that these brain alterations affect top-down cognitive control mechanisms that play a role in the regulation of sensations, emotions and attention resources. The efficiency of executive control as well as simple reaction speed and processing speed were evaluated in tinnitus participants (TP) and matched control subjects (CS) in both the auditory and the visual modalities using a spatial Stroop paradigm. TP were slower and less accurate than CS during both the auditory and the visual spatial Stroop tasks, while simple reaction speed and stimulus processing speed were affected in TP in the auditory modality only. Tinnitus is thus associated both with modality-specific deficits along the auditory processing system and with an impairment of cognitive control mechanisms that are involved in both vision and audition (i.e. that are supra-modal). We postulate that this deficit in top-down cognitive control is a key factor in the development and maintenance of tinnitus and may also explain some of the cognitive difficulties reported by tinnitus sufferers.
Meier, Matt E.; Kane, Michael J.
Three experiments examined the relation between working memory capacity (WMC) and two different forms of cognitive conflict: stimulus-stimulus (S-S) and stimulus-response (S-R) interference. Our goal was to test whether WMC’s relation to conflict-task performance is mediated by stimulus-identification processes (captured by S-S conflict), response-selection processes (captured by S-R conflict), or both. In Experiment 1, subjects completed a single task presenting both S-S and S-R conflict trials, plus trials that combined the two conflict types. We limited ostensible goal-maintenance contributions to performance by requiring the same goal for all trial types and by presenting frequent conflict trials that reinforced the goal. WMC predicted resolution of S-S conflict as expected: Higher-WMC subjects showed reduced response time interference. Although WMC also predicted S-R interference, here, higher-WMC subjects showed increased error interference. Experiment 2A replicated these results in a version of the conflict task without combined S-S/S-R trials. Experiment 2B increased the proportion of congruent (non-conflict) trials to promote reliance on goal-maintenance processes. Here, higher-WMC subjects resolved both S-S and S-R conflict more successfully than did lower-WMC subjects. The results were consistent with Kane and Engle’s (2003) two-factor theory of cognitive control, according to which WMC predicts executive-task performance through goal-maintenance and conflict-resolution processes. However, the present results add specificity to the account by suggesting that higher-WMC subjects better resolve cognitive conflict because they more efficiently select relevant stimulus features against irrelevant, distracting ones. PMID:26120774
Mittag, Maria; Takegata, Rika; Winkler, István
Representations encoding the probabilities of auditory events do not directly support predictive processing. In contrast, information about the probability with which a given sound follows another (transitional probability) allows predictions of upcoming sounds. We tested whether behavioral and cortical auditory deviance detection (the latter indexed by the mismatch negativity event-related potential) relies on probabilities of sound patterns or on transitional probabilities. We presented healthy adult volunteers with three types of rare tone-triplets among frequent standard triplets of high-low-high (H-L-H) or L-H-L pitch structure: proximity deviant (H-H-H/L-L-L), reversal deviant (L-H-L/H-L-H), and first-tone deviant (L-L-H/H-H-L). If deviance detection was based on pattern probability, reversal and first-tone deviants should be detected with similar latency because both differ from the standard at the first pattern position. If deviance detection was based on transitional probabilities, then reversal deviants should be the most difficult to detect because, unlike the other two deviants, they contain no low-probability pitch transitions. The data clearly showed that both behavioral and cortical auditory deviance detection uses transitional probabilities. Thus, the memory traces underlying cortical deviance detection may provide a link between stimulus probability-based change/novelty detectors operating at lower levels of the auditory system and higher auditory cognitive functions that involve predictive processing. Our research presents the first definite evidence for the auditory system prioritizing transitional probabilities over probabilities of individual sensory events. Forming representations for transitional probabilities paves the way for predictions of upcoming sounds. Several recent theories suggest that predictive processing provides the general basis of human perception, including important auditory functions, such as auditory scene analysis. Our
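The distinction this study tests, pattern probability versus transitional probability, can be made concrete with a small sketch. The function and the toy H/L sequence are illustrative, not the authors' analysis code.

```python
from collections import Counter

def transition_probs(seq):
    """Estimate first-order transitional probabilities P(next | current)
    from an observed sequence of symbols."""
    pair_counts = Counter(zip(seq, seq[1:]))
    totals = Counter(seq[:-1])
    return {(a, b): c / totals[a] for (a, b), c in pair_counts.items()}

# A stream dominated by H-L-H standard triplets makes the transitions
# H->L, L->H, and H->H (across triplet boundaries) all common. A reversal
# deviant (L-H-L) reuses only those same high-probability transitions,
# which is why it should be hardest to detect under a
# transitional-probability account.
tp = transition_probs(list("HLH" * 50))
```

By contrast, a proximity deviant such as H-H-H following an L would introduce a rare L-H-H-H run, i.e. low-probability transitions that a transition-based detector flags immediately.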
Sun Da; Xu Wei; Zhan Hongwei; Liu Hongbiao
Purpose: To localize cerebral function in normal subjects using a Chinese classical national music auditory stimulus. Methods: 10 normal young students of the medical college of Zhejiang University, 22-24 years old, 5 male and 5 female. First, they underwent 99mTc-ECD brain imaging in the resting state using a dual-detector gamma camera with fan-beam collimators. After 2-4 days they were asked to listen to Chinese classical national music played on Erhu and Guzheng for 20 minutes. They were also asked to pay special attention to the name of the piece, which instruments played it, and what imagery the music evoked. 99mTc-ECD was administered in the first 3 minutes while they listened to the music. Brain imaging was performed 30-60 minutes after the tracer was administered. Results: Compared with the resting state, while listening to the Chinese classical national music and attending to its imagery, the right mid-temporal region was activated in 6 cases, the left mid-temporal in 2, the right superior temporal in 2, the left superior temporal in 6, and the right inferior temporal in 2. Among them, bilateral temporal regions were activated in 6 cases, the right temporal only in 3, and the left temporal only in 1. It is very interesting that the inferior frontal and/or medial frontal lobes were activated in all 10 subjects, and the activity was markedly higher in frontal than in temporal regions. Bilateral frontal lobes were activated in 9 subjects and only the right frontal in 1; the right superior frontal lobes were activated in 2 cases. The occipital lobes were activated in 4 subjects: bilateral in 3 and right only in 1. These 4 subjects stated after listening that, following the music, they had imagined the natural landscape and imagery it evoked. Other activated regions included the parietal lobes (right and left in 1 case each), the pre-cingulate gyri (in 2 cases), and left
Background and Aim: The auditory system changes with increasing age in both its central and peripheral parts. The purpose of this study was to investigate the effect of increasing the stimulus rate on auditory brainstem response (ABR) wave latencies in an older population with normal hearing. Materials and Methods: In this cross-sectional study, the click ABR test was performed on 20 young normal-hearing subjects with a mean age of 20.8 years and 10 older normal-hearing subjects with a mean age of 66.4 years. ABR results at different stimulus rates were compared between the two groups. Results: ABR peak latencies and interpeak intervals were prolonged with increasing click repetition rate. Peak latencies were slightly prolonged in older adults; the I-V interval did not differ with age, but the prolongation of the III-V interval differed significantly in the older population compared with young adults. Conclusion: Using high click rates may sensitize the ABR to the identification of lesions of the auditory nerve or brainstem, but first the normal ranges for different age groups must be established so that the probability of a retrocochlear lesion can be judged.
Kayser, Jürgen; Tenke, Craig E; Gil, Roberto B; Bruder, Gerard E
Examining visual word recognition memory (WRM) with nose-referenced EEGs, we reported a preserved ERP 'old-new effect' (enhanced parietal positivity 300-800 ms to correctly-recognized repeated items) in schizophrenia ([Kayser, J., Bruder, G.E., Friedman, D., Tenke, C.E., Amador, X.F., Clark, S.C., Malaspina, D., Gorman, J.M., 1999. Brain event-related potentials (ERPs) in schizophrenia during a word recognition memory task. Int. J. Psychophysiol. 34(3), 249-265.]). However, patients showed reduced early negative potentials (N1, N2) and poorer WRM. Because group differences in neuronal generator patterns (i.e., sink-source orientation) may be masked by choice of EEG recording reference, the current study combined surface Laplacians and principal components analysis (PCA) to clarify ERP component topography and polarity and to disentangle stimulus- and response-related contributions. To investigate the impact of stimulus modality, 31-channel ERPs were recorded from 20 schizophrenic patients (15 male) and 20 age-, gender-, and handedness-matched healthy adults during parallel visual and auditory continuous WRM tasks. Stimulus- and response-locked reference-free current source densities (spherical splines) were submitted to unrestricted Varimax-PCA to identify and measure neuronal generator patterns underlying ERPs. Poorer (78.2+/-18.7% vs. 87.8+/-11.3% correct) and slower (958+/-226 vs. 773+/-206 ms) performance in patients was accompanied by reduced stimulus-related left-parietal P3 sources (150 ms pre-response) and vertex N2 sinks (both overall and old/new effects) but modality-specific N1 sinks were not significantly reduced. A distinct mid-frontal sink 50-ms post-response was markedly attenuated in patients. Reductions were more robust for auditory stimuli. However, patients showed increased lateral-frontotemporal sinks (T7 maximum) concurrent with auditory P3 sources. Electrophysiologic correlates of WRM deficits in schizophrenia suggest functional impairments of
Klein, David J; Simon, Jonathan Z; Depireux, Didier A; Shamma, Shihab A
...) functional characterization of single cells in primary auditory cortex (AI). We explore in this paper the origin and relationship between several different ways of measuring and analyzing the STRF...
Samson, Fabienne; Zeffiro, Thomas A.; Toussaint, Alain; Belin, Pascal
Investigations of the functional organization of human auditory cortex typically examine responses to different sound categories. An alternative approach is to characterize sounds with respect to their amount of variation in the time and frequency domains (i.e., spectral and temporal complexity). Although the vast majority of published studies examine contrasts between discrete sound categories, an alternative complexity-based taxonomy can be evaluated through meta-analysis. In a quantitative meta-analysis of 58 auditory neuroimaging studies, we examined the evidence supporting current models of functional specialization for auditory processing using grouping criteria based on either categories or spectro-temporal complexity. Consistent with current models, analyses based on typical sound categories revealed hierarchical auditory organization and left-lateralized responses to speech sounds, with high speech sensitivity in the left anterior superior temporal cortex. Classification of contrasts based on spectro-temporal complexity, on the other hand, revealed a striking within-hemisphere dissociation in which caudo-lateral temporal regions in auditory cortex showed greater sensitivity to spectral changes, while anterior superior temporal cortical areas were more sensitive to temporal variation, consistent with recent findings in animal models. The meta-analysis thus suggests that spectro-temporal acoustic complexity represents a useful alternative taxonomy to investigate the functional organization of human auditory cortex. PMID:21833294
BLUMENFELD, HENRIKE K.; MARIAN, VIORICA
Bilinguals have been shown to outperform monolinguals at suppressing task-irrelevant information and on overall speed during cognitive control tasks. Here, monolinguals’ and bilinguals’ performance was compared on two nonlinguistic tasks: a Stroop task (with perceptual Stimulus–Stimulus conflict among stimulus features) and a Simon task (with Stimulus–Response conflict). Across two experiments testing bilinguals with different language profiles, bilinguals showed more efficient Stroop than Simon performance, relative to monolinguals, who showed fewer differences across the two tasks. Findings suggest that bilingualism may engage Stroop-type cognitive control mechanisms more than Simon-type mechanisms, likely due to increased Stimulus–Stimulus conflict during bilingual language processing. Findings are discussed in light of previous research on bilingual Stroop and Simon performance. PMID:25093009
Fobel, Oliver; Dau, Torsten
This study examines auditory brainstem responses (ABR) elicited by rising frequency chirps. Two chirp stimuli were developed and designed such as to compensate for cochlear travel-time differences across frequency, in order to maximize neural synchrony. One chirp, referred to as the O-chirp, was ...
Winter, J C; Rice, K C; Amorosi, D J; Rabin, R A
Although psilocybin has been trained in the rat as a discriminative stimulus, little is known of the pharmacological receptors essential for stimulus control. In the present investigation rats were trained with psilocybin and tests were then conducted employing a series of other hallucinogens and presumed antagonists. An intermediate degree of antagonism of psilocybin was observed following treatment with the 5-HT(2A) receptor antagonist, M100907. In contrast, no significant antagonism was observed following treatment with the 5-HT(1A/7) receptor antagonist, WAY-100635, or the DA D(2) antagonist, remoxipride. Psilocybin generalized fully to DOM, LSD, psilocin, and, in the presence of WAY-100635, DMT, while partial generalization was seen to 2C-T-7 and mescaline. LSD and MDMA partially generalized to psilocybin and these effects were completely blocked by M100907; no generalization of PCP to psilocybin was seen. The present data suggest that psilocybin induces a compound stimulus in which activity at the 5-HT(2A) receptor plays a prominent but incomplete role. In addition, psilocybin differs from closely related hallucinogens such as 5-MeO-DMT in that agonism at 5-HT(1A) receptors appears to play no role in psilocybin-induced stimulus control.
Martens, William L.
This paper reports the results of a study designed to evaluate the effectiveness of synthetic cues to the range of auditory images created via headphone display of virtual sound sources processed using individualized HRTFs. The particular focus of the study was to determine how well auditory range could be controlled when independent adjustment of loudness was also desired. Variation in perceived range of the resulting auditory spatial images was assessed using a two-alternative forced-choice procedure in which listeners indicated which of two successively presented sound sources seemed to be more closely positioned. The first of the two sources served as a fixed standard stimulus positioned using a binaural HRTF measured at ear level, 1.5 m from the listener's head at an azimuth angle of 120 deg. The second source served as a variable loudness comparison stimulus processed using the same pair of HRTFs, with the same interaural time difference but with a manipulated interaural level difference. From the obtained choice proportions for each pairwise comparison of stimuli, numerical scale values for auditory source range were generated using Thurstone's Case IV method for indirect scaling. Results provide a basis for calibrated control over auditory range for virtual sources varying in loudness.
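The scaling step, turning pairwise choice proportions into interval-scale range values, can be sketched as follows. The study used Thurstone's Case IV; the simpler Case V solution below (equal discriminal dispersions) is an illustrative approximation, and the 3-stimulus choice matrix is made up.

```python
from statistics import NormalDist

def thurstone_case5(P, eps=0.01):
    """Thurstone scaling from a paired-comparison matrix, where P[i][j]
    is the proportion of trials on which stimulus i was judged closer
    than stimulus j. Case V solution: each scale value is the mean
    z-transformed proportion over comparisons with the other stimuli."""
    z = NormalDist().inv_cdf
    n = len(P)
    scale = []
    for i in range(n):
        zs = [z(min(max(P[i][j], eps), 1 - eps))
              for j in range(n) if j != i]
        scale.append(sum(zs) / len(zs))
    return scale

# Hypothetical proportions: stimulus 0 judged closer than 1 on 80% of
# trials, closer than 2 on 90%, etc.
P = [[0.5, 0.8, 0.9],
     [0.2, 0.5, 0.7],
     [0.1, 0.3, 0.5]]
scale = thurstone_case5(P)
```

The resulting values order the stimuli on an interval scale of perceived range, which is what allows "calibrated control" claims to be tested quantitatively.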
Gaucher, Quentin; Edeline, Jean-Marc
Many studies have described the action of Noradrenaline (NA) on the properties of cortical receptive fields, but none has assessed how NA affects the discrimination abilities of cortical cells between natural stimuli. In the present study, we compared the consequences of NA topical application on spectro-temporal receptive fields (STRFs) and responses to communication sounds in the primary auditory cortex. NA application reduced the STRFs (an effect replicated by the alpha1 agonist Phenylephrine) but did not change, on average, the responses to communication sounds. For cells exhibiting increased evoked responses during NA application, the discrimination abilities were enhanced as quantified by Mutual Information. The changes induced by NA on parameters extracted from the STRFs and from responses to communication sounds were not related. The alterations exerted by neuromodulators on neuronal selectivity have been the topic of a vast literature in the visual, somatosensory, auditory and olfactory cortices. However, very few studies have investigated to what extent the effects observed when testing these functional properties with artificial stimuli can be transferred to responses evoked by natural stimuli. Here, we tested the effect of noradrenaline (NA) application on the responses to pure tones and communication sounds in the guinea-pig primary auditory cortex. When pure tones were used to assess the spectro-temporal receptive field (STRF) of cortical cells, NA triggered a transient reduction of the STRFs in both the spectral and the temporal domain, an effect replicated by the α1 agonist phenylephrine whereas α2 and β agonists induced STRF expansion. When tested with communication sounds, NA application did not produce significant effects on the firing rate and spike timing reliability, despite the fact that α1, α2 and β agonists by themselves had significant effects on these measures. However, the cells whose evoked responses were increased by NA
Zoefel, Benedikt; VanRullen, Rufin
All sensory systems need to continuously prioritize and select incoming stimuli in order to avoid overflow or interference, and provide a structure to the brain's input. However, the characteristics of this input differ across sensory systems; therefore, and as a direct consequence, each sensory system might have developed specialized strategies to cope with the continuous stream of incoming information. Neural oscillations are intimately connected with this selection process, as they can be used by the brain to rhythmically amplify or attenuate input and therefore represent an optimal tool for stimulus selection. In this paper, we focus on oscillatory processes for stimulus selection in the visual and auditory systems. We point out both commonalities and differences between the two systems and develop several hypotheses, inspired by recently published findings: (1) The rhythmic component in its input is crucial for the auditory, but not for the visual system. The alignment between oscillatory phase and rhythmic input (phase entrainment) is therefore an integral part of stimulus selection in the auditory system whereas the visual system merely adjusts its phase to upcoming events, without the need for any rhythmic component. (2) When input is unpredictable, the visual system can maintain its oscillatory sampling, whereas the auditory system switches to a different, potentially internally oriented, "mode" of processing that might be characterized by alpha oscillations. (3) Visual alpha can be divided into a faster occipital alpha (10 Hz) and a slower frontal alpha (7 Hz) that critically depends on attention.
Sugi, Miho; Hagimoto, Yutaka; Nambu, Isao; Gonzalez, Alejandro; Takei, Yoshinori; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro
Recently, a brain-computer interface (BCI) using virtual sound sources has been proposed for estimating user intention via electroencephalogram (EEG) in an oddball task. However, its performance is still insufficient for practical use. In this study, we examine the impact that shortening the stimulus onset asynchrony (SOA) has on this auditory BCI. While a very short SOA might improve its performance, sound perception and task performance become difficult, and event-related potentials (ERPs) may not be induced if the SOA is too short. Therefore, we carried out behavioral and EEG experiments to determine the optimal SOA. In the experiments, participants were instructed to direct attention to one of six virtual sounds (target direction). We used eight different SOA conditions: 200, 300, 400, 500, 600, 700, 800, and 1,100 ms. In the behavioral experiment, we recorded participant behavioral responses to target direction and evaluated recognition performance of the stimuli. In all SOA conditions, recognition accuracy was over 85%, indicating that participants could recognize the target stimuli correctly. Next, using a silent counting task in the EEG experiment, we found significant differences between target and non-target sound directions in all but the 200-ms SOA condition. When we calculated identification accuracy using Fisher discriminant analysis (FDA), the SOA could be shortened to 400 ms without decreasing the identification accuracies. Thus, improvements in performance (evaluated by BCI utility) could be achieved. On average, higher BCI utilities were obtained in the 400- and 500-ms SOA conditions. Thus, auditory BCI performance can be optimized for both behavioral and neurophysiological responses by shortening the SOA. PMID:29535602
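The abstract does not define its utility metric; one commonly used form for an N-class BCI is U = (2P − 1)·log2(N − 1)/T, which rewards both selection accuracy P and short trial duration T. A minimal sketch under that assumption, with purely hypothetical numbers, shows why a shorter SOA (hence shorter trial) raises utility at equal accuracy:

```python
import math

def bci_utility(p, n_classes, trial_sec):
    """Utility (bits/s) for an N-class BCI selection, following the common
    definition U = (2P - 1) * log2(N - 1) / T; zero at or below P = 0.5."""
    if p <= 0.5:
        return 0.0
    return (2 * p - 1) * math.log2(n_classes - 1) / trial_sec

# Hypothetical example: 6 targets, 85% accuracy, 10 stimuli per trial.
# Shorter SOA => shorter trial => higher utility at the same accuracy.
n_stims = 10
for soa in (0.5, 1.1):
    print(round(bci_utility(0.85, 6, soa * n_stims), 3))  # prints 0.325, then 0.148
```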
Morgan, Simeon J; Paolini, Antonio G
Acute animal preparations have been used in research prospectively investigating electrode designs and stimulation techniques for integration into neural auditory prostheses, such as auditory brainstem implants and auditory midbrain implants. While acute experiments can give initial insight into the effectiveness of the implant, testing chronically implanted and awake animals provides the advantage of examining the psychophysical properties of the sensations induced using implanted devices. Several techniques such as reward-based operant conditioning, conditioned avoidance, or classical fear conditioning have been used to provide behavioral confirmation of detection of a relevant stimulus attribute. Selection of a technique involves balancing aspects including time efficiency (often poor in reward-based approaches), the ability to test a plurality of stimulus attributes simultaneously (limited in conditioned avoidance), and measurement reliability across repeated stimuli (a potential constraint when physiological measures are employed). Here, a classical fear conditioning behavioral method is presented which may be used to simultaneously test both detection of a stimulus, and discrimination between two stimuli. Heart-rate is used as a measure of fear response, which reduces or eliminates the requirement for time-consuming video coding for freeze behaviour or other such measures (although such measures could be included to provide convergent evidence). Animals were conditioned using these techniques in three 2-hour conditioning sessions, each providing 48 stimulus trials. Subsequent 48-trial testing sessions were then used to test for detection of each stimulus in presented pairs, and to test discrimination between the member stimuli of each pair. This behavioral method is presented in the context of its utilisation in auditory prosthetic research. The implantation of electrocardiogram telemetry devices is shown, followed by the implantation of brain electrodes into the cochlear nucleus.
Ten Oever, Sanne; de Graaf, Tom A; Bonnemayer, Charlie; Ronner, Jacco; Sack, Alexander T; Riecke, Lars
In recent years, it has become increasingly clear that both the power and phase of oscillatory brain activity can influence the processing and perception of sensory stimuli. Transcranial alternating current stimulation (tACS) can phase-align and amplify endogenous brain oscillations and has often been used to control and thereby study oscillatory power. Causal investigation of oscillatory phase is more difficult, as it requires precise real-time temporal control over both oscillatory phase and sensory stimulation. Here, we present hardware and software solutions allowing temporally precise presentation of sensory stimuli during tACS at desired tACS phases, enabling causal investigations of oscillatory phase. We developed freely available and easy-to-use software, which can be coupled with standard commercially available hardware to allow flexible and multi-modal stimulus presentation (visual, auditory, magnetic stimuli, etc.) at pre-determined tACS-phases, opening up a range of new research opportunities. We validate that stimulus presentation at tACS phase in our setup is accurate to the sub-millisecond level with high inter-trial consistency. Conventional methods investigating the role of oscillatory phase such as magneto-/electroencephalography can only provide correlational evidence. Using brain stimulation with the described methodology enables investigations of the causal role of oscillatory phase. This setup turns oscillatory phase into an independent variable, allowing innovative and systematic studies of its functional impact on perception and cognition.
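Scheduling a stimulus at a desired phase of a tACS sine wave reduces to simple arithmetic on the oscillation period. The sketch below illustrates only that arithmetic; the setup described in the abstract additionally requires real-time hardware synchronization, which is not modeled here:

```python
import math

def onsets_at_phase(freq_hz, target_phase_rad, n, t_start=0.0):
    """Times (s) at which a sine of the given frequency, starting at phase 0
    at t_start, reaches the target phase -- one onset per cycle, n cycles."""
    period = 1.0 / freq_hz
    # Fraction of a cycle needed to reach the target phase from phase 0
    first = t_start + (target_phase_rad % (2 * math.pi)) / (2 * math.pi) * period
    return [first + k * period for k in range(n)]

# 10 Hz tACS, stimuli locked to the positive peak (phase pi/2):
print([round(t, 3) for t in onsets_at_phase(10.0, math.pi / 2, 3)])  # → [0.025, 0.125, 0.225]
```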
Basura, Gregory J; Koehler, Seth D; Shore, Susan E
Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit. Copyright © 2015 the American Physiological Society.
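The Hebbian vs. anti-Hebbian timing rules discussed above can be illustrated with the classic exponential STDP window; the parameters below are hypothetical and this is not the authors' model:

```python
import math

def stdp_dw(dt_ms, a_plus=1.0, a_minus=1.0, tau_ms=20.0, hebbian=True):
    """Weight change for a spike-time difference dt (dt > 0: pre leads post).
    Classic exponential STDP window; the anti-Hebbian rule flips the sign."""
    if dt_ms > 0:
        dw = a_plus * math.exp(-dt_ms / tau_ms)   # pre-before-post: potentiation
    else:
        dw = -a_minus * math.exp(dt_ms / tau_ms)  # post-before-pre: depression
    return dw if hebbian else -dw

# Same 10-ms pre-leading interval: potentiation under the Hebbian rule,
# depression under the anti-Hebbian rule.
print(stdp_dw(10.0) > 0, stdp_dw(10.0, hebbian=False) < 0)  # → True True
```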
Giard, M H; Lavikahen, J; Reinikainen, K; Perrin, F; Bertrand, O; Pernier, J; Näätänen, R
The present study analyzed the neural correlates of acoustic stimulus representation in echoic sensory memory. The neural traces of auditory sensory memory were indirectly studied by using the mismatch negativity (MMN), an event-related potential component elicited by a change in a repetitive sound. The MMN is assumed to reflect change detection in a comparison process between the sensory input from a deviant stimulus and the neural representation of repetitive stimuli in echoic memory. The scalp topographies of the MMNs elicited by pure tones deviating from standard tones by either frequency, intensity, or duration varied according to the type of stimulus deviance, indicating that the MMNs for different attributes originate, at least in part, from distinct neural populations in the auditory cortex. This result was supported by dipole-model analysis. If the MMN generator process occurs where the stimulus information is stored, these findings strongly suggest that the frequency, intensity, and duration of acoustic stimuli have a separate neural representation in sensory memory.
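The MMN is conventionally computed as the deviant-minus-standard difference of averaged ERPs. A minimal sketch on toy epochs (no filtering, artifact rejection, or baseline correction, which real pipelines would include):

```python
def average_erp(epochs):
    """Average ERP across epochs; each epoch is a list of voltage samples."""
    n = len(epochs)
    return [sum(samples) / n for samples in zip(*epochs)]

def mismatch_negativity(deviant_epochs, standard_epochs):
    """MMN difference waveform: deviant-minus-standard average ERP."""
    dev = average_erp(deviant_epochs)
    std = average_erp(standard_epochs)
    return [d - s for d, s in zip(dev, std)]

# Toy 3-sample epochs: the negative deflection appears in the difference wave.
print(mismatch_negativity([[0.0, -2.0, 1.0]], [[0.0, 1.0, 1.0]]))  # → [0.0, -3.0, 0.0]
```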
Morey, Rajendra A.; Mitchell, Teresa V.; Inan, Seniha; Lieberman, Jeffrey A.; Belger, Aysenil
Individuals with schizophrenia demonstrate impairments in selective attention and sensory processing. The authors assessed differences in brain function between 26 participants with schizophrenia and 17 comparison subjects engaged in automatic (unattended) and controlled (attended) auditory information processing using event-related functional MRI. Lower regional neural activation during automatic auditory processing in the schizophrenia group was not confined to just the temporal lobe, but also extended to prefrontal regions. Controlled auditory processing was associated with a distributed frontotemporal and subcortical dysfunction. Differences in activation between these two modes of auditory information processing were more pronounced in the comparison group than in the patient group. PMID:19196926
Zhang, Honghui; Wang, Qingyun; Chen, Guanrong
Experimental studies have shown that neuron populations located in the basal ganglia of parkinsonian primates can exhibit characteristic firings with certain firing rates differing from normal brain activities. Motivated by recent experimental findings, we investigate the effects of various stimulation paradigms on the firing rates of parkinsonism based on the proposed dynamical models. Our results show that closed-loop deep brain stimulation is superior in ameliorating the firing behaviors of parkinsonism, and other control strategies have similar effects according to the observation of electrophysiological experiments. In addition, in conformity with physiological experiments, we found that there exists an optimal delay of input in the closed-loop GPtrain|M1 paradigm, where more normal behaviors can be obtained. More interestingly, we observed that W-shaped curves of the firing rates always appear as the stimulus delay varies. We furthermore verify the robustness of the obtained results by studying three pallidal discharge rates of parkinsonism based on the conductance-based model, as well as the integrate-and-fire-or-burst model. Finally, we show that short-term plasticity can improve the firing rates and optimize the control effects on parkinsonism. Our conclusions may give more theoretical insight into Parkinson's disease studies.
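The conductance-based and integrate-and-fire-or-burst models cited above are beyond a short sketch, but the basic dependence of firing rate on input drive can be illustrated with a plain leaky integrate-and-fire neuron (all parameters hypothetical and in dimensionless units):

```python
def lif_firing_rate(i_ext, t_sim=1.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Firing rate (Hz) of a leaky integrate-and-fire neuron
    dV/dt = (-V + I) / tau under constant input current i_ext,
    simulated with forward Euler over t_sim seconds."""
    v, spikes = 0.0, 0
    for _ in range(int(t_sim / dt)):
        v += dt * (-v + i_ext) / tau
        if v >= v_th:          # threshold crossing: emit spike, reset
            spikes += 1
            v = v_reset
    return spikes / t_sim

# Subthreshold drive yields no spikes; stronger drive yields a higher rate.
print(lif_firing_rate(0.9), lif_firing_rate(1.5) < lif_firing_rate(3.0))  # → 0.0 True
```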
Shiffman, Saul; Dunbar, Michael S.; Li, Xiaoxue; Scholl, Sarah M.; Tindle, Hilary A.; Anderson, Stewart J.; Ferguson, Stuart G.
Intermittent smokers (ITS) – who smoke less than daily – comprise an increasing proportion of adult smokers. Their smoking patterns challenge theoretical models of smoking motivation, which emphasize regular and frequent smoking to maintain nicotine levels and avoid withdrawal, but yet have gone largely unexamined. We characterized smoking patterns among 212 ITS (smoking 4–27 days per month) compared to 194 daily smokers (DS; smoking 5–30 cigarettes daily) who monitored situational antecedents of smoking using ecological momentary assessment. Subjects recorded each cigarette on an electronic diary, and situational variables were assessed in a random subset (n = 21,539 smoking episodes); parallel assessments were obtained by beeping subjects at random when they were not smoking (n = 26,930 non-smoking occasions). Compared to DS, ITS' smoking was more strongly associated with being away from home, being in a bar, drinking alcohol, socializing, being with friends and acquaintances, and when others were smoking. Mood had only modest effects in either group. DS' and ITS' smoking were substantially and equally suppressed by smoking restrictions, although ITS more often cited self-imposed restrictions. ITS' smoking was consistently more associated with environmental cues and contexts, especially those associated with positive or “indulgent” smoking situations. Stimulus control may be an important influence in maintaining smoking and making quitting difficult among ITS. PMID:24599056
Tinnitus is the perception of sound in the absence of external stimulus. Currently, the pathophysiology of tinnitus is not fully understood, but recent studies indicate that alterations in the brain involve non-auditory areas, including the prefrontal cortex. In experiment 1, we used a go/no-go paradigm to evaluate the target detection speed and the inhibitory control in tinnitus participants (TP) and control subjects (CS), both in unimodal and bimodal conditions in the auditory and visual modalities. We also tested whether the sound frequency used for targets and distractors affected the performance. We observed that TP were slower and made more false alarms than CS in all unimodal auditory conditions. TP were also slower than CS in the bimodal conditions. In addition, when comparing the response times in bimodal and auditory unimodal conditions, the expected gain in bimodal conditions was present in CS, but not in TP when tinnitus-matched frequency sounds were used as targets. In experiment 2, we tested the sensitivity to cross-modal interference in TP during auditory and visual go/no-go tasks where each stimulus was preceded by an irrelevant pre-stimulus in the untested modality (e.g., a high-frequency auditory pre-stimulus in the visual go/no-go condition). We observed that TP had longer response times than CS and made more false alarms in all conditions. In addition, the highest false alarm rate occurred in TP when tinnitus-matched/high frequency sounds were used as pre-stimuli. We conclude that inhibitory control is altered in TP and that TP are abnormally sensitive to cross-modal interference, reflecting difficulties in ignoring irrelevant stimuli. The fact that the strongest interference effect was caused by tinnitus-like auditory stimulation is consistent with the hypothesis according to which such stimulations generate emotional responses that affect cognitive processing in TP. We postulate that executive function deficits play a key role in tinnitus.
Ozdamar, Ozcan; Bohorquez, Jorge; Mihajloski, Todor; Yavuz, Erdem; Lachowska, Magdalena
Electrophysiological indices of auditory binaural beat illusions are studied using late-latency evoked responses. Binaural beats are generated by continuous monaural FM tones with slightly different ascending and descending frequencies lasting about 25 ms, presented at 1-sec intervals. Frequency changes are carefully adjusted to avoid any creation of abrupt waveform changes. Binaural Interaction Component (BIC) analysis is used to separate the neural responses due to binaural involvement. The results show that transient auditory evoked responses can be obtained from the auditory illusion of binaural beats.
Agessi, Larissa Mendonça; Villa, Thaís Rodrigues; Carvalho, Deusvenir de Souza; Pereira, Liliane Desgualdo
Background This study aimed to investigate central auditory processing performance in children with migraine and to compare it with that of controls without headache. Methods Twenty-eight children of both sexes, aged between 8 and 12 years, diagnosed with migraine with and without aura, and a control group of the same age range and with no headache history, were included. Gaps-in-noise (GIN), duration pattern test (DPT), synthetic sentence identification (SSI) test, and nonverbal dichotic test (NVDT) were used to assess central auditory processing performance. Results Children with migraine performed significantly worse in the DPT, SSI test, and NVDT when compared with controls without headache; however, no significant differences were found in the GIN test. Conclusions Children with migraine demonstrate impairment in the physiologic mechanisms of temporal processing and selective auditory attention; migraine could thus be related to impaired central auditory processing in children. Georg Thieme Verlag KG Stuttgart · New York.
Washburn, David A
For more than 80 years, researchers have examined the interference of automatic processing of stimuli, such as the meaning of color words, with performance of a controlled-processing task such as naming the color in which words are printed. The Stroop effect and its many variations provide an ideal test platform for examining the competition between stimulus control and cognitive control of attention, as reflected in behavior. The two experiments reported here show that rhesus monkeys, like human adults, show interference from incongruous stimulus conditions in a number-Stroop task, and that the monkeys may be particularly susceptible to influence from response strength and less able, relative to human adults, to use executive attention to minimize this interference. © 2016 Society for the Experimental Analysis of Behavior.
Poulet, James F. A.; Hedwig, Berthold
Many groups of insects are specialists in exploiting sensory cues to locate food resources or conspecifics. To achieve orientation, bees and ants analyze the polarization pattern of the sky, male moths orient along the females' odor plume, and cicadas, grasshoppers, and crickets use acoustic signals to locate singing conspecifics. In comparison with olfactory and visual orientation, where learning is involved, auditory processing underlying orientation in insects appears to be more hardwired and genetically determined. In each of these examples, however, orientation requires a recognition process identifying the crucial sensory pattern to interact with a localization process directing the animal's locomotor activity. Here, we characterize this interaction. Using a sensitive trackball system, we show that, during cricket auditory behavior, the recognition process that is tuned toward the species-specific song pattern controls the amplitude of auditory evoked steering responses. Females perform small reactive steering movements toward any sound patterns. Hearing the male's calling song increases the gain of auditory steering within 2-5 s, and the animals even steer toward nonattractive sound patterns inserted into the species-specific pattern. This gain control mechanism in the auditory-to-motor pathway allows crickets to pursue species-specific sound patterns temporarily corrupted by environmental factors and may reflect the organization of recognition and localization networks in insects. Keywords: localization; phonotaxis
Background The P300 component of the auditory evoked potential is an indicator of attention-dependent target processing. Only a few studies have assessed cognitive function in substituted opiate addicts by means of evoked potential recordings. In addition, P300 data suggest that chronic nicotine use reduces P300 amplitudes. Because nicotine and opiate effects combine in addicted subjects, here we investigated the P300 component of the auditory event-related potential in methadone-substituted opiate addicts with and without concomitant non-opioid drug use in comparison to a group of control subjects with and without nicotine consumption. Methods We assessed 47 opiate-addicted outpatients under current methadone substitution and 65 control subjects matched for age and gender in a 2-stimulus auditory oddball paradigm. Patients were grouped into those with and without additional non-opioid drug use, and controls were grouped by current nicotine use. P300 amplitude and latency data were analyzed at electrodes Fz, Cz and Pz. Results Patients and controls did not differ with regard to P300 amplitudes and latencies when whole groups were compared. Subgroup analyses revealed significantly reduced P300 amplitudes in controls with nicotine use when compared to those without. P300 amplitudes of methadone-substituted opiate addicts were in between the two control groups and did not differ with regard to additional non-opioid use. Controls with nicotine had lower P300 amplitudes when compared to patients with concomitant non-opioid drugs. No P300 latency effects were found. Conclusion Attention-dependent target processing as indexed by the P300 component amplitudes and latencies is not reduced in methadone-substituted opiate addicts when compared to controls. The effect of nicotine on P300 amplitudes in healthy subjects exceeds the effects of long-term opioid addiction under methadone substitution.
Camille F. Chavan
Inhibitory control refers to the ability to suppress planned or ongoing cognitive or motor processes. Electrophysiological indices of inhibitory control failure have been found to manifest even before the presentation of the stimuli triggering the inhibition, suggesting that pre-stimulus brain-states modulate inhibition performance. However, previous electrophysiological investigations on the state-dependency of inhibitory control were based on averaged event-related potentials, a method eliminating the variability in the ongoing brain activity not time-locked to the event of interest. These studies thus left unresolved whether spontaneous variations in the brain-state immediately preceding unpredictable inhibition-triggering stimuli also influence inhibitory control performance. To address this question, we applied single-trial EEG topographic analyses on the time interval immediately preceding NoGo stimuli in conditions where the responses to NoGo trials were correctly inhibited (correct rejections) vs. committed (false alarms) during an auditory spatial Go/NoGo task. We found a specific configuration of the EEG voltage field manifesting more frequently before correctly inhibited responses to NoGo stimuli than before false alarms. There was no evidence for an EEG topography occurring more frequently before false alarms than before correct rejections. The visualization of distributed electrical source estimations of the EEG topography preceding successful response inhibition suggested that it resulted from the activity of a right fronto-parietal brain network. Our results suggest that the fluctuations in the ongoing brain activity immediately preceding stimulus presentation contribute to the behavioral outcomes during an inhibitory control task. Our results further suggest that the state-dependency of sensory-cognitive processing might not only concern perceptual processes, but also high-order, top-down inhibitory control mechanisms.
Mitchell, Teresa V.; Morey, Rajendra A.; Inan, Seniha; Belger, Aysenil
Activity within fronto-striato-temporal regions during processing of unattended auditory deviant tones and an auditory target detection task was investigated using event-related functional magnetic resonance imaging. Activation within the middle frontal gyrus, inferior frontal gyrus, anterior cingulate gyrus, superior temporal gyrus, thalamus, and basal ganglia were analyzed for differences in activity patterns between the two stimulus conditions. Unattended deviant tones elicited robust acti...
Stretch, Roger; Skinner, Nicholas
The introduction of a warning signal that preceded a scheduled shock modified the temporal distribution of free-operant avoidance responses. With response-shock and shock-shock intervals held constant, response rates increased only slightly when the response-signal interval was reduced. The result is consistent with Sidman's (1955) findings under different conditions, but at variance with Ulrich, Holz, and Azrin's (1964) findings under similar conditions. Methylphenidate in graded doses increased response rates, modifying frequency distributions of interresponse times. Drug treatment may have disrupted a “temporal discrimination” formed within the signal-shock interval. More simply, methylphenidate influenced response rates by increasing short response latencies after signal onset; this effect was more prominent than the drug's tendency to increase the frequency of pre-signal responses. When signal-onset preceded shock by 2 sec, individual differences in performance were marked; methylphenidate suppressed responding in one rat as a function of increasing dose levels to a greater degree than in a second animal, but both subjects received more shocks than under control conditions. PMID:6050059
The answer to the question of how the brain incorporates sensory feedback and links it with motor function to achieve goal-directed movement during vocalization remains unclear. We investigated the mechanisms of voice pitch motor control by examining the spectro-temporal dynamics of EEG signals when non-musicians (NM), relative pitch (RP), and absolute pitch (AP) musicians maintained vocalizations of a vowel sound and received randomized ±100 cents pitch-shift stimuli in their auditory feedback. We identified a phase-synchronized (evoked) fronto-central activation within the theta band (5-8 Hz) that temporally overlapped with compensatory vocal responses to pitch-shifted auditory feedback and was significantly stronger in RP and AP musicians compared with non-musicians. A second component involved a non-phase-synchronized (induced) frontal activation within the delta band (1-4 Hz) that emerged at approximately 1 second after the stimulus onset. The delta activation was significantly stronger in the NM compared with RP and AP groups and correlated with the pitch rebound error (PRE), indicating the degree to which subjects failed to re-adjust their voice pitch to baseline after the stimulus offset. We propose that the evoked theta is a neurophysiological marker of enhanced pitch processing in musicians and reflects mechanisms by which humans incorporate auditory feedback to control their voice pitch. We also suggest that the delta activation reflects adaptive neural processes by which vocal production errors are monitored and used to update the state of sensory-motor networks for driving subsequent vocal behaviors. This notion is corroborated by our findings showing that larger PREs were associated with greater delta band activity in the NM compared with RP and AP groups. These findings provide new insights into the neural mechanisms of auditory feedback processing for vocal pitch motor control.
Luke, Steven G.; Nuthmann, Antje; Henderson, John M.
The present study used the stimulus onset delay paradigm to investigate eye movement control in reading and in scene viewing in a within-participants design. Short onset delays (0, 25, 50, 200, and 350 ms) were chosen to simulate the type of natural processing difficulty encountered in reading and scene viewing. Fixation duration increased…
Palmer, David C.
The task of extending Skinner's (1957) interpretation of verbal behavior includes accounting for the moment-to-moment changes in stimulus control as one speaks. A consideration of the behavior of the reader reminds us of the continuous evocative effect of verbal stimuli on readers, listeners, and speakers. Collateral discriminative responses to…
Eikeseth, Svein; Smith, Dean P.
A common characteristic of the language deficits experienced by children with autism (and other developmental disorders) is their failure to acquire a complex intraverbal repertoire. The difficulties with learning intraverbal behaviors may, in part, be related to the fact that the stimulus control for such behaviors usually involves highly complex…
Stoppel, Christian Michael; Boehler, Carsten Nicolas; Strumpf, Hendrik; Krebs, Ruth Marie; Heinze, Hans-Jochen; Hopf, Jens-Max; Schoenfeld, Mircea Ariel
Efficient interaction with the sensory environment requires the rapid reallocation of attentional resources between spatial locations, perceptual features, and objects. It is still a matter of debate whether one single domain-general network or multiple independent domain-specific networks mediate control during shifts of attention across features, locations, and objects. Here, we employed functional magnetic resonance imaging to directly compare the neural mechanisms controlling attention during voluntary and stimulus-driven shifts across objects and locations. Subjects either maintained or shifted their attention, voluntarily or involuntarily, to objects located at the same or at a different visual location. Our data demonstrate shift-related activity in multiple frontoparietal, extrastriate visual, and default-mode network regions, several of which were commonly recruited by voluntary and stimulus-driven shifts between objects and locations. However, our results also revealed object- and location-selective activations, which, moreover, differed substantially between voluntary and stimulus-driven attention. These results suggest that voluntary and stimulus-driven shifts between objects and locations recruit partially overlapping, but also separable, cortical regions, implicating the parallel existence of domain-independent and domain-specific reconfiguration signals that initiate attention shifts depending on particular task demands.
Hill, N Jeremy; Moinuddin, Aisha; Häuser, Ann-Katrin; Kienzle, Stephan; Schalk, Gerwin
Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.
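The two stream-timing designs compared above reduce to a simple difference in onset scheduling: fixed-phase (FP) streams share one period and are offset by half of it, while drifting-phase (DP) streams use unequal periods so their relative phase drifts across the trial. A minimal sketch, with placeholder period values rather than the study's actual stimulus parameters:

```python
# Hedged sketch of the FP and DP stream-timing designs described above.
# Period values passed in are illustrative, not the study's parameters.

def fixed_phase(period, n):
    """Onset times (s) for two streams with equal period and opposite
    phase: the right stream is shifted by exactly half a period."""
    left = [i * period for i in range(n)]
    right = [i * period + period / 2 for i in range(n)]
    return left, right

def drifting_phase(period_left, period_right, n):
    """Onset times (s) for two streams with unequal periods, so the
    offset between paired onsets drifts over the trial."""
    left = [i * period_left for i in range(n)]
    right = [i * period_right for i in range(n)]
    return left, right
```

In the FP design the left-right offset is constant, whereas in the DP design it grows (or shrinks) from one pair of onsets to the next, which is the defining difference between the two variants.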
Chen, Xingyu; Qu, Xingda
The main purpose of this study was to examine the effects of affective auditory stimuli on balance control during static stance. Twelve female and 12 male participants were recruited. Each participant completed four upright standing trials: three auditory stimuli trials and one baseline trial (i.e., no auditory stimuli). The three auditory stimuli trials corresponded to pleasant, neutral, and unpleasant sound conditions. Center of pressure (COP) measures were used to quantify balance control performance. Unpleasant auditory stimuli were associated with larger COP amplitude in the anterior-posterior (AP) direction compared to the other test conditions. There were no significant interaction effects between auditory stimuli and gender. These findings suggest that specific characteristics of auditory stimuli are important for balance control, and that the effects of auditory stimuli on balance control depend on their affective components. Practitioner Summary: Findings from this study can aid in better understanding the relationship between auditory stimuli and balance control. In particular, unpleasant auditory stimuli were found to result in poorer balance control and higher fall risk. Therefore, to prevent falls, interventions should be developed to reduce exposure to unpleasant sound.
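COP traces are typically reduced to a handful of scalar sway measures before statistical comparison. As a hedged illustration only (the abstract does not detail which COP measures were computed), two common summaries of a one-directional COP trace can be sketched as:

```python
# Hedged sketch: two common center-of-pressure (COP) sway measures of the
# general kind referred to above. Input is a sampled COP displacement
# trace in one direction (e.g., anterior-posterior), in cm.

def peak_to_peak_amplitude(cop):
    """Range of the COP trace; larger values indicate more sway."""
    return max(cop) - min(cop)

def mean_velocity(cop, sample_rate_hz):
    """Average absolute COP velocity (cm/s): total path length travelled
    by the COP divided by the trial duration."""
    total_path = sum(abs(b - a) for a, b in zip(cop, cop[1:]))
    duration_s = (len(cop) - 1) / sample_rate_hz
    return total_path / duration_s
```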
Lee, M D
Two experiments are presented that serve as a framework for exploring auditory information processing. The framework is referred to as polychotic listening or auditory search, and it requires a listener to scan multiple simultaneous auditory streams for the appearance of a target word (the name of a letter such as A or M). Participants' ability to scan between two and six simultaneous auditory streams of letter and digit names for the name of a target letter was examined using six loudspeakers. The main independent variable was auditory load, or the number of active audio streams on a given trial. The primary dependent variables were target localization accuracy and reaction time. Results showed that as load increased, performance decreased. The performance decrease was evident in reaction time, accuracy, and sensitivity measures. The second study required participants to practice the same task for 10 sessions, for a total of 1800 trials. Results indicated that even with extensive practice, performance was still affected by auditory load. The present results are compared with findings in the visual search literature. The implications for the use of multiple auditory displays are discussed. Potential applications include cockpit and automobile warning displays, virtual reality systems, and training systems.
Pan, Jeng-Shyang; Lo, Chi-Chun; Tsai, Shang-Ho; Lin, Bor-Shyh
The design of a novel non-contact multimedia controller is proposed in this study. Multimedia controllers are commonly used by patients and nursing assistants in hospitals. Conventional multimedia controllers usually require manual operation or other physical movements, which makes them difficult for disabled patients to operate on their own; such patients may depend entirely on others. Unlike other multimedia controllers, the proposed system introduces the concept of controlling multimedia via visual stimuli, without manual operation. Disabled patients can operate the proposed multimedia system simply by focusing on the control icons of a visual stimulus device, for which a commercial tablet is used. Moreover, a wearable and wireless electroencephalogram (EEG) acquisition device was designed and implemented to easily monitor the user's EEG signals in daily life. Finally, the proposed system was validated. The experimental results show that the proposed system can effectively measure and extract the EEG features related to visual stimuli, and that its information transfer rate is good. The proposed non-contact multimedia controller therefore provides a promising prototype for a novel multimedia control scheme. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Liu, Ying; Feng, Jiang; Metzner, Walter
Auditory feedback from the animal's own voice is essential during bat echolocation: to optimize signal detection, bats continuously adjust various call parameters in response to changing echo signals. Auditory feedback seems also necessary for controlling many bat communication calls, although it remains unclear how auditory feedback control differs in echolocation and communication. We tackled this question by analyzing echolocation and communication in greater horseshoe bats, whose echoloca...
Hu, Bing; Guo, Yu; Zou, Xiaoqiang; Dong, Jing; Pan, Long; Yu, Min; Yang, Zhejia; Zhou, Chaowei; Cheng, Zhang; Tang, Wanyue; Sun, Haochen
Based on a classical model of the basal ganglia thalamocortical network, we applied a deep brain stimulation voltage to the subthalamic nucleus to study the control mechanism of absence epilepsy seizures. We found that seizures can be well controlled by tuning the period and the duration of the current stimulation within suitable ranges, a very interesting bidirectional periodic adjustment phenomenon. These parameters are easily regulated in clinical practice; therefore, the results obtained in this paper may further help us to understand the treatment mechanism of epileptic seizures.
Auditory reafferences are real-time auditory products created by a person’s own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with nonartificial auditory cues. Our results support the existing theoretical understanding of action–perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.
Koch, Iring; Lawo, Vera; Fels, Janina; Vorlander, Michael
Using a novel variant of dichotic selective listening, we examined the control of auditory selective attention. In our task, subjects had to respond selectively to one of two simultaneously presented auditory stimuli (number words), always spoken by a female and a male speaker, by performing a numerical size categorization. The gender of the…
Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Perilli, Viviana; Campodonico, Francesca; Marchiani, Paola; Lang, Russell
Technology-aided programs have been reported to help persons with disabilities develop adaptive responding and control problem behavior/posture. This study assessed one such program in which choice of stimulus events was used as adaptive responding for three adults with multiple disabilities. A computer system presented the participants with stimulus samples. For each sample, they could perform a choice response (gaining access to the related stimulus, whose duration they could extend) or abstain from responding (making the system proceed to the next sample). Once choice responding had strengthened, the program also targeted the participants' problem posture (i.e., head and trunk forward bending). The stimulus exposure gained with a choice response was interrupted if the problem posture occurred. All three participants successfully (a) managed choice responses and access to preferred stimuli and (b) gained postural control (i.e., reducing the problem posture to very low levels). The practical implications of these results are discussed. © The Author(s) 2015.
Merrick, Christina; Farnia, Melika; Jantz, Tiffany K; Gazzaley, Adam; Morsella, Ezequiel
The stream of consciousness often appears whimsical and free from external control. Recent advances, however, reveal that the stream is more susceptible to external influence than previously assumed. Thoughts can be triggered by external stimuli in a manner that is involuntary, systematic, and nontrivial. Based on these advances, our experimental manipulation systematically triggered a sequence of, not one, but two involuntary thoughts. Participants were instructed to (a) not subvocalize the name of visual objects and (b) not count the number of letters comprising object names. On a substantial proportion of trials, participants experienced both kinds of involuntary thoughts. Each thought arose from distinct, high-level processes (naming versus counting). This is the first demonstration of the induction of two involuntary thoughts into the stream of consciousness. Stimulus word length influenced dependent measures systematically. Our findings are relevant to many fields associated with the study of consciousness, including attention, imagery, and action control. Copyright © 2014 Elsevier Inc. All rights reserved.
Hu, Bing; Wang, Qingyun
Epilepsy is a common disease of the nervous system, and the control of seizures is very important for its treatment. Drug treatment is the main strategy for controlling epilepsy. However, in about 10–15 percent of patients, seizures cannot be effectively controlled by drugs. Alternatively, deep brain stimulation (DBS) is a feasible method to control serious seizures. However, theoretical explorations of DBS are still lacking and need to be further developed. Here, we explore the control of absence seizures by introducing DBS into a basal ganglia thalamocortical network model. In particular, we apply DBS to the substantia nigra pars reticulata (SNr) and to the cortex to explore its effects on controlling absence seizures. We find that, when DBS is applied to the SNr, absence seizures can be well controlled within suitable parameter ranges by tuning the period and duration of the current stimulation. When DBS is applied to the cortex, we find that, for the present parameter ranges, only adjusting the duration of the current stimulation is an effective control method for absence seizures. These results can provide a better understanding of the mechanism of DBS in medical treatment.
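The period-and-duration tuning described above amounts to a periodic square pulse of stimulation current. A minimal sketch of such a waveform; the amplitude and timing values are illustrative placeholders, not the model's parameters:

```python
# Hedged sketch: a periodic square-pulse stimulation current of the kind
# whose period and duration are tuned in the DBS model above. Amplitude
# and timing values are illustrative, not taken from the paper.

def dbs_current(t, amplitude=1.0, period=10.0, duration=2.0):
    """Return the stimulation current at time t (ms): a pulse of the
    given amplitude is on for `duration` ms at the start of each period,
    and off for the remainder of the period."""
    return amplitude if (t % period) < duration else 0.0
```

Sweeping `period` and `duration` over a grid while monitoring the model's seizure indicator is the kind of bidirectional adjustment the abstract refers to.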
Schüz, Benjamin; Bower, Jodie; Ferguson, Stuart G
Dietary behaviours are substantially influenced by environmental and internal stimuli, such as mood, social situation, and food availability. However, little is known about the role of stimulus control for eating in non-clinical populations, and no studies so far have examined eating and drinking behaviour simultaneously. Fifty-three individuals from the general population took part in an intensive longitudinal study with repeated, real-time assessments of eating and drinking using Ecological Momentary Assessment. Eating was assessed as main meals and snacks; drink assessments were separated into alcoholic and non-alcoholic drinks. Situational and internal stimuli were assessed during both eating and drinking events and during randomly selected non-eating occasions. Hierarchical multinomial logistic random effects models were used to analyse the data, comparing dietary events to non-eating occasions. Several situational and affective antecedents of dietary behaviours were identified. Meals were significantly associated with having food available and observing others eat. Snacking was associated with negative affect, having food available, and observing others eat. Engaging in activities and being with others decreased the likelihood of eating behaviours. Non-alcoholic drinks were associated with observing others eat, and with fewer activities and less company. Alcoholic drinks were associated with less negative affect and arousal, and with observing others eat. Results support the role of stimulus control in dietary behaviours, with support for both internal and external stimuli, in particular availability and social cues. The findings for negative affect support the idea of comfort eating, and results point to the formation of eating habits via cue-behaviour associations. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sample, Camille H.; Martin, Ashley A.; Jones, Sabrina; Hargrave, Sara L.; Davidson, Terry L.
In western and westernized societies, large portions of the population live in what are considered to be “obesogenic” environments. Among other things, obesogenic environments are characterized by a high prevalence of external cues that are associated with highly palatable, energy-dense foods. One prominent hypothesis suggests that these external cues become such powerful conditioned elicitors of appetitive and eating behavior that they overwhelm the internal, physiological mechanisms that serve to maintain energy balance. The present research investigated a learning mechanism that may underlie this loss of internal relative to external control. In Experiment 1, rats were provided with both auditory cues (external stimuli) and varying levels of food deprivation (internal stimuli) that they could use to solve a simple discrimination task. Despite having access to clearly discriminable external cues, we found that the deprivation cues gained substantial discriminative control over conditioned responding. Experiment 2 found that, compared to standard chow, maintenance on a “western-style” diet high in saturated fat and sugar weakened discriminative control by food deprivation cues, but did not impair learning when external cues were also trained as relevant discriminative signals for sucrose. Thus, eating a western-style diet contributed to a loss of internal control over appetitive behavior relative to external cues. We discuss how this relative loss of control by food deprivation signals may result from interference with hippocampal-dependent learning and memory processes, forming the basis of a vicious cycle of excessive intake, body weight gain, and progressive cognitive decline that may begin very early in life. PMID:26002280
Thomas, Roha M; Kaipa, Ramesh; Ganesh, Attigodu Chandrashekara
The current study aimed to compare the auditory interference control of participants with Learning Disability (LD) to that of a control group on two versions of an auditory Stroop task. A group of eight children with LD (clinical group) and another group of eight typically developing children (control group) served as participants. All participants completed a semantic and a gender identification-based auditory Stroop task. Each participant was presented with eight different words (10 times each) that were pre-recorded by a male and a female speaker. The semantic task required the participants to ignore the speaker's gender and attend to the meaning of the word, and vice versa for the gender identification task. The participants' performance accuracy and reaction time (RT) were measured on both tasks. Control group participants significantly outperformed the clinical group on both tasks with regard to performance accuracy as well as RT. The results suggest that children with LD have problems suppressing irrelevant auditory stimuli and focusing on relevant auditory stimuli. This can be attributed to auditory processing problems in these children. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Barras, Caroline; Kerzel, Dirk
Some points of criticism against the idea that attentional selection is controlled by bottom-up processing were dispelled by the attentional window account. The attentional window account claims that saliency computations during visual search are only performed for stimuli inside the attentional window. Therefore, a small attentional window may avoid attentional capture by salient distractors because it is likely that the salient distractor is located outside the window. In contrast, a large attentional window increases the chances of attentional capture by a salient distractor. Large and small attentional windows have been associated with efficient (parallel) and inefficient (serial) search, respectively. We compared the effect of a salient color singleton on visual search for a shape singleton during efficient and inefficient search. To vary search efficiency, the nontarget shapes were either similar or dissimilar with respect to the shape singleton. We found that interference from the color singleton was larger with inefficient than efficient search, which contradicts the attentional window account. While inconsistent with the attentional window account, our results are predicted by computational models of visual search. Because of target-nontarget similarity, the target was less salient with inefficient than efficient search. Consequently, the relative saliency of the color distractor was higher with inefficient than with efficient search. Accordingly, stronger attentional capture resulted. Overall, the present results show that bottom-up control by stimulus saliency is stronger when search is difficult, which is inconsistent with the attentional window account.
McGowan, Sarah Kate; Behar, Evelyn
For individuals with generalized anxiety disorder, worry becomes associated with numerous aspects of life (e.g., time of day, specific stimuli, environmental cues) and is thus under poor discriminative stimulus control (SC). In addition, excessive worry is associated with anxiety, depressed mood, and sleep difficulties. This investigation sought to provide preliminary evidence for the efficacy of SC procedures in reducing anxiety-, mood-, and sleep-related symptoms. A total of 53 participants with high trait worry were randomly assigned to receive 2 weeks of either SC training (consisting of a 30-min time- and place-restricted worry period each day) or a control condition called focused worry (FW; consisting of instructions to not avoid naturally occurring worry so that worry and anxiety would not paradoxically increase). At post-training, SC was superior to FW in producing reductions on measures of worry, anxiety, negative affect, and insomnia, but not on measures of depression or positive affect. Moreover, SC was superior to FW in producing clinically significant change on measures of worry and anxiety. Results provide preliminary support for the use of SC training techniques in larger treatment packages for individuals who experience high levels of worry.
Behroozmand, Roozbeh; Liu, Hanjun; Larson, Charles R
The neural responses to sensory consequences of a self-produced motor act are suppressed compared with those in response to a similar but externally generated stimulus. Previous studies in the somatosensory and auditory systems have shown that the motor-induced suppression of the sensory mechanisms is sensitive to delays between the motor act and the onset of the stimulus. The present study investigated time-dependent neural processing of auditory feedback in response to self-produced vocalizations. ERPs were recorded in response to normal and pitch-shifted voice auditory feedback during active vocalization and passive listening to the playback of the same vocalizations. The pitch-shifted stimulus was delivered to the subjects' auditory feedback after a randomly chosen time delay between the vocal onset and the stimulus presentation. Results showed that the neural responses to delayed feedback perturbations were significantly larger than those in response to the pitch-shifted stimulus occurring at vocal onset. Active vocalization was shown to enhance neural responsiveness to feedback alterations only for nonzero delays compared with passive listening to the playback. These findings indicated that the neural mechanisms of auditory feedback processing are sensitive to timing between the vocal motor commands and the incoming auditory feedback. Time-dependent neural processing of auditory feedback may be an important feature of the audio-vocal integration system that helps to improve the feedback-based monitoring and control of voice structure through vocal error detection and correction.
Van Wouwe, N.C.; van den Wildenberg, W.P.M.; Ridderinkhof, K. R.; Claassen, D.O.; Neimat, J.S.; Wylie, S.A.
The inhibition of impulsive response tendencies that conflict with goal-directed action is a key component of executive control. An emerging literature reveals that the proficiency of inhibitory control is modulated by expected or unexpected opportunities to earn reward or avoid punishment. However, less is known about how inhibitory control is impacted by the processing of task-irrelevant stimulus information that has been associated previously with particular outcomes (reward or punishment) or response tendencies (action or inaction). We hypothesized that stimulus features associated with particular action-valence tendencies, even though task irrelevant, would modulate inhibitory control processes. Participants first learned associations between stimulus features (color), actions, and outcomes using an action-valence learning task that orthogonalizes action (action, inaction) and valence (reward, punishment). Next, these stimulus features were embedded in a Simon task as a task-irrelevant stimulus attribute. We analyzed the effects of action-valence associations on the Simon task by means of distributional analysis to reveal the temporal dynamics. Learning patterns replicated previously reported biases; inherent, Pavlovian-like mappings (action-reward, inaction-punishment avoidance) were easier to learn than mappings conflicting with these biases (action-punishment avoidance, inaction-reward). More importantly, results from two experiments demonstrated that the easier to learn, Pavlovian-like action-valence associations interfered with the proficiency of inhibiting impulsive actions in the Simon task. Processing conflicting associations led to more proficient inhibitory control of impulsive actions, similar to Simon trials without any association. Fast impulsive errors were reduced for trials associated with punishment in comparison to reward trials or trials without any valence association. These findings provide insight into the temporal dynamics of task
Hofstadter-Duke, Kristi L; Daly, Edward J
This study investigated a method for conducting experimental analyses of academic responding. In the experimental analyses, academic responding (math computation), rather than problem behavior, was reinforced across conditions. Two separate experimental analyses (one with fluent math computation problems and one with non-fluent math computation problems) were conducted with three elementary school children using identical contingencies while math computation rate was measured. Results indicate that the experimental analysis with non-fluent problems produced undifferentiated responding across participants; however, differentiated responding was achieved for all participants in the experimental analysis with fluent problems. A subsequent comparison of the single-most effective condition from the experimental analyses replicated the findings with novel computation problems. Results are discussed in terms of the critical role of stimulus control in identifying controlling consequences for academic deficits, and recommendations for future research refining and extending experimental analysis to academic responding are made. © The Author(s) 2014.
Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi
Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
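The equalization step mentioned above rests on Stevens' power law, which relates perceived magnitude to physical stimulus magnitude as ψ = k·S^a, with a modality-specific exponent a. A minimal sketch of how visual and auditory feedback magnitudes could be matched under that law; the exponents below are illustrative textbook values, not those used in the study:

```python
# Hedged sketch: matching the perceived magnitude of visual and auditory
# feedback via Stevens' power law (psi = k * S**a). Exponents are
# illustrative textbook values, not the study's.

A_VISUAL = 0.7     # assumed exponent for perceived area of a circle
A_AUDITORY = 0.67  # assumed exponent for loudness vs. sound pressure

def physical_for_percept(psi, exponent, k=1.0):
    """Invert psi = k * S**a to get the physical magnitude S that
    produces the perceived magnitude psi."""
    return (psi / k) ** (1.0 / exponent)

def equalized_stimuli(error_distance):
    """Map a normalized COP-target error (0..1) to a circle area and a
    sound pressure that should *feel* equally strong in both modalities."""
    psi = error_distance  # target perceived magnitude, same for both
    visual_area = physical_for_percept(psi, A_VISUAL)
    sound_pressure = physical_for_percept(psi, A_AUDITORY)
    return visual_area, sound_pressure
```

Because both physical magnitudes are derived from the same target ψ, any difference in how strongly the feedback is *felt* across modalities is, to the extent the assumed exponents hold, removed from the comparison.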
Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R; Mangalathu, Jain; Desai, Anjali
Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210 ms, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and presence of irrelevant information revealed stronger activations primarily in frontal and parietal areas due to presence of irrelevant information in the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention. Copyright © 2014 Elsevier Ltd. All rights reserved.
Beurms, Sarah; Traets, Frits; De Houwer, Jan; Beckers, Tom
Symmetry refers to the observation that subjects will derive B-A (e.g., in the presence of B, select A) after being trained on A-B (e.g., in the presence of A, select B). Whereas symmetry is readily shown in humans, it has been difficult to demonstrate in nonhuman animals. This difficulty, at least in pigeons, may result from responding to specific stimulus properties that change when sample and comparison stimuli switch roles between training and testing. In three experiments with humans, we investigated to what extent human responding is influenced by the temporal location of stimuli using a successive matching-to-sample procedure. Our results indicate that temporal location does not spontaneously control responding in humans, although it does in pigeons. Therefore, the number of functional stimuli that humans respond to in this procedure may be half of the number of functional stimuli that the pigeons respond to. In a fourth experiment, we tested this assumption by doubling the number of functional stimuli controlling responding in human participants in an attempt to make the test more comparable to symmetry tests with pigeons. Here, we found that humans responded according to indirect class formation in the same manner as pigeons do. In sum, our results indicate that functional symmetry is readily observed in humans, even in cases where the temporal features of the stimuli prevent functional symmetry in pigeons. We argue that this difference in behavior between the two species does not necessarily reflect a difference in the capacity to show functional symmetry, but could also reflect a difference in the functional stimuli each species responds to. © 2017 Society for the Experimental Analysis of Behavior.
Alho, Kimmo; Rinne, Teemu; Herron, Timothy J; Woods, David L
We meta-analyzed 115 functional magnetic resonance imaging (fMRI) studies reporting auditory-cortex (AC) coordinates for activations related to active and passive processing of pitch and spatial location of non-speech sounds, as well as to the active and passive speech and voice processing. We aimed at revealing any systematic differences between AC surface locations of these activations by statistically analyzing the activation loci using the open-source Matlab toolbox VAMCA (Visualization and Meta-analysis on Cortical Anatomy). AC activations associated with pitch processing (e.g., active or passive listening to tones with a varying vs. fixed pitch) had median loci in the middle superior temporal gyrus (STG), lateral to Heschl's gyrus. However, median loci of activations due to the processing of infrequent pitch changes in a tone stream were centered in the STG or planum temporale (PT), significantly posterior to the median loci for other types of pitch processing. Median loci of attention-related modulations due to focused attention to pitch (e.g., attending selectively to low or high tones delivered in concurrent sequences) were, in turn, centered in the STG or superior temporal sulcus (STS), posterior to median loci for passive pitch processing. Activations due to spatial processing were centered in the posterior STG or PT, significantly posterior to pitch processing loci (processing of infrequent pitch changes excluded). In the right-hemisphere AC, the median locus of spatial attention-related modulations was in the STS, significantly inferior to the median locus for passive spatial processing. Activations associated with speech processing and those associated with voice processing had indistinguishable median loci at the border of mid-STG and mid-STS. Median loci of attention-related modulations due to attention to speech were in the same mid-STG/STS region. Thus, while attention to the pitch or location of non-speech sounds seems to recruit AC areas less
Lie, Celia; Alsop, Brent
The present experiment examined the effects of varying stimulus disparity and relative punisher frequencies on signal detection by humans. Participants were placed into one of two groups. Group 3 participants were presented with 1:3 and 3:1 punisher frequency ratios, while Group 11 participants were presented with 1:11 and 11:1 punisher frequency…
Scherbaum, Stefan; Frisch, Simon; Dshemuchadse, Maja
Selective attention and its adaptation by cognitive control processes are considered a core aspect of goal-directed action. Often, selective attention is studied behaviorally with conflict tasks, but an emerging neuroscientific method for the study of selective attention is EEG frequency tagging. It applies different flicker frequencies to the stimuli of interest eliciting steady state visual evoked potentials (SSVEPs) in the EEG. These oscillating SSVEPs in the EEG allow tracing the allocation of selective attention to each tagged stimulus continuously over time. The present behavioral investigation points to an important caveat of using tagging frequencies: The flicker of stimuli not only produces a useful neuroscientific marker of selective attention, but interacts with the adaptation of selective attention itself. Our results indicate that RT patterns of adaptation after response conflict (so-called conflict adaptation) are reversed when flicker frequencies switch at once. However, this effect of frequency switches is specific to the adaptation by conflict-driven control processes, since we find no effects of frequency switches on inhibitory control processes after no-go trials. We discuss the theoretical implications of this finding and propose precautions that should be taken into account when studying conflict adaptation using frequency tagging in order to control for the described confounds. Copyright © 2015 Elsevier B.V. All rights reserved.
Schwent, V. L.; Hillyard, S. A.; Galambos, R.
The effects of varying the rate of delivery of dichotic tone pip stimuli on selective attention, measured by evoked-potential amplitudes and signal detectability scores, were studied. The subjects attended to one channel (ear) of tones, ignored the other, and pressed a button whenever occasional targets (tones of a slightly higher pitch) were detected in the attended ear. Under separate conditions, randomized interstimulus intervals were short, medium, and long. Another study compared the effects of attention on the N1 component of the auditory evoked potential for tone pips presented alone and when white noise was added to make the tones barely above detectability threshold in a three-channel listening task. Major conclusions are that (1) N1 is enlarged to stimuli in an attended channel only in the short interstimulus interval condition (averaging 350 msec), (2) N1 and P3 are related to different modes of selective attention, and (3) attention selectivity in the multichannel listening task is greater when tones are faint and/or difficult to detect.
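Signal detectability scores of the kind reported above are conventionally summarized as d′, the difference between the z-transformed hit and false-alarm rates. A minimal sketch of that computation (illustrative only; this is not the scoring code used in the study):

```python
# Sketch: summarizing detection performance as d' (sensitivity), computed
# as the difference between the z-scored hit and false-alarm rates.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# A listener with 84% hits and 16% false alarms scores a d' of roughly 2;
# chance performance (equal hit and false-alarm rates) gives d' = 0.
score = d_prime(0.84, 0.16)
```

Rates of exactly 0 or 1 must be corrected (e.g., clipped) before the z-transform, since the inverse CDF diverges there.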
Winter, J. C.; Amorosi, D. J.; Rice, Kenner C.; Cheng, Kejun; Yu, Ai-Ming
In previous studies we have observed that, in comparison with wild type mice, Tg-CYP2D6 mice have increased serum levels of bufotenine [5-hydroxy-N,N-dimethyltryptamine] following the administration of 5-MeO-DMT. Furthermore, following the injection of 5-MeO-DMT, harmaline was observed to increase serum levels of bufotenine and 5-MeO-DMT in both wild-type and Tg-CYP2D6 mice. In the present investigation, 5-MeO-DMT-induced stimulus control was established in wild-type and Tg-CYP2D6 mice. The two groups did not differ in their rate of acquisition of stimulus control. When tested with bufotenine, no 5-MeO-DMT-appropriate responding was observed. In contrast, the more lipid soluble analog of bufotenine, acetylbufotenine, was followed by an intermediate level of responding. The combination of harmaline with 5-MeO-DMT yielded a statistically significant increase in 5-MeO-DMT-appropriate responding in Tg-CYP2D6 mice; a comparable increase occurred in wild-type mice. In addition, it was noted that harmaline alone was followed by a significant degree of 5-MeO-DMT-appropriate responding in Tg-CYP2D6 mice. It is concluded that wild-type and Tg-CYPD2D6 mice do not differ in terms of acquisition of stimulus control by 5-MeO-DMT or in their response to bufotenine and acetylbufotenine. In both groups of mice, harmaline was found to enhance the stimulus effects of 5-MeO-DMT. PMID:21624387
Moench, Tobias; Hollmann, Maurice; Bernarding, Johannes
The real-time analysis of brain activation using functional MRI data offers a wide range of new experiments, such as investigating self-regulation or learning strategies. However, besides special data acquisition and real-time data analysing techniques, such examinations require dynamic and adaptive stimulus paradigms and self-optimising MRI sequences. This paper presents an approach that enables the unified handling of parameters influencing the different software systems involved in the acquisition and analysing process. By developing a custom-made Experiment Description Language (EDL), this concept is used for a fast and flexible software environment which treats aspects like extraction and analysis of activation as well as the modification of the stimulus presentation. We describe how extracted real-time activation is subsequently evaluated by comparing activation patterns to previously acquired templates representing activated regions of interest for different predefined conditions. According to those results, the stimulus presentation is adapted. The results showed that the developed system in combination with EDL is able to reliably detect and evaluate activation patterns in real-time. With a processing time for data analysis of about one second, the approach is limited only by the natural time course of the hemodynamic response function of the brain activation.
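The template-comparison step described above can be illustrated as a toy classifier: correlate the current activation pattern with each stored condition template and pick the best match. All names and data below are hypothetical stand-ins; the abstract does not specify the EDL system's actual matching method:

```python
# Sketch: matching a real-time activation pattern against previously
# acquired ROI templates by Pearson correlation. Hypothetical data layout.
import math

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def classify(pattern, templates):
    """Return the condition whose template correlates best with `pattern`."""
    return max(templates, key=lambda cond: pearson(pattern, templates[cond]))

# Toy voxel vectors for two predefined conditions.
templates = {"rest": [0.1, 0.2, 0.1, 0.2], "task": [0.9, 0.1, 0.8, 0.2]}
current = [0.8, 0.2, 0.7, 0.1]
condition = classify(current, templates)
```

In a real-time loop, the stimulus presentation would then be adapted according to `condition` on each volume.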
Batista, Gervasio; Johnson, Jennifer Leigh; Dominguez, Elena; Costa-Mattioli, Mauro; Pena, Jose L
The formation of imprinted memories during a critical period is crucial for vital behaviors, including filial attachment. Yet, little is known about the underlying molecular mechanisms. Using a combination of behavior, pharmacology, in vivo surface sensing of translation (SUnSET), and DiOlistic labeling, we found that translational control by the eukaryotic translation initiation factor 2 alpha (eIF2α) bidirectionally regulates auditory but not visual imprinting and related changes in structural plasticity in chickens. Increasing phosphorylation of eIF2α (p-eIF2α) reduces translation rates and spine plasticity, and selectively impairs auditory imprinting. By contrast, inhibition of an eIF2α kinase or blocking the translational program controlled by p-eIF2α enhances auditory imprinting. Importantly, these manipulations are able to reopen the critical period. Thus, we have identified a translational control mechanism that selectively underlies auditory imprinting. Restoring translational control of eIF2α holds the promise to rejuvenate adult brain plasticity and restore learning and memory in a variety of cognitive disorders. DOI: http://dx.doi.org/10.7554/eLife.17197.001 PMID:28009255
Yu, Bo; Wang, Xunda; Ma, Lin; Li, Liang; Li, Haifeng
Cognitive control has been extensively studied from an Event-Related Potential (ERP) point of view in the visual modality using Stroop paradigms. Little work has been done with auditory Stroop paradigms, and inconsistent conclusions have been reported, especially on the conflict detection stage of cognitive control. This study investigated the early ERP components in an auditory Stroop paradigm, during which participants were asked to identify the volume of spoken words and ignore the word meanings. A series of significant ERP components were revealed that distinguished incongruent and congruent trials: two attenuated negative-polarity waves (the N1 and the N2) and three attenuated positive-polarity waves (the P1, the P2, and the P3) over the fronto-central area for the incongruent trials. These early ERP components imply that both a perceptual stage and an identification stage exist in the auditory Stroop effect. A 3-stage cognitive control model was thus proposed for a more detailed description of the human cognitive control mechanism in auditory Stroop tasks.
Liu, Ying; Feng, Jiang; Metzner, Walter
Auditory feedback from the animal's own voice is essential during bat echolocation: to optimize signal detection, bats continuously adjust various call parameters in response to changing echo signals. Auditory feedback seems also necessary for controlling many bat communication calls, although it remains unclear how auditory feedback control differs in echolocation and communication. We tackled this question by analyzing echolocation and communication in greater horseshoe bats, whose echolocation pulses are dominated by a constant frequency component that matches the frequency range they hear best. To maintain echoes within this "auditory fovea", horseshoe bats constantly adjust their echolocation call frequency depending on the frequency of the returning echo signal. This Doppler-shift compensation (DSC) behavior represents one of the most precise forms of sensory-motor feedback known. We examined the variability of echolocation pulses emitted at rest (resting frequencies, RFs) and one type of communication signal which resembles an echolocation pulse but is much shorter (short constant frequency communication calls, SCFs) and produced only during social interactions. We found that while RFs varied from day to day, corroborating earlier studies in other constant frequency bats, SCF-frequencies remained unchanged. In addition, RFs overlapped for some bats whereas SCF-frequencies were always distinctly different. This indicates that auditory feedback during echolocation changed with varying RFs but remained constant or may have been absent during emission of SCF calls for communication. This fundamentally different feedback mechanism for echolocation and communication may have enabled these bats to use SCF calls for individual recognition whereas they adjusted RF calls to accommodate the daily shifts of their auditory fovea.
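The Doppler-shift compensation behavior described above can be illustrated as a simple closed feedback loop: the bat lowers its emitted frequency until the upward-shifted echo lands back on its reference "auditory fovea" frequency. The constants below (an ~83 kHz reference, a 4 m/s flight speed, the loop gain) are illustrative values, not measurements from this study:

```python
# Sketch of Doppler-shift compensation as a proportional feedback loop.
# Parameter values are illustrative, not measured data.
C = 343.0  # speed of sound in air, m/s

def echo_frequency(call_hz, bat_speed):
    """Approximate two-way Doppler shift for a bat flying toward a target."""
    return call_hz * (C + bat_speed) / (C - bat_speed)

def compensate(reference_hz, bat_speed, gain=0.8, steps=30):
    """Iteratively adjust call frequency until the echo matches the reference."""
    call = reference_hz
    for _ in range(steps):
        error = echo_frequency(call, bat_speed) - reference_hz
        call -= gain * error  # lower the call when the echo comes back too high
    return call

call = compensate(reference_hz=83000.0, bat_speed=4.0)
echo = echo_frequency(call, 4.0)
```

The loop converges because each step shrinks the echo error by a constant factor; the emitted frequency settles below the reference so that the Doppler-raised echo sits exactly on it.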
Bolhuis, J. J.; van Kampen, H. S.
The characteristics of auditory learning in filial imprinting in precocial birds are reviewed. Numerous studies have demonstrated that the addition of an auditory stimulus improves following of a visual stimulus. This paper evaluates whether there is genuine auditory imprinting, i.e. the formation
Hoffman, H. S.; Fleshler, M.
A tone ending with electrical shock was periodically presented to pigeons while they pecked a key for food. Pairs of birds were run simultaneously under a yoked program which insured that both birds received the same number and temporal distribution of shocks. For one of the birds, shock was always initiated by a peck; for the other, shock was unavoidable. Both procedures led to reduced rates of pecking in the presence of the tone, and gradients of stimulus generalization were obtained. But the effects of response-contingent shock extinguished more rapidly than the effects of unavoidable shock. In general, birds exposed to unavoidable shock tended to respond at intermediate rates throughout the tone, whereas those exposed to response-contingent shock ceased to peck for part or all of the tone period.
Schmidt, James R; De Houwer, Jan
The Gratton (or sequential congruency) effect is the finding that conflict effects (e.g., Stroop and Eriksen flanker effects) are larger following congruent trials relative to incongruent trials. The standard account given for this is that a cognitive control mechanism detects conflict when it occurs and adapts to this conflict on the following trial. Others, however, have questioned the conflict adaptation account and suggested that sequential biases might account for the Gratton effect. In two experiments, contingency biases were removed from the task and stimulus repetitions were deleted to control for stimulus bindings. This eliminated the Gratton effect in the response times in both experiments, supporting a non-conflict explanation of the Gratton effect. A Gratton effect did persist in the errors of Experiment 1; however, this effect was not produced by the type of errors (word reading errors) that a conflict adaptation account should predict. Instead, tentative support was found for a congruency switch cost hypothesis. In all, the conflict adaptation account failed to account for any of the reported data. Implications for future work on cognitive control are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.
Piray, Payam; Zeighami, Yashar; Bahrami, Fariba; Eissa, Abeer M; Hewedi, Doaa H; Moustafa, Ahmed A
A substantial subset of Parkinson's disease (PD) patients suffers from impulse control disorders (ICDs), which are side effects of dopaminergic medication. Dopamine plays a key role in reinforcement learning processes. One class of reinforcement learning models, known as the actor-critic model, suggests that two components are involved in these reinforcement learning processes: a critic, which estimates values of stimuli and calculates prediction errors, and an actor, which estimates values of potential actions. To understand the information processing mechanism underlying impulsive behavior, we investigated stimulus and action value learning from reward and punishment in four groups of participants: on-medication PD patients with ICD, on-medication PD patients without ICD, off-medication PD patients without ICD, and healthy controls. Analysis of responses suggested that participants used an actor-critic learning strategy and computed prediction errors based on stimulus values rather than action values. Quantitative model fits also revealed that an actor-critic model of the basal ganglia with different learning rates for positive and negative prediction errors best matched the choice data. Moreover, whereas ICDs were associated with model parameters related to stimulus valuation (critic), PD was associated with parameters related to action valuation (actor). Specifically, PD patients with ICD exhibited lower learning from negative prediction errors in the critic, resulting in an underestimation of adverse consequences associated with stimuli. These findings offer a specific neurocomputational account of the nature of compulsive behaviors induced by dopaminergic drugs. Copyright © 2014 the authors 0270-6474/14/347814-11$15.00/0.
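The actor-critic scheme with separate learning rates for positive and negative prediction errors can be sketched as follows. The learning-rate values and single-stimulus task below are hypothetical, chosen only to show how a low learning rate for negative prediction errors in the critic inflates the learned stimulus value, i.e., underestimates adverse consequences:

```python
# Sketch of an actor-critic update with asymmetric learning rates for
# positive vs. negative prediction errors. Parameters are illustrative,
# not the fitted model from the study.
import random

def run_trial(stimulus, critic, actor, reward, alpha_pos, alpha_neg, beta=0.1):
    delta = reward - critic[stimulus]      # prediction error from stimulus value
    alpha = alpha_pos if delta > 0 else alpha_neg
    critic[stimulus] += alpha * delta      # critic: update stimulus value
    actor[stimulus] += beta * delta        # actor: update action propensity
    return delta

critic = {"A": 0.0}
actor = {"A": 0.0}
random.seed(0)
# Stimulus "A" pays +1 with probability 0.8, else -1 (true expected value 0.6).
# A very low alpha_neg underweights the occasional losses, so the critic's
# value estimate settles well above the true expected value.
for _ in range(200):
    r = 1.0 if random.random() < 0.8 else -1.0
    run_trial("A", critic, actor, r, alpha_pos=0.2, alpha_neg=0.02)
```

With symmetric rates the critic would converge near 0.6; the asymmetry pushes it toward 1, mirroring the reduced learning from negative prediction errors attributed to the ICD group.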
Background and Aim: Tinnitus is an unpleasant sound which can cause behavioral disorders. According to the evidence, the origin of tinnitus lies not only in the peripheral but also in the central auditory system, so evaluation of central auditory system function is necessary. In this study, auditory brainstem responses (ABRs) were compared between noise-induced tinnitus subjects and non-tinnitus controls. Materials and Methods: This cross-sectional, descriptive, and analytic study was conducted on 60 cases in two groups: 30 subjects with noise-induced tinnitus and 30 non-tinnitus controls. ABRs were recorded ipsilaterally and contralaterally, and their latencies and amplitudes were analyzed. Results: Mean interpeak latencies of III-V (p = 0.022) and I-V (p = 0.033) in the ipsilateral electrode array, and mean absolute latencies of waves IV (p = 0.015) and V (p = 0.048) in the contralateral electrode array, were significantly increased in the noise-induced tinnitus group relative to the control group. Conclusion: It can be concluded that neural transmission time in the brainstem is increased, and that there are signs of involvement of the medial nuclei of the olivary complex in addition to the lateral lemniscus.
Forlano, Paul M; Sisneros, Joseph A; Rohmann, Kevin N; Bass, Andrew H
Seasonal changes in reproductive-related vocal behavior are widespread among fishes. This review highlights recent studies of the vocal plainfin midshipman fish, Porichthys notatus, a neuroethological model system used for the past two decades to explore neural and endocrine mechanisms of vocal-acoustic social behaviors shared with tetrapods. Integrative approaches combining behavior, neurophysiology, neuropharmacology, neuroanatomy, and gene expression methodologies have taken advantage of simple, stereotyped and easily quantifiable behaviors controlled by discrete neural networks in this model system to enable discoveries such as the first demonstration of adaptive seasonal plasticity in the auditory periphery of a vertebrate as well as rapid steroid and neuropeptide effects on vocal physiology and behavior. This simple model system has now revealed cellular and molecular mechanisms underlying seasonal and steroid-driven auditory and vocal plasticity in the vertebrate brain. Copyright © 2014 Elsevier Inc. All rights reserved.
Eric O. Boyer
As eye movements are mostly automatic and overtly generated to attain visual goals, individuals have poor metacognitive knowledge of their own eye movements. We present an exploratory study on the effects of real-time continuous auditory feedback generated by eye movements. We considered both a tracking task and a production task in which smooth pursuit eye movements (SPEM) can be endogenously generated. In particular, we used a visual paradigm which makes it possible to generate and control SPEM in the absence of a moving visual target. We investigated whether real-time auditory feedback of eye movement dynamics might improve learning in both tasks, through a training protocol over 8 days. The results indicate that real-time sonification of eye movements can actually modify the oculomotor behavior and reinforce intrinsic oculomotor perception. Nevertheless, large inter-individual differences were observed, preventing us from reaching a strong conclusion on sensorimotor learning improvements.
Pallesen, Karen Johanne; Brattico, Elvira; Bailey, Christopher J; Korvenoja, Antti; Koivisto, Juha; Gjedde, Albert; Carlson, Synnöve
Musical competence may confer cognitive advantages that extend beyond processing of familiar musical sounds. Behavioural evidence indicates a general enhancement of both working memory and attention in musicians. It is possible that musicians, due to their training, are better able to maintain focus on task-relevant stimuli, a skill which is crucial to working memory. We measured the blood oxygenation-level dependent (BOLD) activation signal in musicians and non-musicians during working memory of musical sounds to determine the relation among performance, musical competence and generally enhanced cognition. All participants easily distinguished the stimuli. We tested the hypothesis that musicians nonetheless would perform better, and that differential brain activity would mainly be present in cortical areas involved in cognitive control such as the lateral prefrontal cortex. The musicians performed better as reflected in reaction times and error rates. Musicians also had larger BOLD responses than non-musicians in neuronal networks that sustain attention and cognitive control, including regions of the lateral prefrontal cortex, lateral parietal cortex, insula, and putamen in the right hemisphere, and bilaterally in the posterior dorsal prefrontal cortex and anterior cingulate gyrus. The relationship between the task performance and the magnitude of the BOLD response was more positive in musicians than in non-musicians, particularly during the most difficult working memory task. The results confirm previous findings that neural activity increases during enhanced working memory performance. The results also suggest that superior working memory task performance in musicians relies on an enhanced ability to exert sustained cognitive control. This cognitive benefit in musicians may be a consequence of focused musical training.
Overgaard, Morten; Lindeløv, Jonas Kristoffer; Svejstrup, Stinna
to this knowledge (whether the stimulus was visual, auditory, or something else). We test this hypothesis in healthy subjects, asking them to report whether a masked stimulus was presented auditorily or visually, what the stimulus was, and how clearly they experienced the stimulus using the Perceptual Awareness...
Passow, Susanne; Westerhausen, René; Hugdahl, Kenneth; Wartenburger, Isabell; Heekeren, Hauke R; Lindenberger, Ulman; Li, Shu-Chen
In addition to sensory decline, age-related losses in auditory perception also reflect impairments in attentional modulation of perceptual saliency. Using an attention and intensity-modulated dichotic listening paradigm, we investigated electrophysiological correlates of processing conflicts between attentional focus and perceptual saliency in 25 younger and 26 older adults. Participants were instructed to attend to the right or left ear, and perceptual saliency was manipulated by varying the intensities of both ears. Attentional control demand was higher in conditions when attentional focus and perceptual saliency favored opposing ears than in conditions without such conflicts. Relative to younger adults, older adults modulated their attention less flexibly and were more influenced by perceptual saliency. Our results show, for the first time, that in younger adults a late negativity in the event-related potential (ERP) at fronto-central and parietal electrodes was sensitive to perceptual-attentional conflicts during auditory processing (N450 modulation effect). Crucially, the magnitude of the N450 modulation effect correlated positively with task performance. In line with lower attentional flexibility, the ERP waveforms of older adults showed absence of the late negativity and the modulation effect. This suggests that aging compromises the activation of the fronto-parietal attentional network when processing the competing and conflicting auditory information.
Villar, Anna C N W B; Korn, Gustavo P; Azevedo, Renata R
To characterize the vocal quality and acoustic parameters of voices of air traffic controllers (ATCs) without any vocal complaints before and after a shift. The voices of a group of 45 ATCs were recorded before and after a 2-hour shift, regardless of their operational position or number of previously worked shifts; both genders were included, participants had a mean age of 25 years, and they had a mean length of occupational experience of 4 years and 2 months. Each of these professionals was recorded phonating a sustained /a/ vowel and counting from 1 to 20, and the recordings were acoustically analyzed using the Praat software. A perceptual-auditory analysis of the recordings was then performed by three speech therapists specializing in voice, who evaluated the characteristics of each emission using a visual analog scale (VAS). The acoustic analysis was performed on the sustained /a/ vowel. The measures of intensity; frequency; maximum phonation time (MPT); and the first, second, third, and fourth formants were considered in this analysis. There were no significant differences between the random pre- and postshift samples, either in the acoustic or in the perceptual-auditory analysis. The perceptual-auditory analysis revealed that 44% (n = 20) of ATCs showed alterations in vocal quality during the sustained /a/ vowel emission, and this dysphonia was also observed in connected speech in 25% (n = 5) of this group. Perceptual-auditory analysis of the /a/ vowel revealed that a high percentage of ATCs had vocal alterations (44%), even among a group of subjects without vocal complaints. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Corbett, Blythe A; Constantine, Laura J
Symptoms of attention deficit hyperactivity disorder (ADHD) have been widely reported in children with autism spectrum disorder (ASD). The current study investigated attention and response control in children with ASD, ADHD, and typical development using the Integrated Visual and Auditory Continuous Performance Test. Results indicate that many children with ASD show significant deficits in visual and auditory attention and greater deficits in impulsivity than children with ADHD or typical dev...
Kells, B E; Kennedy, J G; Biagioni, P A; Lamey, P J
To investigate the rewarming pattern and rewarming rate of clinically healthy teeth following a controlled cold stimulus using TI techniques. A controlled cold stimulus was developed using an air stream at 20 °C. Gingival and incisal sites on 12 healthy maxillary lateral incisors in six patients were imaged under rubber dam following 20 s of cooling. Images were captured at 10 s intervals during a 3-min rewarming period, and the data were used to construct graphs of the rewarming rate. Log transformation of the data was used to produce 'best fit' straight-line graphs. Linear regression analysis was used to examine three variables, viz. the side of the mouth (right or left), the site of measurement (gingival or incisal) and the phase of rewarming (early 0-90 s, late 91-180 s). The mean temperature change (Δt, °C) during rewarming was 8.5 °C (SD 1.0 °C) for gingival sites and 7.2 °C (SD 1.1 °C) for incisal sites. The slope of the 'best fit' straight-line data enabled a rewarming index to be calculated for each site on each tooth. Linear regression analysis showed that the phase of rewarming was highly significant but the other variables were not. A one-way ANOVA showed no significant differences between or within groups. Three minutes is an appropriate time to record rewarming of teeth cooled for 20 s with an airstream at 20 °C. The side or site used to record surface temperatures using this technique is not significant. Rewarming is exponential, and log transformation of the data produces a well-fitting straight-line graph. The slope of this line provides a rewarming index which should enable comparison of TI and laser Doppler flowmetry in determining pulpal blood flow as a measure of tooth vitality.
Donmez, Birsen; Cummings, M L; Graham, Hudson D
This article is an investigation of the effectiveness of sonifications, which are continuous auditory alerts mapped to the state of a monitored task, in supporting unmanned aerial vehicle (UAV) supervisory control. UAV supervisory control requires monitoring a UAV across multiple tasks (e.g., course maintenance) via a predominantly visual display, which currently is supported with discrete auditory alerts. Sonification has been shown to enhance monitoring performance in domains such as anesthesiology by allowing an operator to immediately determine an entity's (e.g., patient) current and projected states, and is a promising alternative to discrete alerts in UAV control. However, minimal research compares sonification to discrete alerts, and no research assesses the effectiveness of sonification for monitoring multiple entities (e.g., multiple UAVs). The authors conducted an experiment with 39 military personnel, using a simulated setup. Participants controlled single and multiple UAVs and received sonifications or discrete alerts based on UAV course deviations and late target arrivals. Regardless of the number of UAVs supervised, the course deviation sonification resulted in reactions to course deviations that were 1.9 s faster, a 19% enhancement, compared with discrete alerts. However, course deviation sonifications interfered with the effectiveness of discrete late arrival alerts in general and with operator responses to late arrivals when supervising multiple vehicles. Sonifications can outperform discrete alerts when designed to aid operators to predict future states of monitored tasks. However, sonifications may mask other auditory alerts and interfere with other monitoring tasks that require divided attention. This research has implications for supervisory control display design.
Georgiev, Dejan; Jahanshahi, Marjan; Dreo, Jurij; Čuš, Anja; Pirtošek, Zvezdan; Repovš, Grega
Parkinson's disease (PD) patients show signs of cognitive impairment, such as executive dysfunction, working memory problems and attentional disturbances, even in the early stages of the disease. Though motor symptoms of the disease are often successfully addressed by dopaminergic medication, it remains unclear how dopaminergic therapy affects cognitive function. The main objective of this study was to assess the effect of dopaminergic medication on visual and auditory attentional processing. Fourteen PD patients and 13 matched healthy controls performed a three-stimulus auditory and visual oddball task while their EEG was recorded. The patients performed the task twice, once on- and once off-medication. While the results showed no significant differences between PD patients and controls, they did reveal a significant increase in P3 amplitude on- vs. off-medication specific to processing of auditory distractors and no other stimuli. These results indicate a significant effect of dopaminergic therapy on processing of distracting auditory stimuli. In the absence of between-group differences, the effect could reflect either 1) improved recruitment of attentional resources to auditory distractors; 2) reduced ability for cognitive inhibition of auditory distractors; 3) increased response to distractor stimuli resulting in impaired cognitive performance; or 4) hindered ability to discriminate between auditory distractors and targets. Further studies are needed to differentiate between these possibilities. Copyright © 2015 Elsevier B.V. All rights reserved.
Wightman, Frederic L.; Jenison, Rick
All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.
Wang, Yanan; Qin, Qing-Hua
The control mechanism of mechanical bone remodeling at the cellular level was investigated by means of an extensive parametric study on the theoretical model described in this paper. From the perspective of control mechanisms, it was found that several control mechanisms work simultaneously in bone remodeling, which is a complex process. Specifically, an extensive parametric study was carried out to investigate the model parameter space related to cell differentiation and apoptosis, which describe the fundamental cell lineage behaviors. After analyzing all 728 permutations of the six model parameters, we identified a small number of parameter combinations that lead to physiologically realistic responses similar to theoretically idealized physiological responses. The results presented in this work enhance our understanding of mechanical bone remodeling, and the identified control mechanisms can help researchers develop combined pharmacological-mechanical therapies to treat bone loss diseases such as osteoporosis.
Varella, André A B; de Souza, Deisy G
Empirical studies have demonstrated that class-specific contingencies may engender stimulus-reinforcer relations. In these studies, crossmodal relations emerged when crossmodal relations comprised the baseline, and intramodal relations emerged when intramodal relations were taught during baseline. This study investigated whether auditory-visual relations (crossmodal) would emerge after participants learned a visual-visual baseline (intramodal) with auditory stimuli presented as specific consequences. Four individuals with autism learned AB and CD relations with class-specific reinforcers. When A1 and C1 were presented as samples, the selections of B1 and D1, respectively, were followed by an edible (R1) and a sound (S1). Selections of B2 and D2 under the control of A2 and C2, respectively, were followed by R2 and S2. Probe trials tested for visual-visual AC, CA, AD, DA, BC, CB, BD, and DB emergent relations and auditory-visual SA, SB, SC, and SD emergent relations. All of the participants demonstrated the emergence of all auditory-visual relations, and three of four participants showed emergence of all visual-visual relations. Thus, the emergence of auditory-visual relations from specific auditory consequences suggests that these relations do not depend on crossmodal baseline training. The procedure has great potential for applied technology to generate auditory-visual discriminations and stimulus classes in the context of behavior-analytic interventions for autism. © Society for the Experimental Analysis of Behavior.
Groskreutz, Nicole C.; Karsina, Allen; Miguel, Caio F.; Groskreutz, Mark P.
Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually…
Villar, Anna Carolina Nascimento Waack Braga; Pereira, Liliane Desgualdo
To investigate the auditory skills of closure and figure-ground and factors associated with health, communication, and attention in air traffic controllers, and compare these variables with those of other civil and military servants. Study participants were sixty adults with normal audiometric thresholds divided into two groups matched for age and gender: a study group (SG), comprising 30 air traffic controllers, and a control group (CG), composed of 30 other military and civil servants. All participants were asked a number of questions regarding their health, communication, and attention, and underwent the Speech-in-Noise Test (SIN) to assess their closure skills and the Synthetic Sentence Identification Test - Ipsilateral Competitive Message (SSI-ICM) in monotic listening to evaluate their figure-ground abilities. Data were compared using nonparametric statistical tests and logistic regression analysis. More individuals in the SG reported fatigue and/or burnout and work-related stress, and the SG showed better performance than the CG for the figure-ground ability. Both groups performed similarly and satisfactorily in the other hearing tests. The odds ratios for belonging to the SG were 5.59 for work-related stress and 1.24 for right-ear SSI-ICM. Results for the variables auditory closure, self-reported health, attention, and communication were similar in both groups. The SG presented significantly better performance in auditory figure-ground compared with that of the CG. Self-reported stress and right-ear SSI-ICM were significant predictors of belonging to the SG.
Auditory hallucination is one of the most common symptoms in schizophrenia and other psychotic disorders. The frequency of auditory hallucinations and the ensuing distress lead individuals to believe that these voices cannot be controlled or coped with. This situation can cause patients to become hopeless and desperate and lead them to harm themselves or others. Furthermore, the time lost to and preoccupation with these symptoms significantly reduce their social and occupational functioning. Auditory hallucinations are fundamentally thoughts in which internal stimuli are attributed to external sources; thus, they are the internal speech of the individual. These internal speeches are inaccurately interpreted due to dysfunctions in basic processes of self-monitoring. Using cognitive behavioral techniques at this stage is thought to be effective in eliminating cognitive difficulties; understanding the feelings, actions, and somatic reactions caused by auditory hallucinations; and coping with these symptoms. The fundamental aim of this review was to clarify cognitive behavioral interventions intended for auditory hallucinations and to discuss their practice.
Zarkesh-Ha, Payman [University of New Mexico
The main goal of this research grant is to develop a system-level solution leveraging novel technologies that enable network communications at 100 Gb/s or beyond. University of New Mexico in collaboration with Acadia Optronics LLC has been working on this project to develop the 100 Gb/s Network Interface Controller (NIC) under this Department of Energy (DOE) grant.
Fujita, Toshitsugu; Piuz, Isabelle; Schlegel, Werner
The transcription rate of immediate early genes (IEGs) is controlled directly by transcription elongation factors at the transcription elongation step. Negative elongation factor (NELF) and 5,6-dichloro-1-β-D-ribofuranosylbenzimidazole (DRB) sensitivity-inducing factor (DSIF) stall RNA polymerase II (pol II) soon after transcription initiation. Upon induction of IEG transcription, DSIF is converted into an accelerator of pol II elongation. To address whether and how NELF as well as DSIF controls overall IEG transcription, its expression was reduced using stable RNA interference in GH4C1 cells. NELF knock-down reduced thyrotropin-releasing hormone (TRH)-induced transcription of the IEGs c-fos, MKP-1, and junB. In contrast, epidermal growth factor (EGF)-induced transcription of these IEGs was unaltered or even slightly increased by NELF knock-down. Thus, stable knock-down of NELF affects IEG transcription in a stimulation-specific manner. Conversely, DSIF knock-down reduced both TRH- and EGF-induced transcription of the three IEGs. Interestingly, TRH-induced activation of the MAP kinase pathway, a pathway essential for transcription of the three IEGs, was down-regulated by NELF knock-down. Thus, stable knock-down of NELF, by modulating intracellular signaling pathways, caused stimulation-specific loss of IEG transcription. These observations indicate that NELF controls overall IEG transcription via multiple mechanisms, both directly and indirectly.
Winter, J C; Filipink, R A; Timineri, D; Helsley, S E; Rabin, R A
Stimulus control was established in rats trained to discriminate either 5-methoxy-N,N-dimethyltryptamine (3 mg/kg) or (-)-2,5-dimethoxy-4-methylamphetamine (0.56 mg/kg) from saline. Tests of antagonism of stimulus control were conducted using the 5-HT1A antagonists (+/-)-pindolol and WAY-100635, and the 5-HT2 receptor antagonist pirenperone. In rats trained with 5-MeO-DMT, pindolol and WAY-100635 both produced a significant degree of antagonism of stimulus control, but pirenperone was much less effective. Likewise, the full generalization of 5-MeO-DMT to the selective 5-HT1A agonist [+/-]-8-hydroxy-dipropylaminotetralin was blocked by WAY-100635, but unaffected by pirenperone. In contrast, the partial generalization of 5-MeO-DMT to the 5-HT2 agonist DOM was completely antagonized by pirenperone, but was unaffected by WAY-100635. Similarly, in rats trained with (-)-DOM, pirenperone completely blocked stimulus control, but WAY-100635 was inactive. The results obtained in rats trained with (-)-DOM and tested with 5-MeO-DMT were more complex. Although the intraperitoneal route had been used for both training drugs, a significant degree of generalization of (-)-DOM to 5-MeO-DMT was seen only when the latter drug was administered subcutaneously. Furthermore, when the previously effective dose of pirenperone was given in combination with 5-MeO-DMT (s.c.), complete suppression of responding resulted. However, the combination of pirenperone and WAY-100635 given prior to 5-MeO-DMT restored responding in (-)-DOM-trained rats, and provided evidence of antagonism of the partial substitution of 5-MeO-DMT for (-)-DOM. The present data indicate that 5-MeO-DMT-induced stimulus control is mediated primarily by interactions with 5-HT1A receptors. In addition, however, the present findings suggest that 5-MeO-DMT induces a compound stimulus that includes an element mediated by interactions with 5-HT2 receptors. The latter component is not essential for 5-MeO-DMT-induced stimulus…
Pérez-Díaz, Francisco; Díaz, Estrella; Sánchez, Natividad; Vargas, Juan Pedro; Pearce, John M; López, Juan Carlos
Recent studies support the idea that stimulus processing in latent inhibition can vary during the course of preexposure. Controlled attentional mechanisms are said to be important in the early stages of preexposure, while in later stages animals adopt automatic processing of the stimulus to be used for conditioning. Given this distinction, it is possible that both types of processing are governed by different neural systems, affecting differentially the retrieval of information about the stimulus. In the present study we tested if a lesion to the dorso-lateral striatum or to the medial prefrontal cortex has a selective effect on exposure to the future conditioned stimulus (CS). With this aim, animals received different amounts of exposure to the future CS. The results showed that a lesion to the medial prefrontal cortex enhanced latent inhibition in animals receiving limited preexposure to the CS, but had no effect in animals receiving extended preexposure to the CS. The lesion of the dorso-lateral striatum produced a decrease in latent inhibition, but only in animals with an extended exposure to the future conditioned stimulus. These results suggest that the dorsal striatum and medial prefrontal cortex play essential roles in controlled and automatic processes. Automatic attentional processes appear to be impaired by a lesion to the dorso-lateral striatum and facilitated by a lesion to the prefrontal cortex.
Szalóki, György; Croué, Vincent; Carré, Vincent; Aubriet, Frédéric; Alévêque, Olivier; Levillain, Eric; Allain, Magali; Aragó, Juan; Ortí, Enrique; Goeb, Sébastien; Sallé, Marc
A proof-of-concept related to the redox-control of the binding/releasing process in a host-guest system is achieved by designing a neutral and robust Pt-based redox-active metallacage involving two extended-tetrathiafulvalene (exTTF) ligands. When neutral, the cage is able to bind a planar polyaromatic guest (coronene). Remarkably, the chemical or electrochemical oxidation of the host-guest complex leads to the reversible expulsion of the guest outside the cavity, which is assigned to a drastic change of the host-guest interaction mode, illustrating the key role of counteranions along the exchange process. The reversible process is supported by various experimental data (1H NMR spectroscopy, ESI-FTICR, and spectroelectrochemistry) as well as by in-depth theoretical calculations performed at the density functional theory (DFT) level. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto
Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (ERP) activity was recorded from 20 SCH patients and 20 healthy controls. The coupling changes between auditory response and pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, phase-locking value and Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect an impaired communication among neural areas, which may be related to abnormal cognitive functions.
…volume. The conference's topics include auditory exploration of data via sonification and audification; real-time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting auditory display creation; data handling for auditory display systems; and applications of auditory display…
Varsamis, Panagiotis; Staikopoulos, Konstantinos; Kartasidou, Lefkothea
One of the purposes of Rhythmic Auditory Stimulation (RAS) is to improve the control of dysfunctional movement patterns. This study aimed to extend the line of research by focussing on secondary students with mental retardation and cerebral palsy. According to the study's assumption, cadence can be controlled through a stable and low signal…
Brennan, J; Kowalska, D; Zieliński, K
Two experiments involving parallel procedures to investigate stimulus generalization in prefrontal dogs under alimentary and defensive reinforcement were compared. Twelve dogs in the alimentary study were trained on a 50 percent partial reinforcement schedule, and 24 dogs were trained to avoid shock with either continuous shock availability and response-contingent CS termination or with only 50 percent partial shock availability and response-independent CS termination. One third of the subjects received bilateral medial prefrontal lesions, 12 dogs were given bilateral lesions of the lateral prefrontal cortex, and the remaining subjects served as nonoperated controls. Generalization along the frequency dimension of the tonal CS was assessed during a sampling procedure within normal acquisition training, during complete extinction, and following differentiation training. The results indicate specific effects of both the quality and the contingency of reinforcement. Within the limits of each reinforcement treatment, a dissociation occurred such that medial subjects tended to show heightened sensitivity to reinforcement density, while lateral subjects showed characteristically elevated reactivity during all generalization tests.
Background: Recent research has implicated deficits of working memory (WM) and attention in dyslexia. The N100 component of event-related potentials (ERP) is thought to reflect attention and working memory operation. However, previous studies have shown controversial results concerning the N100 in dyslexia. Variability in this issue may result from inappropriate matching of the control sample, which is usually based exclusively on age and gender. Methods: To address this question, the present study investigated the auditory N100 component elicited during a WM test in 38 dyslexic children in comparison to 19 unaffected sibling controls. Both groups met the criteria of the International Classification of Diseases (ICD-10). ERP were evoked by two stimuli, a low (500 Hz) and a high (3000 Hz) frequency tone, indicating forward and reverse digit span respectively. Results: Compared with their sibling controls, dyslexic children exhibited significantly reduced N100 amplitudes induced by both reverse and forward digit span at the Fp1, F3, Fp2, Fz, C4, Cz and F4 leads and at the Fp1, F3, C5, C3, Fz, F4, C6, P4 and Fp2 leads respectively. Memory performance of the dyslexic group was not significantly lower than that of the controls. However, enhanced memory performance in the control group was associated with increased N100 amplitude induced by high-frequency stimuli at the C5, C3, C6 and P4 leads and increased N100 amplitude induced by low-frequency stimuli at the P4 lead. Conclusion: The present findings support the notion of weakened capture of auditory attention in dyslexia, allowing for a possible impairment in the dynamics that link attention with short-term memory, as suggested by the anchoring-deficit hypothesis.
To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell–auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone burst and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.
Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.
Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.
Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina
Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
The specialized hairs and slit sensillae of spiders (Cupiennius salei) can sense airflow and auditory signals in a low-frequency range. They provide the sensory information for reactive behavior, such as capturing prey. In analogy, this paper describes a setup in which two microphones and a neural preprocessing system, together with a modular neural controller, are used to generate a sound tropism in a four-legged walking machine. The neural preprocessing network acts as a low-pass filter and is followed by a network that discerns between signals coming from the left or the right. The parameters of these networks are optimized by an evolutionary algorithm. In addition, a simple modular neural controller then generates the desired walking patterns such that the machine walks straight, turns towards a switched-on sound source, and then stops near it.
Ismail, Naema; Sallam, Yossra; Behery, Reda; Al Boghdady, Ameera
It has been hypothesized that impaired auditory processing influences the occurrence of stuttering. It has also been suggested that speech perception in children who stutter differs from that of typically fluent children. Auditory processing should be investigated in children who stutter shortly after the onset of stuttering in order to evaluate the extent to which impaired auditory processing contributes to the development of stuttering. CAEPs provide the necessary temporal and spatial resolution to detect differences in auditory processing and the neural activity that is related, or time-locked, to the auditory stimulus. The primary goal of the present study was to determine the difference in latency and amplitude of the P1-N2 complex between children who stutter and non-stuttering children in response to speech stimuli. This case-control study was performed on 60 children: 30 non-stuttering children (control group) and 30 children who stutter (study group), ranging in severity from Bloodstein I to Bloodstein IV, in the age range of 8-18 years. CAEPs of children who stutter with stuttering severity Bloodstein IV showed significantly prolonged latencies and reduced amplitudes when blocks and IPDs were the most predominant core behaviors. P1 and N1 were prolonged in concomitant behaviors. It could be speculated that speech processing was affected in children who stutter with stuttering severity Bloodstein IV at the level of the early perceptual auditory cortex. Copyright © 2017 Elsevier B.V. All rights reserved.
Julie M. Bugg
Cognitive control is by now a large umbrella term referring collectively to multiple processes that plan and coordinate actions to meet task goals. A common feature of paradigms that engage cognitive control is the task requirement to select relevant information despite a habitual tendency (or bias) to select goal-irrelevant information. At least since the 1970s, researchers have employed proportion congruent manipulations to experimentally establish selection biases and evaluate the mechanisms used to control attention. Proportion congruent manipulations vary the frequency with which irrelevant information conflicts (i.e., is incongruent) with relevant information. The purpose of this review is to summarize the growing body of literature on proportion congruent effects across selective attention paradigms, beginning with Stroop, and then describing parallel effects in flanker and task-switching paradigms. The review chronologically tracks the expansion of the proportion congruent manipulation from its initial implementation at the list-wide level to more recent implementations at the item-specific and context-specific levels. An important theoretical aim is demonstrating that proportion congruent effects at different levels (e.g., list-wide vs. item- or context-specific) support a distinction between voluntary forms of cognitive control, which operate based on anticipatory information, and relatively automatic or reflexive forms of cognitive control, which are rapidly triggered by the processing of particular stimuli or stimulus features. A further aim is to highlight those proportion congruent manipulations that allow researchers to dissociate stimulus-driven control from other stimulus-driven processes (e.g., S-R responding; episodic retrieval). We conclude by discussing the utility of proportion congruent manipulations for exploring the distinction between voluntary control and stimulus-driven control in other relevant paradigms.
Chenausky, Karen; Norton, Andrea; Tager-Flusberg, Helen; Schlaug, Gottfried
This study compared Auditory-Motor Mapping Training (AMMT), an intonation-based treatment for facilitating spoken language in minimally verbal children with autism spectrum disorder (ASD), to a matched control treatment, Speech Repetition Therapy (SRT). Twenty-three minimally verbal children with ASD (20 male, mean age 6;5) received at least 25 sessions of AMMT. Seven (all male) were matched on age and verbal ability to seven participants (five male) who received SRT. Outcome measures were Percent Syllables Approximated, Percent Consonants Correct (of 86), and Percent Vowels Correct (of 61) produced on two sets of 15 bisyllabic stimuli. All subjects were assessed on these measures several times at baseline and after 10, 15, 20, and 25 sessions. The post-25-session assessment timepoint, common to all participants, was compared to Best Baseline performance. Overall, after 25 sessions, AMMT participants increased by 19.4% Syllables Approximated, 13.8% Consonants Correct, and 19.1% Vowels Correct, compared to Best Baseline. In the matched AMMT-SRT group, after 25 sessions, AMMT participants produced 29.0% more Syllables Approximated (SRT 3.6%); 17.9% more Consonants Correct (SRT 0.5%); and 17.6% more Vowels Correct (SRT 0.8%). Chi-square tests showed that significantly more AMMT than SRT participants in both the overall and matched groups improved significantly in number of Syllables Approximated per stimulus and number of Consonants Correct per stimulus. Pre-treatment ability to imitate phonemes, but not chronological age or baseline performance on outcome measures, was significantly correlated with amount of improvement after 25 sessions. Intonation-based therapy may offer a promising new interventional approach for teaching spoken language to minimally verbal children with ASD.
Hughes, Robert W.; Hurlstone, Mark J.; Marsh, John E.; Vachon, Francois; Jones, Dylan M.
The influence of top-down cognitive control on 2 putatively distinct forms of distraction was investigated. Attentional capture by a task-irrelevant auditory deviation (e.g., a female-spoken token following a sequence of male-spoken tokens)--as indexed by its disruption of a visually presented recall task--was abolished when focal-task engagement…
Smith, Sherri L.; Saunders, Gabrielle H.; Chisolm, Theresa H.; Frederick, Melissa; Bailey, Beth A.
Purpose: The purpose of this study was to determine if patient characteristics or clinical variables could predict who benefits from individual auditory training. Method: A retrospective series of analyses were performed using a data set from a large, multisite, randomized controlled clinical trial that compared the treatment effects of at-home…
Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.
Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…
Burnham, Denis; Dodd, Barbara
The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants, [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.
Zhang, Xin Wen; Zeng, Shao Ju; Zuo, Ming Xue
The expression of substance P in the vocal control and auditory nuclei of female and male Carduelis spinus was investigated using immunohistochemical methods, and gray density values were measured with an image processing system. The distribution and gray density of substance P were then compared between males and females. The results indicate that: 1) substance P-labeled terminals and some cells were distributed in Area X; 2) substance P-labeled cells were distributed in the high vocal center (HVc), the magnocellular nucleus of the anterior neostriatum (MAN), the robust nucleus of the archistriatum (RA), and the dorsolateral nucleus of the anterior thalamus (DLM); 3) substance P-labeled terminals and fibers were distributed in vocal control nuclei such as the nucleus dorsalis medialis (DM) and the tracheosyringeal part of the hypoglossal nucleus (nXIIts), and in auditory nuclei such as the nucleus ovoidalis shell (Ov shell), the shell regions of the nucleus mesencephalicus lateralis, pars dorsalis (MLd shell), and the nucleus intercollicularis (ICo). Gray density values of substance P-labeled cells or fibers were significantly higher in males than in females. The present study indicates that the distribution of substance P exhibits a significant sexual difference in this songbird. The presence of substance P in most auditory and vocal control nuclei suggests that substance P may play an important physiological role in auditory perception and vocal production.
Kuppen, Sarah; Huss, Martina; Fosker, Tim; Fegan, Natasha; Goswami, Usha
We explore the relationships between basic auditory processing, phonological awareness, vocabulary, and word reading in a sample of 95 children: 55 typically developing children and 40 children with low IQ. All children received nonspeech auditory processing tasks, phonological processing and literacy measures, and a receptive vocabulary task.…
Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In Experiment 2, stimulus onset asynchrony (SOA) was…
Sanju, Himanshu Kumar; Kumar, Prawin
Introduction: Mismatch negativity (MMN) is a negative component of the event-related potential (ERP) elicited by any discriminable change in auditory stimulation. Objective: The present study aimed to assess pre-attentive auditory discrimination skill with fine and gross differences between auditory stimuli. Method: Seventeen normal-hearing individuals participated in the study, with informed consent. To assess pre-attentive auditory discrimination skill with a fine difference between auditory stimuli, we recorded MMN with a pair of pure-tone stimuli, using /1000 Hz/ as the frequent stimulus and /1010 Hz/ as the infrequent stimulus. Similarly, we used /1000 Hz/ as the frequent stimulus and /1100 Hz/ as the infrequent stimulus to assess pre-attentive auditory discrimination skill with a gross difference between auditory stimuli. We analyzed the MMN for onset latency, offset latency, peak latency, peak amplitude, and area under the curve. Result: MMN was present in only 64% of the individuals in both conditions. Further, multivariate analysis of variance (MANOVA) showed no significant difference in any measure of the MMN (onset latency, offset latency, peak latency, peak amplitude, and area under the curve) between the two conditions. Conclusion: The present study showed similar pre-attentive skills for both conditions, fine (1000 Hz vs. 1010 Hz) and gross (1000 Hz vs. 1100 Hz) differences in auditory stimuli, at a higher (endogenous) level of the auditory system.
Yoder, Kathleen M; Lu, Kai; Vicario, David S
Estradiol (E2) has recently been shown to modulate sensory processing in an auditory area of the songbird forebrain, the caudomedial nidopallium (NCM). When a bird hears conspecific song, E2 increases locally in NCM, where neurons express both the aromatase enzyme that synthesizes E2 from precursors and estrogen receptors. Auditory responses in NCM show a form of neuronal memory: repeated playback of the unique learned vocalizations of conspecific individuals induces long-lasting stimulus-specific adaptation of neural responses to each vocalization. To test the role of E2 in this auditory memory, we treated adult male zebra finches (n=16) with either the aromatase inhibitor fadrozole (FAD) or saline for 8 days. We then exposed them to 'training' songs and, 6 h later, recorded multiunit auditory responses with an array of 16 microelectrodes in NCM. Adaptation rates (a measure of stimulus-specific adaptation) to playbacks of training and novel songs were computed, using established methods, to provide a measure of neuronal memory. Recordings from the FAD-treated birds showed a significantly reduced memory for the training songs compared with saline-treated controls, whereas auditory processing for novel songs did not differ between treatment groups. In addition, FAD did not change the response bias in favor of conspecific over heterospecific song stimuli. Our results show that E2 depletion affects the neuronal memory for vocalizations in songbird NCM, and suggest that E2 plays a necessary role in auditory processing and memory for communication signals.
Colzato, Lorenza S; Steenbergen, Laura; Hommel, Bernhard
The aim of the study was to throw more light on the relationship between rumination and cognitive-control processes. Seventy-eight adults were assessed with respect to rumination tendencies by means of the LEIDS-r before performing a Stroop task, an event-file task assessing the automatic retrieval of irrelevant information, an attentional set-shifting task, and the Attentional Network Task, which provided scores for alerting, orienting, and executive control functioning. The size of the Stroop effect and irrelevant retrieval in the event-file task were positively correlated with the tendency to ruminate, while all other scores did not correlate with any rumination scale. Controlling for depressive tendencies eliminated the Stroop-related finding (an observation that may account for previous failures to replicate), but not the event-file finding. Taken together, our results suggest that rumination does not affect attention, executive control, or response selection in general, but rather selectively impairs the control of stimulus-induced retrieval of irrelevant information.
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Rominger, Christian; Bleier, Angelika; Fitz, Werner; Marksteiner, Josef; Fink, Andreas; Papousek, Ilona; Weiss, Elisabeth M
Social cognitive impairments may represent a core feature of schizophrenia and above all are a strong predictor of positive psychotic symptoms. Previous studies could show that reduced inhibitory top-down control contributes to deficits in theory of mind abilities and is involved in the genesis of hallucinations. The current study aimed to investigate the relationship between auditory inhibition, affective theory of mind and the experience of hallucinations in patients with schizophrenia. In the present study, 20 in-patients with schizophrenia and 20 healthy controls completed a social cognition task (the Reading the Mind in the Eyes Test) and an inhibitory top-down Dichotic Listening Test. Schizophrenia patients with greater severity of hallucinations showed impaired affective theory of mind as well as impaired inhibitory top-down control. More dysfunctional top-down inhibition was associated with poorer affective theory of mind performance, and seemed to mediate the association between impairment to affective theory of mind and severity of hallucinations. The findings support the idea of impaired theory of mind as a trait marker of schizophrenia. In addition, dysfunctional top-down inhibition may give rise to hallucinations and may further impair affective theory of mind skills in schizophrenia. Copyright © 2016 Elsevier B.V. All rights reserved.
1. Weakly electric fish generate around their bodies low-amplitude AC electric fields, which are used both for the detection of objects and for intraspecific communication. The types of modulation of this signal of which the high-frequency wave-type gymnotiform Apteronotus is capable are relatively few and stereotyped. Chief among these is the chirp, a signal used in courtship and agonistic displays. Chirps are brief and rapid accelerations in the normally highly regular electric organ discharge (EOD) frequency. 2. Chirping can be elicited artificially in these animals by a stimulus regime identical to that typically used to elicit another behavior, the jamming avoidance response (JAR). The neuronal basis for the JAR, a much slower and lesser alteration in EOD frequency, is well understood. Examination of the stimulus features that induce chirping shows that, as with the JAR, there is a region of frequency differences (Df) between the fish's EOD and the interfering signal that maximally elicits the response. Moreover, the response is sex-specific with regard to the sign of the frequency difference, with females chirping preferentially on the positive Df and most males on the negative Df. These features imply that the sensory mechanisms involved in the triggering of these communicatory behaviors are fundamentally similar to those explicated for the JAR. 3. Additionally, two other modulatory behaviors of unknown significance are described. The first is a non-selective rise in EOD frequency associated with a JAR stimulus, occurring regardless of the sign of the Df. This modulation shares many characteristics with the JAR. The second behavior, which we have termed a 'yodel', is distinct from, and kinetically intermediate to, chirping and the JAR. Moreover, unlike the other studied electromotor behaviors, it is generally produced only after the termination of the eliciting stimulus.
Rattat, Anne-Claire; Picard, Delphine
The present study sought to determine the format in which visual, auditory and auditory-visual durations ranging from 400 to 600 ms are encoded and maintained in short-term memory, using suppression conditions. Participants compared two stimulus durations separated by an interval of 8 s. During this time, they performed either an articulatory suppression task, a visuospatial tracking task or no specific task at all (control condition). The results showed that the articulatory suppression task decreased recognition performance for auditory durations but not for visual or bimodal ones, whereas the visuospatial task decreased recognition performance for visual durations but not for auditory or bimodal ones. These findings support the modality-specific account of short-term memory for durations.
Robbins, Lindsey; Margulis, Susan W
Several studies have demonstrated that auditory enrichment can reduce stereotypic behaviors in captive animals. The purpose of this study was to determine the relative effectiveness of three different types of auditory enrichment (naturalistic sounds, classical music, and rock music) in reducing stereotypic behavior displayed by Western lowland gorillas (Gorilla gorilla gorilla). Three gorillas (one adult male, two adult females) were observed at the Buffalo Zoo for a total of 24 hr per music trial. A control observation period, during which no sounds were presented, was also included. Each music trial consisted of a total of three weeks, with a 1-week control period between each music type. The results reveal a decrease in stereotypic behaviors from the control period to naturalistic sounds. The naturalistic sounds also affected patterns of several other behaviors, including locomotion. In contrast, stereotypy increased in the presence of classical and rock music. These results suggest that auditory enrichment, which is not commonly used in zoos in a systematic way, can be easily utilized by keepers to help decrease stereotypic behavior, but the nature of the stimulus, as well as the differential responses of individual animals, needs to be considered. © 2014 Wiley Periodicals, Inc.
Menceloglu, Melisa; Grabowecky, Marcia; Suzuki, Satoru
Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory-visual interaction, using an auditory-visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.
Salmi, Juha; Rinne, Teemu; Koistinen, Sonja; Salonen, Oili; Alho, Kimmo
During functional magnetic resonance imaging (fMRI), our participants selectively attended to tone streams at the left or right, and occasionally shifted their attention from one stream to another as guided by a centrally presented visual cue. Duration changes in the to-be-attended stream served as targets. Loudness deviating tones (LDTs) occurred infrequently in both streams to catch attention in a bottom-up manner, as indicated by their effects on reaction times to targets. LDTs activated the right temporo-parietal junction (TPJ), posterior parts of the left inferior/middle frontal gyrus (IFG/MFG), ventromedial parts of the superior parietal lobule (SPL), and left frontal eye field/premotor cortex (FEF/PMC). In addition, LDTs in the to-be-ignored sound stream were associated with enhanced activity in the ventromedial prefrontal cortex (VMPFC) possibly related to evaluation of the distracting event. Top-down controlled cue-guided attention shifts (CASs) activated bilateral areas in the SPL, intraparietal sulcus (IPS), FEF/PMC, TPJ, IFG/MFG, and cingulate/medial frontal gyrus, and crus I/II of the cerebellum. Thus, our results suggest that in audition top-down controlled and bottom-up triggered shifting of attention activate largely overlapping temporo-parietal, superior parietal and frontal areas. As the IPS, superior parts of the SPL, and crus I/II were activated specifically by top-down controlled attention shifts, and the VMPFC was specifically activated by bottom-up triggered attention shifts, our results also suggest some differences between auditory top-down controlled and bottom-up triggered shifting of attention.
... with auditory neuropathy have greater impairment in speech perception than hearing health experts would predict based upon their degree of hearing loss on a hearing test. For example, a person with auditory neuropathy may be able to hear ...
Larry E Roberts
Sensory training therapies for tinnitus are based on the assumption that, notwithstanding neural changes related to tinnitus, auditory training can alter the response properties of neurons in auditory pathways. To address this question, we investigated whether brain changes induced by sensory training in tinnitus sufferers and measured by EEG are similar to those induced in age- and hearing-loss-matched individuals without tinnitus trained on the same auditory task. Auditory training was given using a 5 kHz, 40-Hz amplitude-modulated sound that was in the tinnitus frequency region of the tinnitus subjects and enabled extraction of the 40-Hz auditory steady-state response (ASSR) and the P2 transient response, known to localize to primary and nonprimary auditory cortex, respectively. P2 amplitude increased with training equally in participants with tinnitus and in control subjects, suggesting normal remodeling of nonprimary auditory regions in tinnitus. However, training-induced changes in the ASSR differed between the tinnitus and control groups. In controls, ASSR phase advanced toward the stimulus waveform by about ten degrees over training, in agreement with previous results obtained in young normal-hearing individuals. However, ASSR phase did not change significantly with training in the tinnitus group, although some participants showed phase shifts resembling controls. On the other hand, ASSR amplitude increased with training in the tinnitus group, whereas in controls this response (which is difficult to remodel in young normal-hearing subjects) did not change with training. These results suggest that neural changes related to tinnitus altered how neural plasticity was expressed in the region of primary but not nonprimary auditory cortex. Auditory training did not reduce tinnitus loudness, although a small effect on the tinnitus spectrum was detected.
Alho, Kimmo; Salmi, Juha; Koistinen, Sonja; Salonen, Oili; Rinne, Teemu
A number of previous studies have suggested segregated networks of brain areas for top-down controlled and bottom-up triggered orienting of visual attention. However, the corresponding networks involved in auditory attention remain less studied. Our participants attended selectively to a tone stream with either a lower pitch or higher pitch in order to respond to infrequent changes in duration of attended tones. The participants were also required to shift their attention from one stream to the other when guided by a visual arrow cue. In addition to these top-down controlled cued attention shifts, infrequent task-irrelevant louder tones occurred in both streams to trigger attention in a bottom-up manner. Both cued shifts and louder tones were associated with enhanced activity in the superior temporal gyrus and sulcus, temporo-parietal junction, superior parietal lobule, inferior and middle frontal gyri, frontal eye field, supplementary motor area, and anterior cingulate gyrus. Thus, the present findings suggest that in the auditory modality, unlike in vision, top-down controlled and bottom-up triggered attention activate largely the same cortical networks. Comparison of the present results with our previous results from a similar experiment on spatial auditory attention suggests that fronto-parietal networks of attention to location or pitch overlap substantially. However, the auditory areas in the anterior superior temporal cortex might have a more important role in attention to the pitch than location of sounds. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2014 Elsevier B.V. All rights reserved.
Scott, Brian H; Mishkin, Mortimer
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.
Ushioda, Takashi; Watanabe, Yutaka; Sanjo, Yusuke; Yamane, Gen-Yuki; Abe, Shinichi; Tsuji, Yusuke; Ishiyama, Atushi
In the present study, we evaluated activated areas of the cerebral cortex with regard to the mirror neuron system during swallowing. To identify the activated areas, we used magnetoencephalography. Subjects were ten consenting volunteers. Swallowing-related stimuli comprised an animated image of the left profile of a person swallowing water with laryngeal elevation, as a visual swallowing trigger stimulus, and a swallowing sound, as an auditory swallowing trigger stimulus. As control stimuli, a still-frame image of the left profile without an additional trigger was shown, and an artificial sound was provided as a false auditory trigger. Triggers were presented 3,000 ms after the start of image presentation. The stimuli were presented in combination, and the activated areas were identified for each stimulus. With animation and still-frame stimuli, the visual association area (Brodmann area (BA) 18) was activated at the start of image presentation, while with the swallowing-sound and artificial-sound stimuli, the auditory areas BA 41 and BA 42 were activated at the time of trigger presentation. However, with animation stimuli (animation alone, animation + swallowing sound, and animation + artificial sound), activation in BA 6 and BA 40, corresponding to mirror neurons, was observed between 620 and 720 ms before the trigger. In addition, there were significant differences in latency and peak intensity between the animation-alone stimulus and the animation + swallowing sound stimuli. Our results suggest that mirror neurons are activated by swallowing-related visual and auditory stimuli.
Gherri, Elena; Driver, Jon; Eimer, Martin
To investigate whether saccade preparation can modulate processing of auditory stimuli in a spatially-specific fashion, ERPs were recorded for a Saccade task, in which the direction of a prepared saccade was cued, prior to an imperative auditory stimulus indicating whether to execute or withhold that saccade. For comparison, we also ran a conventional Covert Attention task, where the same cue now indicated the direction for a covert endogenous attentional shift prior to an auditory target-nontarget discrimination. Lateralised components previously observed during cued shifts of attention (ADAN, LDAP) did not differ significantly across tasks, indicating commonalities between auditory spatial attention and oculomotor control. Moreover, in both tasks, spatially-specific modulation of auditory processing was subsequently found, with enhanced negativity for lateral auditory nontarget stimuli at cued versus uncued locations. This modulation started earlier and was more pronounced for the Covert Attention task, but was also reliably present in the Saccade task, demonstrating that the effects of covert saccade preparation on auditory processing can be similar to effects of endogenous covert attentional orienting, albeit smaller. These findings provide new evidence for similarities but also some differences between oculomotor preparation and shifts of endogenous spatial attention. They also show that saccade preparation can affect not just vision, but also sensory processing of auditory events.
Slevc, L Robert; Shell, Alison R
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.
Saunders, Gabrielle H; Smith, Sherri L; Chisolm, Theresa H; Frederick, Melissa T; McArdle, Rachel A; Wilson, Richard H
To examine the effectiveness of the Listening and Communication Enhancement (LACE) program as a supplement to standard-of-care hearing aid intervention in a Veteran population. A multisite randomized controlled trial was conducted to compare outcomes following standard-of-care hearing aid intervention supplemented with (1) LACE training using the 10-session DVD format, (2) LACE training using the 20-session computer-based format, (3) placebo auditory training (AT) consisting of actively listening to 10 hr of digitized books on a computer, and (4) educational counseling (the control group). The study involved 3 VA sites and enrolled 279 veterans. Both new and experienced hearing aid users participated to determine if outcomes differed as a function of hearing aid user status. Data for five behavioral and two self-report measures were collected during three research visits: baseline, immediately following the intervention period, and at 6 months postintervention. The five behavioral measures were selected to determine whether the perceptual and cognitive skills targeted in LACE training generalized to untrained tasks that required similar underlying skills. The two self-report measures were completed to determine whether the training resulted in a lessening of activity limitations and participation restrictions. Outcomes were obtained from 263 participants immediately following the intervention period and from 243 participants 6 months postintervention. Analyses of covariance comparing performance on each outcome measure separately were conducted using intervention and hearing aid user status as between-subject factors, visit as a within-subject factor, and baseline performance as a covariate. No statistically significant main effects or interactions were found for the use of LACE on any outcome measure. Findings from this randomized controlled trial show that LACE training does not result in improved outcomes over standard-of-care hearing aid intervention alone.
Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory J
Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.
Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania
A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children and adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level than the control group with regard to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re
Coelho, Cesar A. O.; Dunsmoor, Joseph E.; Phelps, Elizabeth A.
Fear-related behaviors are prone to relapse following extinction. We tested in humans a compound extinction design ("deepened extinction") shown in animal studies to reduce post-extinction fear recovery. Adult subjects underwent fear conditioning to a visual and an auditory conditioned stimulus (CSA and CSB, respectively) separately…
van der Kooij, Herman; Peterka, Robert J
We developed a theory of human stance control that predicted (1) how subjects re-weight their utilization of proprioceptive and graviceptive orientation information in experiments where eyes closed stance was perturbed by surface-tilt stimuli with different amplitudes, (2) the experimentally observed increase in body sway variability (i.e. the "remnant" body sway that could not be attributed to the stimulus) with increasing surface-tilt amplitude, (3) neural controller feedback gains that determine the amount of corrective torque generated in relation to sensory cues signaling body orientation, and (4) the magnitude and structure of spontaneous body sway. Responses to surface-tilt perturbations with different amplitudes were interpreted using a feedback control model to determine control parameters and changes in these parameters with stimulus amplitude. Different combinations of internal sensory and/or motor noise sources were added to the model to identify the properties of noise sources that were able to account for the experimental remnant sway characteristics. Various behavioral criteria were investigated to determine if optimization of these criteria could predict the identified model parameters and amplitude-dependent parameter changes. Robust findings were that remnant sway characteristics were best predicted by models that included both sensory and motor noise, the graviceptive noise magnitude was about ten times larger than the proprioceptive noise, and noise sources with signal-dependent properties provided better explanations of remnant sway. Overall results indicate that humans dynamically weight sensory system contributions to stance control and tune their corrective responses to minimize the energetic effects of sensory noise and external stimuli.
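The feedback-control account above can be made concrete with a toy simulation: the body as an inverted pendulum stabilized by corrective torque from a PD "neural controller" acting on noisy sensory cues. All parameters (mass, height, gains, noise level) are illustrative assumptions, not the paper's identified values.

```python
import numpy as np

# Toy sketch of stance control as noisy PD feedback on an inverted pendulum.
def simulate_sway(kp=900.0, kd=300.0, sensor_noise=0.002,
                  duration=30.0, dt=0.001, seed=1):
    m, g, h = 70.0, 9.81, 0.9         # body mass (kg), gravity, CoM height (m)
    J = m * h ** 2                    # point-mass moment of inertia
    rng = np.random.default_rng(seed)
    theta, omega = 0.02, 0.0          # start with a 0.02-rad forward lean
    trace = []
    for _ in range(int(duration / dt)):
        sensed = theta + sensor_noise * rng.standard_normal()  # noisy cue
        torque = -kp * sensed - kd * omega        # corrective ankle torque
        alpha = (m * g * h * theta + torque) / J  # gravity is destabilizing
        omega += alpha * dt                       # Euler integration
        theta += omega * dt
        trace.append(theta)
    return np.array(trace)

sway = simulate_sway()
# The initial lean decays; what remains is noise-driven "remnant" sway.
```

Because the position gain kp (900 N·m/rad here) exceeds the gravitational stiffness m·g·h (about 618 N·m/rad), the loop is stable; with kp below that value the model falls over, which is the intuition behind the amplitude-dependent re-weighting of feedback gains the study identifies.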
Zmigrod, Sharon; Hommel, Bernhard
The features of perceived objects are processed in distinct neural pathways, which call for mechanisms that integrate the distributed information into coherent representations (the binding problem). Recent studies of sequential effects have demonstrated feature binding not only in perception, but also across (visual) perception and action planning. We investigated whether comparable effects can be obtained in and across auditory perception and action. The results from two experiments revealed effects indicative of spontaneous integration of auditory features (pitch and loudness, pitch and location), as well as evidence for audio-manual stimulus-response integration. Even though integration takes place spontaneously, features related to task-relevant stimulus or response dimensions are more likely to be integrated. Moreover, integration seems to follow a temporal overlap principle, with features coded close in time being more likely to be bound together. Taken altogether, the findings are consistent with the idea of episodic event files integrating perception and action plans.
Robert J Zatorre
We tested changes in cortical functional response to auditory configural learning by training ten human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music). We measured covariation in blood oxygenation signal with increasing pitch-interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature of interest. A psychophysical staircase procedure with feedback was used for training over a two-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training, the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch-interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices, also occurred after training. Individual variation in learning rate was related to the hemodynamic slope to pitch-interval size, such that those who had a higher sensitivity to pitch-interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation in the response of auditory cortex specifically to the trained stimulus feature. The reduction in blood oxygenation response to increasing pitch-interval size suggests that fewer computational resources, and hence lower neural recruitment, are associated with learning, in accord with models of auditory cortex function and with data from other modalities.
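The staircase training described above can be sketched in a few lines. The study only specifies "a psychophysical staircase procedure with feedback", so the 2-down/1-up rule, step size, and trial count below are illustrative assumptions.

```python
# Minimal sketch of an adaptive (2-down/1-up) staircase for estimating a
# pitch-interval discrimination threshold, in cents.
def run_staircase(respond, start=100.0, step=0.8, n_trials=60):
    """Track the pitch-interval size across trials.

    respond(interval) -> True if the listener answered correctly.
    The interval shrinks (is multiplied by `step`) after two consecutive
    correct responses and grows (divided by `step`) after each error.
    """
    interval = start
    correct_streak = 0
    history = []
    for _ in range(n_trials):
        history.append(interval)
        if respond(interval):
            correct_streak += 1
            if correct_streak == 2:   # 2-down: make the task harder
                interval *= step
                correct_streak = 0
        else:                         # 1-up: make the task easier
            interval /= step
            correct_streak = 0
    return history

# Simulated listener who is reliable only above a 25-cent interval.
hist = run_staircase(lambda iv: iv > 25.0)
print(round(hist[-1], 1))
```

With this deterministic simulated listener the track descends from 100 cents and then oscillates around the 25-cent threshold; a 2-down/1-up rule converges near the 70.7%-correct point of a real listener's psychometric function.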
Leite, Renata Aparecida; Magliaro, Fernanda Cristina Leite; Raimundo, Jeziela Cristina; Gândara, Mara; Garbi, Sergio; Bento, Ricardo Ferreira; Matas, Carla Gentile
The electrophysiological responses obtained with the complex auditory brainstem response (cABR) provide objective measures of subcortical processing of speech and other complex stimuli. The cABR has also been used to verify the plasticity in the auditory pathway in the subcortical regions. To compare the results of cABR obtained in children using hearing aids before and after 9 months of adaptation, as well as to compare the results of these children with those obtained in children with normal hearing. Fourteen children with normal hearing (Control Group - CG) and 18 children with mild to moderate bilateral sensorineural hearing loss (Study Group - SG), aged 7-12 years, were evaluated. The children were submitted to pure tone and vocal audiometry, acoustic immittance measurements and ABR with speech stimulus, being submitted to the evaluations at three different moments: initial evaluation (M0), 3 months after the initial evaluation (M3) and 9 months after the evaluation (M9); at M0, the children assessed in the study group did not use hearing aids yet. When comparing the CG and the SG, it was observed that the SG had a lower median for the V-A amplitude at M0 and M3, lower median for the latency of the component V at M9 and a higher median for the latency of component O at M3 and M9. A reduction in the latency of component A at M9 was observed in the SG. Children with mild to moderate hearing loss showed speech stimulus processing deficits and the main impairment is related to the decoding of the transient portion of this stimulus spectrum. It was demonstrated that the use of hearing aids promoted neuronal plasticity of the Central Auditory Nervous System after an extended time of sensory stimulation. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
van der Aa, J.; Honing, H.; ten Cate, C.
Perceiving temporal regularity in an auditory stimulus is considered one of the basic features of musicality. Here we examine whether zebra finches can detect regularity in an isochronous stimulus. Using a go/no go paradigm we show that zebra finches are able to distinguish between an isochronous
Bayat, Arash; Farhadi, Mohammad; Pourbakht, Akram; Sadjedi, Hamed; Emamdjomeh, Hesam; Kamali, Mohammad; Mirmomeni, Golshan
Background: Auditory scene analysis (ASA) is the process by which the auditory system separates individual sounds in natural-world situations. ASA is a key function of the auditory system and contributes to speech discrimination in noisy backgrounds. It is known that sensorineural hearing loss (SNHL) detrimentally affects auditory function in complex environments, but relatively few studies have focused on the influence of SNHL on the higher-level processes likely involved in auditory perception in different situations. Objectives: The purpose of the current study was to compare the auditory system ability of normally hearing and SNHL subjects using the ASA examination. Materials and Methods: A total of 40 right-handed adults (age range: 18-45 years) participated in this study. The listeners were divided equally into control and mild to moderate SNHL groups. ASA ability was measured using an ABA-ABA sequence. The frequency of the "A" tone was kept constant at 500, 1000, 2000 or 4000 Hz, while the frequency of the "B" tone was set at 3 to 80 percent above the "A" tone. For ASA threshold detection, the frequency of the B stimulus was decreased until listeners reported that they could no longer hear two separate sounds. Results: The ASA performance was significantly better for controls than for the SNHL group; these differences were more obvious at higher frequencies. We found no significant differences in ASA ability as a function of tone duration in either group. Conclusions: The present study indicated that SNHL may impair the perceptual separation of incoming acoustic information needed to form accurate representations of our acoustic world. PMID:24719695
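The ABA-ABA streaming stimulus described above is straightforward to construct: "A" tones at a fixed frequency, "B" tones a given percentage above them. Tone and gap durations below are illustrative assumptions, since the abstract does not state them.

```python
import numpy as np

# Hypothetical sketch of an ABA-ABA triplet sequence for a streaming task.
def aba_sequence(a_freq=1000.0, pct_above=40.0, fs=44100,
                 tone_ms=100, gap_ms=20, repeats=2):
    b_freq = a_freq * (1 + pct_above / 100.0)   # "B" set pct_above% over "A"
    t = np.arange(int(fs * tone_ms / 1000)) / fs
    tone = lambda f: np.sin(2 * np.pi * f * t)  # pure-tone burst
    gap = np.zeros(int(fs * gap_ms / 1000))     # silent inter-tone gap
    triplet = np.concatenate([tone(a_freq), gap, tone(b_freq), gap,
                              tone(a_freq), gap])
    return b_freq, np.tile(triplet, repeats)

b_freq, signal = aba_sequence()
print(b_freq)  # 1400.0
```

Shrinking `pct_above` trial by trial until the listener no longer hears two separate streams, as in the threshold procedure above, yields the ASA threshold for that "A" frequency.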
The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already show adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
Li, Qi; Wang, Kai; Nan, Weizhi; Zheng, Ya; Wu, Haiyan; Wang, Hongbin; Liu, Xun
The present study examined electroencephalogram profiles on a novel stimulus-response compatibility (SRC) task in order to elucidate the distinct brain mechanisms of stimulus-stimulus (S-S) and stimulus-response (S-R) conflict processing. The results showed that the SRC effects on reaction times (RTs) and N2 amplitudes were additive when both S-S and S-R conflicts existed. We also observed that, for both RTs and N2 amplitudes, the conflict adaptation effects-the reduced SRC effect following an incongruent trial versus a congruent trial-were present only when two consecutive trials involved the same type of conflict. Time-frequency analysis revealed that both S-S and S-R conflicts modulated power in the theta band, whereas S-S conflict additionally modulated power in the alpha and beta bands. In summary, our findings provide insight into the domain-specific conflict processing and the modular organization of cognitive control. Copyright © 2014 Society for Psychophysiological Research.
Gielen, Jeroen; Wiels, Wietse; Van Schependom, Jeroen; Laton, Jorne; Van Hecke, Wim; Parizel, Paul M; D'hooghe, Marie Beatrice; Nagels, Guy
The paced serial addition test (PSAT) is regularly used to assess cognitive deficits in various neuropsychiatric conditions. Being a complex test, it reflects the status of multiple cognitive domains such as working memory, information processing speed and executive functioning. Two versions of the PSAT exist: one presents auditory stimuli as spoken numbers and is known as the PASAT, while the other presents patients with visual stimuli and is called the PVSAT. The PASAT is considered more frustrating by patients, and hence the visual version is usually preferred. Research has suggested that an interference might exist between patients' verbal answers and the auditory presentation of stimuli. We therefore removed the verbal response in this study, and aimed to investigate differences in functional brain activity through functional magnetic resonance imaging. Fifteen healthy controls performed the two test versions inside an MRI scanner, switching between stimulus modality (auditory vs. visual) as well as inter-stimulus frequency (3 s vs. 2 s). We extracted 11 independent components from the data: attentional, visual, auditory, sensorimotor and default mode networks. We then performed statistical analyses of mean network activity within each component, as well as inter-network connectivity of each component pair during the different task types. Unsurprisingly, we noted an effect of modality on activity in the visual and auditory components. However, we also describe bilateral frontoparietal, anterior cingulate and insular attentional network activity. An effect of frequency was noted only in the sensorimotor network. Effects were found on edges linking visual and auditory regions. Task modality influenced an attentional-sensorimotor connection, while stimulus frequency had an influence on sensorimotor-default mode connections. Scanner noise during functional MRI may interfere with brain activation, especially during tasks involving auditory pathways. The question
Lee, Heon-Jeong; Kim, Leen; Han, Chang-Su; Kim, Yong-Ku; Kim, Seung-Hyun; Lee, Min-Soo; Joe, Sook-Haeng; Jung, In-Kwa
The reception, processing, and storage of information about experience define personality. The present study investigated the relationship between auditory event-related potentials (AERP) and personality traits. The AERP were recorded using a standard auditory oddball paradigm, and personality was evaluated by Cattell's Sixteen Personality Factor Questionnaire (16PF) in 20 healthy young male subjects. The P300 latency was found to be significantly associated with rule consciousness (factor G in the 16PF), perfectionism (factor Q3), and self-control (factor SC): it was negatively correlated with G score (r = -0.56, P = 0.01), Q3 score (r = -0.67, P = 0.001), and SC score (r = -0.65, P = 0.002). Moreover, the P300 amplitude and N100 amplitude were negatively correlated with reasoning (factor B; r = -0.46, P = 0.044; and r = -0.72, P = 0.002, respectively). These results indicate that the personality traits of self-control, perfectionism, high superego, and reasoning are related to information processing in the brain.
Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander
Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.
Sininger, Yvonne S; Bhatara, Anjali
Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000, and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was that processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow-band noise (NBN) of 450-Hz bandwidth was evaluated using intensity discrimination, and (2) stimulus duration, 200, 500, and 1000 ms duration tones were evaluated using frequency discrimination. A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments, but an expected REA for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterised as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex, which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli.
Halverson, Hunter E.; Poremba, Amy; Freeman, John H.
Associative learning tasks commonly involve an auditory stimulus, which must be projected through the auditory system to the sites of memory induction for learning to occur. The cochlear nucleus (CN) projection to the pontine nuclei has been posited as the necessary auditory pathway for cerebellar learning, including eyeblink conditioning.…
Tonelli, Alessia; Cuturi, Luigi F; Gori, Monica
Size perception can be influenced by several visual cues, such as spatial (e.g., depth or vergence) and temporal contextual cues (e.g., adaptation to steady visual stimulation). Nevertheless, perception is generally multisensory, and other sensory modalities, such as audition, can contribute to the functional estimation of the size of objects. In this study, we investigate whether auditory stimuli at different sound pitches can influence visual size perception after visual adaptation. To this aim, we used an adaptation paradigm (Pooresmaeili et al., 2013) in three experimental conditions: visual-only, visual-sound at 100 Hz, and visual-sound at 9,000 Hz. We asked participants to judge the size of a test stimulus in a size discrimination task. First, we obtained a baseline for all conditions. In the visual-sound conditions, the auditory stimulus was concurrent with the test stimulus. Second, we repeated the task by presenting an adapter (twice as big as the reference stimulus) before the test stimulus. We replicated the size aftereffect in the visual-only condition: the test stimulus was perceived as smaller than its physical size. The new finding is that the auditory stimuli had an effect on the perceived size of the test stimulus after visual adaptation: the low-frequency sound decreased the effect of visual adaptation, making the stimulus appear bigger compared to the visual-only condition, whereas the high-frequency sound had the opposite effect, making the test size appear even smaller.
Parving, A; Salomon, G; Elberling, Claus
An investigation of the middle components of the auditory evoked response (10-50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements … that the middle components cannot be generated exclusively, if at all, in the primary auditory cortex, located in the temporal lobe. Furthermore, the responses are found to be of neurogenic origin according to the methodological procedure applied …
Huang, Xiyan; Chen, Xi; Yan, Nan; Jones, Jeffery A; Wang, Emily Q; Chen, Ling; Guo, Zhiqiang; Li, Weifeng; Liu, Peng; Liu, Hanjun
Several studies have shown sensorimotor deficits in speech processing in individuals with idiopathic Parkinson's disease (PD). The underlying neural mechanisms, however, remain poorly understood. In the present event-related potential (ERP) study, 18 individuals with PD and 18 healthy controls were exposed to frequency-altered feedback (FAF) while producing a sustained vowel and listening to the playback of their own voice. Behavioral results revealed that individuals with PD produced significantly larger vocal compensation for pitch feedback errors than healthy controls, and exhibited a significant positive correlation between the magnitude of their vocal responses and the variability of their unaltered vocal pitch. At the cortical level, larger P2 responses were observed for individuals with PD compared with healthy controls during active vocalization due to left-lateralized enhanced activity in the superior and inferior frontal gyrus, premotor cortex, inferior parietal lobule, and superior temporal gyrus. These two groups did not differ, however, when they passively listened to the playback of their own voice. Individuals with PD also exhibited larger P2 responses during active vocalization when compared with passive listening due to enhanced activity in the inferior frontal gyrus, precentral gyrus, postcentral gyrus, and middle temporal gyrus. This enhancement effect, however, was not observed for healthy controls. These findings provide neural evidence for the abnormal auditory-vocal integration for voice control in individuals with PD, which may be caused by their deficits in the detection and correction of errors in voice auditory feedback. Hum Brain Mapp 37:4248-4261, 2016. © 2016 Wiley Periodicals, Inc.
Gulberti, A; Hamel, W; Buhmann, C; Boelmans, K; Zittel, S; Gerloff, C; Westphal, M; Engel, A K; Schneider, T R; Moll, C K E
While the motor effects of dopaminergic medication and subthalamic nucleus deep brain stimulation (STN-DBS) in Parkinson's disease (PD) patients are well explored, their effects on sensory processing are less well understood. Here, we studied the impact of levodopa and STN-DBS on auditory processing. Rhythmic auditory stimulation (RAS) was presented at frequencies between 1 and 6 Hz in a passive listening paradigm. High-density EEG recordings were obtained before (levodopa ON/OFF) and 5 months following STN surgery (ON/OFF STN-DBS). We compared auditory evoked potentials (AEPs) elicited by RAS in 12 PD patients to those in age-matched controls. Tempo-dependent amplitude suppression of the auditory P1/N1 complex was used as an indicator of auditory gating. Parkinsonian patients showed significantly larger AEP amplitudes (P1, N1) and longer AEP latencies (N1) compared to controls. Neither interruption of dopaminergic medication nor of STN-DBS had an immediate effect on these AEPs. However, chronic STN-DBS had a significant effect on the abnormal auditory gating characteristics of parkinsonian patients and restored a physiological P1/N1 amplitude attenuation profile in response to RAS with increasing stimulus rates. This differential treatment effect suggests a divergent mode of action of levodopa and STN-DBS on auditory processing. STN-DBS may improve early attentive filtering processes of redundant auditory stimuli, possibly at the level of the frontal cortex. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Hames, Elizabeth’ C.; Murphy, Brandi; Rajmohan, Ravi; Anderson, Ronald C.; Baker, Mary; Zupancic, Stephen; O’Boyle, Michael; Richman, David
Electroencephalography (EEG) and blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) assessed the neurocorrelates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20 and 28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate, with observable decreased EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions, reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT when participants were presented with auditory stimuli, as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all-auditory block vs. the second presentation of a visual stimulus in an all-visual block (AA2-VV2). We conclude that in ASD, combined audiovisual processing is more similar than unimodal processing to NTs. PMID:27148020
Easton, R. D.; Greene, A. J.; DiZio, P.; Lackner, J. R.
This study assessed whether stationary auditory information could affect body and head sway (as does visual and haptic information) in sighted and congenitally blind people. Two speakers, one placed adjacent to each ear, significantly stabilized center-of-foot-pressure sway in a tandem Romberg stance, while neither a single speaker in front of subjects nor a head-mounted sonar device reduced center-of-pressure sway. Center-of-pressure sway was reduced to the same level in the two-speaker condition for sighted and blind subjects. Both groups also evidenced reduced head sway in the two-speaker condition, although blind subjects' head sway was significantly larger than that of sighted subjects. The advantage of the two-speaker condition was probably attributable to the nature of distance compared with directional auditory information. The results rule out a deficit model of spatial hearing in blind people and are consistent with one version of a compensation model. Analysis of maximum cross-correlations between center-of-pressure and head sway, and associated time lags suggest that blind and sighted people may use different sensorimotor strategies to achieve stability.
Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony
It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…
Stekelenburg, Jeroen J; Vroomen, Jean
The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials are also attenuated when a sound is accompanied by a video of anticipatory visual motion that reliably predicts the sound. Here, we investigated whether the neural underpinnings of prediction of upcoming auditory stimuli are similar for motor-auditory (MA) and visual-auditory (VA) events using a stimulus omission paradigm. In the MA condition, a finger tap triggered the sound of a handclap, whereas in the VA condition the same sound was accompanied by a video showing the handclap. In both conditions, the auditory stimulus was omitted in either 50% or 12% of the trials. These auditory omissions induced early and mid-latency ERP components (oN1 and oN2, presumably reflecting prediction and prediction error), and subsequent higher-order error evaluation processes. The oN1 and oN2 of MA and VA were alike in amplitude, topography, and neural sources, even though the origin of the prediction stems from different brain areas (motor versus visual cortex). This suggests that MA and VA predictions activate a sensory template of the sound in auditory cortex. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015 Elsevier B.V. All rights reserved.
Seyed Kazem Mousavi-Sadati
Objective: This research aimed to investigate the multiple-resource and central-resource theories of attention via secondary task performance while talking on two types of cell phone during driving. Materials & Methods: Using convenience sampling, 25 male participants were selected, and their reactions to an auditory stimulus in three different driving conditions (no phone conversation, conversation with a handheld phone, and conversation with a hands-free phone) were recorded. The order of driving conditions was varied from participant to participant to control for test sequence and participants' familiarity with the test conditions. Results: Analysis of the data with descriptive statistics, Mauchly's test of sphericity, one-factor repeated measures ANOVA, and paired-samples t tests showed that the different driving conditions affected reaction time (P < 0.001). Phone conversation with a hands-free phone increased drivers' simple reaction time to the auditory stimulus (P < 0.001). Using a handheld phone did not increase drivers' reaction time to the auditory stimulus over a hands-free phone (P < 0.001). Conclusion: The results confirmed that the performance quality of dual and multiple tasks can be predicted by the four-dimensional multiple-resource model of attention, and that traffic laws concerning handheld phones should also be extended to the use of hands-free phones.
Blom, Jan Dirk
Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.
Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In
Boller, F; Vrtunski, P B; Kim, Y; Mack, J L
The effect of Delayed Auditory Feedback (DAF) was evaluated in three groups of subjects: 10 normal controls, 10 non-fluent aphasics, and 10 fluent aphasics. Speech production tasks consisted of (1) repeating sounds and words; (2) naming objects; (3) producing sentences from given stimulus words; (4) answering questions; (5) reciting nursery rhymes; and (6) reading. Two delays were used, 180 and 360 msec. Two independent judges rated patients' responses for changes in intensity, duration, and quality of speech. Inter-judge reliability was considered satisfactory. Contrary to some previous reports, all subjects, including all the fluent aphasics, showed some DAF effect. Fluent aphasics, however, showed a significantly smaller DAF effect than non-fluent aphasics. Patients with conduction aphasia appeared to be the least impaired. The overall DAF effect was greater with 180 msec than with 360 msec. The largest DAF effect occurred during answering questions, followed by repeating, reading, nursery rhymes, sentence production, and naming, in that order. Repetition of a complex word produced a greater DAF effect than repetition of a simple sound. Finally, we found a differential effect of DAF on the three measures used in the study. We hypothesize that DAF effects result from changes in two separate monitoring systems. One system is related to changes in the intensity of speech and does not appear to be affected by aphasia. The other is responsible for duration and qualitative changes in speech and is differentially affected in relation to the pathology producing aphasia.
Nees, Michael A.
Researchers have shown increased interest in mechanisms of working memory for nonverbal sounds such as music and environmental sounds. These studies often have used two-stimulus comparison tasks: two sounds separated by a brief retention interval (often 3 to 5 s) are compared, and a same or different judgment is recorded. Researchers seem to have assumed that sensory memory has a negligible impact on performance in auditory two-stimulus comparison tasks. This assumption is examined in detail…
Ron-Angevin, Ricardo; Velasco-Álvarez, Francisco; Fernández-Rodríguez, Álvaro; Díaz-Estrella, Antonio; Blanca-Mena, María José; Vizcaíno-Martín, Francisco Javier
Certain diseases affect brain areas that control the movements of the patients' body, thereby limiting their autonomy and communication capacity. Research in the field of Brain-Computer Interfaces aims to provide patients with an alternative communication channel not based on muscular activity, but on the processing of brain signals. Through these systems, subjects can control external devices such as spellers to communicate, robotic prostheses to restore limb movements, or domotic systems. The present work focuses on the non-muscular control of a robotic wheelchair. A proposal to control a wheelchair through a Brain-Computer Interface based on the discrimination of only two mental tasks is presented in this study. The wheelchair displacement is performed with discrete movements. The control signals used are sensorimotor rhythms modulated through a right-hand motor imagery task or a mental idle state. The peculiarity of the control system is that it is based on a serial auditory interface that provides the user with four navigation commands. The use of two mental tasks to select commands may facilitate control and reduce error rates compared to other endogenous control systems for wheelchairs. Seventeen subjects initially participated in the study; nine of them completed the three sessions of the proposed protocol. After the first calibration session, seven subjects were discarded due to low control of their electroencephalographic signals; nine out of ten subjects controlled a virtual wheelchair during the second session; these same nine subjects achieved a mean accuracy above 0.83 in the real wheelchair control session. The results suggest that more extensive training with the proposed control system can be an effective and safe option that will allow the displacement of a wheelchair in a controlled environment for potential users suffering from some types of motor neuron diseases.
Pacheco-Unguetti, Antonia Pilar; Parmentier, Fabrice B R
Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such effect may not be limited to negative emotions but reflect a general depletion of attentional resources by examining whether a positive emotion (happiness) would increase deviance distraction too. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of the exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction. © 2015 The British Psychological Society.
Ramirez, Luz Angela; Arenas, Angela Maria; Henao, Gloria Cecilia
Introduction: This investigation describes and compares characteristics of visual, semantic and auditory memory in a group of children diagnosed with combined-type attention deficit with hyperactivity, attention deficit predominating, and a control group. Method: 107 boys and girls were selected, from 7 to 11 years of age, all residents in the…
He, Jia; Sun, Hong-Qiang; Li, Su-Xia; Zhang, Wei-Hua; Shi, Jie; Ai, Si-Zhi; Li, Yun; Li, Xiao-Jun; Tang, Xiang-Dong; Lu, Lin
Repeated exposure to a neutral conditioned stimulus (CS) in the absence of a noxious unconditioned stimulus (US) elicits fear memory extinction. The aim of the current study was to investigate the effects of mild tone exposure (CS) during slow wave sleep (SWS) on fear memory extinction in humans. The healthy volunteers underwent an auditory fear conditioning paradigm on the experimental night, during which tones served as the CS and a mild shock served as the US. They were then randomly assigned to four groups. Three groups were exposed to the CS for 3 or 10 min or to an irrelevant tone (control stimulus, CtrS) for 10 min during SWS. The fourth group served as controls and was not subjected to any interventions. All of the subjects completed a memory test 4 h after the SWS-rich stage to evaluate the effect on fear extinction. Moreover, we conducted similar experiments using an independent group of subjects during the daytime to test whether the memory extinction effect was specific to the sleep condition. Ninety-six healthy volunteers (44 males) aged 18-28 y participated. Participants exhibited undisturbed sleep during 2 consecutive nights, as assessed by sleep variables (all P > 0.05) from polysomnographic recordings and power spectral analysis. Participants who were re-exposed to the 10 min CS either during SWS or wakefulness exhibited attenuated fear responses (wake, 10 min CS: P …), indicating fear memory extinction without altering sleep profiles. © 2015 Associated Professional Sleep Societies, LLC.
In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in the human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect being located in the vicinity of the left auditory cortex. For the right auditory cortex, a task-invariant suppression effect was observed in both DMS and control tasks. Furthermore, analysis of coherence revealed a beta band (12-20 Hz) DMS-specific enhanced functional interaction between the sources in the left auditory cortex and those in the left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.
Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T
Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization source of these signals as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention, remains unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlation was localized to cortical regions associated with the default mode network, and positive correlation to ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in the fronto-parietal area that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation of theta and BOLD activity was observed in visual cortex during auditory attention. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down, theta-related cognitive control in cross-modal visual attention. In sensory cortices, on the other hand, there are opposing effects of theta activity during cross-modal auditory attention.
Roy, Saborni; Nag, Tapas C; Upadhyay, Ashish Datt; Mathur, Rashmi; Jain, Suman
Extrinsic sensory stimulation plays a crucial role in the formation and integration of sensory modalities during development. Postnatal behavior is thereby influenced by the type and timing of presentation of prenatal sensory stimuli. In this study, fertilized eggs of white Leghorn chickens were exposed during incubation to either species-specific calls or no sound. To find the prenatal critical period when auditory stimulation can modulate visual system development, the former group was divided into three subgroups: in subgroup A (SGA), the stimulus was provided from embryonic day (E)10 to E16; in SGB, from E17 to hatching; and in SGC, from E10 to hatching. Auditory and visual perceptual learning was recorded at posthatch day (PH) 1-3, whereas synaptic plasticity (evident from synaptophysin and PSD-95 expression) was observed at E19, E20, and PH 1-3. An increased number of responders were observed in both auditory and visual preference tests at PH 1 following stimulation. Although a decrease in latency of entry and an increase in total time spent were observed in all stimulated groups, the effect was most significant in SGC in the auditory preference test and in SGB and SGC in the visual preference test. The auditory cortex of SGC and the visual Wulst of SGB and SGC revealed higher expression of synaptic proteins compared to control and SGA. A significant inter-hemispheric and gender-based difference in expression was also found in all groups. These results indicate facilitation of postnatal behavior and synaptogenesis in both auditory and visual systems following prenatal repetitive auditory stimulation, but only when given during the prenatal critical period of development. Copyright © 2013 Wiley Periodicals, Inc.
Wada, Hiromi; Yumoto, Shoko; Iso, Hiroyuki
We examined the effect of perinatal hypothyroidism on auditory function in rats using a prepulse inhibition paradigm. Pregnant rats were treated with the antithyroid drug methimazole (1-methyl-2-mercaptoimidazole) from gestational day 15 to postnatal day 21 via drinking water at concentrations (w/v) of 0 (control), 0.002 (low dose), or 0.02% (high dose). Rats from methimazole-treated mothers were tested at ages 1, 6, and 12 months using techniques to examine prepulse inhibition and startle response. The startle stimulus consisted of 40 ms of white noise at 115 dB, whereas the prepulse, which preceded the startle stimulus by 30 ms, consisted of 20 ms of white noise at 75, 85, or 95 dB. When the prepulse intensity was 75 or 85 dB, the high-dose group showed decreased prepulse inhibition percentages compared with the control and low-dose groups. The reduced percentages of prepulse inhibition did not return to control levels over the 12-month study period. In contrast, no differences in prepulse inhibition were observed among the three dose groups when prepulse intensity was 95 dB. Moreover, the high-dose group displayed excessive reaction to auditory startle stimuli compared with the other groups. Reductions in plasma free thyroxine and body weight gain were observed in the high-dose group. We conclude that perinatal hypothyroidism results in irreversible damage to auditory function in rats. Copyright © 2013 Elsevier Inc. All rights reserved.
Jennifer L. O’Brien
Auditory cognitive training (ACT) improves attention in older adults; however, the underlying neurophysiological mechanisms are still unknown. The present study examined the effects of ACT on the P3b event-related potential, reflecting attention allocation (amplitude) and speed of processing (latency) during stimulus categorization, and the P1-N1-P2 complex, reflecting perceptual processing (amplitude and latency). Participants completed an auditory oddball task before and after 10 weeks of ACT (n = 9) or a no-contact control period (n = 15). Parietal P3b amplitudes to oddball stimuli decreased at post-test in the trained group as compared to those in the control group, and frontal P3b amplitudes showed a similar trend, potentially reflecting more efficient attentional allocation after ACT. No advantages for the ACT group were evident for auditory perceptual processing or speed of processing in this small sample. Our results provide preliminary evidence that ACT may enhance the efficiency of attention allocation, which may account for the positive impact of ACT on the everyday functioning of older adults.
Kimura, Hiroshi; Kanahara, Nobuhisa; Takase, Masayuki; Yoshida, Taisuke; Watanabe, Hiroyuki; Iyo, Masaomi
Chronic auditory verbal hallucinations (AVHs) in patients with schizophrenia are sometimes resistant to standard pharmacotherapy. Repetitive transcranial magnetic stimulation (rTMS) may be a promising treatment modality for AVHs, but the best protocol has yet to be identified. We used a double-blind randomized sham-controlled design with 30 patients (active group N=16 vs. sham group N=14) with chronic AVHs that persisted despite adequate pharmacotherapy. The protocol consisted of four sessions of high-frequency (20-Hz) rTMS targeting the left temporoparietal cortex over 2 days (10,400 stimulations in total) administered to each patient. After the rTMS sessions the patients were followed for 4 weeks and evaluated with the Auditory Hallucination Rating Scale (AHRS). The mean AHRS score changed from 22.9 (baseline) to 18.4 (4th week) in the active group and from 24.2 (baseline) to 21.8 (4th week) in the sham group, indicating no significant difference by mixed-model analysis. None of the secondary end points (the AHRS subscores, BPRS, GAF and CGI-S) showed a significant between-group difference. The present study's rTMS protocol was ineffective for our patients. However, several previous studies demonstrated that high-frequency rTMS is a possible strategy for ameliorating pharmacotherapy-resistant AVHs. It is important to establish a more reliable high-frequency rTMS protocol.
Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar
We report the results of oddball experiments in which an irrelevant stimulus (standard, deviant) was presented before a target stimulus and the modality of these stimuli was manipulated orthogonally (visual/auditory). Experiment 1 showed that auditory deviants yielded distraction irrespective of the target's modality while visual deviants did not impact on performance. When participants were forced to attend the distractors in order to detect a rare target ("target-distractor"), auditory deviants yielded distraction irrespective of the target's modality and visual deviants yielded a small distraction effect when targets were auditory (Experiments 2 & 3). Visual deviants only produced distraction for visual targets when deviant stimuli were not visually distinct from the other distractors (Experiment 4). Our results indicate that while auditory deviants yield distraction irrespective of the targets' modality, visual deviants only do so when attended and under selective conditions, at least when irrelevant and target stimuli are temporally and perceptually decoupled.
Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.
Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new
Marks, Kendra L; Martel, David T; Wu, Calvin; Basura, Gregory J; Roberts, Larry E; Schvartz-Leyzac, Kara C; Shore, Susan E
The dorsal cochlear nucleus is the first site of multisensory convergence in mammalian auditory pathways. Principal output neurons, the fusiform cells, integrate auditory nerve inputs from the cochlea with somatosensory inputs from the head and neck. In previous work, we developed a guinea pig model of tinnitus induced by noise exposure and showed that the fusiform cells in these animals exhibited increased spontaneous activity and cross-unit synchrony, which are physiological correlates of tinnitus. We delivered repeated bimodal auditory-somatosensory stimulation to the dorsal cochlear nucleus of guinea pigs with tinnitus, choosing a stimulus interval known to induce long-term depression (LTD). Twenty minutes per day of LTD-inducing bimodal (but not unimodal) stimulation reduced physiological and behavioral evidence of tinnitus in the guinea pigs after 25 days. Next, we applied the same bimodal treatment to 20 human subjects with tinnitus using a double-blinded, sham-controlled, crossover study. Twenty-eight days of LTD-inducing bimodal stimulation reduced tinnitus loudness and intrusiveness. Unimodal auditory stimulation did not deliver either benefit. Bimodal auditory-somatosensory stimulation that induces LTD in the dorsal cochlear nucleus may hold promise for suppressing chronic tinnitus, which reduces quality of life for millions of tinnitus sufferers worldwide.
The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance; the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as in those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus, that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographics (n = 5148). Despite differences in paradigms and experimental settings, the target-detection performance of the app's users was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential of smartphone apps for capturing robust large-scale auditory behavioral data from normal healthy volunteers, an approach that can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.
Kara, Inci; Apiliogullari, Seza; Bagcı Taylan, Sengal; Bariskaner, Hulagu; Celik, Jale Bengi
This study was designed to investigate whether dexketoprofen added perineurally or subcutaneously alters the effects of levobupivacaine in a rat model of sciatic nerve blockade. Thirty-six rats received unilateral sciatic nerve blocks along with a subcutaneous injection by a blinded investigator assigned at random. Combinations were as follows: Group 1 (sham), perineural and subcutaneous saline; Group 2, perineural levobupivacaine alone and subcutaneous saline; Group 3, perineural levobupivacaine plus dexketoprofen and subcutaneous saline; Group 4, perineural levobupivacaine and subcutaneous dexketoprofen; Group 5, perineural dexketoprofen and subcutaneous saline; and Group 6, perineural saline and subcutaneous dexketoprofen. The levobupivacaine concentration was fixed at 0.05%, and the dose of dexketoprofen was 1 mg kg(-1). Sensory analgesia was assessed by paw withdrawal latency to a thermal stimulus every 30 min. The unblocked paw served as the control for the assessment of systemic, centrally mediated analgesia. Perineural and subcutaneous dexketoprofen coadministered with perineural levobupivacaine did not prolong the duration of sensory blockade compared with levobupivacaine alone. There were significant differences between the operative and control paws at time points 30-90 min in the perineural levobupivacaine alone, levobupivacaine plus perineural dexketoprofen, and levobupivacaine plus subcutaneous dexketoprofen groups. No significant differences in the operative paw were found between the levobupivacaine-alone group and the dexketoprofen groups. The effects of perineurally administered dexketoprofen are unknown. There was no significant difference between the analgesic effects of peripheral nerve blocks using levobupivacaine alone and levobupivacaine plus subcutaneous or perineural dexketoprofen.
Gandhi, Pritesh Hariprasad; Gokhale, Pradnya A; Mehta, H B; Shah, C J
Reaction time is the interval between the application of a stimulus and the appearance of an appropriate voluntary response by a subject. It involves stimulus processing, decision making, and response programming. Reaction time has been widely studied because of its implications in sports physiology and because its practical consequences may be considerable; for example, a slower than normal reaction time while driving can have grave results. Our aims were to study simple auditory reaction time in congenitally blind subjects and in age- and sex-matched sighted subjects, and to compare simple auditory reaction time between congenitally blind subjects and healthy control subjects. The study was carried out in two groups: the first of 50 congenitally blind subjects, and the second of 50 healthy controls. It was carried out on a Multiple Choice Reaction Time Apparatus, Inco Ambala Ltd. (accuracy ±0.001 s), with subjects in a sitting position, at Government Medical College and Hospital, Bhavnagar and at a Blind School, PNR campus, Bhavnagar, Gujarat, India. Simple auditory reaction time to four different types of sound (horn, bell, ring, and whistle) was recorded in both groups. According to our study, there is no significant difference in reaction time between congenitally blind and normal healthy persons. Blind individuals commonly rely on tactual and auditory cues for information and orientation, and this reliance on touch and audition, together with more practice in using these modalities to guide behavior, is often reflected in better performance of blind relative to sighted participants in tactile or auditory discrimination tasks; nevertheless, we found no difference in reaction time between congenitally blind and sighted people.
Simon, Jonathan Z
Auditory objects, like their visual counterparts, are perceptually defined constructs, but nevertheless must arise from underlying neural circuitry. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects listening to complex auditory scenes, we review studies that demonstrate that auditory objects are indeed neurally represented in auditory cortex. The studies use neural responses obtained from different experiments in which subjects selectively listen to one of two competing auditory streams embedded in a variety of auditory scenes. The auditory streams overlap spatially and often spectrally. In particular, the studies demonstrate that selective attentional gain does not act globally on the entire auditory scene, but rather acts differentially on the separate auditory streams. This stream-based attentional gain is then used as a tool to individually analyze the different neural representations of the competing auditory streams. The neural representation of the attended stream, located in posterior auditory cortex, dominates the neural responses. Critically, when the intensities of the attended and background streams are separately varied over a wide intensity range, the neural representation of the attended speech adapts only to the intensity of that speaker, irrespective of the intensity of the background speaker. This demonstrates object-level intensity gain control in addition to the above object-level selective attentional gain. Overall, these results indicate that concurrently streaming auditory objects, even if spectrally overlapping and not resolvable at the auditory periphery, are individually neurally encoded in auditory cortex, as separate objects.
Borders, Alyssa A; Aly, Mariam; Parks, Colleen M; Yonelinas, Andrew P
The medial temporal lobe (MTL) is critical for binding together different attributes that together form memory for prior episodes, but whether it is preferentially involved in supporting specific types of associations is a topic of much debate. Some have argued that the MTL, specifically the hippocampus, may be specialized for binding information from different stimulus domains (e.g., linking visual and auditory stimuli). In the current study, we examined the role of the MTL in memory for associations within- vs. across-domains. Patients with either selective hippocampal lesions or more extensive MTL lesions studied pairs of items within the same stimulus domain (i.e., image-image or sound-sound pairs) or across different domains (i.e., image-sound pairs). Associative memory was subsequently tested by having participants discriminate between previously studied and rearranged pairs. Compared to healthy controls, the patients were significantly more impaired in the across-domain condition than the within-domain conditions. Similar deficits were observed for patients with hippocampal lesions and those with more extensive MTL lesions, suggesting that the hippocampus itself is particularly important for binding associations across stimulus domains.
Coleman, A Rand; Williams, J Michael
This study examined the effects of implicit semantic and rhyming cues on the perception of auditory stimuli among nonaphasic participants who had suffered a lesion of the right cerebral hemisphere and showed auditory neglect of sound perceived by the left ear. Because language represents an elaborate processing of auditory stimuli and the language centers were intact in these patients, it was hypothesized that interactive verbal stimuli presented in a dichotic manner would attenuate neglect. The selected participants were administered an experimental dichotic listening test composed of six types of word pairs: unrelated words, synonyms, antonyms, categorically related words, compound words, and rhyming words. Presentation of word pairs that were semantically related resulted in a dramatic reduction of auditory neglect. Dichotic presentations of rhyming words exacerbated auditory neglect. These findings suggest that the perception of auditory information is strongly affected by the specific content conveyed by the auditory system. Language centers will process a degraded stimulus that contains salient language content. A degraded auditory stimulus is neglected if it is devoid of content that activates the language centers or other cognitive systems. In general, these findings suggest that auditory neglect involves a complex interaction of intact and impaired cerebral processing centers with content that is selectively processed by these centers.
Schnakenberg Martin, Ashley M; Bartolomeo, Lisa; Howell, Josselyn; Hetrick, William P; Bolbecker, Amanda R; Breier, Alan; Kidd, Gary; O'Donnell, Brian F
Schizophrenia spectrum disorder (SZ) is associated with deficits in auditory perception as well as auditory verbal hallucinations (AVH). However, the relationship between auditory feature perception and AVH, one of the most commonly occurring symptoms in psychosis, has not been well characterized. This study evaluated perception of a broad range of auditory features in SZ and determined whether current AVH relate to auditory feature perception. Auditory perception, including frequency, intensity, duration, pulse-train and temporal-order discrimination, as well as an embedded tone task, was assessed in both AVH (n = 20) and non-AVH (n = 24) SZ individuals and in healthy controls (n = 29) with the Test of Basic Auditory Capabilities (TBAC). The Hamilton Program for Schizophrenia Voices Questionnaire (HPSVQ) was used to assess the experience of auditory hallucinations in patients with SZ. Findings suggest that compared to controls, the SZ group had greater deficits on an array of auditory features, with non-AVH SZ individuals showing the most severe degree of abnormality. IQ and measures of cognitive processing were positively associated with performance on the TBAC for all SZ individuals, but not with the HPSVQ scores. These findings indicate that persons with SZ demonstrate impaired auditory perception for a broad range of features. Impaired auditory perception does not appear to be associated with recent auditory verbal hallucinations, but instead with the degree of intellectual impairment in SZ.
Elizabeth C Hames
Electroencephalography (EEG) and blood-oxygen-level-dependent functional magnetic resonance imaging (BOLD fMRI) assessed the neurocorrelates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20-28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate, with observable decreased EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions, reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT when participants were presented with auditory stimuli, as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all-auditory block versus the second presentation of a visual stimulus in an all-visual block (AA2 vs. VV2). We conclude that in ASD, combined audiovisual processing is more similar than unimodal processing to NTs.
Tillery, Kim L.; Katz, Jack; Keller, Warren D.
A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…
Zhang, Yu-Xuan; Moore, David R; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal
Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience.
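For readers unfamiliar with the n-back procedure used here to measure working memory, a "target" trial is one whose stimulus matches the stimulus presented n trials earlier. The sketch below illustrates only that target definition; the function name and the tone sequence are hypothetical, not drawn from the study's materials.

```python
def nback_targets(tones: list[int], n: int) -> list[bool]:
    """Flag each position whose tone equals the tone presented n steps back."""
    return [i >= n and tones[i] == tones[i - n] for i in range(len(tones))]

# Hypothetical 2-back sequence of tone frequencies (Hz):
seq = [440, 494, 440, 523, 440]
print(nback_targets(seq, 2))  # [False, False, True, False, True]
```

A participant's working-memory score is then typically derived from hits and false alarms against these target positions.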
Abdollah Moossavi; Saeideh Mehrkian; Yones Lotfi; Soghrat Faghih zadeh; Hamed Adjedi
Objectives: This study investigated the efficacy of working memory training for improving working memory capacity and related auditory stream segregation in children with auditory processing disorder. Methods: Fifteen subjects (9-11 years), clinically diagnosed with auditory processing disorder, participated in this non-randomized case-controlled trial. Working memory abilities and auditory stream segregation were evaluated prior to beginning and six weeks after completing the training program...
van der Aa, Jeroen; Honing, Henkjan; ten Cate, Carel
Perceiving temporal regularity in an auditory stimulus is considered one of the basic features of musicality. Here we examine whether zebra finches can detect regularity in an isochronous stimulus. Using a go/no go paradigm we show that zebra finches are able to distinguish between an isochronous and an irregular stimulus. However, when the tempo of the isochronous stimulus is changed, it is no longer treated as similar to the training stimulus. Training with three isochronous and three irregular stimuli did not result in improvement of the generalization. In contrast, humans, exposed to the same stimuli, readily generalized across tempo changes. Our results suggest that zebra finches distinguish the different stimuli by learning specific local temporal features of each individual stimulus rather than attending to the global structure of the stimuli, i.e., to the temporal regularity.
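The isochronous/irregular contrast at the heart of this paradigm can be made concrete with a small sketch: an isochronous pulse train has a fixed inter-onset interval (IOI), while an irregular one jitters each IOI around the same mean tempo. All names and parameter values below are illustrative assumptions, not the authors' stimulus code.

```python
import random

def isochronous_onsets(n: int, ioi_ms: float) -> list[float]:
    """Onset times (ms) for a pulse train with a fixed inter-onset interval."""
    return [i * ioi_ms for i in range(n)]

def irregular_onsets(n: int, ioi_ms: float, jitter_ms: float, seed: int = 0) -> list[float]:
    """Onset times whose IOIs are jittered uniformly around the same mean IOI."""
    rng = random.Random(seed)
    t, onsets = 0.0, []
    for _ in range(n):
        onsets.append(t)
        t += ioi_ms + rng.uniform(-jitter_ms, jitter_ms)
    return onsets

iso = isochronous_onsets(8, 250.0)      # 250 ms IOI = 240 BPM pulse train
irr = irregular_onsets(8, 250.0, 80.0)  # same mean tempo, irregular timing
print(iso[:4])  # [0.0, 250.0, 500.0, 750.0]
```

Changing the tempo of the isochronous train rescales every IOI; a listener attending to the global structure (regularity) generalizes across that change, whereas one memorizing local intervals does not.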
Zokoll, Melanie A; Klump, Georg M; Langemann, Ulrike
This study evaluates auditory memory for variations in the rate of sinusoidal amplitude modulation (SAM) of noise bursts in the European starling (Sturnus vulgaris). To estimate the extent of the starling's auditory short-term memory store, a delayed non-matching-to-sample paradigm was applied. The birds were trained to discriminate between a series of identical "sample stimuli" and a single "test stimulus". The birds classified SAM rates of sample and test stimuli as being either the same or different. Memory performance of the birds was measured as the percentage of correct classifications. Auditory memory persistence time was estimated as a function of the delay between sample and test stimuli. Memory performance was significantly affected by the delay between sample and test and by the number of sample stimuli presented before the test stimulus, but was not affected by the difference in SAM rate between sample and test stimuli. The individuals' auditory memory persistence times varied between 2 and 13 s. The starlings' auditory memory persistence in the present study for signals varying in the temporal domain was significantly shorter compared to that of a previous study (Zokoll et al. in J Acoust Soc Am 121:2842, 2007) applying tonal stimuli varying in the spectral domain.
Bais, Leonie; Vercammen, Ans; Stewart, Roy; van Es, Frank; Visser, Bert; Aleman, André; Knegtering, Henderikus
Repetitive transcranial magnetic stimulation of the left temporo-parietal junction area has been studied as a treatment option for auditory verbal hallucinations. Although the right temporo-parietal junction area has also shown involvement in the genesis of auditory verbal hallucinations, no studies have used bilateral stimulation. Moreover, little is known about durability of effects. We studied the short and long term effects of 1 Hz treatment of the left temporo-parietal junction area in schizophrenia patients with persistent auditory verbal hallucinations, compared to sham stimulation, and added an extra treatment arm of bilateral TPJ area stimulation. In this randomized controlled trial, 51 patients diagnosed with schizophrenia and persistent auditory verbal hallucinations were randomly allocated to treatment of the left or bilateral temporo-parietal junction area or sham treatment. Patients were treated for six days, twice daily for 20 minutes. Short term efficacy was measured with the Positive and Negative Syndrome Scale (PANSS), the Auditory Hallucinations Rating Scale (AHRS), and the Positive and Negative Affect Scale (PANAS). We included follow-up measures with the AHRS and PANAS at four weeks and three months. The interaction between time and treatment for Hallucination item P3 of the PANSS showed a trend toward significance, caused by a small reduction of scores in the left group. Although self-reported hallucination scores, as measured with the AHRS and PANAS, decreased significantly during the trial period, there were no differences between the three treatment groups. We did not find convincing evidence for the efficacy of left-sided rTMS, compared to sham rTMS. Moreover, bilateral rTMS was not superior to left rTMS or sham in improving AVH. Optimizing treatment parameters may result in stronger evidence for the efficacy of rTMS treatment of AVH. Moreover, future research should consider investigating factors predicting individual response. Dutch Trial
Kelly L Tremblay
Auditory training programs are being developed to remediate various types of communication disorders. Biological changes have been shown to coincide with improved perception following auditory training, so there is interest in determining whether these changes represent biological markers of auditory learning. Here we examine the role of stimulus exposure and listening tasks, in the absence of training, on the modulation of evoked brain activity. Twenty adults were divided into two groups and exposed to two similar-sounding speech syllables during four electrophysiological recording sessions (24 hours, one week, and up to one year later). In between each session, members of one group were asked to identify each stimulus. Both groups showed enhanced neural activity from session to session, in the same P2 latency range previously identified as being responsive to auditory training. The enhancement effect was most pronounced over temporal-occipital scalp regions and largest for the group who participated in the identification task. The effects were rapid and long-lasting, with enhanced synchronous activity persisting months after the last auditory experience. Physiological changes did not coincide with perceptual changes, so the results are interpreted to mean that stimulus exposure, with or without being paired with an identification task, alters the way sound is processed in the brain. The cumulative effect likely involves auditory memory; however, in the absence of training, the observed physiological changes are insufficient to result in changes in learned behavior.
The present paper proposes a highly reconfigurable beamformer stimulus generator for a radar antenna array, which includes three main blocks: settings of the antenna array, settings of objects (signal sources), and a beamforming simulator. Based on the configuration of the antenna array and the object settings, different stimuli can be generated as the input signal for a beamformer. This stimulus generator is developed within a broader concept with two fully independent paths, where one is the stimulus generator and the other is the hardware beamformer. Both paths can be brought together at the final stage, and at intermediate steps as well, to check and improve system performance. In this way the technology development process is promoted by making each of the future hardware steps more substantive. Stimulus generator configuration capabilities and test results are presented, proving the applicability of the stimulus generator for the development and tuning of an FPGA-based beamforming unit as an alternative to an actual antenna system.
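To illustrate the kind of signal such a generator produces: a common textbook way to simulate the stimulus received by a uniform linear array (ULA) is to weight each narrowband source by the array's steering vector for its direction of arrival and superpose the results. The sketch below shows only that standard model; the function names and parameters are illustrative assumptions, and the paper's FPGA-oriented generator is considerably more configurable.

```python
import cmath
import math

def steering_vector(n_elements: int, d_over_lambda: float, theta_deg: float) -> list[complex]:
    """Per-element phase shifts on a ULA for a plane wave from angle theta.

    d_over_lambda is element spacing over wavelength; theta = 0 is broadside.
    """
    phi = 2.0 * math.pi * d_over_lambda * math.sin(math.radians(theta_deg))
    return [cmath.exp(-1j * k * phi) for k in range(n_elements)]

def array_snapshot(sources: list[tuple[complex, list[complex]]]) -> list[complex]:
    """One time-instant of array input: superposition of (amplitude, steering vector) sources."""
    n = len(sources[0][1])
    return [sum(amp * sv[k] for amp, sv in sources) for k in range(n)]

sv0 = steering_vector(4, 0.5, 0.0)    # broadside source: all elements in phase
sv30 = steering_vector(4, 0.5, 30.0)  # second source arriving from 30 degrees
x = array_snapshot([(1.0 + 0j, sv0), (0.5 + 0j, sv30)])  # 4-element stimulus
```

Sweeping source angles and amplitudes in such a model is what lets a beamformer under development be exercised without a physical antenna, which is the role the paper's generator plays for the FPGA path.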
Background: Parkinson's disease is a progressive neurological disorder resulting from a degeneration of dopamine-producing cells in the substantia nigra. Clinical symptoms typically affect gait pattern and motor performance. Evidence suggests that individual auditory cueing devices may be used effectively for the management of gait and freezing in people with Parkinson's disease. The primary aim of the randomised controlled trial is to evaluate the effect of an individual auditory cueing device on freezing and gait speed in people with Parkinson's disease. Methods: A prospective multi-centre randomised cross-over design trial will be conducted. Forty-seven subjects will be randomised into either Group A or Group B, each with a control and an intervention phase. Baseline measurements will be recorded using the Freezing of Gait Questionnaire as the primary outcome measure and three secondary outcome measures: the 10 m Walk Test, the Timed "Up & Go" Test and the Modified Falls Efficacy Scale. Assessments are taken three times over a 3-week period. A follow-up assessment will be completed after three months. A secondary aim of the study is to evaluate the impact of such a device on the quality of life of people with Parkinson's disease using a qualitative methodology. Conclusion: The Apple iPod-Shuffle™ and similar devices provide a cost-effective and innovative platform for the integration of individual auditory cueing devices into clinical, social and home environments and have been shown to have an immediate effect on gait, with improvements in walking speed, stride length and freezing. It is evident that individual auditory cueing devices are of benefit to people with Parkinson's disease, and the aim of this randomised controlled trial is to maximise the benefits by allowing the individual to use devices in both a clinical and social setting, with minimal disruption to their daily routine. Trial registration: The protocol for this study is registered
Manoonpong, Poramate; Pasemann, Frank; Fischer, Joern
A neural preprocessing system together with a modular neural controller is used to generate a sound tropism of a four-legged walking machine. The neural preprocessing network acts as a low-pass filter and is followed by a network which discerns between signals coming from the left or the right. The parameters of these networks are optimized by an evolutionary algorithm. In addition, a simple modular neural controller then generates the desired different walking patterns such that the machine walks straight, then turns towards a switched-on sound source, and then stops near to it.
Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G
We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
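The amplitude-modulated stimuli in tasks of this kind are conventionally synthesized as a carrier tone multiplied by a slow sinusoidal envelope, s(t) = (1 + m·sin(2π·fm·t))·sin(2π·fc·t), where m is modulation depth, fm the modulation rate, and fc the carrier frequency. The sketch below shows that textbook formula only; the parameter values are illustrative, not taken from the study.

```python
import math

def am_tone(fc: float, fm: float, depth: float, dur_s: float, fs: int = 44100) -> list[float]:
    """Samples of a sinusoidally amplitude-modulated pure tone.

    fc: carrier frequency (Hz), fm: modulation rate (Hz),
    depth: modulation depth m in [0, 1], dur_s: duration (s), fs: sample rate.
    """
    n = int(dur_s * fs)
    return [
        (1.0 + depth * math.sin(2 * math.pi * fm * t / fs))
        * math.sin(2 * math.pi * fc * t / fs)
        for t in range(n)
    ]

# 100 ms of a 1 kHz tone modulated at 8 Hz with 50% depth (hypothetical values):
s = am_tone(fc=1000.0, fm=8.0, depth=0.5, dur_s=0.1)
```

Lowering the depth m toward a listener's detection threshold is what such two-interval tasks measure; frequency-modulated stimuli are built analogously by modulating the instantaneous frequency rather than the envelope.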
Profant, Oliver; Tintěra, Jaroslav; Balogová, Zuzana; Ibrahim, Ibrahim; Jilek, Milan; Syka, Josef
Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim to explore the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects. The elderly group with expressed presbycusis (EP) differed from the elderly group with mild presbycusis (MP) in hearing thresholds measured by pure tone audiometry, presence and amplitudes of transient otoacoustic emissions (TEOAE) and distortion-product oto-acoustic emissions (DPOAE), as well as in speech-understanding under noisy conditions. Acoustically evoked activity (pink noise centered around 350 Hz, 700 Hz, 1.5 kHz, 3 kHz, 8 kHz), recorded by BOLD fMRI from an area centered on Heschl’s gyrus, was used to determine age-related changes at the level of the auditory cortex. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more pronounced than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing. PMID:25734519
Brasileiro, A; Gama, G; Trigueiro, L; Ribeiro, T; Silva, E; Galvão, É; Lindquist, A
Stroke is an important causal factor of disability and functional dependence worldwide. To determine the immediate effects of visual and auditory biofeedback, combined with partial body weight supported (PBWS) treadmill training, on the gait of individuals with chronic hemiparesis. Randomized controlled trial. Outpatient rehabilitation hospital. Thirty subjects with chronic hemiparesis and the ability to walk with some help. Participants were randomized to a control group that underwent only PBWS treadmill training; an experimental group I with visual biofeedback from a display monitor, in the form of symbolic feet shown as the subject took a step; or an experimental group II with auditory biofeedback associated with the display, using a metronome at 115% of the individual's preferred cadence. They trained for 20 minutes and were evaluated before and after training. Spatio-temporal and angular gait variables were obtained by kinematics from the Qualisys Motion Analysis system. Increases in speed and stride length were observed for all groups over time (speed: F=25.63), but biofeedback added no benefit over PBWS treadmill training alone for individuals with chronic hemiparesis in the short term. Additional studies are needed to determine whether, in the long term, biofeedback promotes additional benefit beyond PBWS treadmill training. The findings of this study indicate that visual and auditory biofeedback does not bring immediate benefits to PBWS treadmill training of individuals with chronic hemiparesis. This suggests that, to determine whether additional benefits can be achieved with biofeedback, effects should be investigated after long-term training, which may determine whether some kind of biofeedback is superior to another for improving hemiparetic gait.
Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar
Given the relevance of possible hearing losses due to sound overloads and the short list of references of objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound pressure overload stimuli, as well as to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of the cochlear microphonic response, plus a function generator that provided us with stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure tone acoustic stimulation, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB of the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. By means of complex random tone maskers or white noise, no fatigue was caused to the sensory receptors, not even at levels of 100 dB and over an hour of overstimulation. Deterioration of peripheral perception through intense overstimulation may be due to biochemical changes of desensitisation due to exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue tests found are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.
Jones, L.A.; Hills, P.J.; Dick, K.M.; Jones, S.P.; Bright, P.
Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However, once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification. PMID:26716891
Christian Harm Uhlig
While strong activation of auditory cortex is generally found for exogenous orienting of attention, endogenous, intra-modal shifting of auditory attention has not yet been demonstrated to evoke transient activation of the auditory cortex. Here, we used fMRI to test if endogenous shifting of attention is also associated with transient activation of the auditory cortex. In contrast to previous studies, attention shifts were completely self-initiated and not cued by transient auditory or visual stimuli. Stimuli were two dichotic, continuous streams of tones, whose perceptual grouping was not ambiguous. Participants were instructed to continuously focus on one of the streams and switch between the two after a while, indicating the time and direction of each attentional shift by pressing one of two response buttons. The BOLD response around the time of the button presses revealed robust activation of the auditory cortex, along with activation of a distributed task network. To test if the transient auditory cortex activation was specifically related to auditory orienting, a self-paced motor task was added, where participants were instructed to ignore the auditory stimulation while they pressed the response buttons in alternation and at a similar pace. Results showed that attentional orienting produced stronger activity in auditory cortex, but auditory cortex activation was also observed for button presses without focused attention to the auditory stimulus. The response related to attention shifting was stronger contralateral to the side where attention was shifted to. Contralateral-dominant activation was also observed in dorsal parietal cortex areas, confirming previous observations for auditory attention shifting in studies that used auditory cues.
Kraus, Thomas; Kiess, Olga; Hösl, Katharina; Terekhin, Pavel; Kornhuber, Johannes; Forster, Clemens
It has recently been shown that electrical stimulation of sensory afferents within the outer auditory canal may facilitate a transcutaneous form of central nervous system stimulation. Functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) effects in limbic and temporal structures have been detected in two independent studies. In the present study, we investigated BOLD fMRI effects in response to transcutaneous electrical stimulation of two different zones in the left outer auditory canal. It is hypothesized that different central nervous system (CNS) activation patterns might help to localize and specifically stimulate auricular cutaneous vagal afferents. 16 healthy subjects aged between 20 and 37 years were divided into two groups. 8 subjects were stimulated in the anterior wall, the other 8 persons received transcutaneous vagus nervous stimulation (tVNS) at the posterior side of their left outer auditory canal. For sham control, both groups were also stimulated in an alternating manner on their corresponding ear lobe, which is generally known to be free of cutaneous vagal innervation. Functional MR data from the cortex and brain stem level were collected and a group analysis was performed. In most cortical areas, BOLD changes were in the opposite direction when comparing anterior vs. posterior stimulation of the left auditory canal. The only exception was in the insular cortex, where both stimulation types evoked positive BOLD changes. Prominent decreases of the BOLD signals were detected in the parahippocampal gyrus, posterior cingulate cortex and right thalamus (pulvinar) following anterior stimulation. In subcortical areas at brain stem level, a stronger BOLD decrease as compared with sham stimulation was found in the locus coeruleus and the solitary tract only during stimulation of the anterior part of the auditory canal. The results of the study are in line with previous fMRI studies showing robust BOLD signal decreases in
Gibson, Brett M; Wasserman, Edward A
The authors taught pigeons to discriminate displays of 16 identical items from displays of 16 nonidentical items. Unlike most same-different discrimination studies--where only stimulus relations could serve a discriminative function--both the identity of the items and the relations among the items were discriminative features of the displays. The pigeons learned about both stimulus identity and stimulus relations when these 2 sources of information served as redundant, relevant cues. In tests of associative competition, identity cues exerted greater stimulus control than relational cues. These results suggest that the pigeon can respond to both specific stimuli and general relations in the environment.
A set of images can be considered as meaningfully different for an observer if they can be distinguished phenomenally from one another. Each phenomenal difference must be supported by some neurophysiological differences. Differentiation analysis aims to quantify neurophysiological differentiation evoked by a given set of stimuli to assess its meaningfulness to the individual observer. As a proof of concept using high-density EEG, we show increased neurophysiological differentiation for a set of natural, meaningfully different images in contrast to another set of artificially generated, meaninglessly different images in nine participants. Stimulus-evoked neurophysiological differentiation (over 257 channels, 800 ms) was systematically greater for meaningful vs. meaningless stimulus categories both at the group level and for individual subjects. Spatial breakdown showed a central-posterior peak of differentiation, consistent with the visual nature of the stimulus sets. Temporal breakdown revealed an early peak of differentiation around 110 ms, prominent in the central-posterior region; and a later, longer-lasting peak at 300–500 ms that was spatially more distributed. The early peak of differentiation was not accompanied by changes in mean ERP amplitude, whereas the later peak was associated with a higher amplitude ERP for meaningful images. An ERP component similar to visual-awareness-negativity occurred during the nadir of differentiation across all image types. Control stimulus sets and further analysis indicate that changes in neurophysiological differentiation between meaningful and meaningless stimulus sets could not be accounted for by spatial properties of the stimuli or by stimulus novelty and predictability.
Wu, Calvin; Stefanescu, Roxana A; Martel, David T; Shore, Susan E
Tinnitus, the phantom perception of sound, is physiologically characterized by an increase in spontaneous neural activity in the central auditory system. However, as tinnitus is often associated with hearing impairment, it is unclear how a decrease of afferent drive can result in central hyperactivity. In this review, we first assess methods for tinnitus induction and objective measures of the tinnitus percept in animal models. From animal studies, we discuss evidence that tinnitus originates in the cochlear nucleus (CN), and hypothesize mechanisms whereby hyperactivity may develop in the CN after peripheral auditory nerve damage. We elaborate how this process is likely mediated by plasticity of auditory-somatosensory integration in the CN: the circuitry in normal circumstances maintains a balance of auditory and somatosensory activities, and loss of auditory inputs alters the balance of auditory somatosensory integration in a stimulus timing dependent manner, which propels the circuit towards hyperactivity. Understanding the mechanisms underlying tinnitus generation is essential for its prevention and treatment. This article is part of a Special Issue. Copyright © 2015 Elsevier B.V. All rights reserved.
People often coordinate their movement with visual and auditory environmental rhythms. Previous research showed better performances when coordinating with auditory compared to visual stimuli, and with bimodal compared to unimodal stimuli. However, these results have been demonstrated with discrete rhythms and it is possible that such effects depend on the continuity of the stimulus rhythms (i.e., whether they are discrete or continuous). The aim of the current study was to investigate the influence of the continuity of visual and auditory rhythms on sensorimotor coordination. We examined the dynamics of synchronized oscillations of a wrist pendulum with auditory and visual rhythms at different frequencies, which were either unimodal or bimodal and discrete or continuous. Specifically, the stimuli used were a light flash, a fading light, a short tone and a frequency-modulated tone. The results demonstrate that the continuity of the stimulus rhythms strongly influences visual and auditory motor coordination. Participants' movement led continuous stimuli and followed discrete stimuli. Asymmetries between the half-cycles of the movement in terms of duration and nonlinearity of the trajectory occurred with slower discrete rhythms. Furthermore, the results show that the differences of performance between visual and auditory modalities depend on the continuity of the stimulus rhythms, as indicated by movements closer to the instructed coordination for the auditory modality when coordinating with discrete stimuli. The results also indicate that visual and auditory rhythms are integrated together in order to better coordinate irrespective of their continuity, as indicated by less variable coordination closer to the instructed pattern. Generally, the findings have important implications for understanding how we coordinate our movements with visual and auditory environmental rhythms in everyday life.
Varnhagen, Connie K.; And Others
Auditory and visual memory span were examined with 13 Down Syndrome and 15 other trainable mentally retarded young adults. Although all subjects demonstrated relatively poor auditory memory span, Down Syndrome subjects were especially poor at long-term memory access for visual stimulus identification and short-term storage and processing of…
Fujioka, Takako; Ross, Bernhard; Kakigi, Ryusuke; Pantev, Christo; Trainor, Laurel J.
Auditory evoked responses to a violin tone and a noise-burst stimulus were recorded from 4- to 6-year-old children in four repeated measurements over a 1-year period using magnetoencephalography (MEG). Half of the subjects participated in musical lessons throughout the year; the other half had no music lessons. Auditory evoked magnetic fields…
Kim, Do-Won; Cho, Jae-Hyun; Hwang, Han-Jeong; Lim, Jeong-Hwan; Im, Chang-Hwan
The majority of recently developed brain-computer interface (BCI) systems use visual stimuli or visual feedback. However, BCI paradigms based on visual perception might not be applicable to severely locked-in patients who have lost the ability to control their eye movements or even their vision. In the present study, we investigated the feasibility of a vision-free BCI paradigm based on auditory selective attention. We used the power difference of auditory steady-state responses (ASSRs) when the participant modulates his/her attention to the target auditory stimulus. The auditory stimuli were constructed as two pure-tone burst trains with different beat frequencies (37 and 43 Hz), generated simultaneously from two speakers located at different positions (left and right). Our experimental results showed classification accuracies high enough for a binary decision (64.67%, 30 commands/min, information transfer rate (ITR) = 1.89 bits/min; 74.00%, 12 commands/min, ITR = 2.08 bits/min; 82.00%, 6 commands/min, ITR = 1.92 bits/min; 84.33%, 3 commands/min, ITR = 1.12 bits/min; without any artifact rejection, inter-trial interval = 6 s). Based on the suggested paradigm, we implemented the first online ASSR-based BCI system, demonstrating the possibility of materializing a totally vision-free BCI system.
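The ITR figures quoted in this abstract are consistent with the standard Wolpaw formula for an N-class interface. A minimal sketch (the function name and structure are ours, for illustration; the abstract does not state which ITR definition was used, but the numbers match Wolpaw's):

```python
import math

def wolpaw_itr(accuracy, n_classes, commands_per_min):
    """Wolpaw information transfer rate in bits/min.

    bits/trial = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    """
    p, n = accuracy, n_classes
    if p >= 1.0:              # perfect accuracy carries the full log2(N) bits
        bits = math.log2(n)
    elif p <= 1.0 / n:        # at or below chance, no information is conveyed
        bits = 0.0
    else:
        bits = (math.log2(n)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * commands_per_min

# The 30 commands/min condition at 64.67% binary accuracy:
print(round(wolpaw_itr(0.6467, 2, 30), 2))  # → 1.89, as reported
```

The same function reproduces the other conditions (e.g. 74.00% at 12 commands/min gives 2.08 bits/min), which illustrates why the fastest condition does not automatically yield the highest ITR: accuracy falls as the inter-trial budget shrinks.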
van Vugt, F T; Kafczyk, T; Kuhn, W; Rollnik, J D; Tillmann, B; Altenmüller, E
Learning to play musical instruments such as piano was previously shown to benefit post-stroke motor rehabilitation. Previous work hypothesised that the mechanism of this rehabilitation is that patients use auditory feedback to correct their movements and therefore show motor learning. We tested this hypothesis by manipulating the auditory feedback timing in a way that should disrupt such error-based learning. We contrasted a patient group undergoing music-supported therapy on a piano that emits sounds immediately (as in previous studies) with a group whose sounds are presented after a jittered delay. The delay was not noticeable to patients. Thirty-four patients in early stroke rehabilitation with moderate motor impairment and no previous musical background learned to play the piano using simple finger exercises and familiar children's songs. Rehabilitation outcome was not impaired in the jitter group relative to the normal group. Conversely, some clinical tests suggest the jitter group outperformed the normal group. Auditory feedback-based motor learning is not the beneficial mechanism of music-supported therapy. Immediate auditory feedback therapy may be suboptimal. Jittered delay may increase the efficacy of the proposed therapy and allow patients to fully benefit from the motivational factors of music training. Our study shows a novel way to test hypotheses concerning music training in a single-blinded way, which is an important improvement over existing unblinded tests of music interventions.
Jones, David L.; Gao, Sujuan; Svirsky, Mario A.
A study investigated whether two speech measures (peak intraoral air pressure (IOP) and IOP duration) obtained during production of intervocalic stops would be altered by the presence or absence of a cochlear implant in five children (ages 7-10). The auditory condition affected peak IOP more than IOP duration. (Contains references.) (Author/CR)
Vercammen, Ans; Knegtering, Henderikus; Bruggeman, Richard; Westenbroek, Hanneke. M.; Jenner, Jack A.; Slooff, Cees J.; Wunderink, Lex; Aleman, Andre
Background: Neuroimaging findings implicate bilateral superior temporal regions in the genesis of auditory-verbal hallucinations (AVH). This study aimed to investigate whether 1 Hz repetitive transcranial magnetic stimulation (rTMS) of the bilateral temporo-parietal region would lead to increased
Soskey, Laura N; Allen, Paul D; Bennetto, Loisa
One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
Atilgan, Huriye; Town, Stephen M; Wood, Katherine C; Jones, Gareth P; Maddox, Ross K; Lee, Adrian K C; Bizley, Jennifer K
How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
Corneil, B D; Van Wanrooij, M; Munoz, D P; Van Opstal, A J
This study addresses the integration of auditory and visual stimuli subserving the generation of saccades in a complex scene. Previous studies have shown that saccadic reaction times (SRTs) to combined auditory-visual stimuli are reduced when compared with SRTs to either stimulus alone. However, these results have been typically obtained with high-intensity stimuli distributed over a limited number of positions in the horizontal plane. It is less clear how auditory-visual interactions influence saccades under more complex but arguably more natural conditions, when low-intensity stimuli are embedded in complex backgrounds and distributed throughout two-dimensional (2-D) space. To study this problem, human subjects made saccades to visual-only (V-saccades), auditory-only (A-saccades), or spatially coincident auditory-visual (AV-saccades) targets. In each trial, the low-intensity target was embedded within a complex auditory-visual background, and subjects were allowed over 3 s to search for and foveate the target at 1 of 24 possible locations within the 2-D oculomotor range. We varied systematically the onset times of the targets and the intensity of the auditory target relative to background [i.e., the signal-to-noise (S/N) ratio] to examine their effects on both SRT and saccadic accuracy. Subjects were often able to localize the target within one or two saccades, but in about 15% of the trials they generated scanning patterns that consisted of many saccades. The present study reports only the SRT and accuracy of the first saccade in each trial. In all subjects, A-saccades had shorter SRTs than V-saccades, but were more inaccurate than V-saccades when generated to auditory targets presented at low S/N ratios. AV-saccades were at least as accurate as V-saccades but were generated at SRTs typical of A-saccades. The properties of AV-saccades depended systematically on both stimulus timing and S/N ratio of the auditory target. Compared with unimodal A- and V
Thomas A Christensen
A common explanation for the interference effect in the classic visual Stroop test is that reading a word (the more automatic semantic response) must be suppressed in favor of naming the text color (the slower sensory response). Neuroimaging studies also consistently report anterior cingulate/medial frontal, lateral prefrontal, and anterior insular structures as key components of a network for Stroop-conflict processing. It remains unclear, however, whether automatic processing of semantic information can explain the interference effect in other variants of the Stroop test. It also is not known if these frontal regions serve a specific role in visual Stroop conflict, or instead play a more universal role as components of a more generalized, supramodal executive-control network for conflict processing. To address these questions, we developed a novel auditory Stroop test in which the relative dominance of semantic and sensory feature processing is reversed. Listeners were asked to focus either on voice gender (a more automatic sensory discrimination task) or on the gender meaning of the word (a less automatic semantic task) while ignoring the conflicting stimulus feature. An auditory Stroop effect was observed when voice features replaced semantic content as the "to-be-ignored" component of the incongruent stimulus. Also, in sharp contrast to previous Stroop studies, neural responses to incongruent stimuli studied with functional magnetic resonance imaging revealed greater recruitment of conflict loci when selective attention was focused on gender meaning (semantic task) over voice gender (sensory task). Furthermore, in contrast to earlier Stroop studies that implicated dorsomedial cortex in visual conflict processing, interference-related activation in both of our auditory tasks was localized ventrally in medial frontal areas, suggesting a dorsal-to-ventral separation of function in medial frontal cortex that is sensitive to stimulus context.
Stekelenburg, J.J.; Keetels, M.N.
The Colavita effect refers to the phenomenon that when confronted with an audiovisual stimulus, observers report more often to have perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal
Mullen, Stuart; Dixon, Mark R.; Belisle, Jordan; Stanley, Caleb
The current study sought to evaluate the efficacy of a stimulus equivalence training procedure in establishing auditory-tactile-visual stimulus classes with 2 children with autism and developmental delays. Participants were exposed to vocal-tactile (A-B) and tactile-picture (B-C) conditional discrimination training and were tested for the…
Bardy, Fabrice; Van Dun, Bram; Dillon, Harvey; Cowan, Robert
To evaluate the viability of disentangling a series of overlapping 'cortical auditory evoked potentials' (CAEPs) elicited by different stimuli using least-squares (LS) deconvolution, and to assess the adaptation of CAEPs for different stimulus onset-asynchronies (SOAs). Optimal aperiodic stimulus sequences were designed by controlling the condition number of matrices associated with the LS deconvolution technique. First, theoretical considerations of LS deconvolution were assessed in simulations in which multiple artificial overlapping responses were recovered. Second, biological CAEPs were recorded in response to continuously repeated stimulus trains containing six different tone-bursts with frequencies 8, 4, 2, 1, 0.5, 0.25 kHz separated by SOAs jittered around 150 (120-185), 250 (220-285) and 650 (620-685) ms. The control condition had a fixed SOA of 1175 ms. In a second condition, using the same SOAs, trains of six stimuli were separated by a silence gap of 1600 ms. Twenty-four adults with normal hearing participated. The results demonstrated the viability of LS deconvolution on simulated waveforms as well as on real EEG data. The use of rapid presentation and LS deconvolution did not, however, allow the recovered CAEPs to have a higher signal-to-noise ratio than for slowly presented stimuli. The LS deconvolution technique enables the analysis of a series of overlapping responses in EEG. LS deconvolution is a useful technique for the study of adaptation mechanisms of CAEPs for closely spaced stimuli whose characteristics change from stimulus to stimulus. High-rate presentation is necessary to develop an understanding of how the auditory system encodes natural speech or other intrinsically high-rate stimuli.
Shrem, Talia; Murray, Micah M; Deouell, Leon Y
Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.
Kuckuck, Karl; Schröder, Hanna; Rossaint, Rolf; Stieger, Lina; Beckers, Stefan K; Sopka, Sasa
The study objective was to implement two strategies (short emotional stimulus vs announced practical assessment) in the teaching of resuscitation skills in order to evaluate whether one led to superior outcomes. This study is an educational intervention provided in one German academic university hospital. First-year medical students (n=271) during the first 3 weeks of their studies. Participants were randomly assigned to one of two groups following a sequence of random numbers: the emotional stimulus group (EG) and the assessment group (AG). In the EG, the intervention included watching an emotionally stimulating video prior to the Basic Life Support (BLS) course. In the AG, a practical assessment of the BLS algorithm was announced and tested within a 2 min simulated cardiac arrest scenario. After the baseline testing, a standardised BLS course was provided. Evaluation points were defined 1 week and 6 months after. Compression depth (CD) and compression rate (CR) were recorded as the primary endpoints for BLS quality. Within the study, 137 participants were allocated to the EG and 134 to the AG. 104 participants from the EG and 120 from the AG were analysed 1 week after the intervention, where they reached comparable chest-compression performance without significant differences (CR P=0.49; CD P=0.28). The chest-compression performance improved significantly for the EG (P…). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Moraes, Michele M; Rabelo, Patrícia C R; Pinto, Valéria A; Pires, Washington; Wanner, Samuel P; Szawka, Raphael E; Soares, Danusa D
Listening to melodic music is regarded as a non-pharmacological intervention that ameliorates various disease symptoms, likely by changing the activity of brain monoaminergic systems. Here, we investigated the effects of exposure to melodic music on the concentrations of dopamine (DA), serotonin (5-HT) and their respective metabolites in the caudate-putamen (CPu) and nucleus accumbens (NAcc), areas linked to reward and motor control. Male adult Wistar rats were randomly assigned to a control group or a group exposed to music. The music group was submitted to 8 music sessions [Mozart's Sonata for Two Pianos (K. 448) at an average sound pressure level of 65 dB]. The control rats were handled in the same way but were not exposed to music. Immediately after the last exposure or control session, the rats were euthanized, and their brains were quickly removed to analyze the concentrations of 5-HT, DA, 5-hydroxyindoleacetic acid (5-HIAA) and 3,4-dihydroxyphenylacetic acid (DOPAC) in the CPu and NAcc. Auditory stimuli affected the monoaminergic system in these two brain structures. In the CPu, auditory stimuli increased the concentrations of DA and 5-HIAA but did not change the DOPAC or 5-HT levels. In the NAcc, music markedly increased the DOPAC/DA ratio, suggesting an increase in DA turnover. Our data indicate that auditory stimuli, such as exposure to melodic music, increase DA levels and the release of 5-HT in the CPu as well as DA turnover in the NAcc, suggesting that the music had a direct impact on monoamine activity in these brain areas. Copyright © 2018 Elsevier B.V. All rights reserved.
Bareham, Corinne A; Georgieva, Stanimira D; Kamke, Marc R; Lloyd, David; Bekinschtein, Tristan A; Mattingley, Jason B
Selective attention is the process of directing limited capacity resources to behaviourally relevant stimuli while ignoring competing stimuli that are currently irrelevant. Studies in healthy human participants and in individuals with focal brain lesions have suggested that the right parietal cortex is crucial for resolving competition for attention. Following right-hemisphere damage, for example, patients may have difficulty reporting a brief, left-sided stimulus if it occurs with a competitor on the right, even though the same left stimulus is reported normally when it occurs alone. Such "extinction" of contralesional stimuli has been documented for all the major sense modalities, but it remains unclear whether its occurrence reflects involvement of one or more specific subregions of the temporo-parietal cortex. Here we employed repetitive transcranial magnetic stimulation (rTMS) over the right hemisphere to examine the effect of disruption of two candidate regions - the supramarginal gyrus (SMG) and the superior temporal gyrus (STG) - on auditory selective attention. Eighteen neurologically normal, right-handed participants performed an auditory task, in which they had to detect target digits presented within simultaneous dichotic streams of spoken distractor letters in the left and right channels, both before and after 20 min of 1 Hz rTMS over the SMG, STG or a somatosensory control site (S1). Across blocks, participants were asked to report on auditory streams in the left, right, or both channels, which yielded focused and divided attention conditions. Performance was unchanged for the two focused attention conditions, regardless of stimulation site, but was selectively impaired for contralateral left-sided targets in the divided attention condition following stimulation of the right SMG, but not the STG or S1. Our findings suggest a causal role for the right inferior parietal cortex in auditory selective attention. Copyright © 2017 Elsevier Ltd. All rights reserved.
Binder, Marek; Górska, Urszula; Griskova-Bulanova, Inga
We aimed to elucidate whether the 40-Hz auditory steady-state response (ASSR) could be sensitive to the state of patients with disorders of consciousness (DOC) as estimated with the Coma Recovery Scale-Revised (CRS-R) diagnostic tool. Fifteen DOC patients and 24 healthy controls took part in the study. 40-Hz click trains were used to evoke ASSRs. Mean evoked amplitude (EA) and phase-locking index (PLI) within a 38-42 Hz window were calculated for 100-ms bins, starting from -200 to 700 ms relative to stimulus onset. The PLI values from the patient group in the period of 200-500 ms after stimulus onset positively correlated with the CRS-R total score and with the scores of the Auditory and Visual subscales. The phase-locking index of 40-Hz auditory steady-state responses can be an indicator of the level of dysfunction of the central nervous system in DOC. Our results emphasize the role of central auditory system integrity in determining the level of functioning of DOC patients and suggest the possibility of using the ASSR protocol as an objective diagnostic method in DOC patients. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
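The phase-locking index used here (inter-trial phase coherence) can be sketched in a few lines: estimate the single-trial phase at 40 Hz, average the unit phase vectors across trials, and take the magnitude (1 = identical phase on every trial, ~0 = random phase). The simulated trials, wavelet width, trial count and noise level below are assumptions for illustration, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000
t = np.arange(-0.2, 0.7, 1 / fs)        # -200 to 700 ms around stimulus onset

# Simulated trials: a 40-Hz response phase-locked to stimulus onset (t >= 0)
# buried in random background noise.
n_trials = 60
trials = np.empty((n_trials, t.size))
for k in range(n_trials):
    locked = np.where(t >= 0, np.sin(2 * np.pi * 40 * t), 0.0)
    trials[k] = locked + rng.normal(0, 1.0, t.size)

# Complex 40-Hz Morlet-style wavelet for single-trial phase estimation.
tw = np.arange(-0.1, 0.1, 1 / fs)
wavelet = np.exp(2j * np.pi * 40 * tw) * np.exp(-tw**2 / (2 * 0.015**2))

phases = np.angle([np.convolve(tr, wavelet, mode="same") for tr in trials])

# PLI: magnitude of the across-trial mean of unit phase vectors.
pli = np.abs(np.mean(np.exp(1j * phases), axis=0))

# Average the PLI in 100-ms bins, as in the study.
bins = pli[: (t.size // 100) * 100].reshape(-1, 100).mean(axis=1)
```

The pre-stimulus bins hover near the chance level (roughly 1/sqrt(n_trials) for random phases), while the post-stimulus bins approach 1 for a phase-locked response.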
Paris, Tim; Kim, Jeesun; Davis, Chris
Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on prediction rather than on multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.
Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed
Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITDs) and inter-aural intensity differences (IIDs) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using the non-word repetition task and the forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and the localization tests in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on the lateralization and working memory capacity measures. The results showed that working memory capacity was significantly negatively correlated with ITD errors, especially with the high-pass noise stimulus, but not with IID errors, in the APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S
Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. Listeners were presented with sounds drawn from two sound-level distributions with different modes.
van Kesteren, Marlieke T. R.; Wiersinga-Post, J. Esther C.
Purpose: Several studies on auditory temporal-order processing showed gender differences. Women needed longer inter-stimulus intervals than men when indicating the temporal order of two clicks presented to the left and right ear. In this study, we examined whether we could reproduce these results in…
Kirkwood, Brent Christopher
Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length...
Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia
Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the ratio of the peak amplitude of the event related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream; resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). The results of a mixed model ANOVA revealed an overall effect for AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically-defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
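The gating ratio described above is a simple amplitude quotient: peak ERP amplitude to S2 divided by peak ERP amplitude to S1, taken within a component's latency window. A minimal sketch with toy averaged ERPs (the Gaussian waveforms, latency window and amplitudes are assumptions for illustration):

```python
import numpy as np

fs = 1000
t = np.arange(-0.1, 0.4, 1 / fs)            # epoch around each click, in s

def erp_peak(erp, t, window):
    """Peak amplitude within a latency window (e.g. 40-80 ms for P50)."""
    mask = (t >= window[0]) & (t <= window[1])
    return erp[mask].max()

# Toy averaged ERPs: the response to S2 is suppressed relative to S1.
def toy_erp(amplitude):
    return amplitude * np.exp(-((t - 0.05) ** 2) / (2 * 0.01 ** 2))

s1_erp = toy_erp(4.0)   # response to the first click (microvolts, assumed)
s2_erp = toy_erp(1.0)   # response to the second click (gated)

ratio = erp_peak(s2_erp, t, (0.04, 0.08)) / erp_peak(s1_erp, t, (0.04, 0.08))
print(f"P50 gating ratio: {ratio:.2f}")     # prints: P50 gating ratio: 0.25
```

A ratio near 0 indicates strong suppression of S2; ratios approaching (or exceeding) 1, as reported during the AVH-on state, indicate a failure of gating.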
Background and Purpose. Training in a virtual environment is being established as a new approach in post-stroke neurorehabilitation; one example is ReoTherapy (REO), a robot-assisted virtual training device. Trunk stabilization strapping has been part of the concept with this device, and literature is lacking to support this for long-term functional changes in individuals after stroke. The purpose of this case series was to assess the feasibility of auditory trunk sensor feedback during REO therapy in moderately to severely impaired individuals after stroke. Case Description. Using an open-label crossover comparison design, 3 chronic stroke subjects were trained for 12 sessions over six weeks on either the REO or the control condition of task-related training (TRT); after a washout period of 4 weeks, the alternative therapy was given. Outcomes. With both interventions, clinically relevant improvements were found for measures of body function and structure, as well as for activity, for two participants. Providing auditory feedback during REO training for trunk control was found to be feasible. Discussion. The degree of change varied per protocol and may be due to the appropriateness of the technique chosen, as well as to the patients' impaired arm motor control.
Kopp, M; Gruzelier, J
Patients diagnosed (DSM III) with anxiety disorders (agoraphobia, panic syndrome, generalised anxiety syndrome) were classified along with controls as electrodermally stabile or labile on the basis of non-specific electrodermal activity and rate of habituation to tones. While patients showed more evidence of psychopathology than controls on scales of anxiety, neuroticism, depression and agoraphobic fear, patient labiles scored higher than stabiles on agoraphobic fear and were differentiated by higher scores of Beck depression. They were also more sensitive to pain, whereas patient stabiles were less sensitive at absolute somatosensory threshold. Amongst controls agoraphobic fear was associated with lability and stabiles scored higher on autonomy in locus of control. Lateral asymmetries in auditory thresholds were consistent with reciprocal hemispheric influences on electrodermal reactivity and habituation, modifiable by anxiety. Interrelationships between fear, depression, sensitivity to somatosensory stimulation, pain, and superior vigilance performance in patient labiles were consistent with elevated right hemisphere function.
Although it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudowords and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from subthreshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudowords) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudowords was observed in Experiment 2, suggesting that lexical knowledge can also improve auditory detection when listeners have to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.
To compare the development of the auditory system in hearing and completely acoustically deprived animals, naive congenitally deaf white cats (CDCs) and hearing controls (HCs) were investigated at different developmental stages from birth until adulthood. The CDCs had no hearing experience before the acute experiment. In both groups of animals, responses to cochlear implant stimulation were acutely assessed. Electrically evoked auditory brainstem responses (E-ABRs) were recorded with monopolar stimulation at different current levels. CDCs demonstrated extensive development of E-ABRs, from the first signs of responses at postnatal (p.n.) day 3, through the appearance of all waves of the brainstem response at day 8 p.n., to mature responses around day 90 p.n. Wave I of the E-ABRs could not be distinguished from the artifact in the majority of CDCs, whereas in HCs it was clearly separated from the stimulus artifact. Waves II, III, and IV demonstrated higher thresholds in CDCs, whereas this difference was not found for wave V. Amplitudes of wave III were significantly higher in HCs, whereas wave V amplitudes were significantly higher in CDCs. No differences in latencies were observed between the animal groups. These data demonstrate significant postnatal subcortical development in the absence of hearing, and also divergent effects of deafness on the early waves II–IV and wave V of the E-ABR.
Joos, Kathleen; Gilles, Annick; Van de Heyning, Paul; De Ridder, Dirk; Vanneste, Sven
An external auditory stimulus induces an auditory sensation which may lead to a conscious auditory perception. Although the sensory aspect is well known, it remains unclear how an auditory stimulus results in an individual's conscious percept. To unravel the uncertainties concerning the neural correlates of a conscious auditory percept, event-related potentials may serve as a useful tool. In the current review we mainly wanted to shed light on the perceptual aspects of auditory processing and therefore we mainly focused on the auditory late-latency responses. Moreover, there is increasing evidence that perception is an active process in which the brain searches for the information it expects to be present, suggesting that auditory perception requires the presence of both bottom-up (i.e., sensory) and top-down (i.e., prediction-driven) processing. Therefore, the auditory evoked potentials will be interpreted in the context of the Bayesian brain model, in which the brain predicts which information it expects and when this will happen. The internal representation of the auditory environment will be verified by sensation samples of the environment (P50, N100). When this incoming information violates the expectation, it will induce the emission of a prediction error signal (Mismatch Negativity), activating higher-order neural networks and inducing the update of prior internal representations of the environment (P300). Copyright © 2014 Elsevier Ltd. All rights reserved.
Steinmann, Tobias P.; Andrew, Colin M.; Thomsen, Carsten E.
In this study, event-related potentials (ERPs) were used to investigate the effects of prenatal alcohol exposure on response inhibition identified during task performance. ERPs were recorded during an auditory Go/No-Go task in two groups of children with a mean age of 12:8 years (11 years to 14:7 years): one diagnosed with fetal alcohol syndrome (FAS) or partial FAS (FAS/PFAS; n = 12) and a control group of children of the same age whose mothers abstained from alcohol or drank minimally during pregnancy (n = 11). The children were instructed to push a button in response to the Go stimulus...
Asger Emil Munch Schrøder
Echolocating animals reduce their output level and hearing sensitivity with decreasing echo delays, presumably to stabilize the perceived echo intensity during target approaches. In bats, this variation in hearing sensitivity is formed by a call-induced stapedial reflex that tapers off over time after the call. Here, we test the hypothesis that a similar mechanism exists in toothed whales by subjecting a trained harbour porpoise to a series of double sound pulses varying in delay and frequency, while measuring the magnitudes of the evoked auditory brainstem responses (ABRs). We find that the recovery of the ABR to the second pulse is frequency dependent, and that a stapedial reflex therefore cannot account for the reduced hearing sensitivity at short pulse delays. We propose that toothed whale auditory time-varying gain control during echolocation is not enabled by the middle ear as in bats, but rather by frequency-dependent mechanisms such as forward masking and perhaps higher-order control of efferent feedback to the outer hair cells.
HONIG, W K; SLIVKA, R M
Three pigeons were trained to respond to seven spectral stimulus values ranging from 490 to 610 mμ and displayed in random order on a response key. After response rates had equalized to these values, a brief electric shock was administered when the subject (S) responded to the central value (550 mμ) while positive reinforcement for all values was maintained. Initially, there was broad generalization of the resulting depression in response rate, but the gradients grew steeper in the course of testing. When punishment was discontinued, the rates to all values recovered, and equal responding to all stimuli was reattained by two of the Ss. Stimulus control over the effects of punishment was clearly demonstrated in the form of a generalization gradient; this probably resulted from the combined effects of generalization of the depression associated with punishment and discrimination between the punished value and neutral stimuli.
A series of computer simulations using variants of a formal model of attention (Melara & Algom, 2003) probed the role of rejection positivity (RP), a slow-wave electroencephalographic (EEG) component, in the inhibitory control of distraction. Behavioral and EEG data were recorded as participants performed auditory selective attention tasks. Simulations that modulated processes of distractor inhibition accounted well for reaction-time (RT) performance, whereas those that modulated target excitation did not. A model that incorporated RP from actual EEG recordings in estimating distractor inhibition was superior in predicting changes in RT as a function of distractor salience across conditions. A model that additionally incorporated momentary fluctuations in EEG as the source of trial-to-trial variation in performance precisely predicted individual RTs within each condition. The results lend support to the linking proposition that RP controls the speed of responding to targets through the inhibitory control of distractors.
Karla M. I. Freiria Elias
Objective To investigate central auditory processing in children with unilateral stroke and to verify whether the hemisphere affected by the lesion influenced auditory competence. Method 23 children (13 male) between 7 and 16 years old were evaluated through speech-in-noise tests (auditory closure), the dichotic digit test and staggered spondaic word test (selective attention), and pitch pattern and duration pattern sequence tests (temporal processing), and their results were compared with those of control children. Auditory competence was established according to performance in auditory analysis ability. Results Similar performance between the groups was verified in auditory closure ability, with pronounced deficits in selective attention and temporal processing abilities. Most children with stroke showed impaired auditory ability to a moderate degree. Conclusion Children with stroke showed deficits in auditory processing, and the degree of impairment was not related to the hemisphere affected by the lesion.
Grosso, A; Cambiaghi, M; Concina, G; Sacco, T; Sacchetti, B
Emotional memories represent the core of human and animal life and drive future choices and behaviors. Early research involving brain lesion studies in animals led to the idea that the auditory cortex participates in emotional learning by processing the sensory features of auditory stimuli paired with emotional consequences and by transmitting this information to the amygdala. Nevertheless, electrophysiological and imaging studies revealed that, following emotional experiences, the auditory cortex undergoes learning-induced changes that are highly specific, associative and long lasting. These studies suggested that the role played by the auditory cortex goes beyond stimulus elaboration and transmission. Here, we discuss three major perspectives created by these data. In particular, we analyze the possible roles of the auditory cortex in emotional learning, we examine the recruitment of the auditory cortex during early and late memory trace encoding, and finally we consider the functional interplay between the auditory cortex and subcortical nuclei, such as the amygdala, that process affective information. We conclude that, starting from the early phase of memory encoding, the auditory cortex has a more prominent role in emotional learning, through its connections with subcortical nuclei, than is typically acknowledged. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
McCourt, Mark E; Leone, Lynnette M
We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.
Sharma, Vishnu; McCreery, Douglas B; Han, Martin; Pikov, Victor
We present a versatile multifunctional programmable controller with bidirectional data telemetry, implemented using existing commercial microchips and the standard Bluetooth protocol, which adds convenience, reliability, and ease of use to neuroprosthetic devices. The controller, weighing 190 g, is placed on the animal's back and provides a bidirectional sustained telemetry rate of 500 kb/s, allowing real-time control of stimulation parameters and viewing of acquired data. In the continuously active state, the controller consumes approximately 420 mW and operates without recharge for 8 h. It features independent 16-channel current-controlled stimulation, allowing current steering; customizable stimulus current waveforms; and recording of stimulus voltage waveforms and evoked neuronal responses with stimulus-artifact blanking circuitry. The flexibility, scalability, cost-efficiency, and user-friendly computer interface of this device allow its use in animal testing for a variety of neuroprosthetic applications. Initial testing of the controller has been done in a feline model of a brainstem auditory prosthesis. In this model, electrical stimulation is applied to an array of microelectrodes implanted in the ventral cochlear nucleus, while the evoked neuronal activity is recorded with an electrode implanted in the contralateral inferior colliculus. Stimulus voltage waveforms to monitor the access impedance of the electrodes were acquired at a rate of 312 kilosamples/s. Evoked neuronal activity in the inferior colliculus was recorded after blanking (transiently silencing) the recording amplifier during the stimulus pulse, allowing the detection of neuronal responses within 100 μs after the end of the stimulus pulse applied in the cochlear nucleus.
Grau, C; Polo, M D; Yago, E; Gual, A; Escera, C
A pre-conscious auditory sensory (echoic) memory of about 10 s duration can be studied with the event-related brain potential mismatch negativity (MMN). Previous work indicates that this memory is preserved in abstinent chronic alcoholics for a duration of up to 2 s. The authors' aim was to determine the integrity of auditory sensory memory as indexed by MMN in chronic alcoholism, when this memory has to be functionally active for a longer period of time. The presence of MMN for stimuli that differ in duration was tested at memory probe intervals (MPIs) of 0.4 and 5.0 s in 17 abstinent chronic alcoholic patients and in 17 healthy age-matched control subjects. MMN was similar in alcoholics and controls when the MPI was 0.4 s, whereas MMN could not be observed in the patients when the MPI was increased to 5.0 s. These results provide evidence of an impairment of auditory sensory memory in abstinent chronic alcoholics, whereas the automatic stimulus-change detector mechanism, involved in MMN generation, is preserved.
speech response, while a task with spatial processing components is best served by visual input and manual response. A model is proposed that predicts...Auditory Displays and Speech Control In the high information processing environment of the modern tactical aircraft, auditory display and speech control...there exist inevitable limitations in auditory display and speech control capabilities that could hamper information transmission. This is particularly
Sequences of higher frequency A and lower frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory-feature-dependent inputs that mimic the pulsatile A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant a larger fraction of the time, more than those of the weaker percept, a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition.
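The competition mechanism described in this abstract (mutual inhibition, adaptation, and noise among percept-selective units) can be sketched as a minimal two-unit rate model. This is an illustrative toy, not the authors' network: the nonlinearity, all parameter values, and the constant drive standing in for A1 input are assumptions.

```python
import numpy as np

# Toy two-unit competition model: one unit per percept (integrated vs.
# segregated), with mutual inhibition, spike-frequency adaptation, and noise.
# All parameters are illustrative assumptions, not the authors' values.
rng = np.random.default_rng(0)
dt, T = 1e-3, 60.0                 # 1 ms steps, 60 s of simulated listening
tau_r, tau_a = 0.01, 2.0           # firing-rate and adaptation time constants (s)
beta, phi, sigma = 1.1, 1.5, 0.15  # inhibition, adaptation strength, noise

def f(x):                          # sigmoidal firing-rate nonlinearity
    return 1.0 / (1.0 + np.exp(-(x - 0.2) / 0.05))

r = np.array([0.5, 0.4])           # firing rates of the two percept units
a = np.zeros(2)                    # adaptation variables
dominant = []
for _ in range(int(T / dt)):
    drive = 1.0                    # constant input standing in for A1 responses
    noise = sigma * np.sqrt(dt) * rng.standard_normal(2)
    r += dt * (-r + f(drive - beta * r[::-1] - phi * a)) / tau_r + noise
    a += dt * (r - a) / tau_a
    dominant.append(int(r[1] > r[0]))

dominant = np.array(dominant)
# perceptual switches: changes in which unit currently dominates
switches = int(np.count_nonzero(np.diff(dominant)))
```

In such models, the inhibited unit's slow adaptation eventually releases it, producing the irregular percept alternations the abstract describes; biasing the drive to one unit (analogous to changing the A-B frequency difference) lengthens that unit's dominance durations.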
Andersen, Tobias; Mamassian, Pascal
A change in sound intensity can facilitate luminance change detection. We found that this effect did not depend on whether sound intensity and luminance increased or decreased. In contrast, luminance identification was strongly influenced by the congruence of luminance and sound intensity change … leaving only unsigned stimulus transients as the basis for audiovisual integration. Facilitation of luminance detection occurred even with varying audiovisual stimulus onset asynchrony and even when the sound lagged behind the luminance change by 75 ms, supporting the interpretation that perceptual … integration rather than a reduction of temporal uncertainty or effects of attention caused the effect.
Keynesians know that if US austerity advocates had received just a few more votes in the November 2008 election, there would have been no fiscal stimulus or financial rescue in 2009 and the Great Recession would have turned into a second great depression. 'Keynesian' means recognizing the crucial role of aggregate demand, grasping the paradox of saving, advocating fiscal stimulus (tax cuts as well as government spending) in a recession despite the temporary increase in debt that it genera...
Hubbard, Timothy L.
The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…
Cambiaghi, Marco; Grosso, Anna; Renna, Annamaria; Sacchetti, Benedetto
Memories of frightening events require a protracted consolidation process. Sensory cortex, such as the auditory cortex, is involved in the formation of fearful memories with a more complex sensory stimulus pattern. It remains controversial, however, whether the auditory cortex is also required for fearful memories related to simple sensory stimuli. In the present study, we found that, 1 d after training, the temporary inactivation of either the most anterior region of the auditory cortex, including the primary (Te1) cortex, or the most posterior region, which included the secondary (Te2) component, did not affect the retention of recent memories, which is consistent with the current literature. However, at this time point, the inactivation of the entire auditory cortices completely prevented the formation of new memories. Amnesia was site specific, was not due to impaired perception or processing of auditory stimuli, and was strictly related to interference with memory consolidation processes. Strikingly, at a late time interval 4 d after training, blocking the posterior part (encompassing the Te2) alone impaired memory retention, whereas the inactivation of the anterior part (encompassing the Te1) left memory unaffected. Together, these data show that the auditory cortex is necessary for the consolidation of auditory fearful memories related to simple tones in rats. Moreover, these results suggest that, at early time intervals, memory information is processed in a distributed network composed of both the anterior and the posterior auditory cortical regions, whereas, at late time intervals, memory processing is concentrated in the most posterior part containing the Te2 region. Memories of threatening experiences undergo a prolonged process of "consolidation" to be maintained for a long time. The dynamics of fearful memory consolidation are poorly understood. Here, we show that 1 d after learning, memory is processed in a distributed network composed of both primary Te1 and
Stufflebeam, S M; Poeppel, D; Rowley, H A; Roberts, T P
Recent work has suggested that, in addition to spatial tonotopy, pitch and timbre information may be encoded in the temporal activity of the auditory cortex. Specifically, the post-stimulus latency of the maximal cortical evoked neuromagnetic field (M100 or N1m) is a function of stimulus frequency. We investigated the additional effect of varying the stimulus intensity on the M100 response. A 37-channel biomagnetometer recorded neuromagnetic fields over the temporal lobe of healthy volunteers in response to monaurally presented tones. The frequency dependence of the M100 latency remained remarkably invariant even at low stimulus intensity. Thus, for peri-threshold stimuli, frequency information appears encoded in the temporal form of the evoked response.
Weinberger, Norman M.
Standard beliefs that the function of the primary auditory cortex (A1) is the analysis of sound have proven to be incorrect. Its involvement in learning, memory and other complex processes in both animals and humans is now well-established, although often not appreciated. Auditory coding is strongly modified by associative learning, evident as associative representational plasticity (ARP) in which the representation of an acoustic dimension, like frequency, is re-organized to emphasize a sound that has become behaviorally important. For example, the frequency tuning of a cortical neuron can be shifted to match that of a significant sound and the representational area of sounds that acquire behavioral importance can be increased. ARP depends on the learning strategy used to solve an auditory problem and the increased cortical area confers greater strength of auditory memory. Thus, primary auditory cortex is involved in cognitive processes, transcending its assumed function of auditory stimulus analysis. The implications for basic neuroscience and clinical auditory neuroscience are presented and suggestions for remediation of auditory processing disorders are introduced. PMID:25356375
Yang, Ming-Tao; Hsu, Chun-Hsien; Yeh, Pei-Wen; Lee, Wang-Tso; Liang, Jao-Shwann; Fu, Wen-Mei; Lee, Chia-Ying
Inattention (IA) has been a major problem in children with attention deficit/hyperactivity disorder (ADHD), accounting for their behavioral and cognitive dysfunctions. However, there are at least three processing steps underlying attentional control for auditory change detection, namely pre-attentive change detection, involuntary attention orienting, and attention reorienting for further evaluation. This study aimed to examine whether children with ADHD would show deficits in any of these subcomponents by using mismatch negativity (MMN), P3a, and late discriminative negativity (LDN) as event-related potential (ERP) markers, under the passive auditory oddball paradigm. Two types of stimuli, pure tones and Mandarin lexical tones, were used to examine whether the deficits were general across linguistic and non-linguistic domains. Participants included 15 native Mandarin-speaking children with ADHD and 16 age-matched controls (across groups, age ranged between 6 and 15 years). Two passive auditory oddball paradigms (lexical tones and pure tones) were applied. The pure tone oddball paradigm included a standard stimulus (1000 Hz, 80%) and two deviant stimuli (1015 and 1090 Hz, 10% each). The Mandarin lexical tone oddball paradigm's standard stimulus was /yi3/ (80%), and its two deviant stimuli were /yi1/ and /yi2/ (10% each). The results showed no MMN difference, but did show attenuated P3a and enhanced LDN to the large deviants for both pure and lexical tone changes in the ADHD group. Correlation analysis showed that children with higher ADHD tendency, as indexed by parents' and teachers' ratings of ADHD symptoms, showed less positive P3a amplitudes when responding to large lexical tone deviants. Thus, children with ADHD showed impaired auditory change detection for both pure tones and lexical tones in both involuntary attention switching and attention reorienting for further evaluation. These ERP markers may therefore be used for the evaluation of anti-ADHD drugs that aim to
Schwarz, D W F; Taylor, P
Binaural beat sensations depend upon a central combination of two different temporally encoded tones, separately presented to the two ears. We tested the feasibility of recording an auditory steady-state evoked response (ASSR) at the binaural beat frequency in order to find a measure of temporal coding of sound in the human EEG. We stimulated each ear with a distinct tone, the two differing in frequency by 40 Hz, to record a binaural beat ASSR. As a control, we evoked a beat ASSR in response to both tones presented to the same ear. We band-pass filtered the EEG at 40 Hz, averaged with respect to stimulus onset, and compared ASSR amplitudes and phases, extracted from a sinusoidal non-linear regression fit to a 40 Hz period average. A 40 Hz binaural beat ASSR was evoked at a low mean stimulus frequency (400 Hz) but became undetectable beyond 3 kHz. Its amplitude was smaller than that of the acoustic beat ASSR, which was evoked at both low and high frequencies. Both ASSR types had maxima at fronto-central leads and displayed a fronto-occipital phase delay of several ms. The dependence of the 40 Hz binaural beat ASSR on stimuli at low, temporally coded tone frequencies suggests that it may objectively assess temporal sound coding ability. The phase shift across the electrode array is evidence for more than one origin of the 40 Hz oscillations. The binaural beat ASSR is an evoked response, with novel diagnostic potential, to a signal that is not present in the stimulus but generated within the brain.
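The amplitude/phase extraction step this abstract describes can be sketched as follows. Fitting cosine and sine regressors by least squares is a linear reformulation of fitting a sinusoid of known frequency to a period-averaged waveform; the sampling rate and the synthetic "period average" below are illustrative assumptions, not the study's data.

```python
import numpy as np

# Sketch of ASSR amplitude/phase extraction: least-squares fit of a 40 Hz
# sinusoid to a period-averaged waveform. Sampling rate and the synthetic
# data are assumptions for illustration only.
fs, f0 = 1000.0, 40.0                     # assumed sampling rate; ASSR frequency
t = np.arange(0, 1 / f0, 1 / fs)          # one 25 ms period of the 40 Hz cycle

# synthetic period average with known amplitude and phase, plus noise
true_amp, true_phase = 2.0, 0.7
rng = np.random.default_rng(1)
y = true_amp * np.cos(2 * np.pi * f0 * t - true_phase) \
    + 0.05 * rng.standard_normal(t.size)

# design matrix [cos, sin]; solve y ≈ a*cos + b*sin by least squares
X = np.column_stack([np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)])
a, b = np.linalg.lstsq(X, y, rcond=None)[0]
amp = np.hypot(a, b)                      # fitted ASSR amplitude
phase = np.arctan2(b, a)                  # fitted phase (rad)
```

Since y = A·cos(2πf0·t − φ) expands to A·cosφ·cos(2πf0·t) + A·sinφ·sin(2πf0·t), the fitted coefficients recover amplitude as √(a² + b²) and phase as atan2(b, a); the phase difference between electrodes, divided by 2πf0, gives the fronto-occipital delay in seconds.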
Daniel B Forger
An important problem in neuronal computation is to discern how features of stimuli control the timing of action potentials. One aspect of this problem is to determine how an action potential, or spike, can be elicited with the least energy cost, e.g., a minimal amount of applied current. Here we show in the Hodgkin-Huxley model of the action potential and in experiments on squid giant axons that: (1) spike generation in a neuron can be highly discriminatory for stimulus shape and (2) the optimal stimulus shape is dependent upon inputs to the neuron. We show how polarity and time course of post-synaptic currents determine which of these optimal stimulus shapes best excites the neuron. These results are obtained mathematically using the calculus of variations and experimentally using a stochastic search methodology. Our findings reveal a surprising complexity of computation at the single-cell level that may be relevant for understanding optimization of signaling in neurons and neuronal networks.
Puvvada, Krishna C; Simon, Jonathan Z
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory
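The stimulus-reconstruction approach mentioned in this abstract is commonly implemented as a regularized linear decoder mapping time-lagged neural responses back to the stimulus envelope. The sketch below is a generic illustration on synthetic data, not the authors' MEG pipeline; the filter, lag count, and regularization value are all assumptions.

```python
import numpy as np

# Generic stimulus-reconstruction sketch: a ridge-regularized linear decoder
# maps time-lagged "neural responses" back to the stimulus envelope. All
# data are synthetic and all names/values are illustrative assumptions.
rng = np.random.default_rng(2)
n, lags, lam = 2000, 10, 1e-2
stim = rng.standard_normal(n)                       # stand-in speech envelope
h = np.array([0.5, 0.3, 0.1])                       # assumed encoding filter
resp = np.convolve(stim, h)[:n] + 0.1 * rng.standard_normal(n)

# lagged design matrix: column k holds the response k samples in the future,
# since the response lags the stimulus it encodes
X = np.column_stack([np.roll(resp, -k) for k in range(lags)])
X[-lags:] = 0.0                                     # drop wrap-around samples
w = np.linalg.solve(X.T @ X + lam * np.eye(lags), X.T @ stim)   # ridge fit
recon = X @ w                                       # reconstructed stimulus

# fidelity: correlation between true and reconstructed stimulus
r = float(np.corrcoef(stim, recon)[0, 1])
```

Comparing such reconstruction correlations across cortical sources and across attended versus ignored streams is one way to quantify the representational fidelity differences the abstract reports.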
Background and Aim: Learning disability (LD) is one of the most prevalent problems among elementary school children; approximately 10 percent of all elementary school children suffer from it. It has been determined that learning disability is predominantly accompanied by subtle impairment in the central auditory nervous system. The main idea of this study was to evaluate middle latency auditory evoked potentials (MLAEPs) in learning-disabled children. Materials and Method: This cross-sectional study investigated middle latency auditory evoked potentials in children with learning disability (n = 31) compared to normal children (n = 31). Latencies and amplitudes of MLAEP components at different stimulus intensities and with binaural stimulation were compared between the two groups. Results: Compared to the control group, learning-disabled children exhibited smaller amplitudes for all components except the right-ear Na and Pa. There was no significant difference between the two groups in the latencies of the components. Conclusion: It seems that middle latency auditory evoked potentials may be useful in the diagnosis and evaluation of learning-disabled children, although more investigation is required.
Masoud Motalebi Kashani
Background and Aim: Sound conditioning is exposure to a non-traumatic, moderate level of sound, which increases inner ear resistance to subsequent severe noise. In this study, we aimed to survey the effect of sound conditioning on auditory brainstem response (ABR) threshold shifts using a click stimulus, and the effect of the conditioning frequency on hearing protection. Methods: Fifteen guinea pigs were randomly divided into 3 groups. The two conditioned groups were exposed to 1 kHz and 4 kHz octave band noise, respectively, at 85 dB SPL, 6 hours per day for 5 days. On the sixth day, the animals were exposed to 4 kHz octave band noise at 105 dB SPL for 4 hours. The control group was exposed to the intense noise, 4 kHz at 105 dB SPL for 4 hours, without conditioning. After exposure, click-evoked ABR thresholds were recorded one hour and 7 days after the noise exposure. Results: The ABR results with the click stimulus showed smaller threshold shifts in the conditioned groups than in the control group (p ≤ 0.001). Comparison of the two conditioned groups showed a smaller threshold shift with 4 kHz conditioning; however, this difference was not statistically significant (p > 0.05). Conclusion: The electrophysiological data of our study showed that sound conditioning has a protective effect against subsequent intense noise exposure, and that the frequency of conditioning does not have a significant effect on ABR threshold shifts when using a click stimulus.
Chermak, Gail D
APD is not a label for a unitary disease entity but rather a description of functional deficits. It is a complex and heterogeneous group of auditory-specific disorders usually associated with a range of listening and learning deficits [3,4]. Underlying APD is a deficit observed in one or more of the auditory processes responsible for generating the auditory evoked potentials and the following behaviors: sound localization and lateralization; auditory discrimination; auditory pattern recognition; temporal aspects of audition, including temporal resolution, masking, integration, and ordering; auditory performance with competing acoustic signals; and auditory performance with degraded acoustic signals. Comprehensive assessment is necessary for the accurate differential diagnosis of APD from other "look-alike" disorders, most notably ADHD and language processing disorders. Speech-language pathologists, psychologists, educators, and physicians contribute to this more comprehensive assessment. The primary role of otolaryngologists is to evaluate and treat peripheral hearing disorders, such as otitis media. Children with APDs may present to an otolaryngologist, thus requiring the physician to make appropriate referrals for assessment and intervention. Currently, diagnosis of APD is based on the outcomes of behavioral tests, supplemented by electroacoustic measures and, to a lesser extent, by electrophysiologic measures. Intervention for APD focuses on improving the quality of the acoustic signal and the listening environment, improving auditory skills, and enhancing utilization of metacognitive and language resources. Additional controlled case studies and single-subject and group research designs are needed to ascertain systematically the relative efficacy of various treatment and management approaches.
Davison, Michael; Baum, William M.
Four pigeons were trained in a procedure in which concurrent-schedule food ratios changed unpredictably across seven unsignaled components after 10 food deliveries. Additional green-key stimulus presentations also occurred on the two alternatives, sometimes in the same ratio as the component food ratio, and sometimes in the inverse ratio. In eight…
Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D
Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Schulz, Andreas L.; Woldeit, Marie L.; Gonçalves, Ana I.; Saldeitis, Katja; Ohl, Frank W.
Goal directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they are still weakly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttlebox to discriminate between falling and rising frequency modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed. PMID:26793085
Morrill, Ryan J; Hasenstaub, Andrea R
The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information is integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors.
Eliades, Steven J; Wang, Xiaoqin
During speech, humans continuously listen to their own vocal output to ensure accurate communication. Such self-monitoring is thought to require the integration of information about the feedback of vocal acoustics with internal motor control signals. The neural mechanism of this auditory-vocal interaction remains largely unknown at the cellular level. Previous studies in naturally vocalizing marmosets have demonstrated diverse neural activities in auditory cortex during vocalization, dominated by a vocalization-induced suppression of neural firing. How underlying auditory tuning properties of these neurons might contribute to this sensory-motor processing is unknown. In the present study, we quantitatively compared marmoset auditory cortex neural activities during vocal production with those during passive listening. We found that neurons excited during vocalization were readily driven by passive playback of vocalizations and other acoustic stimuli. In contrast, neurons suppressed during vocalization exhibited more diverse playback responses, including responses that were not predictable by auditory tuning properties. These results suggest that vocalization-related excitation in auditory cortex is largely a sensory-driven response. In contrast, vocalization-induced suppression is not well predicted by a neuron's auditory responses, supporting the prevailing theory that internal motor-related signals contribute to the auditory-vocal interaction observed in auditory cortex. Copyright © 2017 Elsevier B.V. All rights reserved.
Mellor, James R; Barnes, Clarissa S; Rehfeldt, Ruth Anne
The current research investigated whether intraverbals would emerge following auditory tact instruction. Participants were first taught to tact auditory stimuli by providing the name of the item or animal that produces the sound (e.g., saying "eagle" when presented with the recording of an eagle cawing). Following test probes for simple intraverbals as well as intraverbal categorization, participants were taught to tact what each auditory stimulus is (e.g., saying "caw" when presented with the recording of an eagle cawing). Following both tact instructional phases, the effects of an auditory imagining instruction procedure on target intraverbals were examined. Results indicate that following both tact instructional phases, intraverbals increased for three of the four participants. Auditory imagining instruction was sufficient for two of the four participants to reach the mastery criterion, and two of the four participants needed some direct instruction. Low covariation between simple intraverbals and categorization was also observed. Functional interdependence between tacts and intraverbals and the possible role of a conditioned hearing response are discussed.
Nielsen, Lars Bramsløw
An auditory model based on the psychophysics of hearing has been developed and tested. The model simulates the normal ear or an impaired ear with a given hearing loss. Based on reviews of the current literature, the frequency selectivity and loudness growth as functions of threshold and stimulus … level have been found and implemented in the model. The auditory model was verified against selected results from the literature, and it was confirmed that the normal spread of masking and loudness growth could be simulated in the model. The effects of hearing loss on these parameters were also … in qualitative agreement with recent findings. The temporal properties of the ear have currently not been included in the model. As an example of a real-world application of the model, loudness spectrograms for a speech utterance were presented. By introducing hearing loss, the speech sounds became less audible …
Kondo, Hirohito M; van Loon, Anouk M; Kawahara, Jun-Ichiro; Moore, Brian C J
We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how 'scene analysis' is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis.This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).
Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David
Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.
Objectives: This study investigated the efficacy of working memory training for improving working memory capacity and related auditory stream segregation in children with auditory processing disorder. Methods: Fifteen subjects (9-11 years, clinically diagnosed with auditory processing disorder) participated in this non-randomized case-controlled trial. Working memory abilities and auditory stream segregation were evaluated prior to beginning and six weeks after completing the training program. Ten control subjects, who did not participate in the training program, underwent the same battery of tests at time intervals equivalent to the trained subjects. Differences between the two groups were measured using a repeated measures analysis of variance. Results: The results of this study indicated that children who received auditory working memory training performed significantly better on working memory abilities and the auditory stream segregation task than children who did not receive the training program. Discussion: Results from this case-control study support the benefits of working memory training for children with auditory processing disorders and indicate that training of auditory working memory is especially important for this population.
Li, Qi; Yang, Huamin; Sun, Fang; Wu, Jinglong
Sensory information is multimodal; through audiovisual interaction, task-irrelevant auditory stimuli tend to speed response times and increase visual perception accuracy. However, the mechanisms underlying these performance enhancements have remained unclear. We hypothesize that task-irrelevant auditory stimuli might provide reliable temporal and spatial cues for visual target discrimination and behavioral response enhancement. Using signal detection theory, the present study investigated the effects of spatiotemporal relationships on auditory facilitation of visual target discrimination. Three experiments were conducted where an auditory stimulus maintained reliable temporal and/or spatial relationships with visual target stimuli. Results showed that perception sensitivity (d') to visual target stimuli was enhanced only when a task-irrelevant auditory stimulus maintained reliable spatiotemporal relationships with a visual target stimulus. When only the spatial or only the temporal relationship was reliable, perception sensitivity was not enhanced. These results suggest that reliable spatiotemporal relationships between visual and auditory signals are required for audiovisual integration during a visual discrimination task, most likely due to a spread of attention. These results also indicate that auditory facilitation of visual target discrimination follows from late-stage cognitive processes rather than early-stage sensory processes. © 2015 SAGE Publications.
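As an illustrative aside, the sensitivity index d' used in the study above is computed from hit and false-alarm rates as the difference of their z-transforms. A minimal sketch (the rates shown are hypothetical, not taken from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates: 85% hits, 15% false alarms
print(round(d_prime(0.85, 0.15), 3))  # → 2.073
```

A larger d' indicates better discrimination of the visual target independent of response bias, which is why the authors report sensitivity rather than raw accuracy.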
Yuasa, Kenichi; Yotsumoto, Yuko
When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether interval timing of visually and aurally presented objects shares a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations for auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects were cancelled out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.
Hou, Yanlian; Xiao, Xiaoyan; Ren, Jianmin; Wang, Yajuan; Zhao, Faming
More attention has recently been focused on auditory impairment in young type 1 diabetics. This study aimed to evaluate the auditory function of young type 1 diabetics and the correlation between clinical indexes and hearing impairment. We evaluated the auditory function of 50 type 1 diabetics and 50 healthy subjects. Clinical indexes were measured and their relation to auditory function was analyzed. Type 1 diabetic patients demonstrated a deficit, with elevated thresholds in both the right and left ears, when compared to healthy controls. Latencies in the right ear (wave V and interwave I-V) and the left ear (waves III and V, interwaves I-III and I-V) were significantly increased in the diabetic group compared to control subjects (p < 0.01). Type 1 diabetics exhibited higher auditory thresholds, slower auditory conduction time and cochlear impairment. HDL-cholesterol, diabetes duration, systemic blood pressure, microalbuminuria, GHbA1C, triglyceride, and age may affect the auditory function of type 1 diabetics. Copyright © 2015 IMSS. Published by Elsevier Inc. All rights reserved.
Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku
The auditory illusory perception "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. Resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears separated in higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed have not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced scale illusion (ILL) and those that mimicked the percept of scale illusion (PCP), and we compared the activation responses evoked by those stimuli by region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation by the illusion process. Results of the activation difference of the two stimuli, measured at varied tempi of tone presentation, in the superior temporal auditory cortex were not explained by adaptation. Instead, excess activation of the ILL stimulus from the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in the illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation. Copyright © 2016 Elsevier B.V. All rights reserved.
Antonio-Santos, Aileen; Vedula, Satyanarayana S; Hatt, Sarah R; Powell, Christine
Background Stimulus deprivation amblyopia (SDA) develops due to an obstruction to the passage of light secondary to a condition such as cataract. The obstruction prevents formation of a clear image on the retina. SDA can be resistant to treatment, leading to poor visual prognosis. SDA probably constitutes less than 3% of all amblyopia cases, although precise estimates of prevalence are unknown. In developed countries, most patients present under the age of one year; in less developed parts of the world patients are likely to be older at the time of presentation. The mainstay of treatment is removal of the cataract and then occlusion of the better-seeing eye, but regimens vary, can be difficult to execute, and traditionally are believed to lead to disappointing results. Objectives Our objective was to evaluate the effectiveness of occlusion therapy for SDA in an attempt to establish realistic treatment outcomes. Where data were available, we also planned to examine evidence of any dose response effect and to assess the effect of the duration, severity, and causative factor on the size and direction of the treatment effect. Search methods We searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (The Cochrane Library 2013, Issue 9), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to October 2013), EMBASE (January 1980 to October 2013), the Latin American and Caribbean Literature on Health Sciences (LILACS) (January 1982 to October 2013), PubMed (January 1946 to October 2013), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrials.gov) and the WHO International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 28 October 2013. Selection criteria We planned
Kouni, Sophia N; Giannopoulos, Sotirios; Ziavra, Nausika; Koutsojannis, Constantinos
Acoustic signals are transmitted through the external and middle ear mechanically to the cochlea, where they are transduced into electrical impulses for further transmission via the auditory nerve. The auditory nerve encodes the acoustic sounds that are conveyed to the auditory brainstem. Multiple brainstem nuclei, the cochlea, the midbrain, the thalamus, and the cortex constitute the central auditory system. In clinical practice, auditory brainstem responses (ABRs) to simple stimuli such as clicks or tones are widely used. Recently, complex stimuli, or complex auditory brain responses (cABRs), such as monosyllabic speech stimuli and music, have been used as a tool to study brainstem processing of speech sounds. We used the classic 'click' as well as, for the first time, the artificial successive complex stimuli 'ba', which constitutes the Greek word 'baba' corresponding to the English 'daddy'. Twenty young adults institutionally diagnosed as dyslexic (10 subjects) or light dyslexic (10 subjects) comprised the study group. Twenty sex-, age-, education-, hearing sensitivity-, and IQ-matched normal subjects comprised the control group. Measurements included the absolute latencies of waves I through V and the interpeak latencies elicited by the classical acoustic click, as well as the negative peak latencies of the A and C waves and the interpeak latencies of A-C elicited by the verbal stimulus 'baba', created on a digital speech synthesizer. The absolute peak latencies of waves I, III, and V in response to monaural rarefaction clicks, as well as the interpeak latencies I-III, III-V, and I-V in the dyslexic subjects, although increased in comparison with normal subjects, did not reach the level of a significant difference. In the subjects diagnosed with 'learning disabilities' and characterized as having 'light' dyslexia according to dyslexia tests, no significant delays were found in peak latencies A and C or interpeak latencies A-C in comparison with the control group. Acoustic
Langers, Dave R.M.; Krumbholz, Katrin; Bowtell, Richard W.; Hall, Deborah A.
Although a consensus is emerging in the literature regarding the tonotopic organisation of auditory cortex in humans, previous studies employed a vast array of different neuroimaging protocols. In the present functional magnetic resonance imaging (fMRI) study, we made a systematic comparison between stimulus protocols involving jittered tone sequences with either a narrowband, broadband, or sweep character in order to evaluate their suitability for the purpose of tonotopic mapping. Data-drive...
DiMattina, Christopher; Zhang, Kechen
In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system...
Fuhrman, Susan I; Redfern, Mark S; Jennings, J Richard; Furman, Joseph M
This study investigated whether spatial aspects of an information processing task influence dual-task interference. Two groups (older/young) of healthy adults participated in dual-task experiments. Two auditory information processing tasks included a frequency discrimination choice reaction time task (non-spatial task) and a lateralization choice reaction time task (spatial task). Postural tasks included combinations of standing with eyes open or eyes closed on either a fixed floor or a sway-referenced floor. Reaction times and postural sway via center of pressure were recorded. Baseline measures of reaction time and sway were subtracted from the corresponding dual-task results to calculate reaction time task costs and postural task costs. Reaction time task cost increased with eye closure (p = 0.01) and with sway-referenced flooring. A vision × age interaction indicated that older subjects had a significant vision × task interaction whereas young subjects did not. However, when analyzed by age group, the young group showed minimal differences in interference between the spatial and non-spatial tasks with eyes open, but showed increased interference on the spatial relative to the non-spatial task with eyes closed. In contrast, older subjects demonstrated increased interference on the spatial relative to the non-spatial task with eyes open, but not with eyes closed. These findings suggest that visual-spatial interference may occur in older subjects when vision is used to maintain posture.
Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude
Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli with only few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
Rance, Gary; Corben, Louise; Barker, Elizabeth; Carew, Peter; Chisari, Donella; Rogers, Meghan; Dowell, Richard; Jamaluddin, Saiful; Bryson, Rochelle; Delatycki, Martin B
Friedreich's ataxia (FRDA) is an inherited ataxia with a range of progressive features including axonal degeneration of sensory nerves. The aim of this study was to investigate auditory perception in affected individuals. Fourteen subjects with genetically defined FRDA participated. Two control groups, one consisting of healthy, normally hearing individuals and another comprised of subjects with sensorineural hearing loss, were also assessed. Auditory processing was evaluated using structured tasks designed to reveal the listeners' ability to perceive temporal and spectral cues. Findings were then correlated with open-set speech understanding. Nine of 14 individuals with FRDA showed evidence of auditory processing disorder. Gap and amplitude modulation detection levels in these subjects were significantly elevated, indicating impaired encoding of rapid signal changes. Electrophysiologic findings (auditory brainstem response, ABR) also reflected disrupted neural activity. Speech understanding was significantly affected in these listeners and the degree of disruption was related to temporal processing ability. Speech analyses indicated that timing cues (notably consonant voice onset time and vowel duration) were most affected. The results suggest that auditory pathway abnormality is a relatively common consequence of FRDA. Regular auditory evaluation should therefore be part of the management regime for all affected individuals. This assessment should include both ABR testing, which can provide insights into the degree to which auditory neural activity is disrupted, and some functional measure of hearing capacity such as speech perception assessment, which can quantify the disorder and provide a basis for intervention. Copyright 2009 S. Karger AG, Basel.
It is well established that auditory cortex is plastic on different time scales and that this plasticity is driven by the reinforcement that is used to motivate subjects to learn or to perform an auditory task. Motivated by these findings, we study in detail properties of neuronal firing in auditory cortex that are related to reward feedback. We recorded from the auditory cortex of two monkeys while they were performing an auditory categorization task. Monkeys listened to a sequence of tones and had to signal when the frequency of adjacent tones stepped in the downward direction, irrespective of the tone frequency and step size. Correct identifications were rewarded with either a large or a small amount of water. The size of the reward depended on the monkeys' performance in the previous trial: it was large after a correct trial and small after an incorrect trial. The rewards served to maintain task performance. During task performance we found three successive periods of neuronal firing in auditory cortex that reflected (1) the reward expectancy for each trial, (2) the reward size received, and (3) the mismatch between the expected and delivered reward. These results, together with control experiments, suggest that auditory cortex receives reward feedback that could be used to adapt auditory cortex to task requirements. Additionally, the results presented here extend previous observations of non-auditory roles of auditory cortex and show that auditory cortex is even more cognitively influenced than previously recognized.
Levine, R A; Gardner, J C; Stufflebeam, S M; Fullerton, B C; Carlisle, E W; Furst, M; Rosen, B R; Kiang, N Y
In order to relate human auditory processing to physiological and anatomical experimental animal data, we have examined the interrelationships between behavioral, electrophysiological and anatomical data obtained from human subjects with focal brainstem lesions. Thirty-eight subjects with multiple sclerosis were studied with tests of interaural time and level discrimination (just noticeable differences or jnds), brainstem auditory evoked potentials (BAEPs) and magnetic resonance (MR) imaging. Interaural testing used two types of stimuli, high-pass (> 4000 Hz) and low-pass (< 1000 Hz) noise bursts. Abnormal time jnds (Tjnd) were far more common than abnormal level jnds (70% vs 11%), especially for the high-pass (Hp) noise (70% abnormal vs 40% abnormal for low-pass (Lp) noise). The HpTjnd could be abnormal with no other abnormalities; however, whenever the BAEPs, LpTjnd and/or level jnds were abnormal, the HpTjnd was always abnormal. Abnormal wave III amplitude was associated with abnormalities in both time jnds, but abnormal wave III latency with only abnormal HpTjnds. Abnormal wave V amplitude, when unilateral, was associated with a major HpTjnd abnormality, and, when bilateral, with both HpTjnd and LpTjnd major abnormalities. Sixteen of the subjects had their MR scans obtained with a uniform protocol and could be analyzed with objective criteria. In all four subjects with lesions involving the pontine auditory pathway, the BAEPs and both time jnds were abnormal. Of the twelve subjects with no lesions involving the pontine auditory pathway, all had normal BAEPs and level jnds, ten had normal LpTjnds, but only five had normal HpTjnds. We conclude that interaural time discrimination is closely related to the BAEPs and is dependent upon the stimulus spectrum. Redundant encoding of low-frequency sounds in the discharge patterns of auditory neurons may explain why the HpTjnd is a better indicator of neural desynchrony than the LpTjnd. Encroachment of MS lesions upon the pontine
Sheinin, Anton; Lavi, Ayal; Michaelevski, Izhak
Electrical stimulus isolators are widely used devices in electrophysiology. The timing of stimulus application is usually automated and controlled by an external device or acquisition software; however, the intensity of the stimulus is adjusted manually. Inaccuracy, lack of reproducibility and the absence of automation of the experimental protocol are disadvantages of manual adjustment. To overcome these shortcomings, we developed StimDuino, an inexpensive Arduino-controlled stimulus isolator allowing highly accurate, reproducible, automated setting of the stimulation current. The intensity of the stimulation current delivered by StimDuino is controlled by Arduino, an open-source microcontroller development platform. The automatic stimulation patterns are software-controlled and the parameters are set from a Matlab-coded, simple, intuitive and user-friendly graphical user interface. The software also allows remote control of the device over the network. Electrical current measurements showed that StimDuino produces the requested current output with high accuracy. In both hippocampal slice and in vivo recordings, fEPSP measurements obtained with StimDuino and with commercial stimulus isolators showed high correlation. Commercial stimulus isolators are manually managed, whereas StimDuino generates automatic stimulation patterns with increasing current intensity. This pattern is utilized for input-output relationship analysis, necessary for the assessment of excitability. In contrast to StimDuino, not all commercial devices are capable of remote control of the parameters and stimulation process. StimDuino-generated automation of the input-output relationship assessment eliminates the need to adjust current intensity manually, improves stimulation reproducibility and accuracy, and allows on-site and remote control of the stimulation parameters. Copyright © 2015 Elsevier B.V. All rights reserved.
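The input-output protocol described above steps the stimulation current upward in fixed increments. A minimal sketch of generating such an ascending-intensity pattern (the range, step size and units are hypothetical illustrations, not StimDuino's actual command interface):

```python
def io_current_steps(start_ua: float, stop_ua: float, step_ua: float) -> list:
    """Ascending current intensities (microamps) for an input-output protocol.

    Each value would be sent to the isolator in turn while the evoked
    response (e.g., fEPSP slope) is recorded, yielding an I/O curve.
    """
    steps = []
    current = start_ua
    while current <= stop_ua + 1e-9:  # tolerance guards float accumulation
        steps.append(round(current, 3))
        current += step_ua
    return steps

print(io_current_steps(10, 100, 10))
# → [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
```

Automating this sequence, rather than turning a dial by hand between sweeps, is what gives the reproducibility gain the abstract describes.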
Plakas, Anna; van Zuijen, Titia; van Leeuwen, Theo; Thomson, Jennifer M; van der Leij, Aryan
Impaired auditory sensitivity to amplitude rise time (ART) has been suggested to be a primary deficit in developmental dyslexia. The present study investigates whether impaired ART-sensitivity at a pre-reading age precedes and predicts later emerging reading problems in a sample of Dutch children. An oddball paradigm, with a deviant that differed from the standard stimulus in ART, was administered to 41-month-old children (30 genetically at-risk for developmental dyslexia and 14 controls) with concurrent EEG measurement. A second deviant that differed from the standard stimulus in frequency served as a control deviant. Grade two reading scores were used to divide the at-risks in a typical-reading and a dyslexic subgroup. We found that both ART- and frequency processing were related to later reading skill. We however also found that irrespective of reading level, the at-risks in general showed impaired basic auditory processing when compared to controls and that it was impossible to discriminate between the at-risk groups on basis of both auditory measures. A relatively higher quality of early expressive syntactic skills in the typical-reading at-risk group might indicate a protective factor against negative effects of impaired auditory processing on reading development. Based on these results we argue that ART- and frequency-processing measures, although they are related to reading skill, lack the power to be considered single-cause predictors of developmental dyslexia. More likely, they are genetically driven risk factors that may add to cumulative effects on processes that are critical for learning to read. Copyright © 2012 Elsevier Ltd. All rights reserved.
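The oddball paradigm used in the study above interleaves rare deviants among frequent standards. A minimal sketch of generating such a trial sequence (the 10% deviant probability and the no-adjacent-deviants constraint are common conventions assumed here, not details confirmed by the study):

```python
import random

def oddball_sequence(n_trials: int, p_deviant: float = 0.1, seed: int = 0) -> list:
    """Generate an oddball sequence of 'std'/'dev' trial labels.

    Enforces the common constraint that two deviants never occur in a row,
    so each deviant is preceded by at least one standard.
    """
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "dev":
            seq.append("std")  # force a standard right after a deviant
        elif rng.random() < p_deviant:
            seq.append("dev")
        else:
            seq.append("std")
    return seq

seq = oddball_sequence(500)
print(seq.count("dev") / len(seq))  # close to the requested 0.1
```

In an EEG experiment, each deviant trial would additionally be tagged by which feature (here, rise time or frequency) distinguishes it from the standard.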
Elbert, Sarah Pietertje; Dijkstra, Arie; Oenema, Anke
Mobile phone apps are increasingly used to deliver health interventions, which provide the opportunity to present health information via different communication modes. However, scientific evidence regarding the effects of such health apps is scarce. In a randomized controlled trial, we tested the efficacy of a 6-month intervention delivered via a mobile phone app that communicated either textual or auditory tailored health information aimed at stimulating fruit and vegetable intake. A control condition in which no health information was given was added. Perceived own health and health literacy were included as moderators to assess for which groups the interventions could possibly lead to health behavior change. After downloading the mobile phone app, respondents were exposed monthly to either text-based or audio-based tailored health information and feedback over a period of 6 months via the mobile phone app. Respondents in the control condition only completed the baseline and posttest measures. Within a community sample (online recruitment), self-reported fruit and vegetable intake at 6-month follow-up was our primary outcome measure. In total, 146 respondents (ranging from 40 to 58 per condition) completed the study (attrition rate 55%). A significant main effect of condition was found on fruit intake (P=.049, partial η(2)=0.04). A higher fruit intake was found after exposure to the auditory information, especially in recipients with a poor perceived own health (P=.003, partial η(2)=0.08). In addition, health literacy moderated the effect of condition on vegetable intake 6 months later. The app seems to have the potential to change fruit and vegetable intake up to 6 months later, at least for specific groups. We found different effects for fruit and vegetable intake, respectively, suggesting that different underlying psychological mechanisms are associated with these specific behaviors. Based on our results, it seems worthwhile
Schwent, V. L.; Hillyard, S. A.
Ten subjects were presented with random, rapid sequences of four auditory tones which were separated in pitch and apparent spatial position. The N1 component of the auditory vertex evoked potential (EP) measured relative to a baseline was observed to increase with attention. It was concluded that the N1 enhancement reflects a finely tuned selective attention to one stimulus channel among several concurrent, competing channels. This EP enhancement probably increases with increased information load on the subject.
Chen, Xi; Guo, Yiping; Feng, Jingyu; Liao, Zhengli; Li, Xinjian; Wang, Haitao; Li, Xiao; He, Jufang
Damage to the medial temporal lobe impairs the encoding of new memories and the retrieval of memories acquired immediately before the damage in humans. In this study, we demonstrated that artificial visuoauditory memory traces can be established in the rat auditory cortex and that their encoding and retrieval depend on the entorhinal cortex of the medial temporal lobe. We trained rats to associate a visual stimulus with electrical stimulation of the auditory cortex using a classical conditioning protocol. After conditioning, we examined the associative memory traces electrophysiologically (i.e., visual stimulus-evoked responses of auditory cortical neurons) and behaviorally (i.e., visual stimulus-induced freezing and visual stimulus-guided reward retrieval). The establishment of a visuoauditory memory trace in the auditory cortex, which was detectable by electrophysiological recordings, was achieved over 20-30 conditioning trials and was blocked by unilateral, temporary inactivation of the entorhinal cortex. Retrieval of a previously established visuoauditory memory was also affected by unilateral entorhinal cortex inactivation. These findings suggest that the entorhinal cortex is necessary for the encoding, and involved in the retrieval, of artificial visuoauditory memory in the auditory cortex, at least during the early stages of memory consolidation.
Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen
A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395
Svendsen, Pernille Maj; Malmkvist, Jens; Halekoh, Ulrich
/neutral situation), whereas another auditory stimulus was followed by an aversive stimulus (air blow) before the inter-trial-interval (danger situation). We observed behaviour including latencies to show a response during both experiments. The High mink showed significant habituation in experiment 1 but the Low mink only showed habituation in experiment 2. Regardless of the frequency used (2 and 18 kHz), cues predicting the danger situation initially elicited slower responses compared to those predicting the safe situation but quickly became faster. Using auditory cues as discrimination stimuli for female...
Aging is often accompanied by hearing loss, which impacts how sounds are processed and represented along the ascending auditory pathways and within the auditory cortices. Here, we assess the impact of mild binaural hearing loss on the older adults’ ability to both process complex sounds embedded in noise and segregate a mistuned harmonic in an otherwise periodic stimulus. We measured auditory evoked fields (AEFs) using magnetoencephalography while participants were presented with complex tones that had either all harmonics in tune or had the third harmonic mistuned by 4 or 16% of its original value. The tones (75 dB sound pressure level, SPL) were presented without noise, with low (45 dBA SPL), or with moderate (65 dBA SPL) Gaussian noise. For each participant, we modeled the AEFs with a pair of dipoles in the superior temporal plane. We then examined the effects of hearing loss and noise on the amplitude and latency of the resulting source waveforms. In the present study, results revealed that similar noise-induced increases in N1m were present in older adults with and without hearing loss. Our results also showed that the P1m amplitude was larger in the hearing-impaired than normal-hearing adults. In addition, the object-related negativity (ORN) elicited by the mistuned harmonic was larger in hearing-impaired listeners. The enhanced P1m and ORN amplitude in the hearing-impaired older adults suggests that hearing loss increased neural excitability in auditory cortices, which could be related to deficits in inhibitory control.
Objectives: Rehabilitation strategies play a pivotal role in relieving inappropriate behaviors and improving children's performance at school. Concentration and visual and auditory comprehension in children are crucial to effective learning and have drawn interest from researchers and clinicians. Vestibular function deficits usually cause a high level of alertness and vigilance, problems in maintaining focus and paying selective attention, and altered precision of attention to the stimulus. The aim of this study is to investigate the correlation between vestibular stimulation and auditory perception in children with attention deficit hyperactivity disorder. Methods: In total, 30 children aged from 7 to 12 years with attention deficit hyperactivity disorder participated in this study. They were assessed based on the criteria of the Diagnostic and Statistical Manual of Mental Disorders. After obtaining guardian and parental consent, they were enrolled and randomly matched on age into two groups, intervention and control. The integrated visual and auditory continuous performance test was carried out as a pre-test. Those in the intervention group received vestibular stimulation during the therapy sessions, twice a week for 10 weeks. At the end, the test was administered to both groups as a post-test. Results: The pre- and post-test scores were measured and the differences between the group means were compared. Statistical analyses found a significant difference between the mean differences, indicating auditory comprehension improvement. Discussion: The findings suggest that vestibular training is a reliable and powerful treatment option for attention deficit hyperactivity disorder, especially alongside other trainings, meaning that stimulating the sense of balance highlights the importance of the interaction between inhibition and cognition.
Controlling friction and adhesion is relevant in nature and in our daily life. Such control can be achieved using stimulus responsive end-anchored polymers forming a brush. These brushes can adapt their physicochemical properties upon changing the surrounding environments, such as temperature,
Lazar, Aurel A; Slutskiy, Yevgeniy B
We present a multi-input multi-output neural circuit architecture for nonlinear processing and encoding of stimuli in the spike domain. In this architecture a bank of dendritic stimulus processors implements nonlinear transformations of multiple temporal or spatio-temporal signals such as spike trains or auditory and visual stimuli in the analog domain. Dendritic stimulus processors may act on both individual stimuli and on groups of stimuli, thereby executing complex computations that arise as a result of interactions between concurrently received signals. The results of the analog-domain computations are then encoded into a multi-dimensional spike train by a population of spiking neurons modeled as nonlinear dynamical systems. We investigate general conditions under which such circuits faithfully represent stimuli and demonstrate algorithms for (i) stimulus recovery, or decoding, and (ii) identification of dendritic stimulus processors from the observed spikes. Taken together, our results demonstrate a fundamental duality between the identification of the dendritic stimulus processor of a single neuron and the decoding of stimuli encoded by a population of neurons with a bank of dendritic stimulus processors. This duality result enabled us to derive lower bounds on the number of experiments to be performed and the total number of spikes that need to be recorded for identifying a neural circuit.
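The encoding half of this duality can be made concrete with a minimal, single-neuron sketch of spike-domain time encoding using an ideal integrate-and-fire (IAF) encoder, a standard building block in time-encoding work. This is an illustration only, not the authors' multi-input architecture; the function name `iaf_encode` and the parameter values (bias `b`, integration constant `kappa`, threshold `delta`) are illustrative assumptions.

```python
import numpy as np

def iaf_encode(u, dt, b=1.0, kappa=1.0, delta=0.02):
    """Ideal integrate-and-fire time encoder.

    Integrates (u(t) + b) / kappa over time and emits a spike whenever the
    running integral reaches the threshold delta, then resets by delta.
    Returns the spike times in seconds.
    """
    spikes = []
    y = 0.0
    for k in range(len(u)):
        y += (u[k] + b) * dt / kappa
        if y >= delta:
            spikes.append(k * dt)
            y -= delta
    return np.array(spikes)

# A toy low-frequency stimulus standing in for an analog sensory signal.
dt = 1e-4                                # 10 kHz sampling
t = np.arange(0, 0.2, dt)
u = 0.3 * np.sin(2 * np.pi * 20 * t)     # 20 Hz tone-like input
s = iaf_encode(u, dt)
```

Between consecutive spike times the encoder satisfies the t-transform, the integral of (u + b) over the interspike interval equals kappa times delta, so each interval is a linear measurement of u. This is what allows a bandlimited stimulus to be recovered by solving a linear system, the decoding side of the duality discussed above.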
Rayner, Louise H; Lee, Kwang-Hyuk; Woodruff, Peter W R
Evidence suggests that auditory hallucinations may result from abnormally enhanced auditory sensitivity. To investigate whether there is an auditory processing bias in healthy individuals who are prone to experiencing auditory hallucinations, two hundred healthy volunteers performed a temporal order judgement task in which they determined whether an auditory or a visual stimulus came first under conditions of directed attention ('attend-auditory' and 'attend-visual' conditions). The Launay-Slade Hallucination Scale was used to divide the sample into high and low hallucination-proneness groups. The high hallucination-proneness group exhibited a reduced sensitivity to auditory stimuli under the attend-auditory condition. By contrast, attention-directed visual sensitivity did not differ significantly between groups. Healthy individuals prone to hallucinatory experiences may possess a bias in attention towards internal auditory stimuli at the expense of external sounds. Interventions involving the redistribution of attentional resources may therefore have therapeutic benefit in patients experiencing auditory hallucinations.
Rodríguez, Gabriel; Márquez, Raúl; Gil, Marta; Alonso, Gumersinda; Hall, Geoffrey
According to a recent theory (Hall & Rodriguez, 2010), the latent inhibition produced by nonreinforced exposure to a target stimulus (B) will be deepened by subsequent exposure to that stimulus in compound with another (AB). This effect of compound exposure is taken to depend on the addition of a novel A to the familiar B and is not predicted for equivalent preexposure in which AB trials precede the B trials. This prediction was tested in 2 experiments using rats. Experiment 1 used an aversive procedure with flavors as the stimuli; Experiment 2 used an appetitive procedure with visual and auditory stimuli. In both, we found that conditioning with B as the conditioned stimulus proceeded more slowly (i.e., latent inhibition was greater) in subjects given the B-AB sequence in preexposure than in subjects given the AB-B sequence.
Kim, Soo Ji; Kwak, Eunmi E; Park, Eun Sook; Cho, Sung-Rae
To investigate the effects of rhythmic auditory stimulation (RAS) on gait patterns in comparison with changes after neurodevelopmental treatment (NDT/Bobath) in adults with cerebral palsy. A repeated-measures analysis between the pretreatment and posttreatment tests and a comparison study between groups. Human gait analysis laboratory. Twenty-eight cerebral palsy patients with bilateral spasticity participated in this study. The subjects were randomly allocated to either neurodevelopmental treatment (n = 13) or rhythmic auditory stimulation (n = 15). Gait training with rhythmic auditory stimulation or neurodevelopmental treatment was performed three sessions per week for three weeks. Temporal and kinematic data were analysed before and after the intervention. Rhythmic auditory stimulation was provided using a combination of a metronome beat set to the individual's cadence and rhythmic cueing from a live keyboard, while neurodevelopmental treatment was implemented following the traditional method. Temporal data, kinematic parameters and gait deviation index as a measure of overall gait pathology were assessed. Temporal gait measures revealed that rhythmic auditory stimulation significantly increased cadence, walking velocity, stride length, and step length (P < 0.05). Kinematic data demonstrated that anterior tilt of the pelvis and hip flexion during a gait cycle was significantly ameliorated after rhythmic auditory stimulation (P < 0.05). Gait deviation index also showed modest improvement in cerebral palsy patients treated with rhythmic auditory stimulation (P < 0.05). However, neurodevelopmental treatment showed that internal and external rotations of hip joints were significantly improved, whereas rhythmic auditory stimulation showed aggravated maximal internal rotation in the transverse plane (P < 0.05). Gait training with rhythmic auditory stimulation or neurodevelopmental treatment elicited differential effects on gait patterns in adults with cerebral palsy.
Jones, L A; Hills, P J; Dick, K M; Jones, S P; Bright, P
Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However, once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification.
In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment, sounds were presented consecutively at four speaker locations, inducing left- or rightwards auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left- or rightwards) more often as moving in the same direction than in the opposite direction of auditory apparent motion. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate, without affecting eye movements.
Dignath, David; Eder, Andreas B
According to a recent extension of the conflict-monitoring theory, conflict between two competing response tendencies is registered as an aversive event and triggers a motivation to avoid the source of conflict. In the present study, we tested this assumption. Over five experiments, we examined whether conflict is associated with an avoidance motivation and whether stimulus conflict or response conflict triggers an avoidance tendency. Participants first performed a color Stroop task. In a subsequent motivation test, participants responded to Stroop stimuli with approach- and avoidance-related lever movements. These results showed that Stroop-conflict stimuli increased the frequency of avoidance responses in a free-choice motivation test, and also increased the speed of avoidance relative to approach responses in a forced-choice test. High and low proportions of response conflict in the Stroop task had no effect on avoidance in the motivation test. Avoidance of conflict was, however, obtained even with new conflict stimuli that had not been presented before in a Stroop task, and when the Stroop task was replaced with an unrelated filler task. Taken together, these results suggest that stimulus conflict is sufficient to trigger avoidance.
Jääskeläinen, Iiro P; Ahveninen, Jyrki; Bonmassar, Giorgio; Dale, Anders M; Ilmoniemi, Risto J; Levänen, Sari; Lin, Fa-Hsuan; May, Patrick; Melcher, Jennifer; Stufflebeam, Steven; Tiitinen, Hannu; Belliveau, John W
Life or death in hostile environments depends crucially on one's ability to detect and gate novel sounds to awareness, such as that of a twig cracking under the paw of a stalking predator in a noisy jungle. Two distinct auditory cortex processes have been thought to underlie this phenomenon: (i) attenuation of the so-called N1 response with repeated stimulation and (ii) elicitation of a mismatch negativity response (MMN) by changes in repetitive aspects of auditory stimulation. This division has been based on previous studies suggesting that, unlike for the N1, repetitive "standard" stimuli preceding a physically different "novel" stimulus constitute a prerequisite to MMN elicitation, and that the source loci of MMN and N1 are different. Contradicting these findings, our combined electromagnetic, hemodynamic, and psychophysical data indicate that the MMN is generated as a result of differential adaptation of anterior and posterior auditory cortex N1 sources by preceding auditory stimulation. Early (approximately 85 ms) neural activity within posterior auditory cortex is adapted as sound novelty decreases. This alters the center of gravity of electromagnetic N1 source activity, creating an illusory difference between N1 and MMN source loci when estimated by using equivalent current dipole fits. Further, our electroencephalography data show a robust MMN after a single standard event when the interval between two consecutive novel sounds is kept invariant. Our converging findings suggest that transient adaptation of feature-specific neurons within human posterior auditory cortex filters superfluous sounds from entering one's awareness.
While many studies have shown that visual information affects perception in the other modalities, little is known about how auditory and haptic information affect visual perception. In this study, we investigated how auditory, haptic, or combined auditory and haptic stimulation affects visual perception. We used a behavioral task in which subjects observed two identical visual objects moving toward each other, overlapping, and then continuing their original motion. Subjects may perceive the objects as either streaming through each other or bouncing and reversing their direction of motion. With only the visual motion stimulus, subjects usually report the objects as streaming, whereas if a sound or flash is played when the objects touch each other, subjects report the objects as bouncing (the Bounce-Inducing Effect). In this study, “auditory stimulation”, “haptic stimulation”, or “haptic and auditory stimulation” was presented at various times relative to the visual overlap of the objects. Our results show that the bouncing rate was highest when haptic and auditory stimulation were presented together. This result suggests that the Bounce-Inducing Effect is enhanced by simultaneous multimodal presentation alongside visual motion. In the future, a neuroscience approach (e.g., TMS, fMRI) may be required to elucidate the underlying brain mechanism.
Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation, compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation, the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.
Kondo, Hirohito M; Farkas, Dávid; Denham, Susan L; Asai, Tomohisa; Winkler, István
Multistability in perception is a powerful tool for investigating sensory-perceptual transformations, because it produces dissociations between sensory inputs and subjective experience. Spontaneous switching between different perceptual objects occurs during prolonged listening to a sound sequence of tone triplets or repeated words (termed auditory streaming and verbal transformations, respectively). We used these examples of auditory multistability to examine to what extent neurochemical and cognitive factors influence the observed idiosyncratic patterns of switching between perceptual objects. The concentrations of glutamate-glutamine (Glx) and γ-aminobutyric acid (GABA) in brain regions were measured by magnetic resonance spectroscopy, while personality traits and executive functions were assessed using questionnaires and response inhibition tasks. Idiosyncratic patterns of perceptual switching in the two multistable stimulus configurations were identified using a multidimensional scaling (MDS) analysis. Intriguingly, although switching patterns within each individual differed between auditory streaming and verbal transformations, similar MDS dimensions were extracted separately from the two datasets. Individual switching patterns were significantly correlated with Glx and GABA concentrations in auditory cortex and inferior frontal cortex but not with the personality traits and executive functions. Our results suggest that auditory perceptual organization depends on the balance between neural excitation and inhibition in different brain regions. This article is part of the themed issue 'Auditory and visual scene analysis'.
Brown, Rachel M; Palmer, Caroline
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
Warnock, Mairi; Boss, Marvin W.
Eighty fourth-graders enrolled in an English/French bilingual program in Canada were administered an auditory skills battery of six tests to measure auditory discrimination and short-term auditory memory. It was concluded that a relationship exists between certain auditory perceptual abilities and school achievement independent of cognitive…
Häkkinen, Suvi; Ovaska, Noora; Rinne, Teemu
The relationship between stimulus-dependent and task-dependent activations in human auditory cortex (AC) during pitch and location processing is not well understood. In the present functional magnetic resonance imaging study, we investigated the processing of task-irrelevant and task-relevant pitch and location during discrimination, n-back, and visual tasks. We tested three hypotheses: (1) According to prevailing auditory models, stimulus-dependent processing of pitch and location should be associated with enhanced activations in distinct areas of the anterior and posterior superior temporal gyrus (STG), respectively. (2) Based on our previous studies, task-dependent activation patterns during discrimination and n-back tasks should be similar when these tasks are performed on sounds varying in pitch or location. (3) Previous studies in humans and animals suggest that pitch and location tasks should enhance activations especially in those areas that also show activation enhancements associated with stimulus-dependent pitch and location processing, respectively. Consistent with our hypotheses, we found stimulus-dependent sensitivity to pitch and location in anterolateral STG and anterior planum temporale (PT), respectively, in line with the view that these features are processed in separate parallel pathways. Further, task-dependent activations during discrimination and n-back tasks were associated with enhanced activations in anterior/posterior STG and posterior STG/inferior parietal lobule (IPL) irrespective of stimulus features. However, direct comparisons between pitch and location tasks performed on identical sounds revealed no significant activation differences. These results suggest that activations during pitch and location tasks are not strongly affected by enhanced stimulus-dependent activations to pitch or location. We also found that activations in PT were strongly modulated by task requirements and that areas in the inferior parietal lobule (IPL) showed
Kim, Bong Jik; Kim, Jungyoon; Park, Il-Yong; Jung, Jae Yun; Suh, Myung-Whan; Oh, Seung-Ha
The central auditory pathway matures through sensory experience, and it is known that sensory experiences during periods called critical periods exert an important influence on brain development. The present study aimed to investigate whether temporary auditory deprivation during critical periods (CPs) could have a detrimental effect on the development of auditory temporal processing. Twelve neonatal rats were randomly assigned to control and study groups; the study group experienced temporary (18-20 days) auditory deprivation during CPs (early deprivation study group). Outcome measures included changes in auditory brainstem response (ABR), gap prepulse inhibition of the acoustic startle reflex (GPIAS), and gap detection threshold (GDT). To further delineate the specific role of CPs in the outcome measures above, the same paradigm was applied in adult rats (late deprivation group) and the findings were compared with those of the neonatal rats. Soon after the restoration of hearing, early deprivation study animals showed a significantly lower GPIAS at intermediate gap durations and a larger GDT than early deprivation controls, but these differences became insignificant after subsequent auditory input. Additionally, the ABR results showed significantly delayed latencies of waves IV and V and interpeak latencies of waves I-III and I-V in the study group. The late deprivation group did not exhibit any deterioration in temporal processing following sensory deprivation. Taken together, the present results suggest that transient auditory deprivation during CPs might cause reversible disruptions in the development of temporal processing.
Lim, Sung-Joo; Wöstmann, Malte; Obleser, Jonas
Selective attention to a task-relevant stimulus facilitates encoding of that stimulus into a working memory representation. It is less clear whether selective attention also improves the precision of a stimulus already represented in memory. Here, we investigate the behavioral and neural dynamics of selective attention to representations in auditory working memory (i.e., auditory objects) using psychophysical modeling and model-based analysis of electroencephalographic signals. Human listeners performed a syllable pitch discrimination task where two syllables served as to-be-encoded auditory objects. Valid (vs neutral) retroactive cues were presented during retention to allow listeners to selectively attend to the to-be-probed auditory object in memory. Behaviorally, listeners represented auditory objects in memory more precisely (expressed by steeper slopes of a psychometric curve) and made faster perceptual decisions when valid compared to neutral retrocues were presented. Neurally, valid compared to neutral retrocues elicited a larger frontocentral sustained negativity in the evoked potential as well as enhanced parietal alpha/low-beta oscillatory power (9-18 Hz) during memory retention. Critically, individual magnitudes of alpha oscillatory power (7-11 Hz) modulation predicted the degree to which valid retrocues benefitted individuals' behavior. Our results indicate that selective attention to a specific object in auditory memory does benefit human performance not by simply reducing memory load, but by actively engaging complementary neural resources to sharpen the precision of the task-relevant object in memory. Can selective attention improve the representational precision with which objects are held in memory? And if so, what are the neural mechanisms that support such improvement? These issues have been rarely examined within the auditory modality, in which acoustic signals change and vanish on a milliseconds time scale. Introducing a new auditory memory
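The "steeper slope of a psychometric curve" measure of representational precision can be sketched with a toy simulation (invented parameter values, not the authors' psychophysical model): responses from a more precise (valid-retrocue) and a less precise (neutral-retrocue) observer are fit with a logistic psychometric function, and the fitted spread parameter `beta` comes out smaller, i.e., the curve steeper, for the more precise observer.

```python
import numpy as np

def psychometric(x, alpha, beta):
    """Logistic psychometric function: P(respond 'higher') vs. pitch difference x."""
    return 1.0 / (1.0 + np.exp(-(x - alpha) / beta))

def fit_beta(x, prop):
    """Grid-search least-squares fit of the spread beta (alpha fixed at 0)."""
    betas = np.linspace(0.05, 2.0, 400)
    errs = [np.sum((psychometric(x, 0.0, b) - prop) ** 2) for b in betas]
    return betas[int(np.argmin(errs))]

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 9)   # probe-minus-target pitch difference (arbitrary units)
n = 200                     # trials per difference level

def simulate(true_beta):
    """Binomial response proportions from an observer with spread true_beta."""
    return rng.binomial(n, psychometric(x, 0.0, true_beta)) / n

prop_valid, prop_neutral = simulate(0.3), simulate(0.8)
beta_valid, beta_neutral = fit_beta(x, prop_valid), fit_beta(x, prop_neutral)
# Smaller fitted beta = steeper curve = more precise memory representation.
```

At the curve's midpoint the slope equals 1/(4·beta), so a smaller fitted beta corresponds directly to the steeper psychometric slope described above.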
Delorme, Arnaud; Polich, John
Long-term Vipassana meditators sat in meditation vs. control (instructed mind-wandering) states for 25 min while electroencephalography (EEG) was recorded; condition order was counterbalanced. For the last 4 min, a three-stimulus auditory oddball series was presented during both meditation and control periods through headphones, with no task imposed. Time-frequency analysis demonstrated that meditation, relative to the control condition, evinced decreased evoked delta (2–4 Hz) power to distracter stimuli concomitantly with a greater event-related reduction of late (500–900 ms) alpha-1 (8–10 Hz) activity, which indexed altered dynamics of attentional engagement to distracters. Additionally, standard stimuli were associated with increased early event-related alpha phase synchrony (inter-trial coherence) and evoked theta (4–8 Hz) phase synchrony, suggesting enhanced processing of the habituated standard background stimuli. Finally, during meditation, there was a greater differential early-evoked gamma power to the different stimulus classes. Correlation analysis indicated that this effect stemmed from a meditation state-related increase in early distracter-evoked gamma power and phase synchrony specific to longer-term expert practitioners. The findings suggest that Vipassana meditation evokes a brain state of enhanced perceptual clarity and decreased automated reactivity.
Berwick, Robert C; Pietroski, Paul; Yankama, Beracah; Chomsky, Noam
A central goal of modern generative grammar has been to discover invariant properties of human languages that reflect "the innate schematism of mind that is applied to the data of experience" and that "might reasonably be attributed to the organism itself as its contribution to the task of the acquisition of knowledge" (Chomsky, 1971). Candidates for such invariances include the structure dependence of grammatical rules, and in particular, certain constraints on question formation. Various "poverty of stimulus" (POS) arguments suggest that these invariances reflect an innate human endowment, as opposed to common experience: Such experience warrants selection of the grammars acquired only if humans assume, a priori, that selectable grammars respect substantive constraints. Recently, several researchers have tried to rebut these POS arguments. In response, we illustrate why POS arguments remain an important source of support for appeal to a priori structure-dependent constraints on the grammars that humans naturally acquire. Copyright © 2011 Cognitive Science Society, Inc.
Favrot, Sylvain Emmanuel
A loudspeaker-based virtual auditory environment (VAE) has been developed to provide a realistic, versatile research environment for investigating auditory signal processing in real environments, i.e., considering multiple sound sources and room reverberation. The VAE allows full control of the acoustic scenario in order to systematically study the auditory processing of reverberant sounds. It is based on the ODEON software, which is state-of-the-art software for room acoustic simulations developed at Acoustic Technology, DTU. First, a MATLAB interface to the ODEON software has been developed...
Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju
It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity.
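The level-dependent threshold shift described here can be summarized with a toy model (the baseline threshold, critical level, and unity slope are illustrative assumptions, not figures from the study): below a critical noise level the tone-evoked threshold is unchanged, and above it the threshold rises linearly, translating the TRF upward along the intensity axis.

```python
def tone_threshold(noise_db, base_threshold=20.0, critical=30.0, slope=1.0):
    """Toy model of a neuron's tone-evoked intensity threshold (dB SPL)
    as a function of continuous background-noise level (dB SPL).

    Below the critical/effective noise level the threshold is unchanged;
    above it, the threshold rises linearly with the noise level.
    """
    return base_threshold + slope * max(0.0, noise_db - critical)

# Flat below the critical level, then a linear upward translation
# of the receptive field along the intensity axis.
thresholds = [tone_threshold(n) for n in (10, 30, 50, 70)]
```

With these assumed values, raising the noise from 30 to 50 dB SPL raises the threshold by the same 20 dB, which is the sense in which the noise is "equivalent to reducing stimulus intensity."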
Wu, M F; Ison, J R; Wecker, J R; Lapham, L W
Rats were given a total dose of 50 mg/kg (Exp. 1), 13.3 or 40 mg/kg (Exp. 2), or 40 mg/kg (Exp. 3) of methyl mercury chloride subcutaneously over a course of 5 days. At varying times after the toxic exposure, up to 1 year, their sensory functioning was assessed by reflex modulation methods: stimuli of interest were presented just before an intense tone which elicited the startle reflex, and stimulus reception was measured by the inhibitory control of the stimuli over the amplitude of the reflex. In Experiment 1 cutaneous prestimuli (electric shock to the tail) and brief acoustic transients (silent periods in noise) were less effective inhibitors of reflex activity in poisoned animals, compared to controls, indicating that the poisoned animals had impairments in cutaneous sensitivity and audition. In Experiment 2 the time course of sensory loss and subsequent recovery was studied. Impaired auditory function was shown further by a deficit in the effectiveness of weak noise pulses, and, in addition, the cutaneous deficit for weak tail shocks was accompanied by an exaggerated or hyperpathic response to more intense tail shocks. Experiment 3 confirmed the finding that the loss of sensitivity to weak shock was accompanied by an enhancement of the response to more intense shock. These data were related to peripheral neuropathy and shown to be analogous to certain clinical symptoms of Minamata disease reported in humans.
Nees, Michael A
Researchers have shown increased interest in mechanisms of working memory for nonverbal sounds such as music and environmental sounds. These studies often have used two-stimulus comparison tasks: two sounds separated by a brief retention interval (often 3-5 s) are compared, and a "same" or "different" judgment is recorded. Researchers seem to have assumed that sensory memory has a negligible impact on performance in auditory two-stimulus comparison tasks. This assumption is examined in detail in this comment. According to seminal texts and recent research reports, sensory memory persists in parallel with working memory for a period of time following hearing a stimulus and can influence behavioral responses on memory tasks. Unlike verbal working memory studies that use serial recall tasks, research paradigms for exploring nonverbal working memory-especially two-stimulus comparison tasks-may not be differentiating working memory from sensory memory processes in analyses of behavioral responses, because retention interval durations have not excluded the possibility that the sensory memory trace drives task performance. This conflation of different constructs may be one contributor to discrepant research findings and the resulting proliferation of theoretical conjectures regarding mechanisms of working memory for nonverbal sounds.
Javitt, D C; Grochowski, S; Shelley, A M; Ritter, W
Schizophrenia is a severe mental disorder associated with disturbances in perception and cognition. Event-related potentials (ERP) provide a mechanism for evaluating potential mechanisms underlying neurophysiological dysfunction in schizophrenia. Mismatch negativity (MMN) is a short-duration auditory cognitive ERP component that indexes operation of the auditory sensory ('echoic') memory system. Prior studies have demonstrated impaired MMN generation in schizophrenia along with deficits in auditory sensory memory performance. MMN is elicited in an auditory oddball paradigm in which a sequence of repetitive standard tones is interrupted infrequently by a physically deviant ('oddball') stimulus. The present study evaluates MMN generation as a function of deviant stimulus probability, interstimulus interval, interdeviant interval and the degree of pitch separation between the standard and deviant stimuli. The major findings of the present study are first, that MMN amplitude is decreased in schizophrenia across a broad range of stimulus conditions, and second, that the degree of deficit in schizophrenia is largest under conditions when MMN is normally largest. The pattern of deficit observed in schizophrenia differs from the pattern observed in other conditions associated with MMN dysfunction, including Alzheimer's disease, stroke, and alcohol intoxication.
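The oddball paradigm and difference-wave logic described in this abstract (repetitive standards interrupted by infrequent deviants, MMN taken as deviant minus standard average) can be sketched numerically. The following is a toy simulation, not the authors' data or analysis pipeline: the epoch shape, the 170 ms negative deflection, the noise level, and the 10% deviant probability used here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 500
p_deviant = 0.10  # assumed deviant ('oddball') probability

# Oddball sequence: True marks a deviant stimulus
is_deviant = rng.random(n_trials) < p_deviant

t = np.linspace(0, 0.4, 400)  # 400 ms epoch, in seconds

def erp(deviant):
    """Toy single-trial ERP: deviants add a negative deflection near 170 ms (the 'MMN')."""
    wave = 0.5 * np.sin(2 * np.pi * 4 * t)  # generic evoked activity
    if deviant:
        wave -= 2.0 * np.exp(-((t - 0.17) / 0.05) ** 2)
    return wave + 0.3 * rng.standard_normal(t.size)

epochs = np.array([erp(d) for d in is_deviant])

# MMN difference wave = deviant average - standard average
mmn = epochs[is_deviant].mean(axis=0) - epochs[~is_deviant].mean(axis=0)
peak_latency_ms = t[np.argmin(mmn)] * 1000
print(round(peak_latency_ms))  # near the simulated 170 ms deflection
```

Averaging removes the activity common to both trial types, so only the deviance-related deflection survives in the difference wave; this is why MMN amplitude, not raw ERP amplitude, indexes the discrimination process.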
Comparison of the Effects of Two Auditory Methods by Mother and Fetus on the Results of Non-Stress Test (Baseline Fetal Heart Rate and Number of Accelerations) in Pregnant Women: A Randomized Controlled Trial
Objective: To compare the effects of two auditory methods by mother and fetus on the results of NST in 2011-2012. Materials and methods: In this single-blind clinical trial, 213 pregnant women with gestational age of 37-41 weeks who had no pregnancy complications were randomly divided into 3 groups (auditory intervention for mother, auditory intervention for fetus, and control), each containing 71 subjects. In the intervention groups, music was played through the second 10 minutes of NST. The three groups were compared regarding baseline fetal heart rate and number of accelerations in the first and second 10 minutes of NST. The data were analyzed using one-way ANOVA, Kruskal-Wallis, and paired T-test. Results: The results showed no significant difference among the three groups regarding baseline fetal heart rate in the first (p = 0.945) and second (p = 0.763) 10 minutes. However, a significant difference was found among the three groups concerning the number of accelerations in the second 10 minutes. Also, a significant difference was observed in the number of accelerations in the auditory intervention for mother (p = 0.013) and auditory intervention for fetus (p < 0.001) groups. The difference between the number of accelerations in the first and second 10 minutes was also statistically significant (p = 0.002). Conclusion: Music intervention was effective on the number of accelerations, which is an indicator of fetal health. Yet, further studies are required on the issue.
Various studies have highlighted plasticity of the auditory system from visual stimuli, limiting the trained field of perception. The aim of the present study is to investigate auditory system adaptation using an audio-kinesthetic platform. Participants were placed in a Virtual Auditory Environment allowing the association of the physical position of a virtual sound source with an alternate set of acoustic spectral cues, or Head-Related Transfer Function (HRTF), through the use of a tracked ball manipulated by the subject. This set-up has the advantage of not being limited to the visual field while also offering a natural perception-action coupling through the constant awareness of one's hand position. Adaptation to non-individualized HRTFs was realized through a spatial search game application. A total of 25 subjects participated, consisting of subjects presented with modified cues using non-individualized HRTFs and a control group using individually measured HRTFs to account for any learning effect due to the game itself. The training game lasted 12 minutes and was repeated over 3 consecutive days. Adaptation effects were measured with repeated localization tests. Results showed a significant performance improvement for vertical localization and a significant reduction in the front/back confusion rate after 3 sessions.
Profant, Oliver; Roth, Jan; Bureš, Zbyněk; Balogová, Zuzana; Lišková, Irena; Betka, Jan; Syka, Josef
Huntington's disease (HD) is an autosomal, dominantly inherited, neurodegenerative disease. The main clinical features are motor impairment, progressive cognitive deterioration and behavioral changes. The aim of our study was to find out whether patients with HD suffer from disorders of the auditory system. A group of 17 genetically verified patients (11 males, 6 females) with various stages of HD (examined by UHDRS - motor part and total functional capacity, MMSE for cognitive functions) underwent an audiological examination (high frequency pure tone audiometry, otoacoustic emissions, speech audiometry, speech audiometry in babble noise, auditory brainstem responses). Additionally, 5 patients underwent a more extensive audiological examination, focused on central auditory processing. The results were compared with a group of age-matched healthy volunteers. Our results show that HD patients have physiologic hearing thresholds, otoacoustic emissions and auditory brainstem responses; however, they display a significant decrease in speech understanding, especially under demanding conditions (speech in noise) compared to age-matched controls. Additional auditory tests also show deficits in sound source localization, based on temporal and intensity cues. We also observed a statistically significant correlation between the perception of speech in noise, and motoric and cognitive functions. However, a correlation between genetic predisposition (number of triplets) and function of inner ear was not found. We conclude that HD negatively influences the function of the central part of the auditory system at cortical and subcortical levels, altering predominantly speech processing and sound source lateralization. We have thoroughly characterized auditory pathology in patients with HD that suggests involvement of central auditory and cognitive areas. Copyright © 2017. Published by Elsevier B.V.
Ali Akbar Tahaei
Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at the higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives: To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods: Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results: There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions: Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and their abnormal timing may underlie their disfluency.
Vincent, Philippe F Y; Bouleau, Yohan; Charpentier, Gilles; Emptoz, Alice; Safieddine, Saaid; Petit, Christine; Dulon, Didier
The mechanisms orchestrating transient and sustained exocytosis in auditory inner hair cells (IHCs) remain largely unknown. These exocytotic responses are believed to mobilize sequentially a readily releasable pool of vesicles (RRP) underneath the synaptic ribbons and a slowly releasable pool of vesicles (SRP) at a farther distance from them. They are both governed by Cav1.3 channels and require otoferlin as Ca2+ sensor, but whether they use the same Cav1.3 isoforms is still unknown. Using whole-cell patch-clamp recordings in posthearing mice, we show that only a proportion (∼25%) of the total Ca2+ current in IHCs, displaying fast inactivation and resistance to 20 μM nifedipine (an L-type Ca2+ channel blocker), is sufficient to trigger RRP but not SRP exocytosis. This Ca2+ current is likely conducted by short C-terminal isoforms of Cav1.3 channels, notably Cav1.3-42A and Cav1.3-43S, because their mRNA is highly expressed in wild-type IHCs but poorly expressed in Otof-/- IHCs, the latter having Ca2+ currents with considerably reduced inactivation. Nifedipine-resistant RRP exocytosis was poorly affected by 5 mM intracellular EGTA, suggesting that the Cav1.3 short isoforms are closely associated with the release site at the synaptic ribbons. Conversely, our results suggest that Cav1.3 long isoforms, which carry ∼75% of the total IHC Ca2+ current with slow inactivation and confer high sensitivity to nifedipine and to internal EGTA, are essentially involved in recruiting SRP vesicles. Intracellular Ca2+ imaging showed that Cav1.3 long isoforms support a deep intracellular diffusion of Ca2+. SIGNIFICANCE STATEMENT: Auditory inner hair cells (IHCs) encode sounds into nerve impulses through fast and indefatigable Ca2+-dependent exocytosis at their ribbon synapses. We show that this synaptic process involves long and short C-terminal isoforms of the Cav1.3 Ca2+ channel that differ in the kinetics of their Ca2+-dependent inactivation and their...
Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.
Kawasaki, Masahiro; Kitajo, Keiichi; Yamaguchi, Yoko
In humans, theta phase (4-8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from subjects who were performing auditory-verbal and visual WM tasks; we compared the theta synchronizations when subjects performed either auditory-verbal or visual manipulations in separate WM tasks, or performed both two manipulations in the same WM task. The auditory-verbal WM task required subjects to calculate numbers presented by an auditory-verbal stimulus, whereas the visual WM task required subjects to move a spatial location in a mental representation in response to a visual stimulus. The dual WM task required subjects to manipulate auditory-verbal, visual, or both auditory-verbal and visual representations while maintaining auditory-verbal and visual representations. Our time-frequency EEG analyses revealed significant fronto-temporal theta phase synchronization during auditory-verbal manipulation in both auditory-verbal and auditory-verbal/visual WM tasks, but not during visual manipulation tasks. Similarly, we observed significant fronto-parietal theta phase synchronization during visual manipulation tasks, but not during auditory-verbal manipulation tasks. Moreover, we observed significant synchronization in both the fronto-temporal and fronto-parietal theta signals during simultaneous auditory-verbal/visual manipulations. These findings suggest that theta synchronization seems to flexibly connect the brain areas that manipulate WM.
Lerud, Karl D; Almonte, Felix V; Kim, Ji Chul; Large, Edward W
The auditory nervous system is highly nonlinear. Some nonlinear responses arise through active processes in the cochlea, while others may arise in neural populations of the cochlear nucleus, inferior colliculus and higher auditory areas. In humans, auditory brainstem recordings reveal nonlinear population responses to combinations of pure tones, and to musical intervals composed of complex tones. Yet the biophysical origin of central auditory nonlinearities, their signal processing properties, and their relationship to auditory perception remain largely unknown. Both stimulus components and nonlinear resonances are well represented in auditory brainstem nuclei due to neural phase-locking. Recently mode-locking, a generalization of phase-locking that implies an intrinsically nonlinear processing of sound, has been observed in mammalian auditory brainstem nuclei. Here we show that a canonical model of mode-locked neural oscillation predicts the complex nonlinear population responses to musical intervals that have been observed in the human brainstem. The model makes predictions about auditory signal processing and perception that are different from traditional delay-based models, and may provide insight into the nature of auditory population responses. We anticipate that the application of dynamical systems analysis will provide the starting point for generic models of auditory population dynamics, and lead to a deeper understanding of nonlinear auditory signal processing possibly arising in excitatory-inhibitory networks of the central auditory nervous system. This approach has the potential to link neural dynamics with the perception of pitch, music, and speech, and lead to dynamical models of auditory system development. Copyright © 2013 Elsevier B.V. All rights reserved.
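Mode-locking itself, the generalization of phase-locking invoked above, can be illustrated with the textbook sine circle map; this is a generic pedagogical model, not the canonical oscillator model used in the study. Over finite intervals of the drive frequency the rotation (winding) number locks to a rational ratio instead of tracking the drive, which is the signature of intrinsically nonlinear processing.

```python
import numpy as np

def winding_number(omega, k=1.0, n=2000, burn=500):
    """Average rotation per iterate of the sine circle map
    theta_{n+1} = theta_n + omega + (k / 2pi) * sin(2pi * theta_n)."""
    theta = 0.0
    total = 0.0
    for i in range(n):
        step = omega + (k / (2 * np.pi)) * np.sin(2 * np.pi * theta)
        theta += step
        if i >= burn:
            total += step
    return total / (n - burn)

# Locking plateau: different drive frequencies, same (zero) rotation number
for omega in (0.05, 0.10, 0.15):
    print(omega, round(winding_number(omega), 4))  # all ~0.0: locked to the 0:1 plateau

print(round(winding_number(0.5), 4))  # locked to the 1:2 ratio
```

The plateaus (Arnold tongues) widen with coupling strength `k`; a purely linear filter would instead show the output frequency following the input continuously.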
Namazi, Hamidreza; Khosrowabadi, Reza; Hussaini, Jamal; Habibi, Shaghayegh; Farid, Ali Akhavan; Kulish, Vladimir V
One of the major challenges in brain research is to relate the structural features of an auditory stimulus to structural features of the electroencephalogram (EEG) signal. Memory content is an important feature of the EEG signal and, accordingly, of the brain. On the other hand, memory content can also be considered for the stimulus. Despite all the work done on analyzing the effect of stimuli on the human EEG and brain memory, no work has discussed the memory of the stimulus itself, or the relationship that may exist between the memory content of the stimulus and the memory content of the EEG signal. For this purpose, we consider the Hurst exponent as the measure of memory. This study reveals the plasticity of human EEG signals in relation to auditory stimuli. For the first time, we demonstrate that the memory content of an EEG signal shifts towards the memory content of the auditory stimulus used. The results of this analysis showed that an auditory stimulus with higher memory content causes a larger increment in the memory content of an EEG signal. To verify this result, we use approximate entropy as an indicator of time-series randomness. The capability observed in this research can be further investigated in relation to human memory.
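The Hurst exponent used here as the memory measure can be estimated with classical rescaled-range (R/S) analysis. The abstract does not specify which estimator the authors used, so the following is a minimal generic sketch: H near 0.5 indicates a memoryless signal, H approaching 1 indicates long-range persistence.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of a 1-D signal via rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs_per_chunk = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())  # cumulative deviation from the chunk mean
            r = dev.max() - dev.min()              # range of the cumulative series
            s = chunk.std()                        # standard deviation of the chunk
            if s > 0:
                rs_per_chunk.append(r / s)
        if rs_per_chunk:
            sizes.append(size)
            rs_vals.append(np.mean(rs_per_chunk))
        size *= 2
    # E[R/S] ~ c * size^H, so H is the slope in log-log coordinates
    h, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return h

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)  # memoryless signal: H near 0.5
walk = np.cumsum(white)            # strongly persistent (integrated) signal: H near 1
print(hurst_rs(white), hurst_rs(walk))
```

In practice the raw R/S estimator is biased upward for short windows (corrections such as Anis-Lloyd exist), so comparisons between signals of the same length, as in the study, are safer than absolute values.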
Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R.; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg
Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463
Jeng, Fuh-Cherng; Abbas, Paul J; Brown, Carolyn J; Miller, Charles A; Nourski, Kirill V; Robinson, Barbara K
Most cochlear implant systems available today provide the user with information about the envelope of the speech signal. The goal of this study was to explore the feasibility of recording the electrically evoked auditory steady-state response (ESSR) and, in particular, to evaluate the degree to which the response recorded using electrical stimulation could be separated from stimulus artifact. Sinusoidally amplitude-modulated electrical stimuli with alternating polarities were used to elicit the response in adult guinea pigs. Separation of the stimulus artifact from evoked neural responses was achieved by summing alternating-polarity responses or by using spectral analysis techniques. The recorded responses exhibited physiological properties, including a pattern of nonlinear growth and abolition following euthanasia or administration of tetrodotoxin. These findings demonstrate that the ESSR is a response generated by the auditory system and can be separated from electrical stimulus artifact. As it is evoked by a stimulus that shares important features of cochlear implant stimulation, this evoked potential may be useful in either clinical or basic research efforts. Copyright 2007 S. Karger AG, Basel.
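The alternating-polarity summation described above relies on the artifact inverting with stimulus polarity while the neural response does not. A minimal synthetic sketch of that cancellation (all waveforms and amplitudes here are invented for illustration, not the study's recordings):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 0.05, 500)  # 50 ms recording epoch

neural = np.exp(-((t - 0.01) / 0.004) ** 2)    # polarity-invariant evoked response
artifact = 5.0 * np.sin(2 * np.pi * 200 * t)   # stimulus artifact, follows stimulus polarity

# Epochs recorded to opposite-polarity stimuli (plus measurement noise)
epoch_pos = neural + artifact + 0.05 * rng.standard_normal(t.size)
epoch_neg = neural - artifact + 0.05 * rng.standard_normal(t.size)

# Averaging opposite-polarity epochs cancels the artifact and keeps the response
recovered = (epoch_pos + epoch_neg) / 2

print(np.abs(recovered - neural).max())  # residual is just the averaged noise
```

The same logic underlies the spectral alternative mentioned in the abstract: after polarity alternation, artifact energy and response energy separate into different frequency components of the averaged record.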
Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim
When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than those obtained with visual stimuli, even though less attention should be demanded from the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance ... vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did ...
Bruneau, N; Roux, S; Adrien, J L; Barthélémy, C
Auditory processing at the cortical level was investigated with late auditory evoked potentials (N1 wave-T complex) in 4-8-year-old autistic children with mental retardation and compared to both age-matched normal and mentally retarded children (16 children in each group). Two negative peaks which occurred in the 80-200 ms latency range were analyzed according to stimulus intensity level (50 to 80 dB SPL): the first culminated at fronto-central sites (N1b) and the second at bitemporal sites (N1c, equivalent to Tb of the T complex). The latter wave was the most prominent and reliable response in normal children at this age. Our results in autistic children indicated abnormalities of this wave, with markedly smaller amplitude at bitemporal sites and pronounced peak latency delay (around 20 ms). Moreover, in both reference groups the intensity effect was found on both sides, whereas in autistic children it was absent on the left side but present on the right. These findings in autistic children showing very disturbed verbal communication argue for dysfunction in brain areas involved in N1c generation, i.e., the auditory association cortex in the lateral part of the superior temporal gyrus, with more specific left-side defects when auditory stimuli have to be processed.
Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p < 0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
Cliff, Michael; Joyce, Dan W; Lamar, Melissa; Dannhauser, Thomas; Tracy, Derek K; Shergill, Sukhwinder S
Traditionally, studies investigating the functional implications of age-related structural brain alterations have focused on higher cognitive processes; by increasing stimulus load, these studies assess behavioral and neurophysiological performance. In order to understand age-related changes in these higher cognitive processes, it is crucial to examine changes in visual and auditory processes that are the gateways to higher cognitive functions. This study provides evidence for age-related functional decline in visual and auditory processing, and regional alterations in functional brain processing, using non-invasive neuroimaging. Using functional magnetic resonance imaging (fMRI), younger (n=11; mean age=31) and older (n=10; mean age=68) adults were imaged while observing flashing checkerboard images (passive visual stimuli) and hearing word lists (passive auditory stimuli) across varying stimulus presentation rates. Younger adults showed greater overall levels of temporal and occipital cortical activation than older adults for both auditory and visual stimuli. The relative change in activity as a function of stimulus presentation rate showed differences between young and older participants. In visual cortex, the older group showed a decrease in fMRI blood oxygen level dependent (BOLD) signal magnitude as stimulus frequency increased, whereas the younger group showed a linear increase. In auditory cortex, the younger group showed a relative increase as a function of word presentation rate, while older participants showed a relatively stable magnitude of fMRI BOLD response across all rates. When analyzing participants across all ages, only the auditory cortical activation showed a continuous, monotonically decreasing BOLD signal magnitude as a function of age. Our preliminary findings show an age-related decline in demand-related, passive early sensory processing. As stimulus demand increases, visual and auditory cortices do not show corresponding increases in activity in older adults.
Strait, Dana L; Kraus, Nina; Parbery-Clark, Alexandra; Ashley, Richard
A growing body of research suggests that cognitive functions, such as attention and memory, drive perception by tuning sensory mechanisms to relevant acoustic features. Long-term musical experience also modulates lower-level auditory function, although the mechanisms by which this occurs remain uncertain. In order to tease apart the mechanisms that drive perceptual enhancements in musicians, we posed the question: do well-developed cognitive abilities fine-tune auditory perception in a top-down fashion? We administered a standardized battery of perceptual and cognitive tests to adult musicians and non-musicians, including tasks either more or less susceptible to cognitive control (e.g., backward versus simultaneous masking) and more or less dependent on auditory or visual processing (e.g., auditory versus visual attention). Outcomes indicate lower perceptual thresholds in musicians specifically for auditory tasks that relate to cognitive abilities, such as backward masking and auditory attention. These enhancements were observed in the absence of group differences for the simultaneous masking and visual attention tasks. Our results suggest that long-term musical practice strengthens cognitive functions and that these functions benefit auditory skills. Musical training bolsters higher-level mechanisms that, when impaired, relate to language and literacy deficits. Thus, musical training may serve to lessen the impact of these deficits by strengthening the corticofugal system for hearing. Copyright © 2009 Elsevier B.V. All rights reserved.
Leitman, David I; Laukka, Petri; Juslin, Patrik N; Saccente, Erica; Butler, Pamela; Javitt, Daniel C
Individuals with schizophrenia show reliable deficits in the ability to recognize emotions from vocal expressions. Here, we examined emotion recognition ability in 23 schizophrenia patients relative to 17 healthy controls using a stimulus battery with well-characterized acoustic features. We further evaluated performance deficits relative to ancillary assessments of underlying pitch perception abilities. As predicted, patients showed reduced emotion recognition ability across a range of emotions, which correlated with impaired basic tone matching abilities. Emotion identification deficits were strongly related to pitch-based acoustic cues such as mean and variability of fundamental frequency. Whereas healthy subjects' performance varied as a function of the relative presence or absence of these cues, with higher cue levels leading to enhanced performance, schizophrenia patients showed significantly less variation in performance as a function of cue level. In contrast to pitch-based cues, both groups showed equivalent variation in performance as a function of intensity-based cues. Finally, patients were less able than controls to differentiate between expressions with high and low emotion intensity, and this deficit was also correlated with impaired tone matching ability. Both emotion identification and intensity rating deficits were unrelated to valence of intended emotions. Deficits in both auditory emotion identification and more basic perceptual abilities correlated with impaired functional outcome. Overall, these findings support the concept that auditory emotion identification deficits in schizophrenia reflect, at least in part, a relative inability to process critical acoustic characteristics of prosodic stimuli and that such deficits contribute to poor global outcome.
Klingenhoefer, Steffen; Bremmer, Frank
Interaction with the outside world requires knowledge about where objects are with respect to one's own body. Such spatial information is represented in various topographic maps in different sensory systems. From a computational point of view, however, a single, modality-invariant map of the incoming sensory signals appears to be a more efficient strategy for spatial representations. If such a single supra-modal map existed and were used for perceptual purposes, localization characteristics should be similar across modalities. Previous studies had shown mislocalization of brief visual stimuli presented in the temporal vicinity of saccadic eye-movements. Here, we tested whether such mislocalizations could also be found for auditory stimuli. We presented brief noise bursts before, during, and after visually guided saccades. Indeed, we found localization errors for these auditory stimuli. The spatio-temporal pattern of this mislocalization, however, clearly differed from the one found for visual stimuli. The spatial error also depended on the exact type of eye-movement (visually guided vs. memory guided saccades). Finally, results obtained in fixational control paradigms under different conditions suggest that auditory localization can be strongly influenced by both static and dynamic visual stimuli. Visual localization, on the other hand, is not influenced by distracting visual stimuli but can be inaccurate in the temporal vicinity of eye-movements. Taken together, our results argue against a single, modality-independent spatial representation of sensory signals.
Mustovic, Henrietta; Scheffler, Klaus; Di Salle, Francesco; Esposito, Fabrizio; Neuhoff, John G; Hennig, Jürgen; Seifritz, Erich
Temporal integration is a fundamental process that the brain carries out to construct coherent percepts from serial sensory events. This process critically depends on the formation of memory traces reconciling past with present events and is particularly important in the auditory domain where sensory information is received both serially and in parallel. It has been suggested that buffers for transient auditory memory traces reside in the auditory cortex. However, previous studies investigating "echoic memory" did not distinguish between brain response to novel auditory stimulus characteristics on the level of basic sound processing and a higher level involving matching of present with stored information. Here we used functional magnetic resonance imaging in combination with a regular pattern of sounds repeated every 100 ms and deviant interspersed stimuli of 100-ms duration, which were either brief presentations of louder sounds or brief periods of silence, to probe the formation of auditory memory traces. To avoid interaction with scanner noise, the auditory stimulation sequence was implemented into the image acquisition scheme. Compared to increased loudness events, silent periods produced specific neural activation in the right planum temporale and temporoparietal junction. Our findings suggest that this area posterior to the auditory cortex plays a critical role in integrating sequential auditory events and is involved in the formation of short-term auditory memory traces. This function of the planum temporale appears to be fundamental in the segregation of simultaneous sound sources.
Ikuta, Toshikazu; DeRosse, Pamela; Argyelan, Miklos; Karlsgodt, Katherine H; Kingsley, Peter B; Szeszko, Philip R; Malhotra, Anil K
Hearing perception in individuals with auditory hallucinations has not been well studied. Auditory hallucinations have previously been shown to involve primary auditory cortex activation. This activation suggests that auditory hallucinations activate the terminal of the auditory pathway as if auditory signals are submitted from the cochlea, and that a hallucinatory event is therefore perceived as hearing. The primary auditory cortex is stimulated by some unknown source that is outside of the auditory pathway. The current study aimed to assess the outcomes of stimulating the primary auditory cortex through the auditory pathway in individuals who have experienced auditory hallucinations. Sixteen patients with schizophrenia underwent functional magnetic resonance imaging (fMRI) sessions, as well as hallucination assessments. During the fMRI session, auditory stimuli were presented in one-second intervals at times when scanner noise was absent. Participants listened to auditory stimuli of sine waves (SW) (4-5.5kHz), English words (EW), and acoustically reversed English words (arEW) in a block design fashion. The arEW were employed to deliver the sound of a human voice with minimal linguistic components. Patients' auditory hallucination severity was assessed by the auditory hallucination item of the Brief Psychiatric Rating Scale (BPRS). During perception of arEW when compared with perception of SW, bilateral activation of the globus pallidus correlated with severity of auditory hallucinations. EW when compared with arEW did not correlate with auditory hallucination severity. Our findings suggest that the sensitivity of the globus pallidus to the human voice is associated with the severity of auditory hallucination. Copyright © 2015 Elsevier B.V. All rights reserved.
Cai, Shanqing; Beal, Deryk S.; Ghosh, Satrajit S.; Tiede, Mark K.; Guenther, Frank H.; Perkell, Joseph S.
Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants’ compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls’ and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands. PMID:22911857
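The compensation measure described above (a response magnitude that opposes the imposed F1 perturbation, smaller in PWS by 47%) can be sketched as follows. This is an illustrative computation only, not the authors' analysis pipeline; the traces, the 100 Hz perturbation, and the function name are hypothetical.

```python
import numpy as np

def compensation_magnitude(f1_baseline, f1_perturbed, perturbation_hz):
    """Mean compensatory F1 change as a fraction of the imposed
    perturbation (illustrative sketch; inputs are hypothetical)."""
    # Compensation opposes the perturbation: produced F1 drops below
    # baseline when auditory feedback is shifted upward, so the
    # baseline-minus-perturbed difference is positive.
    response = np.mean(np.asarray(f1_baseline) - np.asarray(f1_perturbed))
    return response / perturbation_hz

# Hypothetical F1 traces (Hz): the control compensates more than the PWS.
control = compensation_magnitude([620, 622, 621], [600, 602, 601], 100.0)
pws = compensation_magnitude([620, 622, 621], [610, 611, 612], 100.0)
percent_smaller = 100.0 * (1.0 - pws / control)
```

With these made-up numbers the PWS response is half the control response; the study's reported group difference (47% smaller on average) is of this form.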
Drolet, Matthis; Schubotz, Ricarda I; Fischer, Julia
Context has been found to have a profound effect on the recognition of social stimuli and correlated brain activation. The present study was designed to determine whether knowledge about emotional authenticity influences emotion recognition expressed through speech intonation. Participants classified emotionally expressive speech in an fMRI experimental design as sad, happy, angry, or fearful. For some trials, stimuli were cued as either authentic or play-acted in order to manipulate participant top-down belief about authenticity, and these labels were presented both congruently and incongruently to the emotional authenticity of the stimulus. Contrasting authentic versus play-acted stimuli during uncued trials indicated that play-acted stimuli spontaneously up-regulate activity in the auditory cortex and regions associated with emotional speech processing. In addition, a clear interaction effect of cue and stimulus authenticity showed up-regulation in the posterior superior temporal sulcus and the anterior cingulate cortex, indicating that cueing had an impact on the perception of authenticity. In particular, when a cue indicating an authentic stimulus was followed by a play-acted stimulus, additional activation occurred in the temporoparietal junction, probably pointing to increased load on perspective taking in such trials. While actual authenticity has a significant impact on brain activation, individual belief about stimulus authenticity can additionally modulate the brain response to differences in emotionally expressive speech.
Background: Previous studies have shown that spatio-tactile acuity is influenced by the clarity of the cortical response in primary somatosensory cortex (SI). Stimulus characteristics such as frequency, amplitude, and location of tactile stimuli presented to the skin have been shown to have a significant effect on the response in SI. The present study observes the effect of changing stimulus parameters of 25 Hz sinusoidal vertical skin displacement stimulation ("flutter") on a human subject's ability to discriminate between two adjacent or near-adjacent skin sites. Based on results obtained from recent neurophysiological studies of the SI response to different conditions of vibrotactile stimulation, we predicted that the addition of 200 Hz vibration to the same site that a two-point flutter stimulus was delivered on the skin would improve a subject's spatio-tactile acuity over that measured with flutter alone. Additionally, similar neurophysiological studies predict that the presence of either a 25 Hz flutter or 200 Hz vibration stimulus on the unattended hand (on the opposite side of the body from the site of two-point limen testing; the condition of bilateral stimulation), which has been shown to evoke less SI cortical activity than the contralateral-only stimulus condition, would decrease a subject's ability to discriminate between two points on the skin. Results: A Bekesy tracking method was employed to track a subject's ability to discriminate between two-point stimuli delivered to the skin. The distance between the two points of stimulation was varied on a trial-by-trial basis, and several different stimulus conditions were examined: (1) the "control" condition, in which 25 Hz flutter stimuli were delivered simultaneously to the two points on the skin of the attended hand, (2) the "complex" condition, in which a combination of 25 Hz flutter and 200 Hz vibration stimuli were delivered to the two points on the attended hand, and (3) a
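The trial-by-trial adjustment at the heart of a Bekesy (adaptive up-down) tracking procedure can be sketched in a few lines. The step size, starting distance, and one-up/one-down rule below are assumptions for illustration, not the study's exact protocol.

```python
# Minimal sketch of a Bekesy-style up-down tracking rule for the
# two-point distance (step size and rule are assumptions).
def track_two_point_limen(responses, start_mm=10.0, step_mm=1.0, floor_mm=0.0):
    """Shrink the inter-point distance after each correct response and
    grow it after each incorrect one; return the full trajectory."""
    d = start_mm
    trajectory = [d]
    for correct in responses:
        # One-up/one-down rule: the track oscillates around the
        # distance at which the subject is at threshold.
        d = max(floor_mm, d - step_mm) if correct else d + step_mm
        trajectory.append(d)
    return trajectory

# Alternating hits and misses make the track hover near threshold.
track = track_two_point_limen([True, True, False, True, False])
```

The two-point limen is then typically estimated from the mean of the track's reversal points once the trajectory has stabilized.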
Gooding, Diane C.; Gjini, Klevest; Burroughs, Scott A.; Boutros, Nash N.
This was a naturalistic study of 23 abstinent cocaine-dependent patients and 38 controls who were studied using a paired-stimulus paradigm to elicit three mid-latency auditory evoked responses (MLAERs), namely, the P50, N100, and P200. Sensory gating was defined as the ratio of the S2 amplitude to the S1 amplitude. Psychosis-proneness was assessed using four Chapman psychosis-proneness scales measuring perceptual aberration, magical ideation, social anhedonia, and physical anhedonia. Omnibus c...
Cheng, Chia-Hsiung; Baillet, Sylvain; Lin, Yung-Yang
Aging has been associated with declines in sensory-perceptual processes. Sensory gating (SG), or repetition suppression, refers to the attenuation of neural activity in response to a second stimulus and is considered to be an automatic process to inhibit redundant sensory inputs. It is controversial whether SG deficits, as tested with an auditory paired-stimulus protocol, accompany normal aging in humans. To reconcile the debates arising from event-related potential studies, we recorded auditory neuromagnetic reactivity in 20 young and 19 elderly adult men and determined the neural activation by using minimum-norm estimate (MNE) source modeling. SG of M100 was calculated by the ratio of the response to the second stimulus over that to the first stimulus. MNE results revealed that fronto-temporo-parietal networks were implicated in the M100 SG. Compared to the younger participants, the elderly showed selectively increased SG ratios in the anterior superior temporal gyrus, anterior middle temporal gyrus, temporal pole and orbitofrontal cortex, suggesting an insufficient age-related gating to repetitive auditory stimulation. These findings also highlight the loss of frontal inhibition of the auditory cortex in normal aging. Copyright © 2015 Elsevier Inc. All rights reserved.
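The S2/S1 gating ratio used in both of the paired-stimulus studies above is a simple per-subject computation, sketched below. The amplitude values are hypothetical and the function is illustrative, not either study's actual pipeline.

```python
import numpy as np

def gating_ratio(s1_amplitudes, s2_amplitudes):
    """S2/S1 sensory-gating ratio per subject. Ratios near 0 indicate
    strong suppression of the response to the repeated stimulus;
    ratios near 1 indicate weak suppression (illustrative sketch)."""
    s1 = np.asarray(s1_amplitudes, dtype=float)
    s2 = np.asarray(s2_amplitudes, dtype=float)
    return s2 / s1

# Hypothetical M100 source amplitudes (arbitrary units) for two
# subjects per group; the elderly group shows higher (worse) ratios.
young = gating_ratio([30.0, 25.0], [12.0, 10.0])
elderly = gating_ratio([30.0, 25.0], [21.0, 17.5])
```

A higher mean ratio in the elderly group is the pattern the MEG study reports as insufficient age-related gating.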
Sergent, Claire; Ruff, Christian C; Barbot, Antoine; Driver, Jon; Rees, Geraint
Modulations of sensory processing in early visual areas are thought to play an important role in conscious perception. To date, most empirical studies focused on effects occurring before or during visual presentation. By contrast, several emerging theories postulate that sensory processing and conscious visual perception may also crucially depend on late top-down influences, potentially arising after a visual display. To provide a direct test of this, we performed an fMRI study using a postcued report procedure. The ability to report a target at a specific spatial location in a visual display can be enhanced behaviorally by symbolic auditory postcues presented shortly after that display. Here we showed that such auditory postcues can enhance target-specific signals in early human visual cortex (V1 and V2). For postcues presented 200 msec after stimulus termination, this target-specific enhancement in visual cortex was specifically associated with correct conscious report. The strength of this modulation predicted individual levels of performance in behavior. By contrast, although later postcues presented 1000 msec after stimulus termination had some impact on activity in early visual cortex, this modulation no longer related to conscious report. These results demonstrate that within a critical time window of a few hundred milliseconds after a visual stimulus has disappeared, successful conscious report of that stimulus still relates to the strength of top-down modulation in early visual cortex. We suggest that, within this critical time window, sensory representation of a visual stimulus is still under construction and so can still be flexibly influenced by top-down modulatory processes.
Kent, Christopher; Lamberts, Koen
This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…
Davidson, Gray D; Pitts, Michael A
Previous event-related potential (ERP) experiments have consistently identified two components associated with perceptual transitions of bistable visual stimuli, the "reversal negativity" (RN) and the "late positive complex" (LPC). The RN (~200 ms post-stimulus, bilateral occipital-parietal distribution) is thought to reflect transitions between neural representations that form the moment-to-moment contents of conscious perception, while the LPC (~400 ms, central-parietal) is considered an index of post-perceptual processing related to accessing and reporting one's percept. To explore the generality of these components across sensory modalities, the present experiment utilized a novel bistable auditory stimulus. Pairs of complex tones with ambiguous pitch relationships were presented sequentially while subjects reported whether they perceived the tone pairs as ascending or descending in pitch. ERPs elicited by the tones were compared according to whether perceived pitch motion changed direction or remained the same across successive trials. An auditory reversal negativity (aRN) component was evident at ~170 ms post-stimulus over bilateral fronto-central scalp locations. An auditory LPC component (aLPC) was evident at subsequent latencies (~350 ms, fronto-central distribution). These two components may be auditory analogs of the visual RN and LPC, suggesting functionally equivalent but anatomically distinct processes in auditory vs. visual bistable perception.
Jacks, Adam; Haley, Katarina L.
Purpose: To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not…
Leite Filho, Carlos Alberto; Silva, Fábio Ferreira da; Pradella-Hallinan, Márcia; Xavier, Sandra Doria; Miranda, Mônica Carolina; Pereira, Liliane Desgualdo
Intermittent hypoxia caused by obstructive sleep apnea syndrome (OSAS) may lead to damage in brain areas associated with auditory processing. The aim of this study was to compare children with OSAS or primary snoring (PS) to children without sleep-disordered breathing with regard to their performance on the Gaps-in-Noise (GIN) test and the Scale of Auditory Behaviors (SAB) questionnaire. Thirty-seven children (6-12 years old) were submitted to sleep anamnesis and in-lab night-long polysomnography. Three groups were organized according to clinical criteria: OSAS group (13 children), PS group (13 children), and control group (11 children). They were submitted to the GIN test, and parents answered the SAB questionnaire. The Kruskal-Wallis statistical test was used to compare the groups on GIN performance and auditory behavior. These findings suggest that sleep-disordered breathing may lead to auditory behavior impairment. Copyright © 2017 Elsevier B.V. All rights reserved.
Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean
We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…
Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji
A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048
Hall, M.; Smeele, P.M.T.; Kuhl, P.K.
The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual
Pondé, Pedro H; de Sena, Eduardo P; Camprodon, Joan A; de Araújo, Arão Nogueira; Neto, Mário F; DiBiasi, Melany; Baptista, Abrahão Fontes; Moura, Lidia MVR; Cosmo, Camila
Introduction: Auditory hallucinations are defined as experiences of auditory perceptions in the absence of a provoking external stimulus. They are the most prevalent symptoms of schizophrenia, with a high capacity for chronicity and refractoriness during the course of the disease. Transcranial direct current stimulation (tDCS), a safe, portable, and inexpensive neuromodulation technique, has emerged as a promising treatment for the management of auditory hallucinations. Objective: The aim of this study is to analyze the level of evidence available in the literature for the use of tDCS as a treatment for auditory hallucinations in schizophrenia. Methods: A systematic review was performed
Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin
Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine the activation in the visual and auditory cortices of each macaque while being tested using pure tones as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with the presence of more c-Fos-positive cells in the bilateral visual and auditory cortices of group B compared with group A. These results suggest that the visual cortices of binocularly blind macaques can be reorganized to process auditory stimuli after visual deprivation, and that this effect is more obvious in the right than the left visual cortex, indicating the establishment of cross-modal plasticity within the visual and auditory cortices. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Engineer, C T; Centanni, T M; Im, K W; Borland, M S; Moreno, N A; Carraway, R S; Wilson, L G; Kilgard, M P
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism. © 2014 Wiley Periodicals, Inc.
Turner, Jeremy G; Parrish, Jennifer L; Zuiderveld, Loren; Darr, Stacy; Hughes, Larry F; Caspary, Donald M; Idrezbegovic, Esma; Canlon, Barbara
Presbyacusis, one of the most common ailments of the elderly, is often treated with hearing aids, which serve to reintroduce some or all of those sounds lost to peripheral hearing loss. However, little is known about the underlying changes to the ear and brain as a result of such experience with sound late in life. The present study attempts to model this process by rearing aged CBA mice in an augmented acoustic environment (AAE). Aged (22-23 months) male (n = 12) and female (n = 9) CBA/CaJ mice were reared in either 6 weeks of low-level (70 dB SPL) broadband noise stimulation (AAE) or normal vivarium conditions. Changes as a function of the treatment were measured for behavior, auditory brainstem response thresholds, hair cell cochleograms, and gamma aminobutyric acid neurochemistry in the key central auditory structures of the inferior colliculus and primary auditory cortex. The AAE-exposed group was associated with sex-specific changes in cochlear pathology, auditory brainstem response thresholds, and gamma aminobutyric acid neurochemistry. Males exhibited significantly better thresholds and reduced hair cell loss (relative to controls) whereas females exhibited the opposite effect. AAE was associated with increased glutamic acid decarboxylase (GAD67) levels in the inferior colliculus of both male and female mice. However, in primary auditory cortex AAE exposure was associated with increased GAD67 labeling in females and decreased GAD67 in males. These findings suggest that exposing aged mice to a low-level AAE alters both peripheral and central properties of the auditory system and these changes partially interact with sex or the degree of hearing loss before AAE. Although direct application of these findings to hearing aid use or auditory training in aged humans would be premature, the results do begin to provide direct evidence for the underlying changes that might be occurring as a result of hearing aid use late in life. These results suggest the aged brain
Plaud, J J; Gaither, G A; Weller, L A; Bigwood, S J; Barth, J; von Duvillard, S P
Stimulus equivalence is a behavioral approach to analyzing the "meaning" of stimulus sets and has implications for clinical psychology. The formation of three-member (A → B → C) stimulus equivalence classes was used to investigate the effects of three different sets of sample and comparison stimuli on emergent behavior. The three stimulus sets were composed of Rational-Emotive Behavior Therapy (REBT)-related words, non-REBT emotionally charged words, and a third category of neutral words composed of flower labels. Sixty-two women and men participated in a modified matching-to-sample experiment. Using a mixed cross-over design, and controlling for serial-order effects, participants received conditional training and emergent-relationship training in the three stimulus-set conditions. Results revealed a significant interaction between the formation of stimulus equivalence classes and stimulus meaning, indicating consistently biased responding in which criterion responding was reached more slowly for REBT-related and non-REBT emotionally charged words. Results were examined in the context of an analysis of the importance of stimulus meaning for behavior and the relation of stimulus meaning to behavioral and cognitive theories, with special appraisal given to the influence of fear-related discriminative stimuli on behavior.
Singer, Bryan F.; Bryan, Myranda A.; Popov, Pavlo; Scarff, Raymond; Carter, Cody; Wright, Erin; Aragona, Brandon J.; Robinson, Terry E.
The sensory properties of a reward-paired cue (a conditioned stimulus; CS) may impact the motivational value attributed to the cue, and in turn influence the form of the conditioned response (CR) that develops. A cue with multiple sensory qualities, such as a moving lever-CS, may activate numerous neural pathways that process auditory and visual…
Ponnath, Abhilash; Farris, Hamilton E
Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single-unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long-duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but recovered over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms electrical pulse 25 ms before each noise stimulus caused faster and more varied forms of modulation: modulation was shorter lasting, and its effects varied between different acoustic stimuli, including between different male calls, suggesting that modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.
Karen V. Chenausky
We tested the effect of Auditory-Motor Mapping Training (AMMT), a novel, intonation-based treatment for spoken language originally developed for minimally verbal (MV) children with autism, on a more-verbal child with autism. We compared this child's performance after 25 therapy sessions with that of: (1) a child matched on age, autism severity, and expressive language level who received 25 sessions of a non-intonation-based control treatment, Speech Repetition Therapy (SRT); and (2) a matched pair of MV children (one of whom received AMMT; the other, SRT). We found a significant Time × Treatment effect in favor of AMMT for number of Syllables Correct and Consonants Correct per stimulus for both pairs of children, as well as a significant Time × Treatment effect in favor of AMMT for number of Vowels Correct per stimulus for the more-verbal pair. Magnitudes of the difference in post-treatment performance between AMMT and SRT, adjusted for Baseline differences, were: (a) larger for the more-verbal pair than for the MV pair; and (b) associated with very large effect sizes (Cohen's d > 1.3) in the more-verbal pair. Results hold promise for the efficacy of AMMT for improving spoken language production in more-verbal children with autism as well as their MV peers and suggest hypotheses about brain function that are testable in both correlational and causal behavioral-imaging studies.
Torres-Fortuny, Alejandro; Arnaiz-Marquez, Isabel; Hernández-Pérez, Heivet; Eimil-Suárez, Eduardo
Auditory steady-state responses to continuous amplitude-modulated tones at rates between 70 and 110 Hz have been proposed as a feasible alternative for objective frequency-specific audiometry in cochlear implant subjects. The aim of the present study was to obtain physiological thresholds by means of the auditory steady-state response in cochlear implant patients (Clarion HiRes 90K) with acoustic stimulation under free-field conditions, and to verify its biological origin. Eleven subjects comprised the sample. Four amplitude-modulated tones of 500, 1000, 2000 and 4000 Hz were used as stimuli, using the multiple-frequency technique. Auditory steady-state responses were also recorded at an intensity of 0 dB HL, with a non-specific stimulus, and using a masking technique. The study enabled electrophysiological thresholds to be obtained for each subject in the sample. There were no auditory steady-state responses in either the 0 dB or the non-specific stimulus recordings. It was possible to obtain the masking thresholds. Differences between behavioral and electrophysiological thresholds of -6±16, -2±13, 0±22 and -8±18 dB were identified at frequencies of 500, 1000, 2000 and 4000 Hz, respectively. The auditory steady-state response seems to be a suitable technique for evaluating the hearing threshold in cochlear implant subjects. Copyright © 2018 Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. Published by Elsevier España, S.L.U. All rights reserved.
Ross, Bernhard; Jamali, Shahab; Tremblay, Kelly L
Auditory perceptual learning persistently modifies neural networks in the central nervous system. Central auditory processing comprises a hierarchy of sound analysis and integration, which transforms an acoustical signal into a meaningful object for perception. Based on latencies and source locations of auditory evoked responses, we investigated which stage of central processing undergoes neuroplastic changes when gaining auditory experience during passive listening and active perceptual training. Young healthy volunteers participated in a five-day training program to identify two pre-voiced versions of the stop-consonant syllable 'ba', which is an unusual speech sound to English listeners. Magnetoencephalographic (MEG) brain responses were recorded during two pre-training sessions and one post-training session. Underlying cortical sources were localized, and the temporal dynamics of auditory evoked responses were analyzed. After both passive listening and active training, the amplitude of the P2m wave with a latency of 200 ms increased considerably. By this latency, the integration of stimulus features into an auditory object for further conscious perception is considered to be complete. Therefore, the P2m changes were discussed in the light of auditory object representation. Moreover, P2m sources were localized in anterior auditory association cortex, which is part of the antero-ventral pathway for object identification. The amplitude of the earlier N1m wave, which is related to processing of sensory information, did not change over the time course of the study. The P2m amplitude increase and its persistence over time constitute a neuroplastic change. The P2m gain likely reflects enhanced object representation after stimulus experience and training, which enables listeners to improve their ability to scrutinize fine differences in pre-voicing time. Different trajectories of brain and behaviour changes suggest that the preceding effect of a P2m increase relates to brain…
Chen, Ling-Chia; Sandmann, Pascale; Thorne, Jeremy D; Herrmann, Christoph S; Debener, Stefan
Functional near-infrared spectroscopy (fNIRS) has been proven reliable for investigation of low-level visual processing in both infants and adults. Similar investigation of fundamental auditory processes with fNIRS, however, remains only partially complete. Here we employed a systematic three-level validation approach to investigate whether fNIRS could capture fundamental aspects of bottom-up acoustic processing. We performed a simultaneous fNIRS-EEG experiment with visual and auditory stimulation in 24 participants, which allowed the relationship between changes in neural activity and hemoglobin concentrations to be studied. In the first level, the fNIRS results showed a clear distinction between visual and auditory sensory modalities. Specifically, the results demonstrated area specificity, that is, maximal fNIRS responses in visual and auditory areas for the visual and auditory stimuli respectively, and stimulus selectivity, whereby the visual and auditory areas responded mainly toward their respective stimuli. In the second level, a stimulus-dependent modulation of the fNIRS signal was observed in the visual area, as well as a loudness modulation in the auditory area. Finally in the last level, we observed significant correlations between simultaneously-recorded visual evoked potentials and deoxygenated hemoglobin (DeoxyHb) concentration, and between late auditory evoked potentials and oxygenated hemoglobin (OxyHb) concentration. In sum, these results suggest good sensitivity of fNIRS to low-level sensory processing in both the visual and the auditory domain, and provide further evidence of the neurovascular coupling between hemoglobin concentration changes and non-invasive brain electrical activity.
Glutamate is down-regulated and tinnitus loudness-levels decreased following rTMS over auditory cortex of the left hemisphere: A prospective randomized single-blinded sham-controlled cross-over study.
Cacace, Anthony T; Hu, Jiani; Romero, Stephen; Xuan, Yang; Burkard, Robert F; Tyler, Richard S
Using a prospective randomized single-blinded sham-controlled cross-over design, we studied the efficacy of low-frequency (1-Hz) repetitive transcranial magnetic stimulation (rTMS) over auditory cortex of the left temporal lobe as an experimental treatment modality for noise-induced tinnitus. Pre/post outcome measures for sham vs. active rTMS conditions included differential changes in tinnitus loudness, self-perceived changes in the Tinnitus Handicap Questionnaire (THQ), and neurochemical changes of brain metabolite concentrations using single-voxel proton magnetic resonance spectroscopy (1H-MRS) obtained from left and right auditory cortical areas. While no subject in our sample had complete abatement of their tinnitus percept, active but not sham rTMS significantly reduced the loudness level of the tinnitus perception on the order of 4.5 dB, improved several content-area subscales of the THQ, and down-regulated (reduced) glutamate concentrations specific to the auditory cortex of the left temporal lobe that was stimulated. In addition, significant pair-wise correlations were observed among questionnaire variables, metabolite variables, questionnaire-metabolite variables, and metabolite-loudness variables. As part of this correlation analysis, we demonstrate for the first time that active rTMS produced a down-regulation of the excitatory neurotransmitter glutamate that was highly correlated (r = 0.77, p < 0.05) with a reduction in tinnitus loudness levels measured psychoacoustically with a magnitude-estimation procedure. Overall, this study provides unique information on neurochemical, psychoacoustic, and questionnaire-related profiles, emphasizes the emerging fields of perceptual and cognitive MRS, and provides a perspective on a new frontier in auditory and tinnitus-related research. Copyright © 2017 Elsevier B.V. All rights reserved.
DiMattina, Christopher; Zhang, Kechen
In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system identification paradigm where the experimental goal is to estimate and possibly compare sensory processing models. We discuss various theoretical and practical aspects of adaptive firing rate optimization, including optimization with stimulus space constraints, firing rate adaptation, and possible network constraints on the optimal stimulus. We consider the problem of system identification, and show how accurate estimation of non-linear models can be highly dependent on the stimulus set used to probe the network. We suggest that optimizing stimuli for accurate model estimation may make it possible to successfully identify non-linear models which are otherwise intractable, and summarize several recent studies of this type. Finally, we present a two-stage stimulus design procedure which combines the dual goals of model estimation and model comparison and may be especially useful for system identification experiments where the appropriate model is unknown beforehand. We propose that fast, on-line stimulus optimization enabled by increasing computer power can make it practical to move sensory neuroscience away from a descriptive paradigm and toward a new paradigm of real-time model estimation and comparison.
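The classical optimal-stimulus paradigm described above can be illustrated with a toy closed-loop search. Everything below is an illustrative assumption rather than the authors' algorithm: the "neuron" is a hypothetical unit with Gaussian tuning over a 2-D stimulus space, observed only through noisy Poisson spike counts, and the on-line optimizer is a simple stochastic hill climb.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical neuron: Gaussian tuning over a 2-D stimulus space
# (e.g., frequency x level). The experimenter never sees this function,
# only the noisy spike counts returned by observe().
PREFERRED = np.array([4.2, 0.7])

def firing_rate(stim):
    return 50.0 * np.exp(-0.5 * np.sum((np.asarray(stim) - PREFERRED) ** 2))

def observe(stim):
    return rng.poisson(firing_rate(stim))  # Poisson spike count per trial

def optimal_stimulus_search(start, n_trials=400, step=0.3, n_rep=5):
    """Greedy stochastic hill climbing: perturb the current best stimulus
    and keep the candidate if its averaged response is higher."""
    best = np.asarray(start, dtype=float)
    best_rate = np.mean([observe(best) for _ in range(n_rep)])
    for _ in range(n_trials):
        cand = best + rng.normal(scale=step, size=best.shape)
        cand_rate = np.mean([observe(cand) for _ in range(n_rep)])
        if cand_rate > best_rate:
            best, best_rate = cand, cand_rate
    return best, best_rate

stim, rate = optimal_stimulus_search(start=[3.0, 0.0])
```

In practice the methods reviewed here use far more sophisticated response models and acquisition rules, but the loop structure (stimulate, observe, update, re-stimulate) is the same.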
Larsen, Kit Melissa; Pellegrino, Giovanni; Birknow, Michelle Rosgaard; Kjær, Trine Nørgaard; Baaré, William Frans Christiaan; Didriksen, Michael; Olsen, Line; Werge, Thomas; Mørup, Morten; Siebner, Hartwig Roman
The 22q11.2 deletion syndrome confers a markedly increased risk for schizophrenia. 22q11.2 deletion carriers without manifest psychotic disorder offer the possibility to identify functional abnormalities that precede clinical onset. Since schizophrenia is associated with a reduced cortical gamma response to auditory stimulation at 40 Hz, we hypothesized that the 40 Hz auditory steady-state response (ASSR) may be attenuated in nonpsychotic individuals with a 22q11.2 deletion. Eighteen young nonpsychotic 22q11.2 deletion carriers and a control group of 27 noncarriers with comparable age range (12-25 years) and sex ratio underwent 128-channel EEG. We recorded the cortical ASSR to a 40 Hz train of clicks, given either at a regular inter-stimulus interval of 25 ms or at irregular intervals jittered between 11 and 37 ms. Healthy noncarriers expressed a stable ASSR to regular but not in the irregular 40 Hz click stimulation. Both gamma power and inter-trial phase coherence of the ASSR were markedly reduced in the 22q11.2 deletion group. The ability to phase lock cortical gamma activity to regular auditory 40 Hz stimulation correlated with the individual expression of negative symptoms in deletion carriers (ρ = -0.487, P = .041). Nonpsychotic 22q11.2 deletion carriers lack efficient phase locking of evoked gamma activity to regular 40 Hz auditory stimulation. This abnormality indicates a dysfunction of fast intracortical oscillatory processing in the gamma-band. Since ASSR was attenuated in nonpsychotic deletion carriers, ASSR deficiency may constitute a premorbid risk marker of schizophrenia. © The Author 2017. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center.
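Inter-trial phase coherence of the kind measured above is straightforward to compute from per-trial spectra. The following sketch uses synthetic data with an assumed sampling rate (not the study's 128-channel EEG pipeline), contrasting a 40 Hz response that is phase-locked across trials with one whose phase is randomized per trial, as jittered stimulation would produce:

```python
import numpy as np

fs = 500.0                       # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)    # 1-s epochs
rng = np.random.default_rng(1)

def make_trials(n_trials, phase_locked):
    """Simulated 40 Hz ASSR epochs in noise: either phase-locked across
    trials, or with a random phase per trial (jittered stimulation)."""
    trials = []
    for _ in range(n_trials):
        phi = 0.0 if phase_locked else rng.uniform(0, 2 * np.pi)
        trials.append(np.sin(2 * np.pi * 40 * t + phi)
                      + rng.normal(scale=1.0, size=t.size))
    return np.array(trials)

def itc_at(trials, freq):
    """Inter-trial phase coherence: magnitude of the mean unit phase
    vector across trials at one frequency (0 = random, 1 = locked)."""
    spec = np.fft.rfft(trials, axis=1)
    idx = int(round(freq * trials.shape[1] / fs))  # FFT bin for freq
    phases = spec[:, idx] / np.abs(spec[:, idx])
    return np.abs(phases.mean())

locked = itc_at(make_trials(100, True), 40)
jittered = itc_at(make_trials(100, False), 40)
```

With this measure, the regular-click condition should yield a coherence near 1 and the jittered condition a value near 1/sqrt(n_trials), mirroring the contrast the study exploits.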
... Noisy, loosely structured classrooms could be very frustrating. Auditory memory problems: This is when a child has difficulty remembering information such as directions, lists, or study materials. It can ... later"). Auditory discrimination problems: This is when a child has ...
Fostick, Leah; Babkoff, Harvey; Zukerman, Gil
Purpose: To test the effects of 24 hr of sleep deprivation on auditory and linguistic perception and to assess the magnitude of this effect by comparing such performance with that of aging adults on speech perception and with that of dyslexic readers on phonological awareness. Method: Fifty-five sleep-deprived young adults were compared with 29…
San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory
Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS), we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting-state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices), and functional brain connectivity was measured during a 60-second baseline period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise, and silence. Functional connectivity was measured between all channel pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom…
Background and Aim: Blocking of adenosine receptors in the central nervous system by caffeine can increase the levels of neurotransmitters such as glutamate. As adenosine receptors are present in almost all brain areas, including the central auditory pathway, it seems caffeine can change conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2 and 3 mg/kg body weight of caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed by Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared to the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after consumption of 2 and 3 mg/kg body weight of caffeine. Wave I latency decreased significantly after consumption of 3 mg/kg body weight of caffeine (p<0.01). Conclusion: The increase in glutamate level resulting from adenosine receptor blocking brings about changes in conduction in the central auditory pathway.
Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
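The two internal representations in Ando's model have direct signal-processing analogues. The sketch below is a simplified illustration with synthetic signals and parameter choices of my own (not Ando's measurement procedure): a pitch estimate taken from the largest autocorrelation peak, and an interaural cross-correlation coefficient taken as the peak normalized correlation between the ear signals within ±1 ms.

```python
import numpy as np

fs = 16000
t = np.arange(0, 0.1, 1 / fs)

def acf_pitch(x, fs, fmin=80.0, fmax=1000.0):
    """Pitch estimate from the autocorrelation function: the lag of the
    largest ACF peak within a plausible pitch range."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf = acf / acf[0]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(acf[lo:hi + 1])
    return fs / lag

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: peak normalized
    correlation between the ear signals within +/- max_lag_ms."""
    n = left.size
    m = int(fs * max_lag_ms / 1000)
    denom = np.sqrt(np.dot(left, left) * np.dot(right, right))
    vals = [np.dot(left[max(0, -k):n - max(0, k)],
                   right[max(0, k):n - max(0, -k)]) / denom
            for k in range(-m, m + 1)]
    return max(vals)

# A 200 Hz complex tone (three harmonics) for the temporal percept, and
# a noise pair with a 0.5 ms interaural delay for the spatial percept.
tone = sum(np.sin(2 * np.pi * f * t) for f in (200, 400, 600))
rng = np.random.default_rng(2)
noise = rng.normal(size=t.size)
left, right = noise, np.roll(noise, 8)  # 8 samples = 0.5 ms at 16 kHz

pitch = acf_pitch(tone, fs)
coherence = iacc(left, right, fs)
```

In the model, the autocorrelation side accounts for pitch, timbre, loudness and duration, while the IACC side accounts for perceived source location, apparent source width and envelopment.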
We have recently demonstrated that alternating left-right sound sources induce motion perception in static visual stimuli along the horizontal plane (sound-induced visual motion perception, SIVM; Hidaka et al., 2009). The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in a situation where auditory positional information would have little influence on the perceived position of visual stimuli: the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception, so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.
For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and to enhance BCI performance, we designed a visual-auditory speller. We investigated the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms, without sacrificing spelling performance. Besides, shorter latencies, lower amplitudes, and a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller. These results are important and should inspire future studies to investigate the reasons for these differences. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent of each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level <3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel. It brings new insight into true multisensory stimulus paradigms. Novel approaches for combining two sensory modalities were designed here, which are valuable for the development of ERP-based BCI paradigms.
Clement, Sylvain; Moroni, Christine; Samson, Séverine
The goal of this paper was to review various experimental and neuropsychological studies that support the modular conception of auditory sensory memory or auditory short-term memory. Based on initial findings demonstrating that the verbal sensory memory system can be dissociated from a general auditory memory store at the functional and anatomical levels, we reported a series of studies that provided evidence in favor of multiple auditory sensory stores specialized in retaining either…
Ohyama, Masashi; Kitamura, Shin; Terashi, Akiro; Senda, Michio.
In order to investigate the relation between auditory cognitive function and regional brain activation, we measured changes in regional cerebral blood flow (CBF) using positron emission tomography (PET) during the 'odd-ball' paradigm in ten normal healthy volunteers. The subjects underwent 3 tasks, twice each, while the evoked potential was recorded. In these tasks, the auditory stimulus was a series of pure tones delivered every 1.5 sec binaurally at 75 dB from earphones. Task A: the stimulus was a series of tones of 1000 Hz only, and the subject was instructed only to listen. Task B: the stimulus was a series of tones of 1000 Hz only, and the subject was instructed to push the button on detecting a tone. Task C: the stimulus was a series of pure tones delivered every 1.5 sec binaurally at 75 dB with a frequency of 1000 Hz (non-target) in 80% and 2000 Hz (target) in 20% of trials at random, and the subject was instructed to push the button on detecting a target tone. The event-related potential (P300) was observed in task C (Pz: 334.3±19.6 msec). For each task, the CBF was measured using PET with i.v. injection of 1.5 GBq of O-15 water. The changes in CBF associated with auditory cognition were evaluated from the difference between the CBF images in tasks C and B. Localized increases were observed in the anterior cingulate cortex (in all subjects), the bilateral auditory association cortex, the prefrontal cortex and the parietal cortex. The latter three areas had large individual variation in the location of foci. These results suggested the role of these cortical areas in auditory cognition. The anterior cingulate was most activated (15.0±2.24% of global CBF). This region was not activated in the condition of task B minus task A. The anterior cingulate is a part of Papez's circuit, which is related to memory and other higher cortical functions. These results suggested that this area may play an important role in cognition as well as in attention. (author)
Coffman, Brian A; Haigh, Sarah M; Murphy, Timothy K; Leiter-Mcbeth, Justin; Salisbury, Dean F
Auditory scene analysis (ASA) dysfunction is likely an important component of the symptomatology of schizophrenia. Auditory object segmentation, the grouping of sequential acoustic elements into temporally-distinct auditory objects, can be assessed with electroencephalography through measurement of the auditory segmentation potential (ASP). Further, N2 responses to the initial and final elements of auditory objects are enhanced relative to medial elements, which may indicate auditory object edge detection (initiation and termination). Both ASP and N2 modulation are impaired in long-term schizophrenia. To determine whether these deficits are present early in the disease course, we compared ASP and N2 modulation between individuals at their first episode of psychosis within the schizophrenia spectrum (FE, N=20) and matched healthy controls (HC, N=24). The ASP was reduced by >40% in FE; however, N2 modulation was not statistically different from HC. This suggests that auditory segmentation (ASP) deficits exist at this early stage of schizophrenia, but auditory edge detection (N2 modulation) is relatively intact. In a subset of subjects for whom structural MRIs were available (N=14 per group), ASP sources were localized to midcingulate cortex (MCC) and temporal auditory cortex. Neurophysiological activity in FE was reduced in MCC, an area linked to aberrant perceptual organization, negative symptoms, and cognitive dysfunction in schizophrenia, but not in temporal auditory cortex. This study supports the validity of the ASP for measurement of auditory object segmentation and suggests that the ASP may be useful as an early index of schizophrenia-related MCC dysfunction. Further, ASP deficits may serve as a viable biomarker of disease presence. Copyright © 2017 Elsevier B.V. All rights reserved.
Yoneya, Makoto; Liao, Hsin-I; Furukawa, Shigeto; Kashino, Makio
The sensory cortex may adapt to predictable events, focusing instead on unexpected events or surprise stimuli. Previous studies modeled the auditory surprise using the joint probability of an incoming stimulus and the recent short stimulus history. However, such an approach is not applicable to describe a long-term pattern change in auditory sequences, since the joint probability is incomputable due to data sparsity when the window size of stimulus history increases. Additionally, "predictive uncertainty" should be considered to prevent overestimation of surprise, since a violation of expectation would not evoke a large surprise when the prediction is made with a sparse observation. Here, we propose a novel auditory surprise model that can detect a deviant sound embedded in long-term pattern changes. Instead of calculating the joint probability, our model uses similarity-based pattern retrieval from the past observation to predict the future behavior of auditory sequences. The predictive uncertainty was expressed as the variance of the prediction distribution, which is inversely correlated with the similarity between the selected past patterns and the recent history. Our model is applicable to any auditory input since it requires neither exact pattern matching nor any conversion of auditory signals into symbolic forms. We conducted two experiments to test the applicability of our model. In experiment 1, we showed that the model could predict the reaction time for detecting the disappearance of tone pips. In experiment 2, we showed that the model could predict a pupil size change after the pattern transition in auditory sequences. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
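A minimal version of this similarity-based prediction scheme can be sketched as follows. This is a simplified reading of the idea, not the authors' implementation: the k past windows most similar to the recent history vote on the next value, and the Gaussian predictive variance is inflated by the retrieval distance, so that a violation under sparse or dissimilar evidence is not over-scored.

```python
import numpy as np

def surprise(seq, w=4, k=5, eps=1e-3):
    """Surprise of each element s[t] (for t >= 2*w) as its negative
    log-likelihood under a Gaussian prediction built from the successors
    of the k past windows most similar to the recent history."""
    s = np.asarray(seq, dtype=float)
    out = []
    for t in range(2 * w, len(s)):
        hist = s[t - w:t]
        wins = np.array([s[j - w:j] for j in range(w, t)])  # past windows
        succ = np.array([s[j] for j in range(w, t)])        # their successors
        d = np.linalg.norm(wins - hist, axis=1)
        near = np.argsort(d)[:k]                            # retrieval step
        mu = succ[near].mean()
        # Predictive uncertainty: successor spread, plus a penalty when
        # only dissimilar patterns could be retrieved.
        var = succ[near].var() + d[near].mean() + eps
        out.append(0.5 * np.log(2 * np.pi * var)
                   + (s[t] - mu) ** 2 / (2 * var))
    return np.array(out)
```

On a strictly alternating sequence with a single deviant element, the model assigns by far its largest surprise to the deviant, while the unpredictable samples just after it are down-weighted by the inflated variance.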
Kang, Su Jin; Kim, Jae Hyoung; Shin, Tae Min
To obtain preliminary data for understanding the central auditory neural pathway by means of functional MR imaging (fMRI) of the cerebral auditory cortex during linguistic and non-linguistic auditory stimulation. In three right-handed volunteers we conducted fMRI of auditory cortex stimulation at 1.5 T using a conventional gradient-echo technique (TR/TE/flip angle: 80/60/40 deg). Using a pulsed tone of 1000 Hz and speech as non-linguistic and linguistic auditory stimuli, respectively, images, including those of the superior temporal gyrus of both hemispheres, were obtained in sagittal planes. Both stimuli were delivered separately, binaurally or monaurally, through a plastic earphone. Activation maps were generated by processing the images with in-house software. In order to analyze patterns of auditory cortex activation according to the type of stimulus and the side of the stimulated ear, the number and extent of activated pixels were compared between the two temporal lobes. Binaural stimulation led to bilateral activation of the superior temporal gyrus, while monaural stimulation led to more activation in the contralateral temporal lobe than in the ipsilateral one. A trend toward slight activation of the left (dominant) temporal lobe during ipsilateral stimulation, particularly with a linguistic stimulus, was observed. During both binaural and monaural stimulation, a linguistic stimulus produced more widespread activation than did a non-linguistic one. The superior temporal gyri of both temporal lobes are associated with acoustic-phonetic analysis, and the left (dominant) superior temporal gyrus is likely to play a dominant role in this processing. For a better understanding of physiological and pathological central auditory pathways, further investigation is needed.
Yoles-Frenkel, Michal; Cohen, Oksana; Bansal, Rohini; Horesh, Noa; Ben-Shaul, Yoram
Achieving controlled stimulus delivery is a major challenge in the physiological analysis of the vomeronasal system (VNS). We provide a comprehensive description of a setup allowing controlled stimulus delivery into the vomeronasal organ (VNO) of anesthetized mice. VNO suction is achieved via electrical stimulation of the sympathetic nerve trunk (SNT) using cuff electrodes, followed by flushing of the nasal cavity. Successful application of this methodology depends on several aspects including the surgical preparation, fabrication of cuff electrodes, experimental setup modifications, and the stimulus delivery and flushing. Here, we describe all these aspects in sufficient detail to allow other researchers to readily adopt it. We also present a custom written MATLAB based software with a graphical user interface that controls all aspects of the actual experiment, including trial sequencing, hardware control, and data logging. The method allows measurement of stimulus evoked sensory responses in brain regions that receive vomeronasal inputs. An experienced investigator can complete the entire surgical procedure within thirty minutes. This is the only approach that allows repeated and controlled stimulus delivery to the intact VNO, employing the natural mode of stimulus uptake. The approach is economical with respect to stimuli, requiring stimulus volumes as low as 1-2μl. This comprehensive description will allow other investigators to adapt this setup to their own experimental needs and can thus promote our physiological understanding of this fascinating chemosensory system. With minor changes it can also be adapted for other rodent species. Copyright © 2017 Elsevier B.V. All rights reserved.
Maurizi, M; Corina, L; Del Gratta, C; Galli, J; Paludetti, G; Pasquarelli, A; Pellini, R; Peresson, M; Pizzella, V; Romani, G L
After outlining the fundamentals of biomagnetism and their possible clinical applications, the authors report the results of a normative study on auditory magnetic fields performed on 18 normally hearing subjects between the ages of 25 and 30. Having presented a thorough review of the literature, they then describe the recording technique employed, the dc-SQUID biomagnetic system for signal detection, the shielded room, and the characteristics of the stimulus. The auditory magnetic response is characterized by three main waves (P40m, N100m, P200m) whose latency and amplitude values were calculated. Moreover, in order to localize dipolar activity, certain parameters, such as P and T, were taken into consideration. Localizations were made using either a spherical volume conductor model or MRI; MRI was in any case performed in all subjects. The waves, especially the N100m recorded contralaterally to the stimulus, showed a reduced latency and an increased amplitude compared to those recorded ipsilaterally. Moreover, a systematic posterior shift of the N100m source in the left hemisphere with respect to the right was detected. In conclusion, the authors emphasize the need to study both electric and magnetic responses in order to better understand auditory cortical functions.
Marcella de Castro Campos Velten
Spatial region concepts such as front, back, left and right reflect our typical interaction with space, and the corresponding surrounding regions have different statuses in memory. We examined the representation of spatial directions in auditory space, specifically how natural response actions, such as orientation movements towards a sound source, affect the categorization of egocentric auditory space. While standing in the middle of a circle of 16 loudspeakers, participants were presented with acoustic stimuli from the loudspeakers in randomized order, and verbally described their directions using the concept labels front, back, left, right, front-right, front-left, back-right and back-left. Response actions varied across three blocked conditions: (1) facing front, (2) turning the head and upper body to face the stimulus, and (3) turning the head and upper body plus pointing with the hand and outstretched arm towards the stimulus. In addition to a protocol of the verbal utterances, motion capture and video recording generated a detailed corpus for subsequent analysis of the participants' behavior. Chi-square tests revealed an effect of response condition for directions within the left and right sides. We conclude that movement-based response actions influence the representation of auditory space, especially within the left and right regions.
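The chi-square analysis reported in this study amounts to comparing observed label counts against the counts expected if labeling were independent of response condition. A minimal sketch of the statistic (the counts below are made up purely for illustration):

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for an R x C contingency table of counts,
    e.g. rows = response conditions, columns = direction labels."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for r, row in enumerate(table):
        for c, observed in enumerate(row):
            expected = row_totals[r] * col_totals[c] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts of two direction labels under two response conditions
observed = [[30, 10],
            [18, 22]]
x2 = chi_square_statistic(observed)  # compare to chi-square with (R-1)(C-1) df
```

In practice the statistic would be referred to a chi-square distribution with (R-1)(C-1) degrees of freedom to obtain a p-value.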
Corina, David P.; Blau, Shane; LaMarr, Todd; Lawyer, Laurel A.; Coffey-Corina, Sharon
Deaf children who receive a cochlear implant early in life and engage in intensive oral/aural therapy often make great strides in spoken language acquisition. However, despite clinicians’ best efforts, there is a great deal of variability in language outcomes. One concern is that cortical regions which normally support auditory processing may become reorganized for visual function, leaving fewer available resources for auditory language acquisition. The conditions under which these changes occur are not well understood, but we may begin investigating this phenomenon by looking for interactions between auditory and visual evoked cortical potentials in deaf children. If children with abnormal auditory responses show increased sensitivity to visual stimuli, this may indicate the presence of maladaptive cortical plasticity. We recorded evoked potentials, using both auditory and visual paradigms, from 25 typical hearing children and 26 deaf children (ages 2–8 years) with cochlear implants. An auditory oddball paradigm was used (85% /ba/ syllables vs. 15% frequency modulated tone sweeps) to elicit an auditory P1 component. Visual evoked potentials (VEPs) were recorded during presentation of an intermittent peripheral radial checkerboard while children watched a silent cartoon, eliciting a P1–N1 response. We observed reduced auditory P1 amplitudes and a lack of latency shift associated with normative aging in our deaf sample. We also observed shorter latencies in N1 VEPs to visual stimulus offset in deaf participants. While these data demonstrate cortical changes associated with auditory deprivation, we did not find evidence for a relationship between cortical auditory evoked potentials and the VEPs. This is consistent with descriptions of intra-modal plasticity within visual systems of deaf children, but does not provide evidence for cross-modal plasticity. In addition, we note that sign language experience had no effect on deaf children’s early auditory and visual ERPs.
Weinberger, Norman M
Primary ("early") sensory cortices have been viewed as stimulus analyzers devoid of function in learning, memory, and cognition. However, studies combining sensory neurophysiology and learning protocols have revealed that associative learning systematically modifies the encoding of stimulus dimensions in the primary auditory cortex (A1) to accentuate behaviorally important sounds. This "representational plasticity" (RP) is manifest at different levels. The sensitivity and selectivity of signal tones increase near threshold, tuning above threshold shifts toward the frequency of acoustic signals, and their area of representation can increase within the tonotopic map of A1. The magnitude of area gain encodes the level of behavioral stimulus importance and serves as a substrate of memory strength. RP has the same characteristics as behavioral memory: it is associative, specific, develops rapidly, consolidates, and can last indefinitely. Pairing tone with stimulation of the cholinergic nucleus basalis induces RP and implants specific behavioral memory, while directly increasing the representational area of a tone in A1 produces matching behavioral memory. Thus, RP satisfies key criteria for serving as a substrate of auditory memory. The findings suggest a basis for posttraumatic stress disorder in abnormally augmented cortical representations and emphasize the need for a new model of the cerebral cortex. © 2015 Elsevier B.V. All rights reserved.
Slugocki, Christopher; Bosnyak, Daniel; Trainor, Laurel J
Recent electrophysiological work has evinced a capacity for plasticity in subcortical auditory nuclei in human listeners. Similar plastic effects have been measured in cortically-generated auditory potentials but it is unclear how the two interact. Here we present Simultaneously-Evoked Auditory Potentials (SEAP), a method designed to concurrently elicit electrophysiological brain potentials from inferior colliculus, thalamus, and primary and secondary auditory cortices. Twenty-six normal-hearing adult subjects (mean 19.26 years, 9 male) were exposed to 2400 monaural (right-ear) presentations of a specially-designed stimulus which consisted of a pure-tone carrier (500 or 600 Hz) that had been amplitude-modulated at the sum of 37 and 81 Hz (depth 100%). Presentation followed an oddball paradigm wherein the pure-tone carrier was set to 500 Hz for 85% of presentations and pseudo-randomly changed to 600 Hz for the remaining 15% of presentations. Single-channel electroencephalographic data were recorded from each subject using a vertical montage referenced to the right earlobe. We show that SEAP elicits a 500 Hz frequency-following response (FFR; generated in inferior colliculus), 80 (subcortical) and 40 (primary auditory cortex) Hz auditory steady-state responses (ASSRs), mismatch negativity (MMN) and P3a (when there is an occasional change in carrier frequency; secondary auditory cortex) in addition to the obligatory N1-P2 complex (secondary auditory cortex). Analyses showed that subcortical and cortical processes are linked as (i) the latency of the FFR predicts the phase delay of the 40 Hz steady-state response, (ii) the phase delays of the 40 and 80 Hz steady-state responses are correlated, and (iii) the fidelity of the FFR predicts the latency of the N1 component. The SEAP method offers a new approach for measuring the dynamic encoding of acoustic features at multiple levels of the auditory pathway. As such, SEAP is a promising tool with which to study how
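The SEAP stimulus described above, a pure-tone carrier amplitude-modulated simultaneously at 37 and 81 Hz, can be sketched as a waveform computation. The exact depth convention (here, half depth per modulator) and the output scaling are assumptions, not the published recipe:

```python
import math

def seap_stimulus(duration=0.1, fs=16000, fc=500.0, fm=(37.0, 81.0)):
    """Sketch of a SEAP-style stimulus: a pure-tone carrier amplitude-
    modulated at two rates simultaneously (here 37 and 81 Hz)."""
    samples = []
    for i in range(int(duration * fs)):
        t = i / fs
        # two-component envelope, each modulator contributing half the depth
        envelope = (1.0
                    + 0.5 * math.sin(2 * math.pi * fm[0] * t)
                    + 0.5 * math.sin(2 * math.pi * fm[1] * t))
        # scale by 0.5 so the waveform stays within [-1, 1]
        samples.append(0.5 * envelope * math.sin(2 * math.pi * fc * t))
    return samples

sig = seap_stimulus()
```

The design intuition is that each spectral component of the response (carrier-locked FFR, 37/81 Hz steady-state responses) can be attributed to a different stage of the auditory pathway, so one stimulus probes several stages at once.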
Formby, Craig; Hawley, Monica L.; Sherlock, LaGuinn P.; Gold, Susan; Payne, JoAnne; Brooks, Rebecca; Parton, Jason M.; Juneau, Roger; Desporte, Edward J.; Siegle, Gregory R.
The primary aim of this research was to evaluate the validity, efficacy, and generalization of principles underlying a sound therapy–based treatment for promoting expansion of the auditory dynamic range (DR) for loudness. The basic sound therapy principles, originally devised for treatment of hyperacusis among patients with tinnitus, were evaluated in this study in a target sample of unsuccessfully fit and/or problematic prospective hearing aid users with diminished DRs (owing to their elevated audiometric thresholds and reduced sound tolerance). Secondary aims included: (1) delineation of the treatment contributions from the counseling and sound therapy components to the full-treatment protocol and, in turn, the isolated treatment effects from each of these individual components to intervention success; and (2) characterization of the respective dynamics for full, partial, and control treatments. Thirty-six participants with bilateral sensorineural hearing losses and reduced DRs, which affected their actual or perceived ability to use hearing aids, were enrolled in and completed a placebo-controlled (for sound therapy) randomized clinical trial. The 2 × 2 factorial design crossed counseling (present or absent) with sound therapy (active or placebo sound generators). Specifically, participants were assigned randomly to one of four treatment groups (nine participants per group), including: (1) group 1—full treatment achieved with scripted counseling plus sound therapy implemented with binaural sound generators; (2) group 2—partial treatment achieved with counseling and placebo sound generators (PSGs); (3) group 3—partial treatment achieved with binaural sound generators alone; and (4) group 4—a neutral control treatment implemented with the PSGs alone. Repeated measurements of categorical loudness judgments served as the primary outcome measure. The full-treatment categorical-loudness judgments for group 1, measured at treatment termination, were
Nan, Yun; Skoe, Erika; Nicol, Trent; Kraus, Nina
Differentiating between voices is a basic social skill humans acquire early in life. The current study aimed to understand the subcortical mechanisms of voice processing by focusing on the two most important acoustical voice features: the fundamental frequency (F0) and harmonics. We measured frequency following responses in a group of young adults to a naturally produced speech syllable under two linguistic contexts: same-syllable and multiple-syllable. Compared to the same-syllable context, the multiple-syllable context contained more speech cues to aid voice processing. We analyzed the magnitude of the response to the F0 and harmonics between same-talker and multiple-talker conditions within each linguistic context. Results establish that the human auditory brainstem is sensitive to different talkers as shown by enhanced harmonic responses under the multiple-talker compared to the same-talker condition, when the stimulus stream contained multiple syllables. This study thus provides the first electrophysiological evidence of the auditory brainstem's sensitivity to human voices. Copyright © 2015 Elsevier B.V. All rights reserved.
Besle, Julien; Fort, Alexandra; Giard, Marie-Hélène
The mismatch negativity (MMN) component of auditory event-related brain potentials can be used as a probe to study the representation of sounds in auditory sensory memory (ASM). Yet it has been shown that an auditory MMN can also be elicited by an illusory auditory deviance induced by visual changes. This suggests that some visual information may be encoded in ASM and is accessible to the auditory MMN process. It is not known, however, whether visual information affects ASM representation for any audiovisual event or whether this phenomenon is limited to specific domains in which strong audiovisual illusions occur. To highlight this issue, we have compared the topographies of MMNs elicited by non-speech audiovisual stimuli deviating from audiovisual standards on the visual, the auditory, or both dimensions. Contrary to what occurs with audiovisual illusions, each unimodal deviant elicited sensory-specific MMNs, and the MMN to audiovisual deviants included both sensory components. The visual MMN was, however, different from a genuine visual MMN obtained in a visual-only control oddball paradigm, suggesting that auditory and visual information interacts before the MMN process occurs. Furthermore, the MMN to audiovisual deviants was significantly different from the sum of the two sensory-specific MMNs, showing that the processes of visual and auditory change detection are not completely independent.
Moore, David R.; Halliday, Lorna F.; Amitay, Sygal
This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers have debated what aspect of training contributed to the improvement and even whether the claimed improvements reflect primarily a retest effect on the skill measures. Key to understanding this research have been more circumscribed studies of the transfer of learning and the use of multiple control groups to examine auditory and non-auditory contributions to the learning. Significant auditory learning can occur during relatively brief periods of training. As children mature, their ability to train improves, but the relation between the duration of training, amount of learning and benefit remains unclear. Individual differences in initial performance and amount of subsequent learning advocate tailoring training to individual learners. The mechanisms of learning remain obscure, especially in children, but it appears that the development of cognitive skills is of at least equal importance to the refinement of sensory processing. Promotion of retention and transfer of learning are major goals for further research. PMID:18986969
Rennig, Johannes; Bleyer, Anna Lena; Karnath, Hans-Otto
Simultanagnosia is a neuropsychological deficit of higher visual processes caused by temporo-parietal brain damage. It is characterized by a specific failure of recognition of a global visual Gestalt, like a visual scene or complex objects, consisting of local elements. In this study we investigated to what extent this deficit should be understood as a deficit related specifically to the visual domain or whether it should be seen as defective Gestalt processing per se. To examine if simultanagnosia occurs across sensory domains, we designed several auditory experiments sharing typical characteristics of visual tasks that are known to be particularly demanding for patients suffering from simultanagnosia. We also included control tasks for auditory working memory deficits and for auditory extinction. We tested four simultanagnosia patients who suffered from severe symptoms in the visual domain. Two of them indeed showed significant impairments in recognition of simultaneously presented sounds. However, the same two patients also suffered from severe auditory working memory deficits and from symptoms comparable to auditory extinction, both sufficiently explaining the impairments in simultaneous auditory perception. We thus conclude that deficits in auditory Gestalt perception do not appear to be characteristic for simultanagnosia and that the human brain obviously uses independent mechanisms for visual and for auditory Gestalt perception. Copyright © 2017 Elsevier Ltd. All rights reserved.
Donohue, Sarah E; Todisco, Alexandra E; Woldorff, Marty G
Neuroimaging work on multisensory conflict suggests that the relevant modality receives enhanced processing in the face of incongruency. However, the degree of stimulus processing in the irrelevant modality and the temporal cascade of the attentional modulations in either the relevant or irrelevant modalities are unknown. Here, we employed an audiovisual conflict paradigm with a sensory probe in the task-irrelevant modality (vision) to gauge the attentional allocation to that modality. ERPs were recorded as participants attended to and discriminated spoken auditory letters while ignoring simultaneous bilateral visual letter stimuli that were either fully congruent, fully incongruent, or partially incongruent (one side incongruent, one congruent) with the auditory stimulation. Half of the audiovisual letter stimuli were followed 500-700 msec later by a bilateral visual probe stimulus. As expected, ERPs to the audiovisual stimuli showed an incongruency ERP effect (fully incongruent versus fully congruent) of an enhanced, centrally distributed, negative-polarity wave starting ∼250 msec. More critically here, the sensory ERP components to the visual probes were larger when they followed fully incongruent versus fully congruent multisensory stimuli, with these enhancements greatest on fully incongruent trials with the slowest RTs. In addition, on the slowest-response partially incongruent trials, the P2 sensory component to the visual probes was larger contralateral to the preceding incongruent visual stimulus. These data suggest that, in response to conflicting multisensory stimulus input, the initial cognitive effect is a capture of attention by the incongruent irrelevant-modality input, pulling neural processing resources toward that modality, resulting in rapid enhancement, rather than rapid suppression, of that input.
Arne F Meyer
Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets the learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distributions and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge with fewer experimental recordings and with lower estimation variance than the generalized linear model and recent information-theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to
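The classification-based idea can be illustrated with a toy version on synthetic data: label stimuli by whether they elicited a spike, train a linear classifier, and read the weight vector as the receptive field estimate. The original method uses a large-margin (SVM-style) classifier; a simple perceptron stands in here only to keep the sketch dependency-free, and the data-generating filter is invented for the demonstration:

```python
import random

def cbrf_estimate(stimuli, spikes, epochs=20, lr=0.1):
    """Toy classification-based receptive field estimate: train a linear
    classifier to separate spike-eliciting (+1) from non-spike (-1) stimuli
    and read the learned weight vector as the receptive field filter."""
    w = [0.0] * len(stimuli[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(stimuli, spikes):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin <= 0:  # misclassified example: perceptron update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w

# Synthetic ground truth: spikes are generated by a known linear filter
random.seed(0)
true_w = [1.0, -1.0, 0.5]
stimuli = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
spikes = [1 if sum(f * x for f, x in zip(true_w, s)) > 0 else -1
          for s in stimuli]
w_hat = cbrf_estimate(stimuli, spikes)  # direction should align with true_w
```

Since any classifier consistent with the labeled stimuli must point roughly along the generating filter, the recovered weight vector approximates the true receptive field up to scale, which is the property the CbRF method exploits.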
Venthur, Bastian; Scholler, Simon; Williamson, John; Dähne, Sven; Treder, Matthias S; Kramarek, Maria T; Müller, Klaus-Robert; Blankertz, Benjamin
This paper introduces Pyff, the Pythonic feedback framework for feedback applications and stimulus presentation. Pyff provides a platform-independent framework that allows users to develop and run neuroscientific experiments in the programming language Python. Existing solutions have mostly been implemented in C++, which makes for a rather tedious programming task for non-computer-scientists, or in Matlab, which is not well suited for more advanced visual or auditory applications. Pyff was designed to make experimental paradigms (i.e., feedback and stimulus applications) easily programmable. It includes base classes for various types of common feedbacks and stimuli as well as useful libraries for external hardware such as eyetrackers. Pyff is also equipped with a steadily growing set of ready-to-use feedbacks and stimuli. It can be used as a standalone application, for instance providing stimulus presentation in psychophysics experiments, or within a closed loop such as in biofeedback or brain-computer interfacing experiments. Pyff communicates with other systems via a standardized communication protocol and is therefore suitable to be used with any system that may be adapted to send its data in the specified format. Having such a general, open-source framework will help foster a fruitful exchange of experimental paradigms between research groups. In particular, it will decrease the need for reprogramming standard paradigms, ease the reproducibility of published results, and naturally entail some standardization of stimulus presentation.
McKeown, Denis; Wellsted, David
Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…
Coffey, Emily B. J.; Herholz, Sibylle C.; Chepesiuk, Alexander M. P.; Baillet, Sylvain; Zatorre, Robert J.
The auditory frequency-following response (FFR) to complex periodic sounds is used to study the subcortical auditory system, and has been proposed as a biomarker for disorders that feature abnormal sound processing. Despite its value in fundamental and clinical research, the neural origins of the FFR are unclear. Using magnetoencephalography, we observe a strong, right-asymmetric contribution to the FFR from the human auditory cortex at the fundamental frequency of the stimulus, in addition to signal from cochlear nucleus, inferior colliculus and medial geniculate. This finding is highly relevant for our understanding of plasticity and pathology in the auditory system, as well as higher-level cognition such as speech and music processing. It suggests that previous interpretations of the FFR may need re-examination using methods that allow for source separation. PMID:27009409
Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao
Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461
Hironori Kuga, M.D.
We acquired BOLD responses elicited by click trains of 20-, 30-, 40- and 80-Hz frequencies from 15 patients with acute-episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute-episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general-linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute-state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.
The retrieval-extinction paradigm, which disrupts the reconsolidation of fear memories, is a non-invasive technique that can be used to prevent the return of fear in humans. In the present study, unconditioned stimulus revaluation was applied within the retrieval-extinction paradigm to investigate whether it promotes extinction of conditioned fear within the memory reconsolidation window after participants acquired conditioned fear. The experiment comprised three stages (acquisition, unconditioned stimulus revaluation, retrieval-extinction) and three measures for indexing fear (unconditioned stimulus expectancy, skin conductance response, conditioned stimulus pleasure rating). After the acquisition phase, we decreased the intensity of the unconditioned stimulus in one group (devaluation) and kept it constant in the other group (control). The results indicated that both groups exhibited similar levels of unconditioned stimulus expectancy, but the devaluation group had significantly smaller skin conductance responses and showed an increase in pleasure ratings of the conditioned stimulus (CS+). Thus, our findings indicate that unconditioned stimulus revaluation effectively promoted the extinction of conditioned fear within the memory reconsolidation window.
Pérez-Valenzuela, Catherine; Gárate-Pérez, Macarena F; Sotomayor-Zárate, Ramón; Delano, Paul H; Dagnino-Subiabre, Alexies
Chronic stress impairs auditory attention in rats, and monoamines regulate neurotransmission in the primary auditory cortex (A1), a brain area that modulates auditory attention. In this context, we hypothesized that norepinephrine (NE) levels in A1 correlate with the auditory attention performance of chronically stressed rats. The first objective of this research was to evaluate whether chronic stress affects monoamine levels in A1. Male Sprague-Dawley rats were subjected to chronic stress (restraint stress) and monoamine levels were measured by high-performance liquid chromatography (HPLC) with electrochemical detection. Chronically stressed rats had lower levels of NE in A1 than did controls, while chronic stress did not affect serotonin (5-HT) and dopamine (DA) levels. The second aim was to determine the effects of reboxetine (a selective inhibitor of NE reuptake) on auditory attention and NE levels in A1. Rats were trained to discriminate between two tones of different frequencies in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance of ≥80% correct trials in the 2-ACT were randomly assigned to control and stress experimental groups. To analyze the effects of chronic stress on the auditory task, trained rats of both groups were subjected to 50 2-ACT trials 1 day before and 1 day after the chronic stress period. A difference score (DS) was determined by subtracting the number of correct trials after the chronic stress protocol from those before. An unexpected result was that vehicle-treated control rats and vehicle-treated chronically stressed rats had similar performances in the attentional task, suggesting that repeated injections with vehicle were stressful for control animals and deteriorated their auditory attention. In this regard, both auditory attention and NE levels in A1 were higher in chronically stressed rats treated with reboxetine than in vehicle-treated rats.
Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava
Selective mutism (SM) is a psychiatric disorder of childhood characterized by a consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is compromised in selectively mute children. Participants were 16 children with selective mutism and 16 normally developing control children matched for age and gender. All children were tested for pure-tone audiometry, speech reception thresholds, speech discrimination, middle-ear acoustic reflex thresholds and decay function, transient evoked otoacoustic emission, suppression of transient evoked otoacoustic emission, and auditory brainstem response. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appear alongside normal pure-tone and speech audiometry and normal brainstem transmission as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with SM may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimal amount possible in contexts that require complex auditory processing.
Arie, Miri; Henkin, Yael; Lamy, Dominique; Tetin-Schneider, Simona; Apter, Alan; Sadeh, Avi; Bar-Haim, Yair
Because abnormal auditory efferent activity (AEA) is associated with auditory distortions during vocalization, we tested whether auditory processing is impaired during vocalization in children with selective mutism (SM). Participants were children with SM and abnormal AEA, children with SM and normal AEA, and normally speaking controls, who had to detect aurally presented target words embedded within word lists under two conditions: silence (single task), and while vocalizing (dual task). To ascertain the specificity of the auditory-vocal deficit, the effects of concurrent vocalizing were also examined during a visual task. Children with SM and abnormal AEA showed impaired auditory processing during vocalization relative to children with SM and normal AEA, and relative to control children. This impairment is specific to the auditory modality and does not reflect difficulties with dual tasks per se. These data extend previous findings suggesting that deficient auditory processing is involved in speech selectivity in SM.
Adults integrate multisensory information optimally (e.g., Ernst & Banks, 2002), whereas children are unable to integrate multisensory visual-haptic cues until 8-10 years of age (e.g., Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflictual or not conflictual) audio-visual thresholds and points of subjective equality (PSEs) were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task, both in perceived time and in precision thresholds. In contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time might suggest a cross-sensory comparison of vision in a spatial visuo-auditory task and a cross-sensory comparison of audition in a temporal visuo-auditory task.
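The Bayesian predictions referred to above follow the standard maximum-likelihood cue-combination model (Ernst & Banks, 2002): each modality's estimate is weighted by its reliability (inverse variance), and the predicted bimodal threshold is lower than either unimodal threshold. The sketch below is a generic illustration of that formula, not the authors' analysis code; the function name and example numbers are made up.

```python
# Maximum-likelihood (optimal) cue combination: weight each cue by its
# reliability (inverse variance). The predicted bimodal sigma is always
# smaller than the smaller unimodal sigma.

def mle_integration(est_v, sigma_v, est_a, sigma_a):
    """Return (bimodal estimate, bimodal sigma) for visual/auditory cues."""
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
    w_a = 1 - w_v
    est_bi = w_v * est_v + w_a * est_a
    sigma_bi = (sigma_v**2 * sigma_a**2 / (sigma_v**2 + sigma_a**2)) ** 0.5
    return est_bi, sigma_bi

# Example: audition is more precise in time (sigma 5 ms vs. 10 ms for
# vision), so the bimodal estimate is pulled toward the auditory one.
est, sigma = mle_integration(est_v=100.0, sigma_v=10.0, est_a=90.0, sigma_a=5.0)
print(round(est, 1), round(sigma, 2))  # prints: 92.0 4.47
```

Children's bimodal thresholds exceeding this prediction (as reported above for the spatial task) is the signature of non-optimal, dominance-style integration.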
Höhne, Johannes; Tangermann, Michael
By decoding brain signals into control commands, brain-computer interfaces (BCIs) aim to establish an alternative communication pathway for locked-in patients. In contrast to most visual BCI approaches, which use event-related potentials (ERPs) of the electroencephalogram, auditory BCI systems must cope with ERP responses that are less class-discriminant between attended and unattended stimuli. Furthermore, these auditory approaches have more complex interfaces, which impose a substantial workload on their users. Aiming for a maximally user-friendly spelling interface, this study introduces a novel auditory paradigm: “CharStreamer”. The speller can be used with an instruction as simple as “please attend to what you want to spell”. The stimuli of CharStreamer comprise 30 spoken sounds of letters and actions. As each of them is represented by the sound of itself and not by an artificial substitute, it can be selected in a one-step procedure. The mental mapping effort (sound stimuli to actions) is thus minimized. Usability is further accounted for by an alphabetical stimulus presentation: contrary to random presentation orders, the user can foresee the presentation time of the target letter sound. Healthy, normal-hearing users (n = 10) of the CharStreamer paradigm displayed ERP responses that systematically differed between target and non-target sounds. Class-discriminant features, however, varied individually from the typical N1-P2 complex and P3 ERP components found in control conditions with random sequences. To fully exploit the sequential presentation structure of CharStreamer, novel data analysis approaches and classification methods were introduced. The results of online spelling tests showed that a competitive spelling speed can be achieved with CharStreamer. With respect to user rating, it clearly outperforms a control setup with random presentation sequences. PMID:24886978