Salo, S; Lang, A H; Aaltonen, O; Lertola, K; Kärki, T
A cortical cognitive auditory evoked potential, mismatch negativity (MMN), reflects automatic discrimination and echoic memory functions of the auditory system. For this study, we examined whether this potential is dependent on the stimulus intensity. The MMN potentials were recorded from 10 subjects with normal hearing using a sine tone of 1000 Hz as the standard stimulus and a sine tone of 1141 Hz as the deviant stimulus, with probabilities of 90% and 10%, respectively. The intensities were 40, 50, 60, 70, and 80 dB HL for both standard and deviant stimuli in separate blocks. Stimulus intensity had a statistically significant effect on the mean amplitude, rise time parameter, and onset latency of the MMN. Automatic auditory discrimination seems to be dependent on the sound pressure level of the stimuli.
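The oddball design described above (90% standard tones at 1000 Hz, 10% deviant tones at 1141 Hz) can be sketched in a few lines. The tone duration, sample rate, ramp length, and trial count below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

FS = 44100          # sample rate (Hz), assumed
DUR = 0.1           # tone duration (s), assumed
N_TRIALS = 500      # trials per block, assumed

rng = np.random.default_rng(0)

def sine_tone(freq_hz, dur_s=DUR, fs=FS):
    """Generate a sine tone with 5-ms raised-cosine on/off ramps to avoid clicks."""
    t = np.arange(int(dur_s * fs)) / fs
    tone = np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.005 * fs)
    env = np.ones_like(tone)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return tone * env

# Draw the trial sequence: True marks a deviant (1141-Hz) trial.
is_deviant = rng.random(N_TRIALS) < 0.10
sequence = [sine_tone(1141.0 if d else 1000.0) for d in is_deviant]
```

In a real MMN recording, each tone would additionally be presented at one of the block intensities (40-80 dB HL), which only scales the waveform amplitude after calibration.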
Robinson, Christopher W.; Sloutsky, Vladimir M.
Two experiments examined the effects of multimodal presentation and stimulus familiarity on auditory and visual processing. In Experiment 1, 10-month-olds were habituated to either an auditory stimulus, a visual stimulus, or an auditory-visual multimodal stimulus. Processing time was assessed during the habituation phase, and discrimination of…
Dinsmoor, James A.
In his effort to distinguish operant from respondent conditioning, Skinner stressed the lack of an eliciting stimulus and rejected the prevailing stereotype of Pavlovian “stimulus—response” psychology. But control by antecedent stimuli, whether classified as conditional or discriminative, is ubiquitous in the natural setting. With both respondent and operant behavior, symmetrical gradients of generalization along unrelated dimensions may be obtained following differential reinforcement in the...
Sensory maps are often distorted representations of the environment, where ethologically-important ranges are magnified. The implication of a biased representation extends beyond increased acuity for having more neurons dedicated to a certain range. Because neurons are functionally interconnected, non-uniform representations influence the processing of high-order features that rely on comparison across areas of the map. Among these features are time-dependent changes of the auditory scene generated by moving objects. How sensory representation affects high order processing can be approached in the map of auditory space of the owl's midbrain, where locations in the front are over-represented. In this map, neurons are selective not only to location but also to location over time. The tuning to space over time leads to direction selectivity, which is also topographically organized. Across the population, neurons tuned to peripheral space are more selective to sounds moving into the front. The distribution of direction selectivity can be explained by spatial and temporal integration on the non-uniform map of space. Thus, the representation of space can induce biased computation of a second-order stimulus feature. This phenomenon is likely observed in other sensory maps and may be relevant for behavior.
Engdahl, Lis; Bjerre, Vicky K; Christoffersen, Gert R J
Cognitive anticipation of a stimulus has been associated with an ERP called "stimulus preceding negativity" (SPN). A new auditory delay task without stimulus-related motor activity demonstrated a prefrontal SPN, present during attentive anticipation of sounds with closed eyes, but absent during d...
Tartar, Jaime L; de Almeida, Kristen; McIntosh, Roger C; Rosselli, Monica; Nash, Allan J
Emotionally negative stimuli serve as a mechanism of biological preparedness to enhance attention. We hypothesized that emotionally negative stimuli would also serve as motivational priming to increase attention resources for subsequent stimuli. To that end, we tested 11 participants in a dual sensory modality task, wherein emotionally negative pictures were contrasted with emotionally neutral pictures and each picture was followed 600 ms later by a tone in an auditory oddball paradigm. Each trial began with a picture displayed for 200 ms; half of the trials began with an emotionally negative picture and half of the trials began with an emotionally neutral picture; 600 ms following picture presentation, the participants heard either an oddball tone or a standard tone. At the end of each trial (picture followed by tone), the participants categorized, with a button press, the picture and tone combination. As expected, and consistent with previous studies, we found an enhanced visual late positive potential (latency range=300-700 ms) to the negative picture stimuli. We further found that compared to neutral pictures, negative pictures resulted in early attention and orienting effects to subsequent tones (measured through an enhanced N1 and N2) and sustained attention effects only to the subsequent oddball tones (measured through late processing negativity, latency range=400-700 ms). Number pad responses to both the picture and tone category showed the shortest response latencies and greatest percentage of correct picture-tone categorization on the negative picture followed by oddball tone trials. Consistent with previous work on natural selective attention, our results support the idea that emotional stimuli can alter attention resource allocation. This finding has broad implications for human attention and performance as it specifically shows the conditions in which an emotionally negative stimulus can result in extended stimulus evaluation.
Hultsch; Schleuss; Todt
In many oscine birds, song learning is affected by social variables, for example the behaviour of a tutor. This implies that both auditory and visual perceptual systems should be involved in the acquisition process. To examine whether and how particular visual stimuli can affect song acquisition, we tested the impact of a tutoring design in which the presentation of auditory stimuli (i.e. species-specific master songs) was paired with a well-defined nonauditory stimulus (i.e. stroboscope light flashes: Strobe regime). The subjects were male hand-reared nightingales, Luscinia megarhynchos. For controls, males were exposed to tutoring without a light stimulus (Control regime). The males' singing recorded 9 months later showed that the Strobe regime had enhanced the acquisition of song patterns. During this treatment birds had acquired more songs than during the Control regime; the observed increase in repertoire size was from 20 to 30% in most cases. Furthermore, the copy quality of imitations acquired during the Strobe regime was better than that of imitations developed from the Control regime, and this was due to a significant increase in the number of 'perfect' song copies. We conclude that these effects were mediated by an intrinsic component (e.g. attention or arousal) which specifically responded to the Strobe regime. Our findings also show that mechanisms of song learning are well prepared to process information from cross-modal perception. Thus, more detailed enquiries into stimulus complexes that are usually referred to as social variables are promising. Copyright 1999 The Association for the Study of Animal Behaviour.
Teki, Sundeep; Chait, Maria; Kumar, Sukhbinder; von Kriegstein, Katharina; Griffiths, Timothy D
Auditory figure-ground segregation, listeners' ability to selectively hear out a sound of interest from a background of competing sounds, is a fundamental aspect of scene analysis. In contrast to the disordered acoustic environment we experience during everyday listening, most studies of auditory segregation have used relatively simple, temporally regular signals. We developed a new figure-ground stimulus that incorporates stochastic variation of the figure and background that captures the rich spectrotemporal complexity of natural acoustic scenes. Figure and background signals overlap in spectrotemporal space, but vary in the statistics of fluctuation, such that the only way to extract the figure is by integrating the patterns over time and frequency. Our behavioral results demonstrate that human listeners are remarkably sensitive to the appearance of such figures. In a functional magnetic resonance imaging experiment, aimed at investigating preattentive, stimulus-driven, auditory segregation mechanisms, naive subjects listened to these stimuli while performing an irrelevant task. Results demonstrate significant activations in the intraparietal sulcus (IPS) and the superior temporal sulcus related to bottom-up, stimulus-driven figure-ground decomposition. We did not observe any significant activation in the primary auditory cortex. Our results support a role for automatic, bottom-up mechanisms in the IPS in mediating stimulus-driven, auditory figure-ground segregation, which is consistent with accumulating evidence implicating the IPS in structuring sensory input and perceptual organization.
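A stimulus in the spirit of the stochastic figure-ground signal described above can be sketched as a sequence of chords of random pure tones, with a small "figure" subset of fixed frequencies repeated across chords so that it can only be heard out by integrating over time and frequency. All parameters (chord rate, frequency pool, figure coherence) are illustrative.

```python
import numpy as np

FS = 22050
CHORD_DUR = 0.05                     # 50-ms chords, assumed
N_CHORDS = 20
POOL = np.geomspace(200, 7000, 60)   # candidate tone frequencies (Hz)

rng = np.random.default_rng(2)
figure = rng.choice(POOL, size=4, replace=False)   # coherent figure components

def chord(freqs, dur=CHORD_DUR, fs=FS):
    """Sum of pure tones, normalized by component count."""
    t = np.arange(int(dur * fs)) / fs
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)

chords = []
for _ in range(N_CHORDS):
    # Background components are redrawn on every chord; only the figure repeats.
    background = rng.choice(POOL, size=8, replace=False)
    chords.append(chord(np.concatenate([figure, background])))
stimulus = np.concatenate(chords)
```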
It is well known that the planum temporale (PT) area in the posterior temporal lobe carries out spectro-temporal analysis of auditory stimuli, which is crucial for speech, for example. There are suggestions that the PT is also involved in auditory attention, specifically in the discrimination and selection of stimuli from the left and right ear. However, direct evidence is missing so far. To examine the role of the PT in auditory attention we asked fourteen participants to complete the Bergen Dichotic Listening Test. In this test two different consonant-vowel syllables (e.g., "ba" and "da") are presented simultaneously, one to each ear, and participants are asked to verbally report the syllable they heard best or most clearly. Thus attentional selection of a syllable is stimulus-driven. Each participant completed the test three times: after their left and right PT (located with anatomical brain scans) had been stimulated with repetitive transcranial magnetic stimulation (rTMS), which transiently interferes with normal brain functioning in the stimulated sites, and after sham stimulation, where participants were led to believe they had been stimulated but no rTMS was applied (control). After sham stimulation the typical right ear advantage emerged, that is, participants reported relatively more right than left ear syllables, reflecting a left-hemispheric dominance for language. rTMS over the right but not left PT significantly reduced the right ear advantage. This was the result of participants reporting more left and fewer right ear syllables after right PT stimulation, suggesting there was a leftward shift in stimulus selection. Taken together, our findings point to a new function of the PT in addition to auditory perception: particularly the right PT is involved in stimulus selection and (stimulus-driven) auditory attention.
Pineda, Gustavo; Atehortúa, Angélica; Iregui, Marcela; García-Arteaga, Juan D.; Romero, Eduardo
External auditory cues stimulate motor-related areas of the brain, activating motor pathways parallel to the basal ganglia circuits and providing a temporal pattern for gait. In effect, patients may re-learn motor skills mediated by compensatory neuroplasticity mechanisms. However, long-term functional gains depend on the nature of the pathology, follow-up is usually limited, and reinforcement by healthcare professionals is crucial. Aiming to cope with these challenges, several research efforts and device implementations provide auditory or visual stimulation to improve the Parkinsonian gait pattern, inside and outside clinical scenarios. The current work presents a semiautomated strategy for spatio-temporal feature extraction to study the relations between auditory temporal stimulation and the spatio-temporal gait response. A protocol for auditory stimulation was built to evaluate how well the strategy integrates into clinical practice. The method was evaluated in a cross-sectional measurement with an exploratory group of people with Parkinson's disease (n = 12, in stages 1, 2, and 3) and control subjects (n = 6). The results showed a strong linear relation between auditory stimulation and cadence response in control subjects (R = 0.98 +/- 0.008) and PD subjects in stage 2 (R = 0.95 +/- 0.03) and stage 3 (R = 0.89 +/- 0.05). Normalized step length showed a variable response between low and high gait velocity (R ranging from 0.2 to 0.97). The correlation between normalized mean velocity and stimulus was strong in all groups, PD stage 2 (R > 0.96), PD stage 3 (R > 0.84), and controls (R > 0.91), for all experimental conditions. Among participants, the largest variation from baseline was found in PD subjects in stage 3 (53.61 +/- 39.2 steps/min, 0.12 +/- 0.06 in step length, and 0.33 +/- 0.16 in mean velocity); in this group these values were higher than their own baseline. These variations are related to the direct effect of metronome frequency on cadence and velocity. The variation of step length involves different regulation strategies and
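The stimulation-response relation reported above boils down to a Pearson correlation between the metronome (auditory cue) rate and the measured gait parameter. A minimal sketch, with made-up demonstration values rather than the study's data:

```python
import numpy as np

# Hypothetical cue rates and measured cadences (steps/min) for one subject.
cue_bpm = np.array([80, 90, 100, 110, 120], dtype=float)
cadence = np.array([82, 91, 98, 112, 119], dtype=float)

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

r = pearson_r(cue_bpm, cadence)   # close to 1 for a strong linear relation
```

A value of R near 1, as in the control and stage-2 groups above, indicates that cadence tracks the cue rate almost linearly.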
Campolattaro, Matthew M.; Halverson, Hunter E.; Freeman, John H.
The neural pathways that convey conditioned stimulus (CS) information to the cerebellum during eyeblink conditioning have not been fully delineated. It is well established that pontine mossy fiber inputs to the cerebellum convey CS-related stimulation for different sensory modalities (e.g., auditory, visual, tactile). Less is known about the…
Schwent, V. L.; Hillyard, S. A.; Galambos, R.
Enhancement of the auditory vertex potentials with selective attention to dichotically presented tone pips was found to be critically sensitive to the range of inter-stimulus intervals in use. Only at the shortest intervals was a clear-cut enhancement of the latency component to stimuli observed for the attended ear.
Fobel, Oliver; Dau, Torsten
-chirp, was based on estimates of human basilar membrane (BM) group delays derived from stimulus-frequency otoacoustic emissions (SFOAE) at a sound pressure level of 40 dB [Shera and Guinan, in Recent Developments in Auditory Mechanics (2000)]. The other chirp, referred to as the A-chirp, was derived from latency...
Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto
A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residuals of attention on to-be-ignored oddballs due to the concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention.
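A common way to quantify a PDR of the kind measured above is to epoch the pupil trace around each oddball onset and subtract a pre-stimulus baseline. The sketch below uses an assumed eye-tracker sample rate and window lengths, and synthetic data rather than the study's recordings.

```python
import numpy as np

FS = 60                      # eye-tracker sample rate (Hz), assumed
PRE, POST = 0.5, 4.0         # baseline and response windows (s), assumed

def pdr_epochs(pupil, onsets_s, fs=FS, pre=PRE, post=POST):
    """Return baseline-corrected epochs (n_events x n_samples)."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for t in onsets_s:
        i = int(t * fs)
        if i - n_pre < 0 or i + n_post > len(pupil):
            continue  # skip events too close to the recording edges
        seg = pupil[i - n_pre : i + n_post].astype(float)
        epochs.append(seg - seg[:n_pre].mean())  # subtract pre-stimulus mean
    return np.array(epochs)

# Synthetic demo: flat pupil trace with a dilation after each "oddball".
trace = np.zeros(60 * FS)                  # 60 s of recording
onsets = [10.0, 30.0, 50.0]
for t in onsets:
    i = int(t * FS)
    trace[i : i + 2 * FS] += 0.3           # 0.3-mm dilation lasting 2 s
epochs = pdr_epochs(trace, onsets)
mean_pdr = epochs.mean(axis=0)             # average dilation time course
```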
Deng, Zishan; Gao, Yuan; Li, Ting
As one of the main causes of traffic accidents, driving fatigue deserves researchers' attention, and its detection and monitoring during long-term driving require new techniques. Since functional near-infrared spectroscopy (fNIRS) can detect cerebral hemodynamic responses, it is a promising candidate for fatigue-level detection. Here, we performed three different kinds of experiments on a driver and recorded his cerebral hemodynamic responses during long hours of driving, utilizing our fNIRS-based device. Each experiment lasted for 7 hours, and one of three specific experimental tests, probing the driver's response to sounds, traffic lights, and direction signs respectively, was done every hour. The results showed that, in the first few hours, visual stimuli induced fatigue more readily than auditory stimuli, and visual stimuli from traffic-light scenes induced fatigue more readily than visual stimuli from direction-sign scenes. We also found that fatigue-related hemodynamics caused by auditory stimuli increased fastest, followed by traffic-light scenes, with direction-sign scenes slowest. Our study successfully compared auditory, visual color, and visual character stimuli in their propensity to cause driving fatigue, which is meaningful for driving safety management.
Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter
Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200 ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes in determining what constitutes an auditory event. The electroencephalogram (EEG) was recorded from 11 scalp electrodes to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants occurred either only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change detection and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with the simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, as indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected by the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by MMN. Initial processes of auditory event-formation are fully governed by the context within which the sounds occur: perception of the deviants as two separate sound events (the top-down effect) did not change the initial neural representation of the same deviants as one event (indexed by the MMN).
Javitt, D C; Steinschneider, M; Schroeder, C E; Vaughan, H G; Arezzo, J C
Mismatch negativity (MMN) is a cognitive, auditory event-related potential (AEP) that reflects preattentive detection of stimulus deviance and indexes the operation of the auditory sensory ('echoic') memory system. MMN is elicited most commonly in an auditory oddball paradigm in which a sequence of repetitive standard stimuli is interrupted infrequently and unexpectedly by a physically deviant 'oddball' stimulus. Electro- and magnetoencephalographic dipole mapping studies have localized the generators of MMN to supratemporal auditory cortex in the vicinity of Heschl's gyrus, but have not determined the degree to which MMN reflects activation within primary auditory cortex (AI) itself. The present study, using moveable multichannel electrodes inserted acutely into the superior temporal plane, demonstrates a significant contribution of AI to scalp-recorded MMN in the monkey, as reflected by greater response of AI to loud or soft clicks presented as deviants than to the same stimuli presented as repetitive standards. The MMN-like activity was localized primarily to supragranular laminae within AI. Thus, standard and deviant stimuli elicited similar degrees of initial, thalamocortical excitation. In contrast, responses within supragranular cortex were significantly larger to deviant stimuli than to standards. No MMN-like activity was detected in a limited number of passes that penetrated anterior and medial to AI. AI plays a well-established role in the decoding of the acoustic properties of individual stimuli. The present study demonstrates that primary auditory cortex also plays an important role in processing the relationships between stimuli, and thus participates in cognitive, as well as purely sensory, processing of auditory information.
Bigelow, James; Poremba, Amy
We conducted two experiments to examine the influences of stimulus set size (the number of stimuli that are used throughout the session) and intertrial interval (ITI, the elapsed time between trials) in auditory short-term memory in monkeys. We used an auditory delayed matching-to-sample task wherein the animals had to indicate whether two sounds separated by a 5-s retention interval were the same (match trials) or different (nonmatch trials). In Experiment 1, we randomly assigned stimulus set sizes of 2, 4, 8, 16, 32, 64, or 192 (trial-unique) for each session of 128 trials. Consistent with previous visual studies, overall accuracy was consistently lower when smaller stimulus set sizes were used. Further analyses revealed that these effects were primarily caused by an increase in incorrect "same" responses on nonmatch trials. In Experiment 2, we held the stimulus set size constant at four for each session and alternately set the ITI at 5, 10, or 20 s. Overall accuracy improved when the ITI was increased from 5 to 10 s, but it was the same across the 10- and 20-s conditions. As in Experiment 1, the overall decrease in accuracy during the 5-s condition was caused by a greater number of false "match" responses on nonmatch trials. Taken together, Experiments 1 and 2 showed that auditory short-term memory in monkeys is highly susceptible to proactive interference caused by stimulus repetition. Additional analyses of the data from Experiment 1 suggested that monkeys may make same-different judgments on the basis of a familiarity criterion that is adjusted by error-related feedback.
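The delayed matching-to-sample design in Experiment 1 can be sketched as a trial generator: draw match and nonmatch trials from a stimulus set of a given size. Smaller sets force more repetition across trials, which is what drives the proactive interference reported above. The 50/50 match ratio and integer stand-ins for sound clips are illustrative assumptions.

```python
import random

def make_trials(set_size, n_trials=128, seed=0):
    """Generate (sample, test, label) triples for one session."""
    rng = random.Random(seed)
    stimuli = list(range(set_size))          # stand-ins for sound clips
    trials = []
    for _ in range(n_trials):
        sample = rng.choice(stimuli)
        if rng.random() < 0.5:               # match trial: same sound twice
            trials.append((sample, sample, "match"))
        else:                                # nonmatch: a different sound
            other = rng.choice([s for s in stimuli if s != sample])
            trials.append((sample, other, "nonmatch"))
    return trials

trials = make_trials(set_size=4)   # small set: heavy stimulus repetition
```

With `set_size=192` every stimulus would be effectively trial-unique over a 128-trial session, matching the largest condition described above.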
Azarmnejad, Elham; Sarhangi, Forogh; Javadi, Mahrooz; Rejeh, Nahid; Amirsalari, Susan; Tadrisi, Seyed Davood
Hospitalized neonates usually undergo various painful procedures. This study sought to test the effects of a familiar auditory stimulus on the physiologic responses to the pain of venipuncture among neonates in an intensive care unit. The study design is quasi-experimental. This randomized clinical trial was conducted on 60 full-term neonates admitted to the neonatal intensive care unit between March 20 and June 20, 2014. The neonates were conveniently selected and randomly allocated to the control and experimental groups. A recording of the mother's voice was played for the neonates in the experimental group from 10 minutes before to 10 minutes after venipuncture, while the neonates in the control group received no sound therapy intervention. The participants' physiologic parameters were assessed 10 minutes before, during, and after venipuncture. At baseline, the study groups did not differ significantly regarding the intended physiologic parameters (P > .05). During venipuncture, maternal voice was effective in reducing the neonates' heart rate, respiratory rate, and diastolic blood pressure. These findings support playing familiar sounds to effectively manage neonates' physiologic responses to the procedural pain of venipuncture. © 2017 John Wiley & Sons Australia, Ltd.
Park, Seoung Hoon; Kim, Seonjin; Kwon, MinHyuk; Christou, Evangelos A
Visual and auditory information are critical for perception and enhance the ability of an individual to respond accurately to a stimulus. However, it is unknown whether visual and auditory information contribute differentially to identifying the direction and rotational motion of a stimulus. The purpose of this study was to determine the ability of an individual to accurately predict the direction and rotational motion of a stimulus based on visual and auditory information. We recruited 9 expert table-tennis players and used the table-tennis service as our experimental model. Participants watched recorded services with different levels of visual and auditory information. The goal was to anticipate the direction of the service (left or right) and the rotational motion of the service (topspin, sidespin, or cut). We recorded their responses and quantified two outcomes: (i) directional accuracy and (ii) rotational-motion accuracy. Response accuracy was defined as the number of accurate predictions relative to the total number of trials. The ability of the participants to predict the direction of the service accurately increased with additional visual information but not with auditory information. In contrast, the ability of the participants to predict the rotational motion of the service accurately increased with the addition of auditory information to visual information, but not with additional visual information alone. In conclusion, these findings demonstrate that visual information enhances the ability of an individual to accurately predict the direction of a stimulus, whereas additional auditory information enhances the ability of an individual to accurately predict its rotational motion.
van Laarhoven, Thijs; Stekelenburg, Jeroen J; Vroomen, Jean
A rare omission of a sound that is predictable by anticipatory visual information induces an early negative omission response (oN1) in the EEG during the period of silence where the sound was expected. It was previously suggested that the oN1 was primarily driven by the identity of the anticipated sound. Here, we examined the role of temporal prediction in conjunction with identity prediction of the anticipated sound in the evocation of the auditory oN1. With incongruent audiovisual stimuli (a video of a handclap that is consistently combined with the sound of a car horn) we demonstrate in Experiment 1 that a natural match in identity between the visual and auditory stimulus is not required for inducing the oN1, and that the perceptual system can adapt predictions to unnatural stimulus events. In Experiment 2 we varied either the auditory onset (relative to the visual onset) or the identity of the sound across trials in order to hamper temporal and identity predictions. Relative to the natural stimulus with correct auditory timing and matching audiovisual identity, the oN1 was abolished when either the timing or the identity of the sound could not be predicted reliably from the video. Our study demonstrates the flexibility of the perceptual system in predictive processing (Experiment 1) and also shows that precise predictions of timing and content are both essential elements for inducing an oN1 (Experiment 2). Copyright © 2017 Elsevier B.V. All rights reserved.
BACKGROUND: Patients with cervical dystonia (CD) present with impaired performance of voluntary neck movements, which are usually slow and limited. We hypothesized that such an abnormality could involve defective preparation for task execution. Therefore, we examined motor preparation in CD patients using the StartReact method. In this test, a startling auditory stimulus (SAS) is delivered unexpectedly at the time of the imperative signal (IS) in a reaction time task, causing faster execution of the prepared motor programme. We expected that CD patients would show an abnormal StartReact phenomenon. METHODS: Fifteen CD patients and 15 age-matched control subjects (CS) were asked to perform a rotational movement (RM) to either side as quickly as possible immediately after IS perception (a low-intensity electrical stimulus to the second finger). In randomly interspersed test trials (25%), a 130-dB SAS was delivered simultaneously with the IS. We recorded RMs in the horizontal plane with a high-speed video camera (2.38 ms per frame) in synchronization with the IS. The RM kinematic parameters (latency, velocity, duration, and amplitude) were analyzed using video-editing software and a screen protractor. Patients were asked to rate the difficulty of their RMs on a numerical rating scale. RESULTS: In control trials, CD patients executed slower RMs (repeated-measures ANOVA, p < 10^-5) and reached a smaller final head-position angle relative to the midline (p < 0.05) than CS. In test trials, the SAS improved all RMs in both groups (p < 10^-14). In addition, patients were more likely to reach beyond their baseline RM than CS (chi-squared, p < 0.001) and rated their performance better than in control trials (t-test, p < 0.01). CONCLUSION: We found improvement of kinematic parameters and subjective perception of motor performance in CD patients with StartReact testing. Our results suggest that CD patients reach an adequate level of motor preparation before task execution.
Miller, Lee M; Recanzone, Gregg H
The auditory cortex is critical for perceiving a sound's location. However, there is no topographic representation of acoustic space, and individual auditory cortical neurons are often broadly tuned to stimulus location. It thus remains unclear how acoustic space is represented in the mammalian cerebral cortex and how it could contribute to sound localization. This report tests whether the firing rates of populations of neurons in different auditory cortical fields in the macaque monkey carry sufficient information to account for horizontal sound localization ability. We applied an optimal neural decoding technique, based on maximum likelihood estimation, to populations of neurons from 6 different cortical fields encompassing core and belt areas. We found that the firing rate of neurons in the caudolateral area contain enough information to account for sound localization ability, but neurons in other tested core and belt cortical areas do not. These results provide a detailed and plausible population model of how acoustic space could be represented in the primate cerebral cortex and support a dual stream processing model of auditory cortical processing.
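The maximum-likelihood population decoding idea described above can be sketched as follows: given a tuning curve per neuron, pick the azimuth that maximizes the Poisson log-likelihood of an observed spike-count vector. The Gaussian tuning shapes, neuron count, and parameters below are generic illustrations, not the authors' fitted curves.

```python
import numpy as np

rng = np.random.default_rng(1)
azimuths = np.linspace(-90, 90, 37)            # candidate sound locations (deg)

n_neurons = 50
preferred = rng.uniform(-90, 90, n_neurons)    # broad, random spatial tuning
width = 60.0                                   # tuning width (deg), assumed

def rates(theta):
    """Mean spike counts of the population for a source at azimuth theta."""
    return 1.0 + 10.0 * np.exp(-0.5 * ((theta - preferred) / width) ** 2)

def ml_decode(counts):
    """Poisson log-likelihood over candidate azimuths; return the argmax."""
    ll = [np.sum(counts * np.log(rates(th)) - rates(th)) for th in azimuths]
    return float(azimuths[int(np.argmax(ll))])

true_theta = 30.0
counts = rng.poisson(rates(true_theta))        # one trial of spike counts
estimate = ml_decode(counts)                   # decoded azimuth, near 30 deg
```

The key point of the study is that only some cortical fields (notably the caudolateral area) yield population rate vectors informative enough for such a decoder to match behavioral localization accuracy.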
Hearing losses during infancy and childhood have many lasting negative effects on a child's life and productivity. The earlier a hearing loss is detected, the earlier medical intervention can begin and the greater the benefit of remediation will be. In this research a PC-based audiometer was designed; currently, the audiometer prototype is in its final development steps. It is based on the auditory brainstem response (ABR) method. Chirp stimuli, instead of traditional click stimuli, will be used to evoke the ABR signal. The stimulus is designed to synchronize the hair cell responses as it spreads out over the cochlea. In addition to utilizing available hardware (a PC and a PCI board), the efforts were confined to designing and implementing a hardware prototype and developing a software package that enables the system to behave as an ABR audiometer. With this method and the chirp stimulus, it is expected to be possible to detect (sensorineural) hearing impairment in the first few days of life and to conduct hearing tests at low stimulus frequencies. Currently, the intended chirp stimulus has been successfully generated, and the implemented module is able to amplify a signal (on the order of the ABR signal) to a recordable level. Moreover, an NI-DAQ data acquisition board has been chosen to implement the PC-prototype interface.
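A rising-frequency chirp of the kind the abstract describes (low frequencies first, so that responses from the cochlear apex are not delayed relative to the base) can be sketched roughly as follows; the sample rate, duration, sweep range, linear sweep law, and ramp length are illustrative assumptions, not the authors' design.

```python
import numpy as np

fs = 48000                     # sample rate (Hz), assumed
dur = 0.010                    # 10 ms stimulus, assumed
t = np.arange(int(fs * dur)) / fs
f0, f1 = 100.0, 10000.0        # sweep from 100 Hz up to 10 kHz

# linear frequency sweep: instantaneous phase is the integral of f(t)
phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / dur * t ** 2)
chirp = np.sin(phase)

# taper the edges with raised-cosine ramps to avoid spectral splatter
ramp = int(0.001 * fs)         # 1 ms onset/offset ramps
win = np.ones_like(chirp)
win[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
win[-ramp:] = win[:ramp][::-1]
chirp *= win
```

Clinical ABR chirps typically use a sweep law derived from a cochlear travel-time model rather than the linear sweep shown here; the sketch only illustrates the signal-generation step.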
Fogerty, Daniel; Humes, Larry E; Busey, Thomas A
Age-related temporal-processing declines of rapidly presented sequences may involve contributions of sensory memory. This study investigated recall for rapidly presented auditory (vowel) and visual (letter) sequences presented at six different stimulus onset asynchronies (SOA) that spanned threshold SOAs for sequence identification. Younger, middle-aged, and older adults participated in all tasks. Results were investigated at both equivalent performance levels (i.e., SOA threshold) and at identical physical stimulus values (i.e., SOAs). For four-item sequences, results demonstrated best performance for the first and last items in the auditory sequences, but only the first item for visual sequences. For two-item sequences, adults identified the second vowel or letter significantly better than the first. Overall, when temporal-order performance was equated for each individual by testing at SOA thresholds, recall accuracy for each position across the age groups was highly similar. These results suggest that modality-specific processing declines of older adults primarily determine temporal-order performance for rapid sequences. However, there is some evidence for a second amodal processing decline in older adults related to early sensory memory for final items in a sequence. This selective deficit was observed particularly for longer sequence lengths and was not accounted for by temporal masking.
Change deafness, the auditory analog to change blindness, occurs when salient and behaviorally relevant changes to sound sources are missed. Missing significant changes in the environment can have serious consequences; however, this effect has remained little more than a lab phenomenon and a party trick. Only recently have researchers begun to explore the nature of these profound errors in change perception. Despite a wealth of examples of the change blindness phenomenon, work on change deafness remains fairly limited. The purpose of the current paper is to review the state of the literature on change deafness and propose an explanation of change deafness that relies on factors related to stimulus information rather than attentional or memory limits. To achieve this, work across several auditory research domains, including environmental sound classification, informational masking, and change deafness, is synthesized to present a unified perspective on the perception of change errors in complex, dynamic sound environments. We hope to extend previous research by describing how it may be possible to predict specific patterns of change perception errors based on varying degrees of similarity in stimulus features and uncertainty about which stimuli and features are important for a given perceptual decision.
Howell, Tiffani J; Conduit, Russell; Toukhsati, Samia; Bennett, Pauleen
Dog cognition research tends to rely on behavioural response, which can be confounded by obedience or motivation, as the primary means of indexing dog cognitive abilities. A physiological method of measuring dog cognitive processing would be instructive and could complement behavioural response. Electroencephalography (EEG) has been used in humans to study stimulus processing, which results in waveforms called event-related potentials (ERPs). One ERP component, mismatch negativity (MMN), is a negative deflection approximately 160-200 ms after stimulus onset, which may be related to change detection from echoic sensory memory. We adapted a minimally invasive technique to record MMN in dogs. Dogs were exposed to an auditory oddball paradigm in which deviant tones (10% probability) were pseudo-randomly interspersed throughout an 8 min sequence of standard tones (90% probability). A significant difference in MMN ERP amplitude was observed after the deviant tone in comparison to the standard tone, t(5) = -2.98, p = 0.03. This difference, attributed to discrimination of an unexpected stimulus in a series of expected stimuli, was not observed when both tones occurred 50% of the time, t(1) = -0.82, p > 0.05. Dogs showed no evidence of pain or distress at any point. We believe this is the first illustration of MMN in a group of dogs and anticipate that this technique may provide valuable insights into cognitive processing in tasks such as object discrimination. Copyright © 2011 Elsevier B.V. All rights reserved.
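The 90%/10% oddball sequence described above can be sketched as follows; the no-adjacent-deviants constraint is a common convention in such paradigms and is an assumption here, not a detail reported by the authors.

```python
import random

def oddball_sequence(n_trials=500, p_deviant=0.10, seed=1):
    """Generate a pseudo-random standard/deviant tone sequence."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "deviant":
            seq.append("standard")          # forbid back-to-back deviants
        else:
            seq.append("deviant" if rng.random() < p_deviant else "standard")
    return seq

seq = oddball_sequence()
print(seq.count("deviant") / len(seq))      # close to the nominal 10%
```

The adjacency constraint slightly lowers the realized deviant rate below the nominal probability, which is why such sequences are described as pseudo-random rather than fully random.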
Linke, Annika C; Vicente-Grabovetsky, Alejandro; Cusack, Rhodri
Philosophers and scientists have puzzled for millennia over how perceptual information is stored in short-term memory. Some have suggested that early sensory representations are involved, but their precise role has remained unclear. The current study asks whether auditory cortex shows sustained frequency-specific activation while sounds are maintained in short-term memory, using high-resolution functional MRI (fMRI). Investigating short-term memory representations within regions of human auditory cortex with fMRI has been difficult because of their small size and high anatomical variability between subjects. However, we overcame these constraints by using multivoxel pattern analysis. It clearly revealed frequency-specific activity during the encoding phase of a change detection task, and the degree of this frequency-specific activation was positively related to performance in the task. Although the sounds had to be maintained in memory, activity in auditory cortex during the maintenance period was significantly suppressed. Strikingly, patterns of activity in this maintenance period correlated negatively with the patterns evoked by the same frequencies during encoding. Furthermore, individuals who used a rehearsal strategy to remember the sounds showed reduced frequency-specific suppression during the maintenance period. Although negative activations are often disregarded in fMRI research, our findings imply that decreases in blood oxygenation level-dependent response carry important stimulus-specific information and can be related to cognitive processes. We hypothesize that, during auditory change detection, frequency-specific suppression protects short-term memory representations from being overwritten by inhibiting the encoding of interfering sounds.
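The core pattern-analysis logic (correlating a frequency's multivoxel pattern at encoding with its pattern during maintenance, where a negative correlation indicates frequency-specific suppression) can be illustrated on simulated data; the voxel count, suppression factor, and noise level below are arbitrary assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n_voxels = 200

# simulated multivoxel pattern evoked by one frequency during encoding
encoding = rng.standard_normal(n_voxels)

# simulated maintenance-period pattern: partially inverted (suppressed)
# version of the encoding pattern plus independent noise
maintenance = -0.5 * encoding + rng.standard_normal(n_voxels)

# Pearson correlation between the two patterns; negative here by construction
r = np.corrcoef(encoding, maintenance)[0, 1]
print(r)
```

In the real analysis the two patterns come from fMRI voxels in auditory cortex rather than simulated vectors, but the sign of this correlation is the quantity the abstract's "negatively correlated" finding refers to.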
Gin, T E; Puchot, M L; Cook, A K
Baseline cortisol concentrations are routinely used to screen dogs for hypoadrenocorticism (HOC); this diagnosis must then be confirmed with an ACTH stimulation test. A baseline cortisol concentration less than 55 nmol/L (2 μg/dL) is highly sensitive for HOC but lacks specificity, with a false positive rate >20%. Many dogs with nonadrenal disease are therefore subjected to unnecessary additional testing. It was hypothesized that exposure to an unpleasant auditory stimulus before sample collection would improve the specificity of baseline cortisol measurements in dogs with nonadrenal disease by triggering cortisol production. Twenty-eight healthy client-owned dogs were included in the study, with a median age of 4 yr (range 2-9 yr) and a median weight of 20 kg (range 10-27 kg). Dogs were ineligible for inclusion if they had received short- or long-acting glucocorticoids within the previous 30 and 90 d, respectively. Dogs were randomly assigned to group 1 (control; no noise; n = 7), group 2 (brief noise; n = 10), or group 3 (long noise; n = 11). Each dog and owner were directed to a secluded area for approximately 15 min. Group 1 sat in relative quiet, exposed only to the background sounds of a veterinary hospital. Group 2 was exposed to the sound of a wet-dry vacuum in an adjacent hallway during the first 3 min of this period. Group 3 was exposed to random bursts of wet-dry vacuum noise throughout this period. At the end of the test interval, each dog was escorted to an adjacent examination room for blood collection. Samples were processed within 15 min; serum was frozen at -80°C before measurement of cortisol concentrations. Median serum cortisol concentrations and the proportion of dogs with results […]; the hypothesis was therefore rejected for dogs with apparently normal adrenal function. Copyright © 2018 Elsevier Inc. All rights reserved.
Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong
Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, little is known about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration changed from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the pattern of alteration with expanding SOA was similar to that for younger adults; however, older adults showed significantly delayed onset of the time window of integration and delayed peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely as SOA expanded, especially in the peak latency for V-preceded-A conditions. Our study suggests that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that responses in older adults were slowed and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.
There is increasing interest in multisensory influences upon sensory-specific judgements, such as when auditory stimuli affect visual perception. Here we studied whether the duration of an auditory event can objectively affect the perceived duration of a co-occurring visual event. On each trial, participants were presented with a pair of successive flashes and had to judge whether the first or second was longer. Two beeps were presented with the flashes. The order of short and long stimuli could be the same across audition and vision (audiovisual congruent) or reversed, so that the longer flash was accompanied by the shorter beep and vice versa (audiovisual incongruent); or the two beeps could have the same duration as each other. Beeps and flashes could onset synchronously or asynchronously. In a further control experiment, the beep durations were much longer (tripled) than the flashes. Results showed that visual duration-discrimination sensitivity (d') was significantly higher for congruent (and significantly lower for incongruent) audiovisual synchronous combinations, relative to the visual-only presentation. This effect was abolished when auditory and visual stimuli were presented asynchronously, or when sound durations tripled those of flashes. We conclude that the temporal properties of co-occurring auditory stimuli influence the perceived duration of visual stimuli and that this can reflect genuine changes in visual sensitivity rather than mere response bias.
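The sensitivity measure (d') reported above is the difference of z-transformed hit and false-alarm rates; a minimal sketch, with made-up trial counts:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf            # inverse standard-normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# illustrative counts: 80% hits, 20% false alarms
print(round(d_prime(40, 10, 10, 40), 3))
```

Because d' separates sensitivity from response criterion, an effect on d' (as found here) indicates a genuine perceptual change rather than a shift in response bias; in practice, rates of exactly 0 or 1 are usually corrected (e.g., by adding 0.5 to each cell) before the z-transform.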
Mittag, Maria; Takegata, Rika; Winkler, István
Representations encoding the probabilities of auditory events do not directly support predictive processing. In contrast, information about the probability with which a given sound follows another (transitional probability) allows predictions of upcoming sounds. We tested whether behavioral and cortical auditory deviance detection (the latter indexed by the mismatch negativity event-related potential) relies on probabilities of sound patterns or on transitional probabilities. We presented healthy adult volunteers with three types of rare tone-triplets among frequent standard triplets of high-low-high (H-L-H) or L-H-L pitch structure: proximity deviant (H-H-H/L-L-L), reversal deviant (L-H-L/H-L-H), and first-tone deviant (L-L-H/H-H-L). If deviance detection was based on pattern probability, reversal and first-tone deviants should be detected with similar latency because both differ from the standard at the first pattern position. If deviance detection was based on transitional probabilities, then reversal deviants should be the most difficult to detect because, unlike the other two deviants, they contain no low-probability pitch transitions. The data clearly showed that both behavioral and cortical auditory deviance detection uses transitional probabilities. Thus, the memory traces underlying cortical deviance detection may provide a link between stimulus probability-based change/novelty detectors operating at lower levels of the auditory system and higher auditory cognitive functions that involve predictive processing. Our research presents the first definite evidence for the auditory system prioritizing transitional probabilities over probabilities of individual sensory events. Forming representations for transitional probabilities paves the way for predictions of upcoming sounds. Several recent theories suggest that predictive processing provides the general basis of human perception, including important auditory functions such as auditory scene analysis.
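The distinction between single-event (pattern) probabilities and transitional probabilities can be made concrete with a toy high/low tone stream; the sequence below is invented for illustration, not taken from the study.

```python
from collections import Counter, defaultdict

seq = list("HLHHLHHLHHLHLHH")   # toy stream of high (H) and low (L) tones

# single-event probabilities: how often each tone occurs overall
unigram = Counter(seq)
p_tone = {t: n / len(seq) for t, n in unigram.items()}

# transitional probabilities: P(next tone | current tone)
trans = defaultdict(Counter)
for a, b in zip(seq, seq[1:]):
    trans[a][b] += 1
p_trans = {a: {b: n / sum(c.values()) for b, n in c.items()}
           for a, c in trans.items()}

print(p_tone)    # e.g. H is twice as frequent as L in this toy stream
print(p_trans)   # but every L is followed by H: P(H|L) = 1.0
```

The point of the contrast: a listener tracking only `p_tone` can judge how rare a tone is, but only `p_trans` supports predicting *which* tone comes next, which is what the deviance-detection results above favor.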
Meier, Matt E.; Kane, Michael J.
Three experiments examined the relation between working memory capacity (WMC) and two different forms of cognitive conflict: stimulus-stimulus (S-S) and stimulus-response (S-R) interference. Our goal was to test whether WMC's relation to conflict-task performance is mediated by stimulus-identification processes (captured by S-S conflict), response-selection processes (captured by S-R conflict), or both. In Experiment 1, subjects completed a single task presenting both S-S and S-R conflict trials, plus trials that combined the two conflict types. We limited ostensible goal-maintenance contributions to performance by requiring the same goal for all trial types and by presenting frequent conflict trials that reinforced the goal. WMC predicted resolution of S-S conflict as expected: higher-WMC subjects showed reduced response time interference. Although WMC also predicted S-R interference, here higher-WMC subjects showed increased error interference. Experiment 2A replicated these results in a version of the conflict task without combined S-S/S-R trials. Experiment 2B increased the proportion of congruent (non-conflict) trials to promote reliance on goal-maintenance processes. Here, higher-WMC subjects resolved both S-S and S-R conflict more successfully than did lower-WMC subjects. The results were consistent with Kane and Engle's (2003) two-factor theory of cognitive control, according to which WMC predicts executive-task performance through goal-maintenance and conflict-resolution processes. However, the present results add specificity to the account by suggesting that higher-WMC subjects better resolve cognitive conflict because they more efficiently select relevant stimulus features against irrelevant, distracting ones. PMID:26120774
Sun Da; Xu Wei; Zhan Hongwei; Liu Hongbiao
Purpose: To detect cerebral functional localization in normal subjects with a Chinese classical national music auditory stimulus. Methods: Ten normal young students from the medical college of Zhejiang University, 22-24 years old, 5 male and 5 female, participated. First, they underwent 99mTc-ECD brain imaging during a resting state using a dual-detector gamma camera with fan-beam collimators. After 2-4 days, they were asked to listen to Chinese classical national music played on the Erhu and Guzheng for 20 minutes. They were also asked to pay special attention to the name of the music, what musical instruments played it, and what imagery the music evoked. 99mTc-ECD was administered in the first 3 minutes while they listened to the music. Brain imaging was performed 30-60 minutes after the tracer was administered. Results: Compared with the resting state, while listening to the Chinese classical national music and paying special attention to its imagery, the right midtemporal region was activated in 6 cases, the left midtemporal region in 2 cases, the right superior temporal region in 2 cases, the left superior temporal region in 6 cases, and the right inferior temporal region in 2 cases. Among them, both temporal lobes were activated in 6 cases, the right temporal lobe in 3 cases, and the left in 1 case. It is very interesting that the inferior frontal and/or medial frontal lobes were activated in all 10 subjects, and the activity was markedly higher in frontal than in temporal regions. Among them, both frontal lobes were activated in 9 subjects, and only the right frontal lobe in 1 case. The right superior frontal lobe was activated in 2 cases. The occipital lobes were activated in 4 subjects: both occipital lobes in 3 cases and the right occipital lobe in 1 case. These 4 subjects stated after listening that they had imagined the natural landscape and imagery evoked by the music as it played. Other activated regions included the parietal lobes (right and left in 1 case each), the pre-cingulate gyrus (in 2 cases), and the left […]
Victorino, Kristen R; Schwartz, Richard G
Children with specific language impairment (SLI) appear to demonstrate deficits in attention and its control. Selective attention involves the cognitive control of attention directed toward a relevant stimulus and simultaneous inhibition of attention toward irrelevant stimuli. The current study examined attention control during a cross-modal word recognition task. Twenty participants with SLI (ages 9-12 years) and 20 age-matched peers with typical language development (TLD) listened to words through headphones and were instructed to attend to the words in 1 ear while ignoring the words in the other ear. They were simultaneously presented with pictures and asked to make a lexical decision about whether the pictures and auditory words were the same or different. Accuracy and reaction time were measured in 5 conditions, in which the stimulus in the unattended channel was manipulated. The groups performed with similar accuracy. Compared with their peers with TLD, children with SLI had slower reaction times overall and different within-group patterns of performance by condition. Children with TLD showed efficient inhibitory control in conditions that required active suppression of competing stimuli. Participants with SLI had difficulty exerting control over their auditory attention in all conditions, with particular difficulty inhibiting distractors of all types.
Ohmoto, S; Kikuchi, T; Kumada, T
A visual stimulus display system controlled by a microcomputer was constructed at low cost. The system consists of an LED stimulus display device, a microcomputer, two interface boards, a pointing device (a "mouse"), and two kinds of software. The first software package is written in BASIC. Its functions are: to construct stimulus patterns using the mouse; to construct letter patterns (alphabet, digits, symbols, and Japanese letters: kanji, hiragana, katakana); to modify the patterns; to store the patterns on a floppy disc; and to translate the patterns into integer data, which are used to display the patterns by the second software package. The second software package, written in BASIC and machine language, controls the display of sequences of stimulus patterns on predetermined time schedules in visual experiments.
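The "translate the patterns into integer data" step might look like the following sketch, where each row of a dot-matrix letter is packed into one integer, one bit per LED; the 5x7 format and the letter shape are assumptions for illustration, not the system's actual encoding.

```python
pattern = [               # 5-wide rows of a letter "T"; 1 = LED on
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

def pack_rows(rows):
    """Pack each row of bits into one integer (MSB = leftmost LED)."""
    out = []
    for row in rows:
        value = 0
        for bit in row:
            value = (value << 1) | bit
        out.append(value)
    return out

print(pack_rows(pattern))   # first row 0b11111 == 31, others 0b00100 == 4
```

Packing rows into integers keeps storage compact and lets the display routine write one value per row to an output port, which is why such encodings were common on memory-limited microcomputers.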
Xu, Yifang; Collins, Leslie M
The incorporation of low levels of noise into an electrical stimulus has been shown to improve auditory thresholds in some human subjects (Zeng et al., 2000). In this paper, thresholds for noise-modulated pulse-train stimuli are predicted utilizing a stochastic neural-behavioral model of ensemble fiber responses to biphasic stimuli. The neural refractory effect is described using a Markov model for a noise-free pulse-train stimulus, and a closed-form solution for the steady-state neural response is provided. For noise-modulated pulse-train stimuli, a recursive method using the conditional probability is utilized to track the neural responses to each successive pulse. A neural spike count rule has been presented for both threshold and intensity discrimination under the assumption that auditory perception occurs via integration over a relatively long time period (Bruce et al., 1999). An alternative approach originates from the hypothesis of the multilook model (Viemeister and Wakefield, 1991), which argues that auditory perception is based on several shorter time integrations and may suggest an NofM model for prediction of pulse-train threshold. This motivates analyzing the neural response to each individual pulse within a pulse train, each of which is considered a "brief look." A logarithmic rule is hypothesized for pulse-train threshold. Predictions from the multilook model are shown to match trends in psychophysical data for noise-free stimuli that are not always matched by the long-time integration rule. Theoretical predictions indicate that threshold decreases as noise variance increases. Theoretical models of the neural response to pulse-train stimuli not only reduce computational overhead but also facilitate utilization of signal detection theory and are easily extended to multichannel psychophysical tasks.
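A much-simplified version of the Markov-model idea for the refractory effect can be sketched with a two-state chain: at each pulse a fiber is either responsive or refractory; responsive fibers fire with some probability and then become refractory, and refractory fibers recover by the next pulse with some probability. The steady-state response is the stationary distribution of this chain. The state space and transition probabilities below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

p_fire, p_rec = 0.6, 0.8   # assumed firing and recovery probabilities

# transition matrix over states [responsive, refractory]
T = np.array([
    [1 - p_fire, p_fire],    # responsive: stays, or fires and goes refractory
    [p_rec,      1 - p_rec], # refractory: recovers, or stays refractory
])

# stationary distribution: eigenvector of T^T with eigenvalue 1
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# long-run fraction of pulses answered by a spike
steady_fire_rate = pi[0] * p_fire
print(pi, steady_fire_rate)
```

For a two-state chain the stationary distribution also has the closed form pi = [p_rec, p_fire] / (p_rec + p_fire), which mirrors the paper's point that noise-free pulse trains admit a closed-form steady-state solution.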
Shah, A. S.; Lakatos, P.; McGinnis, T.; O'Connell, N.; Mills, A.; Knuth, K. H.; Chen, C.; Karmos, G.; Schroeder, C. E.
Cortical gamma band oscillations have been recorded in sensory cortices of cats and monkeys, and are thought to aid in perceptual binding. Gamma activity has also been recorded in the rat hippocampus and entorhinal cortex, where it has been shown that field gamma power is modulated at theta frequency. Since the power of gamma activity in the sensory cortices is not constant (gamma bursts), we decided to examine the relationship between gamma power and the phase of low frequency oscillation in the auditory cortex of the awake macaque. Macaque monkeys were surgically prepared for chronic awake electrophysiological recording. During the experiments, linear-array multielectrodes were inserted in area AI to obtain laminar current source density (CSD) and multiunit activity profiles. Instantaneous theta and gamma power and phase were extracted by applying the Morlet wavelet transformation to the CSD. Gamma power was averaged for every 1 degree of low frequency oscillation to calculate the power-phase relation. Both gamma and theta-delta power are largest in the supragranular layers. Power modulation of gamma activity is phase-locked to spontaneous, as well as stimulus-related, local theta and delta field oscillations. Our analysis also revealed that the power of theta oscillations is always largest at a certain phase of the delta oscillation. Auditory stimuli produce evoked responses in the theta band (i.e., there is pre- to post-stimulus addition of theta power), but there is also indication that stimuli may cause partial phase re-setting of spontaneous delta (and thus also theta and gamma) oscillations. We also show that spontaneous oscillations might play a role in the processing of incoming sensory signals by 'preparing' the cortex.
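The power-phase analysis described above can be approximated on simulated data: extract theta phase and gamma power, then average gamma power within theta-phase bins. The sketch below substitutes band-pass filtering plus the Hilbert transform for the Morlet-wavelet approach used in the study, and all signal parameters are invented; it simulates gamma bursts locked to a fixed theta phase and recovers that phase.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)                    # 6 Hz theta
gamma = (1 + theta) * np.sin(2 * np.pi * 40 * t)     # 40 Hz, theta-modulated
sig = theta + 0.3 * gamma + 0.1 * np.random.default_rng(0).standard_normal(t.size)

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(sig, 4, 8)))       # theta phase
power = np.abs(hilbert(bandpass(sig, 30, 50))) ** 2  # gamma power

# average gamma power within 18 theta-phase bins
bins = np.linspace(-np.pi, np.pi, 19)
idx = np.digitize(phase, bins) - 1
mean_power = np.array([power[idx == k].mean() for k in range(18)])
peak_phase = bins[mean_power.argmax()] + (bins[1] - bins[0]) / 2
print(peak_phase)   # near 0: gamma bursts ride the theta peak here
```

The study bins gamma power per degree of phase rather than in coarse bins, but the logic is the same: a non-uniform power-by-phase profile indicates phase-locked modulation of gamma.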
Ramkissoon, Ishara; Beverly, Brenda L.
Purpose: Effects of clicks and tonebursts on early and late auditory middle latency response (AMLR) components were evaluated in young and older cigarette smokers and nonsmokers. Method: Participants (n = 49) were categorized by smoking status and age into 4 groups: (a) older smokers, (b) older nonsmokers, (c) young smokers, and (d) young nonsmokers.…
Paul Fredrick Sowman
Acoustic stimuli can cause a transient increase in the excitability of the motor cortex. The current study leverages this phenomenon to develop a method for testing the integrity of auditory-motor integration and the capacity for auditory-motor plasticity. We demonstrate that appropriately timed transcranial magnetic stimulation (TMS) of the hand area, paired with auditorily mediated excitation of the motor cortex, induces an enhancement of motor cortex excitability that lasts beyond the time of stimulation. This result demonstrates for the first time that paired associative stimulation (PAS)-induced plasticity within the motor cortex is achievable with auditory stimuli. We propose that the method developed here might provide a useful tool for future studies that measure auditory-motor connectivity in communication disorders.
Winter, J C; Rice, K C; Amorosi, D J; Rabin, R A
Although psilocybin has been trained in the rat as a discriminative stimulus, little is known of the pharmacological receptors essential for stimulus control. In the present investigation rats were trained with psilocybin and tests were then conducted employing a series of other hallucinogens and presumed antagonists. An intermediate degree of antagonism of psilocybin was observed following treatment with the 5-HT(2A) receptor antagonist, M100907. In contrast, no significant antagonism was observed following treatment with the 5-HT(1A/7) receptor antagonist, WAY-100635, or the DA D(2) antagonist, remoxipride. Psilocybin generalized fully to DOM, LSD, psilocin, and, in the presence of WAY-100635, DMT while partial generalization was seen to 2C-T-7 and mescaline. LSD and MDMA partially generalized to psilocybin and these effects were completely blocked by M-100907; no generalization of PCP to psilocybin was seen. The present data suggest that psilocybin induces a compound stimulus in which activity at the 5-HT(2A) receptor plays a prominent but incomplete role. In addition, psilocybin differs from closely related hallucinogens such as 5-MeO-DMT in that agonism at 5-HT(1A) receptors appears to play no role in psilocybin-induced stimulus control.
Paris, Tim; Kim, Jeesun; Davis, Chris
We investigated whether internal models of the relationship between lip movements and corresponding speech sounds [Auditory-Visual (AV) speech] could be updated via experience. AV associations were indexed by early and late event-related potentials (ERPs) and by oscillatory power and phase locking. Different AV experience was produced via a context manipulation. Participants were presented with valid (the conventional pairing) and invalid AV speech items in either a 'reliable' context (80% AVvalid items) or an 'unreliable' context (80% AVinvalid items). The results showed that for the reliable context, there was N1 facilitation for AV compared to auditory-only speech. This N1 facilitation was not affected by AV validity. Later ERPs showed a difference in amplitude between valid and invalid AV speech, and there was significant enhancement of power for valid versus invalid AV speech. These response patterns did not change over the context manipulation, suggesting that the internal models of AV speech were not updated by experience. The results also showed that the facilitation of N1 responses did not vary as a function of the salience of visual speech (as previously reported); in post-hoc analyses, it appeared instead that N1 facilitation varied according to the relative timing of the acoustic onset, suggesting that for AV events the N1 may be more sensitive to AV timing than to form. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
All sensory systems need to continuously prioritize and select incoming stimuli in order to avoid overflow or interference, and to provide structure to the brain's input. However, the characteristics of this input differ across sensory systems; therefore, and as a direct consequence, each sensory system might have developed specialized strategies to cope with the continuous stream of incoming information. Neural oscillations are intimately connected with this selection process, as they can be used by the brain to rhythmically amplify or attenuate input and therefore represent an optimal tool for stimulus selection. In this paper, we focus on oscillatory processes for stimulus selection in the visual and auditory systems. We point out both commonalities and differences between the two systems and develop several hypotheses, inspired by recently published findings: (1) The rhythmic component of its input is crucial for the auditory, but not for the visual, system. The alignment between oscillatory phase and rhythmic input (phase entrainment) is therefore an integral part of stimulus selection in the auditory system, whereas the visual system merely adjusts its phase to upcoming events, without the need for any rhythmic component. (2) When input is unpredictable, the visual system can maintain its oscillatory sampling, whereas the auditory system switches to a different, potentially internally oriented, "mode" of processing that might be characterized by alpha oscillations. (3) Visual alpha can be divided into a faster occipital alpha (10 Hz) and a slower frontal alpha (7 Hz) that critically depends on attention.
BACKGROUND: For decades, the chimpanzee, phylogenetically closest to humans, has been analyzed intensively in comparative cognitive studies. Other than the accumulation of behavioral data, the neural basis for cognitive processing in the chimpanzee remains to be clarified. To increase our knowledge on the evolutionary and neural basis of human cognition, comparative neurophysiological studies exploring endogenous neural activities in the awake state are needed. However, to date, such studies have rarely been reported in non-human hominid species, due to the practical difficulties in conducting non-invasive measurements on awake individuals. METHODOLOGY/PRINCIPAL FINDINGS: We measured auditory event-related potentials (ERPs) of a fully awake chimpanzee, with reference to a well-documented component of human studies, namely mismatch negativity (MMN). In response to infrequent, deviant tones that were delivered in a uniform sound stream, a comparable ERP component could be detected as negative deflections in early latencies. CONCLUSIONS/SIGNIFICANCE: The present study reports the MMN-like component in a chimpanzee for the first time. In human studies, various ERP components, including MMN, are well-documented indicators of cognitive and neural processing. The results of the present study validate the use of non-invasive ERP measurements for studies on cognitive and neural processing in chimpanzees, and open the way for future studies comparing endogenous neural activities between humans and chimpanzees. This signifies an essential step in hominid cognitive neurosciences.
Morgan, Simeon J; Paolini, Antonio G
Acute animal preparations have been used in research prospectively investigating electrode designs and stimulation techniques for integration into neural auditory prostheses, such as auditory brainstem implants and auditory midbrain implants. While acute experiments can give initial insight into the effectiveness of an implant, testing chronically implanted, awake animals provides the advantage of examining the psychophysical properties of the sensations induced by the implanted devices. Several techniques, such as reward-based operant conditioning, conditioned avoidance, or classical fear conditioning, have been used to provide behavioral confirmation of detection of a relevant stimulus attribute. Selection of a technique involves balancing aspects including time efficiency (often poor in reward-based approaches), the ability to test a plurality of stimulus attributes simultaneously (limited in conditioned avoidance), and the reliability of measures across repeated stimuli (a potential constraint when physiological measures are employed). Here, a classical fear conditioning behavioral method is presented which may be used to simultaneously test both detection of a stimulus and discrimination between two stimuli. Heart rate is used as a measure of fear response, which reduces or eliminates the requirement for time-consuming video coding of freeze behaviour or other such measures (although such measures could be included to provide convergent evidence). Animals were conditioned using these techniques in three 2-hour conditioning sessions, each providing 48 stimulus trials. Subsequent 48-trial testing sessions were then used to test for detection of each stimulus in presented pairs, and to test discrimination between the member stimuli of each pair. This behavioral method is presented in the context of its utilisation in auditory prosthetic research. The implantation of electrocardiogram telemetry devices is shown. Subsequent implantation of brain electrodes into the Cochlear…
Basura, Gregory J; Koehler, Seth D; Shore, Susan E
Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit. Copyright © 2015 the American Physiological Society.
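The Hebbian versus anti-Hebbian timing rules contrasted in this abstract can be illustrated with the canonical exponential STDP window from the modeling literature; the sketch below uses generic textbook parameter values, not the timing rules fitted in this study, and the function names are hypothetical:

```python
import math

def stdp_weight_change(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Canonical Hebbian STDP window (illustrative parameters).

    dt_ms = t_post - t_pre. Positive dt (pre leads post) potentiates;
    negative dt (post leads pre) depresses.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    elif dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0

def anti_hebbian_weight_change(dt_ms, **kw):
    """An anti-Hebbian rule inverts the sign of the window."""
    return -stdp_weight_change(dt_ms, **kw)

# Pre leading post by 10 ms potentiates under the Hebbian rule
# but depresses under the anti-Hebbian rule.
print(stdp_weight_change(10.0) > 0)            # True
print(anti_hebbian_weight_change(10.0) < 0)    # True
```

A shift from Hebbian to anti-Hebbian rules, as reported after noise exposure, corresponds to a sign flip of this window; the narrower or "less discrete" temporal windows reported in DCN would correspond to changes in the time constant.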
Morey, Rajendra A.; Mitchell, Teresa V.; Inan, Seniha; Lieberman, Jeffrey A.; Belger, Aysenil
Individuals with schizophrenia demonstrate impairments in selective attention and sensory processing. The authors assessed differences in brain function between 26 participants with schizophrenia and 17 comparison subjects engaged in automatic (unattended) and controlled (attended) auditory information processing using event-related functional MRI. Lower regional neural activation during automatic auditory processing in the schizophrenia group was not confined to just the temporal lobe, but also extended to prefrontal regions. Controlled auditory processing was associated with a distributed frontotemporal and subcortical dysfunction. Differences in activation between these two modes of auditory information processing were more pronounced in the comparison group than in the patient group. PMID:19196926
Giard, M H; Lavikainen, J; Reinikainen, K; Perrin, F; Bertrand, O; Pernier, J; Näätänen, R
The present study analyzed the neural correlates of acoustic stimulus representation in echoic sensory memory. The neural traces of auditory sensory memory were indirectly studied by using the mismatch negativity (MMN), an event-related potential component elicited by a change in a repetitive sound. The MMN is assumed to reflect change detection in a comparison process between the sensory input from a deviant stimulus and the neural representation of repetitive stimuli in echoic memory. The scalp topographies of the MMNs elicited by pure tones deviating from standard tones by either frequency, intensity, or duration varied according to the type of stimulus deviance, indicating that the MMNs for different attributes originate, at least in part, from distinct neural populations in the auditory cortex. This result was supported by dipole-model analysis. If the MMN generator process occurs where the stimulus information is stored, these findings strongly suggest that the frequency, intensity, and duration of acoustic stimuli have a separate neural representation in sensory memory.
Nöstl, Anatole; Marsh, John E; Sörqvist, Patrik
Participants were requested to respond to a sequence of visual targets while listening to a well-known lullaby. One of the notes in the lullaby was occasionally exchanged with a pattern deviant. Experiment 1 found that deviants capture attention as a function of the pitch difference between the deviant and the replaced/expected tone. However, when the pitch difference between the expected tone and the deviant tone is held constant, a violation to the direction-of-pitch change across tones can also capture attention (Experiment 2). Moreover, in more complex auditory environments, wherein it is difficult to build a coherent neural model of the sound environment from which expectations are formed, deviations can capture attention but it appears to matter less whether this is a violation from a specific stimulus or a violation of the current direction-of-change (Experiment 3). The results support the expectation violation account of auditory distraction and suggest that there are at least two different expectations that can be violated: One appears to be bound to a specific stimulus and the other would seem to be bound to a more global cross-stimulus rule such as the direction-of-change based on a sequence of preceding sound events. Factors like base-rate probability of tones within the sound environment might become the driving mechanism of attentional capture--rather than violated expectations--in complex sound environments.
Zhang, Honghui; Wang, Qingyun; Chen, Guanrong
Experimental studies have shown that neuron populations located in the basal ganglia of parkinsonian primates can exhibit characteristic firing patterns, with firing rates that differ from normal brain activity. Motivated by recent experimental findings, we investigate the effects of various stimulation paradigms on parkinsonian firing rates based on the proposed dynamical models. Our results show that closed-loop deep brain stimulation is superior in ameliorating the firing behaviors of parkinsonism, and other control strategies have similar effects according to the observation of electrophysiological experiments. In addition, in conformity with physiological experiments, we found that there exists an optimal input delay in the closed-loop GPtrain|M1 paradigm, at which more normal behaviors can be obtained. More interestingly, we observed that W-shaped curves of the firing rates always appear as the stimulus delay varies. We furthermore verify the robustness of the obtained results by studying three pallidal discharge rates of parkinsonism based on the conductance-based model, as well as the integrate-and-fire-or-burst model. Finally, we show that short-term plasticity can improve the firing rates and optimize the control effects on parkinsonism. Our conclusions may give more theoretical insight into studies of Parkinson's disease.
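The integrate-and-fire-or-burst model mentioned above extends the basic leaky integrate-and-fire (LIF) neuron. As a hedged illustration of that simpler building block only (generic parameter values, not those of the study; Euler integration), firing rate as a function of drive can be sketched as:

```python
def lif_spike_times(i_ext, t_max_ms=200.0, dt=0.1, tau_m=10.0,
                    v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Leaky integrate-and-fire neuron, integrated with the Euler method.

    i_ext is a constant input current (nA); all parameters are
    illustrative defaults. Returns the list of spike times in ms.
    """
    v = v_rest
    spikes = []
    t = 0.0
    while t < t_max_ms:
        # Membrane equation: tau_m * dv/dt = -(v - v_rest) + r_m * i_ext
        v += ((-(v - v_rest) + r_m * i_ext) / tau_m) * dt
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

# A stronger drive yields a higher firing rate over the same window.
weak, strong = lif_spike_times(1.6), lif_spike_times(3.0)
print(len(weak) < len(strong))  # True
```

In the "fire-or-burst" extension a slow calcium-like variable additionally gates burst firing, which is what allows such models to reproduce the pallidal discharge patterns studied here.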
Wang, Yuru; Damen, Tom G E; Aarts, Henk
The sense of agency refers to the feeling of causing one's own actions and their resulting effects. Previous research indicates that voluntary action selection is an important factor in shaping the sense of agency. Whereas the volitional nature of the sense of agency is well documented, the present study examined whether agency is modulated when action selection shifts from self-control to a more automatic stimulus-driven process. Seventy-two participants performed an auditory Simon task including congruent and incongruent trials to generate automatic stimulus-driven vs. more self-controlled action, respectively. Responses in the Simon task produced a tone, and agency was assessed with the intentional binding task, an implicit measure of agency. Results showed a Simon effect and a temporal binding effect. However, temporal binding was independent of congruency. These findings suggest that temporal binding, a window into the sense of agency, emerges for both automatic stimulus-driven actions and self-controlled actions. Copyright © 2017 Elsevier Inc. All rights reserved.
Ozdamar, Ozcan; Bohorquez, Jorge; Mihajloski, Todor; Yavuz, Erdem; Lachowska, Magdalena
Electrophysiological indices of the auditory binaural beat illusion are studied using late-latency evoked responses. Binaural beats are generated by continuous monaural FM tones with slightly different ascending and descending frequencies lasting about 25 ms, presented at 1-s intervals. Frequency changes are carefully adjusted to avoid creating any abrupt waveform changes. Binaural Interaction Component (BIC) analysis is used to separate the neural responses due to binaural involvement. The results show that transient auditory evoked responses can be obtained from the auditory illusion of binaural beats.
Shiffman, Saul; Dunbar, Michael S.; Li, Xiaoxue; Scholl, Sarah M.; Tindle, Hilary A.; Anderson, Stewart J.; Ferguson, Stuart G.
Intermittent smokers (ITS) – who smoke less than daily – comprise an increasing proportion of adult smokers. Their smoking patterns challenge theoretical models of smoking motivation, which emphasize regular and frequent smoking to maintain nicotine levels and avoid withdrawal, but yet have gone largely unexamined. We characterized smoking patterns among 212 ITS (smoking 4–27 days per month) compared to 194 daily smokers (DS; smoking 5–30 cigarettes daily) who monitored situational antecedents of smoking using ecological momentary assessment. Subjects recorded each cigarette on an electronic diary, and situational variables were assessed in a random subset (n = 21,539 smoking episodes); parallel assessments were obtained by beeping subjects at random when they were not smoking (n = 26,930 non-smoking occasions). Compared to DS, ITS' smoking was more strongly associated with being away from home, being in a bar, drinking alcohol, socializing, being with friends and acquaintances, and when others were smoking. Mood had only modest effects in either group. DS' and ITS' smoking were substantially and equally suppressed by smoking restrictions, although ITS more often cited self-imposed restrictions. ITS' smoking was consistently more associated with environmental cues and contexts, especially those associated with positive or “indulgent” smoking situations. Stimulus control may be an important influence in maintaining smoking and making quitting difficult among ITS. PMID:24599056
Xu, Yifang; Collins, Leslie M
This work investigates dynamic range and intensity discrimination for electrical pulse-train stimuli that are modulated by noise using a stochastic auditory nerve model. Based on a hypothesized monotonic relationship between loudness and the number of spikes elicited by a stimulus, theoretical prediction of the uncomfortable level has previously been determined by comparing spike counts to a fixed threshold, Nucl. However, no specific rule for determining Nucl has been suggested. Our work determines the uncomfortable level based on the excitation pattern of the neural response in a normal ear. The number of fibers corresponding to the portion of the basilar membrane driven by a stimulus at an uncomfortable level in a normal ear is related to Nucl at an uncomfortable level of the electrical stimulus. Intensity discrimination limens are predicted using signal detection theory via the probability mass function of the neural response and via experimental simulations. The results show that the uncomfortable level for pulse-train stimuli increases slightly as noise level increases. Combining this with our previous threshold predictions, we hypothesize that the dynamic range for noise-modulated pulse-train stimuli should increase with additive noise. However, since our predictions indicate that intensity discrimination under noise degrades, overall intensity coding performance may not improve significantly.
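Predicting intensity discrimination limens via signal detection theory amounts to computing a discriminability index from the response distributions for two intensities. The sketch below is a generic illustration under a Gaussian approximation of spike counts (the study itself worked with the exact probability mass function of the neural response, so the function names and numbers here are assumptions, not the paper's method):

```python
import math

def d_prime(mean_a, var_a, mean_b, var_b):
    """Discriminability index for two Gaussian-approximated spike-count
    distributions; the denominator is the RMS of the two variances."""
    return abs(mean_b - mean_a) / math.sqrt(0.5 * (var_a + var_b))

def percent_correct_2afc(dp):
    """Expected proportion correct in a 2AFC task: Phi(d' / sqrt(2)),
    written via the error function."""
    return 0.5 * (1.0 + math.erf(dp / 2.0))

# Hypothetical example: a more intense stimulus raises the mean count
# from 50 to 60 spikes, with Poisson-like variance equal to the mean.
dp = d_prime(50, 50, 60, 60)
print(round(dp, 2))                     # ~1.35
print(percent_correct_2afc(dp) > 0.75)  # True: above typical JND criterion
```

On this view, a discrimination limen is the intensity step that pushes d' (or percent correct) past a fixed criterion; degraded discrimination under noise corresponds to the variance term growing faster than the mean separation.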
Fields, Lanny; Garruto, Michelle; Watanabe, Mari
Conditional discrimination or matching-to-sample procedures have been used to study a wide range of complex psychological phenomena with infrahuman and human subjects. In most studies, the percentage of trials in which a subject selects the comparison stimulus that is related to the sample stimulus is used to index the control exerted by the…
Poulet, James F. A.; Hedwig, Berthold
Many groups of insects are specialists in exploiting sensory cues to locate food resources or conspecifics. To achieve orientation, bees and ants analyze the polarization pattern of the sky, male moths orient along the females' odor plume, and cicadas, grasshoppers, and crickets use acoustic signals to locate singing conspecifics. In comparison with olfactory and visual orientation, where learning is involved, the auditory processing underlying orientation in insects appears to be more hardwired and genetically determined. In each of these examples, however, orientation requires a recognition process identifying the crucial sensory pattern to interact with a localization process directing the animal's locomotor activity. Here, we characterize this interaction. Using a sensitive trackball system, we show that, during cricket auditory behavior, the recognition process that is tuned toward the species-specific song pattern controls the amplitude of auditory evoked steering responses. Females perform small reactive steering movements toward any sound patterns. Hearing the male's calling song increases the gain of auditory steering within 2-5 s, and the animals even steer toward nonattractive sound patterns inserted into the species-specific pattern. This gain control mechanism in the auditory-to-motor pathway allows crickets to pursue species-specific sound patterns temporarily corrupted by environmental factors and may reflect the organization of recognition and localization networks in insects.
Wronka, E.A.; Kaiser, J.; Coenen, A.M.L.
The relationship between psychometric intelligence, measured with Raven's Advanced Progressive Matrices (RAPM), and event-related potentials (ERPs) was examined using a 3-stimulus oddball task. Subjects who scored higher on the RAPM exhibited a larger amplitude of the P3a component. Additional analysis using the…
Vlugt, van der M.J.; Nooteboom, S.G.
Several accounts of human recognition of spoken words assign special importance to stimulus-word onsets. The experiment described here was designed to find out whether such a word-beginning superiority effect, which is supported by experimental evidence of various kinds, is due to a special…
Mitchell, Teresa V.; Morey, Rajendra A.; Inan, Seniha; Belger, Aysenil
Activity within fronto-striato-temporal regions during processing of unattended auditory deviant tones and an auditory target detection task was investigated using event-related functional magnetic resonance imaging. Activation within the middle frontal gyrus, inferior frontal gyrus, anterior cingulate gyrus, superior temporal gyrus, thalamus, and basal ganglia were analyzed for differences in activity patterns between the two stimulus conditions. Unattended deviant tones elicited robust acti...
Agessi, Larissa Mendonça; Villa, Thaís Rodrigues; Dias, Karin Ziliotto; Carvalho, Deusvenir de Souza; Pereira, Liliane Desgualdo
This study aimed to verify and compare central auditory processing (CAP) performance in migraine patients with and without aura and in healthy controls. Forty-one volunteers of both genders, aged between 18 and 40 years, diagnosed with migraine with or without aura by the criteria of "The International Classification of Headache Disorders" (ICHD-3 beta), and a control group of the same age range with no headache history, were included. The Gaps-in-Noise (GIN) test, Duration Pattern test (DPT), and Dichotic Digits Test (DDT) were used to assess central auditory processing performance. The volunteers were divided into 3 groups: migraine with aura (11), migraine without aura (15), and control group (15), matched by age and schooling. Subjects with and without aura performed significantly worse in the GIN test for the right ear (p = .006) and for the left ear (p = .005), and in the DPT test.
de Rose, Julio C.; Hidalgo, Matheus; Vasconcellos, Mariliz
Variation in baseline controlling relations is suggested as one of the factors determining variability in stimulus equivalence outcomes. This study used single- comparison trials attempting to control such controlling relations. Four children learned AB, BC, and CD conditional discriminations, with 2 samples and 2 comparison stimuli. In Condition…
Hughes, Michelle L; Choi, Sangsook; Glickman, Erin
Modeling studies suggest that differences in neural responses between polarities might reflect underlying neural health. Specifically, large differences in electrically evoked compound action potential (eCAP) amplitudes and amplitude-growth-function (AGF) slopes between polarities might reflect poorer peripheral neural health, whereas more similar eCAP responses between polarities might reflect better neural health. The interphase gap (IPG) has also been shown to relate to neural survival in animal studies. Specifically, healthy neurons exhibit larger eCAP amplitudes, lower thresholds, and steeper AGF slopes for increasing IPGs. In ears with poorer neural survival, these changes in neural responses are generally less apparent with increasing IPG. The primary goal of this study was to examine the combined effects of stimulus polarity and IPG within and across subjects to determine whether both measures represent similar underlying mechanisms related to neural health. With the exception of one measure in one group of subjects, results showed that polarity and IPG effects were generally not correlated in a systematic or predictable way. This suggests that these two effects might represent somewhat different aspects of neural health, such as differences in site of excitation versus integrative membrane characteristics, for example. Overall, the results from this study suggest that the underlying mechanisms that contribute to polarity and IPG effects in human CI recipients might be difficult to determine from animal models that do not exhibit the same anatomy, variance in etiology, electrode placement, and duration of deafness as humans. Copyright © 2017 Elsevier B.V. All rights reserved.
Sussman, Elyse; Steinschneider, Mitchell
Attention biases the way in which sound information is stored in auditory memory. Little is known, however, about the contribution of stimulus-driven processes in forming and storing coherent sound events. An electrophysiological index of cortical auditory change detection (mismatch negativity [MMN]) was used to assess whether sensory memory representations could be biased toward one organization over another (one or two auditory streams) without attentional control. Results revealed that sound representations held in sensory memory biased the organization of subsequent auditory input. The results demonstrate that context-dependent sound representations modulate stimulus-dependent neural encoding at early stages of auditory cortical processing.
Bell, Raoul; Röer, Jan P; Marsh, John E; Storch, Dunja; Buchner, Axel
Deviant as well as changing auditory distractors interfere with short-term memory. According to the duplex model of auditory distraction, the deviation effect is caused by a shift of attention while the changing-state effect is due to obligatory order processing. This theory predicts that foreknowledge should reduce the deviation effect, but should have no effect on the changing-state effect. We compared the effect of foreknowledge on the two phenomena directly within the same experiment. In a pilot study, specific foreknowledge was impotent in reducing either the changing-state effect or the deviation effect, but it reduced disruption by sentential speech, suggesting that the effects of foreknowledge on auditory distraction may increase with the complexity of the stimulus material. Given the unexpected nature of this finding, we tested whether the same finding would be obtained in (a) a direct preregistered replication in Germany and (b) an additional replication with translated stimulus materials in Sweden.
Ainsworth, Matthew; Lee, Shane; Cunningham, Mark O.; Roopun, Anita K.; Traub, Roger D.; Kopell, Nancy J.; Whittington, Miles A.
Rhythmic activity in populations of cortical neurons accompanies, and may underlie, many aspects of primary sensory processing and short-term memory. Activity in the gamma band (30 Hz up to > 100 Hz) is associated with such cognitive tasks and is thought to provide a substrate for temporal coupling of spatially separate regions of the brain. However, such coupling requires close matching of frequencies in co-active areas, and because the nominal gamma band is so spectrally broad, it may not constitute a single underlying process. Here we show that, for inhibition-based gamma rhythms in vitro in rat neocortical slices, mechanistically distinct local circuit generators exist in different laminae of rat primary auditory cortex. A persistent, 30 – 45 Hz, gap-junction-dependent gamma rhythm dominates rhythmic activity in supragranular layers 2/3, whereas a tonic depolarization-dependent, 50 – 80 Hz, pyramidal/interneuron gamma rhythm is expressed in granular layer 4 with strong glutamatergic excitation. As a consequence, altering the degree of excitation of the auditory cortex causes bifurcation in the gamma frequency spectrum and can effectively switch temporal control of layer 5 from supragranular to granular layers. Computational modeling predicts the pattern of interlaminar connections may help to stabilize this bifurcation. The data suggest that different strategies are used by primary auditory cortex to represent weak and strong inputs, with principal cell firing rate becoming increasingly important as excitation strength increases. PMID:22114273
Stretch, Roger; Skinner, Nicholas
The introduction of a warning signal that preceded a scheduled shock modified the temporal distribution of free-operant avoidance responses. With response-shock and shock-shock intervals held constant, response rates increased only slightly when the response-signal interval was reduced. The result is consistent with Sidman's (1955) findings under different conditions, but at variance with Ulrich, Holz, and Azrin's (1964) findings under similar conditions. Methylphenidate in graded doses increased response rates, modifying frequency distributions of interresponse times. Drug treatment may have disrupted a “temporal discrimination” formed within the signal-shock interval. More simply, methylphenidate influenced response rates by increasing short response latencies after signal onset; this effect was more prominent than the drug's tendency to increase the frequency of pre-signal responses. When signal-onset preceded shock by 2 sec, individual differences in performance were marked; methylphenidate suppressed responding in one rat as a function of increasing dose levels to a greater degree than in a second animal, but both subjects received more shocks than under control conditions. PMID:6050059
Hill, N Jeremy; Moinuddin, Aisha; Häuser, Ann-Katrin; Kienzle, Stephan; Schalk, Gerwin
Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.
Palmer, David C.
The task of extending Skinner's (1957) interpretation of verbal behavior includes accounting for the moment-to-moment changes in stimulus control as one speaks. A consideration of the behavior of the reader reminds us of the continuous evocative effect of verbal stimuli on readers, listeners, and speakers. Collateral discriminative responses to…
Walpole, Carrie Wallace; Roscoe, Eileen M.; Dube, William V.
This study extends previous work on the use of differential observing responses (DOR) to remediate atypically restricted stimulus control. A participant with autism had high matching-to-sample accuracy scores with printed words that had no letters in common (e.g., "cat," "lid," "bug") but poor accuracy with words that had two letters in common…
Vincent, Norah; Lewycky, Samantha; Finnegan, Heather
Sleep restriction (SRT) and stimulus control (SC) have been found to be effective interventions for chronic insomnia (Morgenthaler et al., 2006), and yet adherence to SRT and SC varies widely. The objective of this study was to investigate correlates to adherence to SC/SRT among 40 outpatients with primary or comorbid insomnia using a…
McGowan, Sarah Kate; Behar, Evelyn
For individuals with generalized anxiety disorder, worry becomes associated with numerous aspects of life (e.g., time of day, specific stimuli, environmental cues) and is thus under poor discriminative stimulus control (SC). In addition, excessive worry is associated with anxiety, depressed mood, and sleep difficulties. This investigation sought…
Luke, Steven G.; Nuthmann, Antje; Henderson, John M.
The present study used the stimulus onset delay paradigm to investigate eye movement control in reading and in scene viewing in a within-participants design. Short onset delays (0, 25, 50, 200, and 350 ms) were chosen to simulate the type of natural processing difficulty encountered in reading and scene viewing. Fixation duration increased…
Moyal, Barbara R.
Variables of self-esteem, locus of control, stimulus appraisal, and depressive symptoms, which are related to depression in adults, were investigated in a sample of nonreferred Grade 5 and Grade 6 children. Grade and sex effects were not significant. All other intervariable correlations were significant. (Author)
Charles R Larson
The pitch-shift paradigm has become a widely used method for studying the role of voice pitch auditory feedback in voice control. This paradigm introduces small, brief pitch shifts in voice auditory feedback to vocalizing subjects. The perturbations trigger a reflexive mechanism that counteracts the change in pitch. The underlying mechanisms of the vocal responses are thought to reflect a negative feedback control system that is similar to constructs developed to explain other forms of motor control. Another use of this technique requires subjects to voluntarily change the pitch of their voice when they hear a pitch shift stimulus. Under these conditions, short latency responses are produced that change voice pitch to match that of the stimulus. The pitch-shift technique has been used with magnetoencephalography (MEG) and electroencephalography (EEG) recordings, and has shown that at vocal onset there is normally a suppression of neural activity related to vocalization. However, if a pitch shift is also presented at voice onset, this suppression is cancelled, which has been interpreted to mean that one way in which a person distinguishes self-vocalization from the vocalization of others is by a comparison of the intended voice and the actual voice. Studies of the pitch-shift reflex in the fMRI environment show that the superior temporal gyrus (STG) plays an important role in the process of controlling voice F0 based on auditory feedback. Additional studies using fMRI for effective connectivity modeling show that the left and right STG play critical roles in correcting for an error in voice production. While both the left and right STG are involved in this process, a feedback loop develops between the left and right STG during perturbations, in which the left-to-right connection becomes stronger and a new negative right-to-left connection emerges, along with other feedback loops within the cortical network tested.
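The negative-feedback account of the pitch-shift reflex can be illustrated with a toy simulation. The compensation gain and shift magnitude below are hypothetical values chosen only for illustration; they are not estimates from the literature.

```python
# Toy negative-feedback model of the pitch-shift reflex.
# On each step a fixed fraction of the perceived pitch error is
# compensated; gain and shift values are hypothetical.

def simulate_pitch_reflex(shift_cents=25.0, gain=0.3, steps=20):
    """Return the residual pitch error (in cents) over time after a
    perturbation of voice auditory feedback."""
    error = shift_cents          # error introduced by the perturbation
    history = []
    for _ in range(steps):
        error -= gain * error    # compensatory response opposes the shift
        history.append(error)
    return history

trace = simulate_pitch_reflex()
# The residual error decays toward zero: the voice counteracts the shift.
```

The decay toward zero is the signature of a negative feedback controller; a positive gain would instead amplify the perturbation.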
Lee, M D
Two experiments are presented that serve as a framework for exploring auditory information processing. The framework is referred to as polychotic listening or auditory search, and it requires a listener to scan multiple simultaneous auditory streams for the appearance of a target word (the name of a letter such as A or M). Participants' ability to scan between two and six simultaneous auditory streams of letter and digit names for the name of a target letter was examined using six loudspeakers. The main independent variable was auditory load, or the number of active audio streams on a given trial. The primary dependent variables were target localization accuracy and reaction time. Results showed that as load increased, performance decreased. The performance decrease was evident in reaction time, accuracy, and sensitivity measures. The second study required participants to practice the same task for 10 sessions, for a total of 1800 trials. Results indicated that even with extensive practice, performance was still affected by auditory load. The present results are compared with findings in the visual search literature. The implications for the use of multiple auditory displays are discussed. Potential applications include cockpit and automobile warning displays, virtual reality systems, and training systems.
The purpose of this work was to determine in a clinical trial the efficacy of reducing or preventing seizures in patients with neurological handicaps through sustained cortical activation evoked by passive exposure to a specific auditory stimulus (particular music). The specific type of stimulation had been determined in previous studies to evoke anti-epileptiform/anti-seizure brain activity. The study was conducted at the Thad E. Saleeby Center in Hartsville, South Carolina, which is a permanent residence for individuals with heterogeneous neurological impairments, many with epilepsy. We investigated the ability to reduce or prevent seizures in subjects through cortical stimulation from sustained passive nightly exposure to a specific auditory stimulus (music) in a three-year randomized controlled study. In year 1, baseline seizure rates were established. In year 2, subjects were randomly assigned to treatment and control groups. Treatment group subjects were exposed during sleeping hours to specific music at regular intervals. Control subjects received no music exposure and were maintained on regular anti-seizure medication. In year 3, music treatment was terminated and seizure rates were followed. We found a significant treatment effect (p = 0.024) during the treatment phase, persisting through the follow-up phase (p = 0.002). Subjects exposed to treatment exhibited a significant 24% decrease in seizures during the treatment phase, and a 33% decrease persisting through the follow-up phase. Twenty-four percent of treatment subjects exhibited a complete absence of seizures during treatment. Exposure to specific auditory stimuli (i.e., music) can significantly reduce seizures in subjects with a range of epilepsy and seizure types, in some cases achieving a complete cessation of seizures. These results are consistent with previous work showing reductions in epileptiform activity from particular music exposure and offer potential for achieving a non…
Dai, Lengshi; Shinn-Cunningham, Barbara G
Listeners with normal hearing thresholds (NHTs) differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in the cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials (ERPs) from the scalp (reflecting cortical responses to sound), and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. These results support the hypothesis that behavioral abilities amongst listeners with NHTs can arise due to both subcortical coding differences and differences in attentional control, depending on stimulus characteristics.
Lawo, Vera; Koch, Iring
Using a novel task-switching variant of dichotic selective listening, we examined age-related differences in the ability to intentionally switch auditory attention between 2 speakers defined by their sex. In our task, young (M age = 23.2 years) and older adults (M age = 66.6 years) performed a numerical size categorization on spoken number words. The task-relevant speaker was indicated by a cue prior to auditory stimulus onset. The cuing interval was either short or long and varied randomly trial by trial. We found clear performance costs with instructed attention switches. These auditory attention switch costs decreased with prolonged cue-stimulus interval. Older adults were generally much slower (but not more error prone) than young adults, but switching-related effects did not differ across age groups. These data suggest that the ability to intentionally switch auditory attention in a selective listening task is not compromised in healthy aging. We discuss the role of modality-specific factors in age-related differences.
Pan, Jeng-Shyang; Lo, Chi-Chun; Tsai, Shang-Ho; Lin, Bor-Shyh
The design of a novel non-contact multimedia controller is proposed in this study. Nowadays, multimedia controllers are generally used by patients and nursing assistants in hospital. Conventional multimedia controllers usually involve manual operation or other physical movements. However, it is difficult for disabled patients to operate a conventional multimedia controller by themselves; they may depend entirely on others. Unlike other multimedia controllers, the proposed system provides a novel concept of controlling multimedia via visual stimuli, without manual operation. Disabled patients can easily operate the proposed multimedia system by focusing on the control icons of a visual stimulus device, where a commercial tablet is used as the visual stimulus device. Moreover, a wearable and wireless electroencephalogram (EEG) acquisition device is also designed and implemented to easily monitor the user's EEG signals in daily life. Finally, the proposed system has been validated. The experimental results show that the proposed system can effectively measure and extract the EEG features related to visual stimuli, and that its information transfer rate is good. The proposed non-contact multimedia controller therefore provides a promising prototype for a novel multimedia control scheme. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus
Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action-perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.
Taylor, S; Cipani, E; Clardy, A
Standard toilet training regimens used with children with developmental disabilities have demonstrated effectiveness at achieving bladder and bowel continence. However, in some clinical applications in everyday practice, success has not been achieved, necessitating research into possible modifications of the current approaches. A widely used toilet training program was modified to reduce toileting accidents of a referred child. The modification involved the assessment of the discriminative stimulus for eliminating, namely, his undergarments. By removing the undergarments when an elimination became imminent, an "errorless" learning paradigm was established that allowed for more rapid and enduring acquisition of toileting skills than seen in previous training attempts. The results indicate the present procedure could expedite training for individuals who are difficult to teach appropriate toileting skills through an analysis of the controlling antecedent stimulus for accidents and subsequent manipulation of such stimuli.
Koch, Iring; Lawo, Vera; Fels, Janina; Vorlander, Michael
Using a novel variant of dichotic selective listening, we examined the control of auditory selective attention. In our task, subjects had to respond selectively to one of two simultaneously presented auditory stimuli (number words), always spoken by a female and a male speaker, by performing a numerical size categorization. The gender of the…
Ma, Ning; Yu, Angela J
Inhibitory control, the ability to stop or modify preplanned actions under changing task conditions, is an important component of cognitive functions. Two lines of models of inhibitory control have previously been proposed for human response in the classical stop-signal task, in which subjects must inhibit a default go response upon presentation of an infrequent stop signal: (1) the race model, which posits two independent go and stop processes that race to determine the behavioral outcome, go or stop; and (2) an optimal decision-making model, which posits that the observer decides whether and when to go based on continually (Bayesian) updated information about both the go and stop stimuli. In this work, we probe the relationship between go and stop processing by explicitly manipulating the discrimination difficulty of the go stimulus. While the race model assumes the go and stop processes are independent, and therefore go stimulus discriminability should not affect stop stimulus processing, we simulate the optimal model to show that it predicts that harder go discrimination should result in a longer go reaction time (RT), a lower stop error rate, and a faster stop-signal RT. We then present novel behavioral data that validate these model predictions. The results thus favor a fundamentally inseparable account of go and stop processing, in a manner consistent with the optimal model, and contradicting the independence assumption of the race model. More broadly, our findings contribute to the growing evidence that the computations underlying inhibitory control are systematically modulated by cognitive influences in a Bayes-optimal manner, thus opening new avenues for interpreting neural responses underlying inhibitory control.
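The independent race model discussed in this work can be sketched as a small Monte-Carlo simulation. The finishing-time distributions and all parameter values below are hypothetical, chosen only to illustrate the model's core prediction that inhibition becomes less likely as the stop-signal delay grows.

```python
import random

# Monte-Carlo sketch of the independent race model for the stop-signal
# task: a go process and a stop process race, and whichever finishes
# first determines the outcome. Parameters are hypothetical.

def race_trial(ssd, go_mu=0.45, stop_mu=0.20, sigma=0.05, rng=random):
    """One trial. ssd is the stop-signal delay in seconds.
    Returns True if the response was successfully inhibited."""
    go_finish = rng.gauss(go_mu, sigma)
    stop_finish = ssd + rng.gauss(stop_mu, sigma)
    return stop_finish < go_finish   # stop wins -> response inhibited

def p_inhibit(ssd, n=10000, seed=0):
    """Estimate the probability of successful inhibition at a given delay."""
    rng = random.Random(seed)
    return sum(race_trial(ssd, rng=rng) for _ in range(n)) / n

# Longer stop-signal delays give the go process a head start, so the
# probability of stopping falls -- the classic inhibition function.
```

Note that in this sketch the go distribution is fixed regardless of the stop process, which is exactly the independence assumption the abstract's data contradict.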
Hu, Yanmei; Allen, Richard J; Baddeley, Alan D; Hitch, Graham J
We examined the role of executive control in stimulus-driven and goal-directed attention in visual working memory using probed recall of a series of objects, a task that allows study of the dynamics of storage through analysis of serial position data. Experiment 1 examined whether executive control underlies goal-directed prioritization of certain items within the sequence. Instructing participants to prioritize either the first or final item resulted in improved recall for these items, and an increase in concurrent task difficulty reduced or abolished these gains, consistent with their dependence on executive control. Experiment 2 examined whether executive control is also involved in the disruption caused by a post-series visual distractor (suffix). A demanding concurrent task disrupted memory for all items except the most recent, whereas a suffix disrupted only the most recent items. There was no interaction when concurrent load and suffix were combined, suggesting that deploying selective attention to ignore the distractor did not draw upon executive resources. A final experiment replicated the independent interfering effects of suffix and concurrent load while ruling out possible artifacts. We discuss the results in terms of a domain-general episodic buffer in which information is retained in a transient, limited capacity privileged state, influenced by both stimulus-driven and goal-directed processes. The privileged state contains the most recent environmental input together with goal-relevant representations being actively maintained using executive resources.
Pigeons were trained with the A+, AB-, ABC+, AD- and ADE+ task where each of stimulus A and stimulus compounds ABC and ADE signalled food (positive trials), and each of stimulus compounds AB and AD signalled no food (negative trials). Stimuli A, B, C and E were small visual figures localised on a response key, and stimulus D was a white noise. Stimulus B was more effective than D as an inhibitor of responding to A during the training. After the birds learned to respond exclusively on the positive trials, effects of B and D on responding to C and E, respectively, were tested by comparing C, BC, E and DE trials. Stimulus B continuously facilitated responding to C on the BC test trials, but D's facilitative effect was observed only on the first DE test trial. Stimulus B also facilitated responding to E on BE test trials. Implications for the Rescorla-Wagner elemental model and the Pearce configural model of Pavlovian conditioning were discussed.
Hu, Bing; Wang, Qingyun
Epilepsy is a common disease of the nervous system, and the control of seizures is very important for treating it. Drug treatment is the main strategy for controlling epilepsy. However, in about 10-15 percent of patients, seizures cannot be effectively controlled by drugs. Alternatively, deep brain stimulation (DBS) is a feasible method to control serious seizures. However, theoretical explorations of DBS remain scarce and need to be developed further. Here, we explore control of absence seizures by introducing DBS into a basal ganglia thalamocortical network model. In particular, we apply DBS onto the substantia nigra pars reticulata (SNr) and the cortex to explore its effects on controlling absence seizures. We find that when DBS is implemented in the SNr, absence seizures can be well controlled within suitable parameter ranges by tuning the period and duration of current stimulation. When DBS is applied onto the cortex, for the present parameter ranges, adjusting only the duration of current stimulation is an effective control method for absence seizures. These results provide a better understanding of the mechanism of DBS in medical treatment.
Sedlacek, Miloslav; Brenowitz, Stephan D
Feed-forward inhibition (FFI) represents a powerful mechanism by which control of the timing and fidelity of action potentials in local synaptic circuits of various brain regions is achieved. In the cochlear nucleus, the auditory nerve provides excitation to both principal neurons and inhibitory interneurons. Here, we investigated the synaptic circuit associated with fusiform cells (FCs), principal neurons of the dorsal cochlear nucleus (DCN) that receive excitation from auditory nerve fibers and inhibition from tuberculoventral cells (TVCs) on their basal dendrites in the deep layer of DCN. Despite the importance of these inputs in regulating fusiform cell firing behavior, the mechanisms determining the balance of excitation and FFI in this circuit are not well understood. Therefore, we examined the timing and plasticity of auditory nerve driven FFI onto FCs. We find that in some FCs, excitatory and inhibitory components of FFI had the same stimulation thresholds indicating they could be triggered by activation of the same fibers. In other FCs, excitation and inhibition exhibit different stimulus thresholds, suggesting FCs and TVCs might be activated by different sets of fibers. In addition, we find that during repetitive activation, synapses formed by the auditory nerve onto TVCs and FCs exhibit distinct modes of short-term plasticity. Feed-forward inhibitory post-synaptic currents (IPSCs) in FCs exhibit short-term depression because of prominent synaptic depression at the auditory nerve-TVC synapse. Depression of this feedforward inhibitory input causes a shift in the balance of fusiform cell synaptic input towards greater excitation and suggests that fusiform cell spike output will be enhanced by physiological patterns of auditory nerve activity.
Schüz, Benjamin; Bower, Jodie; Ferguson, Stuart G
Dietary behaviours are substantially influenced by environmental and internal stimuli, such as mood, social situation, and food availability. However, little is known about the role of stimulus control for eating in non-clinical populations, and no studies so far have looked at eating and drinking behaviour simultaneously. 53 individuals from the general population took part in an intensive longitudinal study with repeated, real-time assessments of eating and drinking using Ecological Momentary Assessment. Eating was assessed as main meals and snacks; drink assessments distinguished alcoholic from non-alcoholic drinks. Situational and internal stimuli were assessed during both eating and drinking events and during randomly selected non-eating occasions. Hierarchical multinomial logistic random-effects models were used to analyse the data, comparing dietary events to non-eating occasions. Several situational and affective antecedents of dietary behaviours could be identified. Meals were significantly associated with having food available and observing others eat. Snacking was associated with negative affect, having food available, and observing others eat. Engaging in activities and being with others decreased the likelihood of eating behaviours. Non-alcoholic drinks were associated with observing others eat and with fewer activities and less company. Alcoholic drinks were associated with less negative affect and arousal, and with observing others eat. Results support the role of stimulus control in dietary behaviours, with support for both internal and external stimuli, in particular food availability and social cues. The findings for negative affect support the idea of comfort eating, and the results point to the formation of eating habits via cue-behaviour associations. Copyright © 2015 Elsevier Ltd. All rights reserved.
Fairnie, Jake; Moore, Brian C J; Remington, Anna
In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi
Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and the target in order to reach the target. The perceptual magnitudes of the visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not the visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control.
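The equalization step above relies on Stevens' power law, which relates perceived magnitude to physical intensity as ψ = k·φ^a with a modality-specific exponent. The following minimal sketch illustrates the idea; the exponent values (≈0.33 for brightness, ≈0.67 for loudness) are textbook approximations and the function names are illustrative, not taken from the study.

```python
# Sketch: equalizing perceived magnitude across modalities via Stevens' power
# law, psi = k * phi**a. Exponents here are textbook approximations
# (brightness ~0.33, loudness ~0.67), not values from the study.

def perceived_magnitude(intensity, exponent, k=1.0):
    """Stevens' power law: perceived magnitude from physical intensity."""
    return k * intensity ** exponent

def intensity_for_percept(target_psi, exponent, k=1.0):
    """Invert the power law: physical intensity for a target percept."""
    return (target_psi / k) ** (1.0 / exponent)

# Choose physical intensities so both modalities feel equally strong.
target = 2.0                                   # desired perceived magnitude
visual_phi = intensity_for_percept(target, 0.33)
auditory_phi = intensity_for_percept(target, 0.67)

# Both intensities map back to the same perceived magnitude.
assert abs(perceived_magnitude(visual_phi, 0.33) - target) < 1e-9
assert abs(perceived_magnitude(auditory_phi, 0.67) - target) < 1e-9
```

Because the brightness exponent is smaller, a much larger change in physical intensity is needed to produce the same change in perceived magnitude for the visual channel than for the auditory one.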
Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R; Mangalathu, Jain; Desai, Anjali
Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210 ms, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and presence of irrelevant information revealed stronger activations primarily in frontal and parietal areas due to presence of irrelevant information in the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention.
Yin, Pingbo; Mishkin, Mortimer; Sutter, Mitchell; Fritz, Jonathan B.
To explore the effects of acoustic and behavioral context on neuronal responses in the core of auditory cortex (fields A1 and R), two monkeys were trained on a go/no-go discrimination task in which they learned to respond selectively to a four-note target (S+) melody and withhold response to a variety of other nontarget (S−) sounds. We analyzed evoked activity from 683 units in A1/R of the trained monkeys during task performance and from 125 units in A1/R of two naive monkeys. We characterized two broad classes of neural activity that were modulated by task performance. Class I consisted of tone-sequence–sensitive enhancement and suppression responses. Enhanced or suppressed responses to specific tonal components of the S+ melody were frequently observed in trained monkeys, but enhanced responses were rarely seen in naive monkeys. Both facilitatory and suppressive responses in the trained monkeys showed a temporal pattern different from that observed in naive monkeys. Class II consisted of nonacoustic activity, characterized by a task-related component that correlated with bar release, the behavioral response leading to reward. We observed a significantly higher percentage of both Class I and Class II neurons in field R than in A1. Class I responses may help encode a long-term representation of the behaviorally salient target melody. Class II activity may reflect a variety of nonacoustic influences, such as attention, reward expectancy, somatosensory inputs, and/or motor set and may help link auditory perception and behavioral response. Both types of neuronal activity are likely to contribute to the performance of the auditory task.
Barras, Caroline; Kerzel, Dirk
Some points of criticism against the idea that attentional selection is controlled by bottom-up processing were dispelled by the attentional window account. The attentional window account claims that saliency computations during visual search are only performed for stimuli inside the attentional window. Therefore, a small attentional window may avoid attentional capture by salient distractors because it is likely that the salient distractor is located outside the window. In contrast, a large attentional window increases the chances of attentional capture by a salient distractor. Large and small attentional windows have been associated with efficient (parallel) and inefficient (serial) search, respectively. We compared the effect of a salient color singleton on visual search for a shape singleton during efficient and inefficient search. To vary search efficiency, the nontarget shapes were either similar or dissimilar with respect to the shape singleton. We found that interference from the color singleton was larger with inefficient than efficient search, which contradicts the attentional window account. While inconsistent with the attentional window account, our results are predicted by computational models of visual search. Because of target-nontarget similarity, the target was less salient with inefficient than efficient search. Consequently, the relative saliency of the color distractor was higher with inefficient than with efficient search. Accordingly, stronger attentional capture resulted. Overall, the present results show that bottom-up control by stimulus saliency is stronger when search is difficult, which is inconsistent with the attentional window account.
Background Manipulating task difficulty is a useful way of elucidating the functional recruitment of the brain's executive control network. In a Stroop task, pre-exposing the irrelevant word using varying stimulus onset asynchronies ('negative' SOAs) modulates the amount of behavioural interference and facilitation, suggesting disparate mechanisms of cognitive processing in each SOA. The current study employed a Stroop task with three SOAs (−400, −200, 0 ms), using functional magnetic resonance imaging to investigate for the first time the neural effects of SOA manipulation. Of specific interest were (1) how SOA affects the neural representation of interference and facilitation; (2) response priming effects in negative SOAs; and (3) attentional effects of blocked SOA presentation. Results The results revealed three regions of the executive control network that were sensitive to SOA during Stroop interference; the 0 ms SOA elicited the greatest activation of these areas but experienced relatively smaller behavioural interference, suggesting that the enhanced recruitment led to more efficient conflict processing. Response priming effects were localized to the right inferior frontal gyrus, which is consistent with the idea that this region performed response inhibition in incongruent conditions to overcome the incorrectly-primed response, as well as more general action updating and response preparation. Finally, the right superior parietal lobe was sensitive to blocked SOA presentation and was most active for the 0 ms SOA, suggesting that this region is involved in attentional control. Conclusions SOA exerted both trial-specific and block-wide effects on executive processing, providing a unique paradigm for functional investigations of the cognitive control network.
Hofstadter-Duke, Kristi L; Daly, Edward J
This study investigated a method for conducting experimental analyses of academic responding. In the experimental analyses, academic responding (math computation), rather than problem behavior, was reinforced across conditions. Two separate experimental analyses (one with fluent math computation problems and one with non-fluent math computation problems) were conducted with three elementary school children using identical contingencies while math computation rate was measured. Results indicate that the experimental analysis with non-fluent problems produced undifferentiated responding across participants; however, differentiated responding was achieved for all participants in the experimental analysis with fluent problems. A subsequent comparison of the single-most effective condition from the experimental analyses replicated the findings with novel computation problems. Results are discussed in terms of the critical role of stimulus control in identifying controlling consequences for academic deficits, and recommendations for future research refining and extending experimental analysis to academic responding are made.
Abbass, Hussein A; Tang, Jiangjun; Ellejmi, Mohamed; Kirby, Stephen
The use of quantitative electroencephalography in the analysis of air traffic controllers' performance can reveal with a high temporal resolution those mental responses associated with different task demands. To understand the relationship between visual and auditory correct responses, reaction time, and the corresponding brain areas and functions, air traffic controllers were given an integrated visual and auditory continuous reaction task. Strong correlations were found between correct responses to the visual target and the theta band in the frontal lobe, the total power in the medial of the parietal lobe and the theta-to-beta ratio in the left side of the occipital lobe. Incorrect visual responses triggered activations in additional bands including the alpha band in the medial of the frontal and parietal lobes, and the Sensorimotor Rhythm in the medial of the parietal lobe. Controllers' responses to visual cues were found to be more accurate but slower than their corresponding performance on auditory cues. These results suggest that controllers are more susceptible to overload when more visual cues are used in the air traffic control system, and that errors are reduced as more auditory cues are used. Therefore, workload studies should be carried out to assess the usefulness of additional cues and their interactions with the air traffic control environment.
Oliveira, Emileane C; Hunziker, Maria Helena
In this study, we investigated whether (a) animals demonstrating the learned helplessness effect during an escape contingency also show learning deficits under positive reinforcement contingencies involving stimulus control and (b) exposure to positive reinforcement contingencies eliminates the learned helplessness effect under an escape contingency. Rats were initially exposed to controllable (C), uncontrollable (U) or no (N) shocks. After 24 h, they were exposed to 60 escapable shocks delivered in a shuttlebox. In the following phase, we selected from each group the four subjects that presented the most typical group pattern: no escape learning (learned helplessness effect) in Group U and escape learning in Groups C and N. All subjects were then exposed to two phases: (1) positive reinforcement for lever pressing under a multiple FR/extinction schedule and (2) a re-test under negative reinforcement (escape). A fourth group (n=4) was exposed only to the positive reinforcement sessions. All subjects showed discrimination learning under the multiple schedule. In the escape re-test, the learned helplessness effect was maintained for three of the animals in Group U. These results suggest that the learned helplessness effect did not extend to discriminative behavior that is positively reinforced and that the learned helplessness effect did not revert for most subjects after exposure to positive reinforcement. We discuss some theoretical implications related to learned helplessness as an effect restricted to aversive contingencies and to the absence of reversion after positive reinforcement.
Batista, Gervasio; Johnson, Jennifer Leigh; Dominguez, Elena; Costa-Mattioli, Mauro; Pena, Jose L
The formation of imprinted memories during a critical period is crucial for vital behaviors, including filial attachment. Yet, little is known about the underlying molecular mechanisms. Using a combination of behavior, pharmacology, in vivo surface sensing of translation (SUnSET) and DiOlistic labeling we found that, translational control by the eukaryotic translation initiation factor 2 alpha (eIF2α) bidirectionally regulates auditory but not visual imprinting and related changes in structural plasticity in chickens. Increasing phosphorylation of eIF2α (p-eIF2α) reduces translation rates and spine plasticity, and selectively impairs auditory imprinting. By contrast, inhibition of an eIF2α kinase or blocking the translational program controlled by p-eIF2α enhances auditory imprinting. Importantly, these manipulations are able to reopen the critical period. Thus, we have identified a translational control mechanism that selectively underlies auditory imprinting. Restoring translational control of eIF2α holds the promise to rejuvenate adult brain plasticity and restore learning and memory in a variety of cognitive disorders.
Auditory feedback from the animal's own voice is essential during bat echolocation: to optimize signal detection, bats continuously adjust various call parameters in response to changing echo signals. Auditory feedback seems also necessary for controlling many bat communication calls, although it remains unclear how auditory feedback control differs in echolocation and communication. We tackled this question by analyzing echolocation and communication in greater horseshoe bats, whose echolocation pulses are dominated by a constant frequency component that matches the frequency range they hear best. To maintain echoes within this "auditory fovea", horseshoe bats constantly adjust their echolocation call frequency depending on the frequency of the returning echo signal. This Doppler-shift compensation (DSC) behavior represents one of the most precise forms of sensory-motor feedback known. We examined the variability of echolocation pulses emitted at rest (resting frequencies, RFs) and one type of communication signal which resembles an echolocation pulse but is much shorter (short constant frequency communication calls, SCFs), produced only during social interactions. We found that while RFs varied from day to day, corroborating earlier studies in other constant frequency bats, SCF-frequencies remained unchanged. In addition, RFs overlapped for some bats whereas SCF-frequencies were always distinctly different. This indicates that auditory feedback during echolocation changed with varying RFs but remained constant or may have been absent during emission of SCF calls for communication. This fundamentally different feedback mechanism for echolocation and communication may have enabled these bats to use SCF calls for individual recognition whereas they adjusted RF calls to accommodate the daily shifts of their auditory fovea.
Schwent, V. L.; Hillyard, S. A.; Galambos, R.
The effects of varying the rate of delivery of dichotic tone pip stimuli on selective attention, measured by evoked-potential amplitudes and signal detectability scores, were studied. The subjects attended to one channel (ear) of tones, ignored the other, and pressed a button whenever occasional targets (tones of a slightly higher pitch) were detected in the attended ear. Under separate conditions, randomized interstimulus intervals were short, medium, and long. Another study compared the effects of attention on the N1 component of the auditory evoked potential for tone pips presented alone and when white noise was added to make the tones barely above detectability threshold in a three-channel listening task. Major conclusions are that (1) N1 is enlarged to stimuli in an attended channel only in the short interstimulus interval condition (averaging 350 msec), (2) N1 and P3 are related to different modes of selective attention, and (3) attention selectivity in the multichannel listening task is greater when tones are faint and/or difficult to detect.
Liu, Ying; Fan, Hao; Li, Jingting; Jones, Jeffery A; Liu, Peng; Zhang, Baofeng; Liu, Hanjun
When people hear unexpected perturbations in auditory feedback, they produce rapid compensatory adjustments of their vocal behavior. Recent evidence has shown enhanced vocal compensations and cortical event-related potentials (ERPs) in response to attended pitch feedback perturbations, suggesting that this reflex-like behavior is influenced by selective attention. Less is known, however, about auditory-motor integration for voice control during divided attention. The present cross-modal study investigated the behavioral and ERP correlates of auditory feedback control of vocal pitch production during divided attention. During the production of sustained vowels, 32 young adults were instructed to simultaneously attend to both the pitch feedback perturbations they heard and the flashing red lights they saw. The presentation rate of the visual stimuli was varied to produce a low, intermediate, and high attentional load. The behavioral results showed that the low-load condition elicited significantly smaller vocal compensations for pitch perturbations than the intermediate-load and high-load conditions. The cortical processing of vocal pitch feedback was likewise modulated as a function of divided attention. When compared to the low-load and intermediate-load conditions, the high-load condition elicited significantly larger N1 responses and smaller P2 responses to pitch perturbations. These findings provide the first neurobehavioral evidence that divided attention can modulate auditory feedback control of vocal pitch production.
Rincover, Arnold; And Others
Three autistic boys (ages 9-13) were trained to select a card containing a stimulus array comprised of three visual cues. Decreased distance between cues resulted in responses to more cues, increased distance to fewer cues. Distances did not affect the responding of children matched for mental and chronological age. (Author/JW)
Scherbaum, Stefan; Frisch, Simon; Dshemuchadse, Maja
Selective attention and its adaptation by cognitive control processes are considered a core aspect of goal-directed action. Often, selective attention is studied behaviorally with conflict tasks, but an emerging neuroscientific method for the study of selective attention is EEG frequency tagging. It applies different flicker frequencies to the stimuli of interest eliciting steady state visual evoked potentials (SSVEPs) in the EEG. These oscillating SSVEPs in the EEG allow tracing the allocation of selective attention to each tagged stimulus continuously over time. The present behavioral investigation points to an important caveat of using tagging frequencies: The flicker of stimuli not only produces a useful neuroscientific marker of selective attention, but interacts with the adaptation of selective attention itself. Our results indicate that RT patterns of adaptation after response conflict (so-called conflict adaptation) are reversed when flicker frequencies switch at once. However, this effect of frequency switches is specific to the adaptation by conflict-driven control processes, since we find no effects of frequency switches on inhibitory control processes after no-go trials. We discuss the theoretical implications of this finding and propose precautions that should be taken into account when studying conflict adaptation using frequency tagging in order to control for the described confounds.
Background and Aim: Tinnitus is an unpleasant sound that can cause behavioral disorders. According to the evidence, the origin of tinnitus lies not only in the peripheral but also in the central auditory system, so evaluation of central auditory system function is necessary. In this study, auditory brainstem responses (ABRs) were compared between noise-induced tinnitus subjects and non-tinnitus controls. Materials and Methods: This cross-sectional, descriptive, and analytic study was conducted on 60 cases in two groups: 30 noise-induced tinnitus subjects and 30 non-tinnitus controls. ABRs were recorded ipsilaterally and contralaterally, and their latencies and amplitudes were analyzed. Results: Mean interpeak latencies of III-V (p=0.022) and I-V (p=0.033) in the ipsilateral electrode array, and mean absolute latencies of waves IV (p=0.015) and V (p=0.048) in the contralateral electrode array, were significantly increased in the noise-induced tinnitus group relative to the control group. Conclusion: It can be concluded that there is an increase in neural transmission time in the brainstem, with signs of involvement of the medial nuclei of the olivary complex in addition to the lateral lemniscus.
Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan
The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…
Charles R Larson; Donald A Robin
The pitch-shift paradigm has become a widely used method for studying the role of voice pitch auditory feedback in voice control. This paradigm introduces small, brief pitch shifts in voice auditory feedback to vocalizing subjects. The perturbations trigger a reflexive mechanism that counteracts the change in pitch. The underlying mechanisms of the vocal responses are thought to reflect a negative feedback control system that is similar to constructs developed to explain other forms of motor ...
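The negative feedback construct described in the pitch-shift paradigm can be illustrated with a minimal discrete-time sketch: produced pitch is adjusted opposite to the perceived error between (perturbed) auditory feedback and the intended target. The gain, perturbation size, and function name are illustrative assumptions, not values from the literature.

```python
# Minimal sketch of the reflex-like negative feedback idea in the pitch-shift
# paradigm: the produced pitch is nudged opposite to the perceived error
# between auditory feedback and the intended target. Gain and perturbation
# values are illustrative, not from the study.

def simulate_pitch_compensation(target_hz, shift_cents, gain=0.3, steps=30):
    produced = target_hz
    trace = []
    for _ in range(steps):
        feedback = produced * 2 ** (shift_cents / 1200)  # perturbed feedback
        error = feedback - target_hz                      # perceived error
        produced -= gain * error                          # compensate opposite
        trace.append(produced)
    return trace

trace = simulate_pitch_compensation(200.0, shift_cents=100)
```

With an upward 100-cent shift, the loop settles at a produced pitch below the 200 Hz target (at target / 2^(100/1200)), so that the shifted feedback again matches the target, which is the counteracting behavior the paradigm measures.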
Eric O. Boyer
As eye movements are mostly automatic and overtly generated to attain visual goals, individuals have poor metacognitive knowledge of their own eye movements. We present an exploratory study on the effects of real-time continuous auditory feedback generated by eye movements. We considered both a tracking task and a production task in which smooth pursuit eye movements (SPEM) can be endogenously generated. In particular, we used a visual paradigm which makes it possible to generate and control SPEM in the absence of a moving visual target. We investigated whether real-time auditory feedback of eye movement dynamics might improve learning in both tasks, through a training protocol over 8 days. The results indicate that real-time sonification of eye movements can actually modify oculomotor behavior and reinforce intrinsic oculomotor perception. Nevertheless, large inter-individual differences were observed, preventing us from reaching a strong conclusion on sensorimotor learning improvements.
Mückschel, Moritz; Dippel, Gabriel; Beste, Christian
Response inhibition mechanisms are mediated via cortical and subcortical networks. At the cortical level, the superior frontal gyrus, including the supplementary motor area (SMA) and inferior frontal areas, is important. There is an ongoing debate about the functional roles of these structures during response inhibition as it is unclear whether these structures process different codes or contents of information during response inhibition. In the current study, we examined this question with a focus on theta frequency oscillations during response inhibition processes. We used a standard Go/Nogo task in a sample of human participants and combined different EEG signal decomposition methods with EEG beamforming approaches. The results suggest that stimulus coding during inhibitory control is attained by oscillations in the upper theta frequency band (∼7 Hz). In contrast, response selection codes during inhibitory control appear to be attained by the lower theta frequency band (∼4 Hz). Importantly, these different codes seem to be processed in distinct functional neuroanatomical structures. Although the SMA may process stimulus codes and response selection codes, the inferior frontal cortex may selectively process response selection codes during inhibitory control. Taken together, the results suggest that different entities within the functional neuroanatomical network associated with response inhibition mechanisms process different kinds of codes during inhibitory control. These codes seem to be reflected by different oscillations within the theta frequency band. Hum Brain Mapp 38:5681-5690, 2017.
Karen Johanne Pallesen
Musical competence may confer cognitive advantages that extend beyond processing of familiar musical sounds. Behavioural evidence indicates a general enhancement of both working memory and attention in musicians. It is possible that musicians, due to their training, are better able to maintain focus on task-relevant stimuli, a skill which is crucial to working memory. We measured the blood oxygenation level-dependent (BOLD) activation signal in musicians and non-musicians during working memory of musical sounds to determine the relation among performance, musical competence, and generally enhanced cognition. All participants easily distinguished the stimuli. We tested the hypothesis that musicians would nonetheless perform better, and that differential brain activity would mainly be present in cortical areas involved in cognitive control such as the lateral prefrontal cortex. The musicians performed better as reflected in reaction times and error rates. Musicians also had larger BOLD responses than non-musicians in neuronal networks that sustain attention and cognitive control, including regions of the lateral prefrontal cortex, lateral parietal cortex, insula, and putamen in the right hemisphere, and bilaterally in the posterior dorsal prefrontal cortex and anterior cingulate gyrus. The relationship between task performance and the magnitude of the BOLD response was more positive in musicians than in non-musicians, particularly during the most difficult working memory task. The results confirm previous findings that neural activity increases during enhanced working memory performance. The results also suggest that superior working memory task performance in musicians relies on an enhanced ability to exert sustained cognitive control. This cognitive benefit in musicians may be a consequence of focused musical training.
Piray, Payam; Zeighami, Yashar; Bahrami, Fariba; Eissa, Abeer M; Hewedi, Doaa H; Moustafa, Ahmed A
A substantial subset of Parkinson's disease (PD) patients suffers from impulse control disorders (ICDs), which are side effects of dopaminergic medication. Dopamine plays a key role in reinforcement learning processes. One class of reinforcement learning models, known as the actor-critic model, suggests that two components are involved in these reinforcement learning processes: a critic, which estimates values of stimuli and calculates prediction errors, and an actor, which estimates values of potential actions. To understand the information processing mechanism underlying impulsive behavior, we investigated stimulus and action value learning from reward and punishment in four groups of participants: on-medication PD patients with ICD, on-medication PD patients without ICD, off-medication PD patients without ICD, and healthy controls. Analysis of responses suggested that participants used an actor-critic learning strategy and computed prediction errors based on stimulus values rather than action values. Quantitative model fits also revealed that an actor-critic model of the basal ganglia with different learning rates for positive and negative prediction errors best matched the choice data. Moreover, whereas ICDs were associated with model parameters related to stimulus valuation (critic), PD was associated with parameters related to action valuation (actor). Specifically, PD patients with ICD exhibited lower learning from negative prediction errors in the critic, resulting in an underestimation of adverse consequences associated with stimuli. These findings offer a specific neurocomputational account of the nature of compulsive behaviors induced by dopaminergic drugs.
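The actor-critic scheme described above can be sketched in a few lines: the critic learns a stimulus value and computes prediction errors, the actor updates action preferences from the same error, and separate learning rates apply to positive versus negative prediction errors. All parameter values and names below are illustrative assumptions, not the fitted parameters from the study.

```python
# Sketch of an actor-critic update with asymmetric learning rates: the critic
# learns stimulus value V and computes the prediction error; the actor updates
# action preferences from that same error. Parameter values are illustrative.

def actor_critic_step(V, prefs, action, reward, alpha_pos=0.2, alpha_neg=0.2):
    """One trial: critic updates stimulus value V, actor updates preferences."""
    delta = reward - V                        # prediction error from stimulus value
    alpha = alpha_pos if delta > 0 else alpha_neg
    V += alpha * delta                        # critic: stimulus valuation
    prefs = dict(prefs)
    prefs[action] += alpha * delta            # actor: action valuation
    return V, prefs

# A reduced alpha_neg (as reported for the ICD group) underweights punishment:
V_low, V_high = 0.0, 0.0
prefs = {"press": 0.0}
for _ in range(10):
    V_low, _ = actor_critic_step(V_low, prefs, "press", reward=-1.0, alpha_neg=0.05)
    V_high, _ = actor_critic_step(V_high, prefs, "press", reward=-1.0, alpha_neg=0.5)
# After repeated punishment, V_low remains far above the true value (-1),
# i.e. the low-alpha_neg critic underestimates the adverse outcome.
```

This reproduces, in miniature, the paper's interpretation: a smaller learning rate for negative prediction errors leaves the learned stimulus value too optimistic after punishment.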
Foxton, Jessica M; Stewart, Mary E; Barnard, Louise; Rodgers, Jacqui; Young, Allan H; O'Brien, Gregory; Griffiths, Timothy D
There has been considerable recent interest in the cognitive style of individuals with Autism Spectrum Disorder (ASD). One theory, that of weak central coherence, concerns an inability to combine stimulus details into a coherent whole. Here we test this theory in the case of sound patterns, using a new definition of the details (local structure) and the coherent whole (global structure). Thirteen individuals with a diagnosis of autism or Asperger's syndrome and 15 control participants were administered auditory tests, where they were required to match local pitch direction changes between two auditory sequences. When the other local features of the sequence pairs were altered (the actual pitches and relative time points of pitch direction change), the control participants obtained lower scores compared with when these details were left unchanged. This can be attributed to interference from the global structure, defined as the combination of the local auditory details. In contrast, the participants with ASD did not obtain lower scores in the presence of such mismatches. This was attributed to the absence of interference from an auditory coherent whole. The results are consistent with the presence of abnormal interactions between local and global auditory perception in ASD.
Donmez, Birsen; Cummings, M L; Graham, Hudson D
This article is an investigation of the effectiveness of sonifications, which are continuous auditory alerts mapped to the state of a monitored task, in supporting unmanned aerial vehicle (UAV) supervisory control. UAV supervisory control requires monitoring a UAV across multiple tasks (e.g., course maintenance) via a predominantly visual display, which currently is supported with discrete auditory alerts. Sonification has been shown to enhance monitoring performance in domains such as anesthesiology by allowing an operator to immediately determine an entity's (e.g., patient) current and projected states, and is a promising alternative to discrete alerts in UAV control. However, minimal research compares sonification to discrete alerts, and no research assesses the effectiveness of sonification for monitoring multiple entities (e.g., multiple UAVs). The authors conducted an experiment with 39 military personnel, using a simulated setup. Participants controlled single and multiple UAVs and received sonifications or discrete alerts based on UAV course deviations and late target arrivals. Regardless of the number of UAVs supervised, the course deviation sonification resulted in reactions to course deviations that were 1.9 s faster, a 19% enhancement, compared with discrete alerts. However, course deviation sonifications interfered with the effectiveness of discrete late arrival alerts in general and with operator responses to late arrivals when supervising multiple vehicles. Sonifications can outperform discrete alerts when designed to aid operators to predict future states of monitored tasks. However, sonifications may mask other auditory alerts and interfere with other monitoring tasks that require divided attention. This research has implications for supervisory control display design.
Cha, Yuri; Kim, Young; Hwang, Sujin; Chung, Yijung
Motor relearning protocols should involve task-oriented movement, focused attention, and repetition of desired movements. To investigate the effect of intensive gait training with rhythmic auditory stimulation on postural control and gait performance in individuals with chronic hemiparetic stroke. Twenty patients with chronic hemiparetic stroke participated in this study. Subjects in the rhythmic auditory stimulation training group (10 subjects) underwent intensive gait training with rhythmic auditory stimulation for a period of 6 weeks (30 min/day, five days/week), while those in the control group (10 subjects) underwent intensive gait training alone for the same duration. Two clinical measures, the Berg balance scale and the stroke-specific quality of life scale, and a 2-dimensional gait analysis system were used as outcome measures. To provide rhythmic auditory stimulation during gait training, the MIDI Cuebase musical instrument digital interface program and KM Player version 3.3 were utilized. Intensive gait training with rhythmic auditory stimulation resulted in significant improvement in Berg balance scale scores, gait velocity, cadence, stride length and double support period on the affected side, and stroke-specific quality of life scale scores compared with the control group after training. Findings of this study suggest that intensive gait training with rhythmic auditory stimulation improves balance and gait performance, as well as quality of life, in individuals with chronic hemiparetic stroke.
Wightman, Frederic L.; Jenison, Rick
All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.
Villar, Anna Carolina Nascimento Waack Braga; Pereira, Liliane Desgualdo
To investigate the auditory skills of closure and figure-ground, and factors associated with health, communication, and attention, in air traffic controllers, and to compare these variables with those of other civil and military servants. Study participants were sixty adults with normal audiometric thresholds divided into two groups matched for age and gender: a study group (SG), comprising 30 air traffic controllers, and a control group (CG), composed of 30 other military and civil servants. All participants were asked a number of questions regarding their health, communication, and attention, and underwent the Speech-in-Noise Test (SIN) to assess their closure skills and the Synthetic Sentence Identification Test - Ipsilateral Competitive Message (SSI-ICM) in monotic listening to evaluate their figure-ground abilities. Data were compared using nonparametric statistical tests and logistic regression analysis. More individuals in the SG reported fatigue and/or burnout and work-related stress, and the SG showed better performance than the CG for the figure-ground ability. Both groups performed similarly and satisfactorily in the other hearing tests. The odds ratio for belonging to the SG was 5.59 for work-related stress and 1.24 for SSI-ICM (right ear) performance. Results for the variables auditory closure, self-reported health, attention, and communication were similar in both groups. The SG presented significantly better performance in auditory figure-ground compared with that of the CG. Self-reported stress and right-ear SSI-ICM were significant predictors of individuals belonging to the SG.
Jones, JoAnna; Lerman, Dorothea C; Lechago, Sarah
We taught social responses to young children with autism using an adult as the recipient of the social interaction and then assessed generalization of performance to adults and peers who had not participated in the training. Although the participants' performance was similar across adults, responding was less consistent with peers, and a subsequent probe suggested that the recipient of the social behavior (adults vs. peers) controlled responding. We then evaluated the effects of having participants observe a video of a peer engaged in the targeted social behavior with another peer who provided reinforcement for the social response. Results suggested that certain irrelevant stimuli (adult vs. peer recipient) were more likely to exert stimulus control over responding than others (setting, materials) and that video viewing was an efficient way to promote generalization to peers. © Society for the Experimental Analysis of Behavior.
Varella, André A B; de Souza, Deisy G
Empirical studies have demonstrated that class-specific contingencies may engender stimulus-reinforcer relations. In these studies, crossmodal relations emerged when crossmodal relations comprised the baseline, and intramodal relations emerged when intramodal relations were taught during baseline. This study investigated whether auditory-visual relations (crossmodal) would emerge after participants learned a visual-visual baseline (intramodal) with auditory stimuli presented as specific consequences. Four individuals with autism learned AB and CD relations with class-specific reinforcers. When A1 and C1 were presented as samples, the selections of B1 and D1, respectively, were followed by an edible (R1) and a sound (S1). Selections of B2 and D2 under the control of A2 and C2, respectively, were followed by R2 and S2. Probe trials tested for visual-visual AC, CA, AD, DA, BC, CB, BD, and DB emergent relations and auditory-visual SA, SB, SC, and SD emergent relations. All of the participants demonstrated the emergence of all auditory-visual relations, and three of four participants showed emergence of all visual-visual relations. Thus, the emergence of auditory-visual relations from specific auditory consequences suggests that these relations do not depend on crossmodal baseline training. The procedure has great potential for applied technology to generate auditory-visual discriminations and stimulus classes in the context of behavior-analytic interventions for autism. © Society for the Experimental Analysis of Behavior.
Melzer, Itshak; Damry, Elad; Landau, Anat; Yagev, Ronit
In order to evaluate the effect of an auditory-memory attention-demanding task on balance control, nine blind adults were compared to nine age- and gender-matched sighted controls. This issue is particularly relevant for the blind population, in which functional assessment of postural control has to be revealed through "real life" motor and cognitive function. The study aimed to explore whether an auditory-memory attention-demanding cognitive task would influence postural control in blind persons, and to compare this with blindfolded sighted persons. Subjects were instructed to minimize body sway during narrow-base upright standing on a single force platform under two conditions: 1) standing still (single task); 2) as in 1) while performing an auditory-memory attention-demanding cognitive task (dual task). Subjects in both groups were required to stand blindfolded with their eyes closed. Center of pressure displacement data were collected and analyzed using summary statistics and stabilogram-diffusion analysis. Blind and sighted subjects had similar postural sway in the eyes-closed condition. However, in the dual task compared to the single task, sighted subjects showed a significant decrease in postural sway while blind subjects did not. The auditory-memory attention-demanding cognitive task had no interference effect on balance control in blind subjects. It seems that sighted individuals used auditory cues to compensate for momentary loss of vision, whereas blind subjects did not. This may suggest that blind and sighted people use different sensorimotor strategies to achieve stability. Copyright © 2010 Elsevier Ltd. All rights reserved.
Groskreutz, Nicole C.; Karsina, Allen; Miguel, Caio F.; Groskreutz, Mark P.
Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually…
Lydon, Sinéad; Moran, Laura; Healy, Olive; Mulhern, Teresa; Enright Young, Kerie
Stereotypy is pervasive among persons with autism and may impact negatively on social inclusion and learning. The implementation of resource-intensive behavioral interventions to decrease these behaviors has been questioned. Inhibitory stimulus control procedures (ISCPs) comprise a type of antecedent-based intervention that has been proposed as an effective treatment approach for stereotypy but has received limited research attention to date. The current systematic review sought to examine and synthesize the literature reporting applications of ISCPs in the treatment of stereotypy among persons with autism. Treatment outcomes were analyzed quantitatively and the status of ISCPs as evidence-based practice was evaluated in accordance with the National Autism Center's National Standards Report guidelines. A total of 11 studies were reviewed with results indicating that ISCPs constituted an emerging treatment for the stereotypy exhibited by persons with autism. ISCPs comprise a promising intervention for stereotyped behavior but further research is required.
We investigated the effects of auditory stimuli on the perceived velocity of a moving visual stimulus. Previous studies have reported that the duration of visual events is perceived as being longer for events filled with auditory stimuli than for events not filled with auditory stimuli, ie, the so-called "filled-duration illusion." In this study, we have shown that auditory stimuli also affect the perceived velocity of a moving visual stimulus. In Experiment 1, a moving comparison stimulus (4.2–5.8 deg/s) was presented together with filled (or unfilled) white-noise bursts or with no sound. The standard stimulus was a moving visual stimulus (5 deg/s) presented before or after the comparison stimulus. The participants had to judge which stimulus was moving faster. The results showed that the perceived velocity in the auditory-filled condition was lower than that in the auditory-unfilled and no-sound conditions. In Experiment 2, we investigated the effects of auditory stimuli on velocity adaptation. The results showed that the effects of velocity adaptation in the auditory-filled condition were weaker than those in the no-sound condition. These results indicate that auditory stimuli tend to decrease the perceived velocity of a moving visual stimulus.
Wang, Yanan; Qin, Qing-Hua
The control mechanism of mechanical bone remodeling at the cellular level was investigated by means of an extensive parametric study on the theoretical model described in this paper. From the perspective of control, it was found that several control mechanisms work simultaneously in bone remodeling, which is a complex process. Specifically, an extensive parametric study was carried out to investigate the model parameter space related to cell differentiation and apoptosis, which describes the fundamental cell lineage behaviors. After analyzing all 728 permutations of the six model parameters, we identified a small number of parameter combinations that lead to physiologically realistic responses similar to theoretically idealized physiological responses. The results presented in this work enhance our understanding of mechanical bone remodeling, and the identified control mechanisms can help researchers develop combined pharmacological-mechanical therapies to treat bone loss diseases such as osteoporosis.
Abstract Background Recent research has implicated deficits of working memory (WM) and attention in dyslexia. The N100 component of event-related potentials (ERP) is thought to reflect attention and working memory operation. However, previous studies have shown controversial results concerning the N100 in dyslexia. Variability in this issue may result from inappropriate matching of the control sample, which is usually based exclusively on age and gender. Methods In order to address this question, the present study investigated the auditory N100 component elicited during a WM test in 38 dyslexic children in comparison to 19 unaffected sibling controls. Both groups met the criteria of the International Classification of Diseases (ICD-10). ERPs were evoked by two stimuli, a low (500 Hz) and a high (3000 Hz) frequency tone, indicating forward and reverse digit span respectively. Results Compared to their sibling controls, dyslexic children exhibited significantly reduced N100 amplitudes induced by reverse digit span at Fp1, F3, Fp2, Fz, C4, Cz and F4, and by forward digit span at Fp1, F3, C5, C3, Fz, F4, C6, P4 and Fp2. Memory performance of the dyslexic group was not significantly lower than that of the controls. However, enhanced memory performance in the control group was associated with increased N100 amplitude induced by high frequency stimuli at the C5, C3, C6 and P4 leads and by low frequency stimuli at the P4 lead. Conclusion The present findings support the notion of weakened capture of auditory attention in dyslexia, allowing for a possible impairment in the dynamics linking attention with short-term memory, as suggested by the anchoring-deficit hypothesis.
Donohue, Sarah E.; Liotti, Mario; Perez, Rick; Woldorff, Marty G.
The electrophysiological correlates of conflict processing and cognitive control have been well characterized for the visual modality in paradigms such as the Stroop task. Much less is known about corresponding processes in the auditory modality. Here, electroencephalographic recordings of brain activity were measured during an auditory Stroop task, using three different forms of behavioral response (Overt verbal, Covert verbal, and Manual), that closely paralleled our previous visual-Stroop study. As expected, behavioral responses were slower and less accurate for incongruent compared to congruent trials. Neurally, incongruent trials showed an enhanced fronto-central negative-polarity wave (Ninc), similar to the N450 in visual-Stroop tasks, with similar variations as a function of behavioral response mode, but peaking ~150 ms earlier, followed by an enhanced positive posterior wave. In addition, sequential behavioral and neural effects were observed that supported the conflict-monitoring and cognitive-adjustment hypothesis. Thus, while some aspects of the conflict detection processes, such as timing, may be modality-dependent, the general mechanisms would appear to be supramodal. PMID:21964643
Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto
Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (ERP) activity was recorded from 20 SCH patients and 20 healthy controls. The coupling changes between auditory response and pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, phase-locking value and Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect an impaired communication among neural areas, which may be related to abnormal cognitive functions.
Zarkesh-Ha, Payman [University of New Mexico]
The main goal of this research grant is to develop a system-level solution leveraging novel technologies that enable network communications at 100 Gb/s or beyond. University of New Mexico in collaboration with Acadia Optronics LLC has been working on this project to develop the 100 Gb/s Network Interface Controller (NIC) under this Department of Energy (DOE) grant.
Miller, Carlin J.; Miller, Scott R.; Healey, Dione M.; Marshall, Katie; Halperin, Jeffrey M.
Temperament and attention-deficit/hyperactivity disorder (ADHD) are both typically viewed as biologically based behavioural constructs. There is substantial overlap between ADHD symptoms and specific temperamental traits, such as effortful control, especially in young children. Recent work by Martel and colleagues (2009, 2011) suggests that…
Heimler, B.; Pavani, F.; Donk, M.; van Zoest, W.
Action videogame players (AVGPs) have been shown to outperform nongamers (NVGPs) in covert visual attention tasks. These advantages have been attributed to improved top-down control in this population. The time course of visual selection, which permits researchers to highlight when top-down
To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell–auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone burst and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.
Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina
Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.
Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.
Cai, Kun; Wan, Jing; Shi, Jiao; Qin, Qing H
Since a double-walled carbon nanotube (DWCNT)-based rotary motor driven by a uniform temperature field was proposed in 2014, how to quantitatively control the rotation of the rotor has remained an open question. In this work, we present a mathematical relationship between the rotor's speed and the interaction energy. Essentially, the increment of interaction energy between the rotor and the stator(s) determines the rotor's rotational speed, whereas the type of radial deviation of an end carbon atom on the stator determines the rotational direction. The rotational speed of the rotor can thus be specified by adjusting the temperature and the radial deviation of an end carbon atom on the stator. This is promising for designing a controllable temperature-driven rotary motor based on DWCNTs only a few nanometers in length.
Pérez-Díaz, Francisco; Díaz, Estrella; Sánchez, Natividad; Vargas, Juan Pedro; Pearce, John M; López, Juan Carlos
Recent studies support the idea that stimulus processing in latent inhibition can vary during the course of preexposure. Controlled attentional mechanisms are said to be important in the early stages of preexposure, while in later stages animals adopt automatic processing of the stimulus to be used for conditioning. Given this distinction, it is possible that both types of processing are governed by different neural systems, affecting differentially the retrieval of information about the stimulus. In the present study we tested if a lesion to the dorso-lateral striatum or to the medial prefrontal cortex has a selective effect on exposure to the future conditioned stimulus (CS). With this aim, animals received different amounts of exposure to the future CS. The results showed that a lesion to the medial prefrontal cortex enhanced latent inhibition in animals receiving limited preexposure to the CS, but had no effect in animals receiving extended preexposure to the CS. The lesion of the dorso-lateral striatum produced a decrease in latent inhibition, but only in animals with an extended exposure to the future conditioned stimulus. These results suggest that the dorsal striatum and medial prefrontal cortex play essential roles in controlled and automatic processes. Automatic attentional processes appear to be impaired by a lesion to the dorso-lateral striatum and facilitated by a lesion to the prefrontal cortex.
Szalóki, György; Croué, Vincent; Carré, Vincent; Aubriet, Frédéric; Alévêque, Olivier; Levillain, Eric; Allain, Magali; Aragó, Juan; Ortí, Enrique; Goeb, Sébastien; Sallé, Marc
A proof-of-concept related to the redox-control of the binding/releasing process in a host-guest system is achieved by designing a neutral and robust Pt-based redox-active metallacage involving two extended-tetrathiafulvalene (exTTF) ligands. When neutral, the cage is able to bind a planar polyaromatic guest (coronene). Remarkably, the chemical or electrochemical oxidation of the host-guest complex leads to the reversible expulsion of the guest outside the cavity, which is assigned to a drastic change of the host-guest interaction mode, illustrating the key role of counteranions along the exchange process. The reversible process is supported by various experimental data (1H NMR spectroscopy, ESI-FTICR, and spectroelectrochemistry) as well as by in-depth theoretical calculations performed at the density functional theory (DFT) level. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Smith, Sherri L.; Saunders, Gabrielle H.; Chisolm, Theresa H.; Frederick, Melissa; Bailey, Beth A.
Purpose: The purpose of this study was to determine if patient characteristics or clinical variables could predict who benefits from individual auditory training. Method: A retrospective series of analyses were performed using a data set from a large, multisite, randomized controlled clinical trial that compared the treatment effects of at-home…
Hughes, Robert W.; Hurlstone, Mark J.; Marsh, John E.; Vachon, Francois; Jones, Dylan M.
The influence of top-down cognitive control on 2 putatively distinct forms of distraction was investigated. Attentional capture by a task-irrelevant auditory deviation (e.g., a female-spoken token following a sequence of male-spoken tokens)--as indexed by its disruption of a visually presented recall task--was abolished when focal-task engagement…
Bravi, Riccardo; Del Tongo, Claudia; Cohen, Erez James; Dalle Mura, Gabriele; Tognetti, Alessandro; Minciacchi, Diego
The ability to perform isochronous movements while listening to a rhythmic auditory stimulus requires a flexible process that integrates timing information with movement. Here, we explored how non-temporal and temporal characteristics of an auditory stimulus (presence, interval occupancy, and tempo) affect motor performance. These characteristics were chosen on the basis of their ability to modulate the precision and accuracy of synchronized movements. Subjects participated in sessions in which they performed sets of repeated isochronous wrist flexion-extensions under various conditions. The conditions were chosen on the basis of the defined characteristics. Kinematic parameters were evaluated during each session, and temporal parameters were analyzed. In order to study the effects of the auditory stimulus, we minimized all other sensory information that could interfere with its perception or affect the performance of repeated isochronous movements. The present study shows that the distinct characteristics of an auditory stimulus significantly influence isochronous movements by altering their duration. Results provide evidence for an adaptable control of timing in the audio-motor coupling for isochronous movements. This flexibility would make plausible the use of different encoding strategies to adapt audio-motor coupling for specific tasks.
Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.
Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…
Graf, Heiko; Wiegers, Maike; Metzger, Coraline D; Walter, Martin; Grön, Georg; Abler, Birgit
Impaired sexual function is increasingly recognized as a side effect of psychopharmacological treatment. However, underlying mechanisms of action of the different drugs on sexual processing are still to be explored. Using functional magnetic resonance imaging, we previously investigated effects of serotonergic (paroxetine) and dopaminergic (bupropion) antidepressants on sexual functioning (Abler et al., 2011). Here, we studied the impact of noradrenergic and antidopaminergic medication on neural correlates of visual sexual stimulation in a new sample of subjects. Nineteen healthy heterosexual males (mean age 24 years, SD 3.1) under subchronic intake (7 days) of the noradrenergic agent reboxetine (4 mg/d), the antidopaminergic agent amisulpride (200mg/d), and placebo were included and studied with functional magnetic resonance imaging within a randomized, double-blind, placebo-controlled, within-subjects design during an established erotic video-clip task. Subjective sexual functioning was assessed using the Massachusetts General Hospital-Sexual Functioning Questionnaire. Relative to placebo, subjective sexual functioning was attenuated under reboxetine along with diminished neural activations within the caudate nucleus. Altered neural activations correlated with decreased sexual interest. Under amisulpride, neural activations and subjective sexual functioning remained unchanged. In line with previous interpretations of the role of the caudate nucleus in the context of primary reward processing, attenuated caudate activation may reflect detrimental effects on motivational aspects of erotic stimulus processing under noradrenergic agents. © The Author 2015. Published by Oxford University Press on behalf of CINP.
To obtain reliable transient auditory evoked potentials (AEPs) from EEGs recorded using the high stimulus rate (HSR) paradigm, it is critical to design stimulus sequences with appropriate frequency properties. Traditionally, the individual stimulus events in a stimulus sequence occur only at discrete time points dependent on the sampling frequency of the recording system and the duration of the stimulus sequence. This dependency likely causes the implementation of suboptimal stimulus sequences, sacrificing the reliability of the resulting AEPs. In this paper, we explicate the use of continuous-time stimulus sequences for the HSR paradigm, which are independent of the discrete electroencephalogram (EEG) recording system. We employ simulation studies to examine the applicability of continuous-time stimulus sequences and the impact of sampling frequency on AEPs in traditional studies using discrete-time design. Results from these studies show that continuous-time sequences can offer better frequency properties and improve the reliability of recovered AEPs. Furthermore, we find that the errors in the recovered AEPs depend critically on the sampling frequencies of the experimental systems, and that their relationship can be fitted using a reciprocal function. As such, our study contributes to the literature by demonstrating the applicability and advantages of continuous-time stimulus sequences for the HSR paradigm and by revealing the relationship between the reliability of AEPs and the sampling frequencies of experimental systems when discrete-time stimulus sequences are used in the traditional manner for the HSR paradigm.
Kuppen, Sarah; Huss, Martina; Fosker, Tim; Fegan, Natasha; Goswami, Usha
We explore the relationships between basic auditory processing, phonological awareness, vocabulary, and word reading in a sample of 95 children, 55 typically developing children, and 40 children with low IQ. All children received nonspeech auditory processing tasks, phonological processing and literacy measures, and a receptive vocabulary task.…
Full Text Available Children with a spatial processing disorder (SPD require a more favorable signal-to-noise ratio in the classroom because they have difficulty perceiving sound source location cues. Previous research has shown that a novel training program - LiSN & Learn - employing spatialized sound, overcomes this deficit. Here we investigate whether improvements in spatial processing ability are specific to the LiSN & Learn training program. Participants were ten children (aged between 6;0 [years;months] and 9;9 with normal peripheral hearing who were diagnosed as having SPD using the Listening in Spatialized Noise - Sentences test (LiSN-S. In a blinded controlled study, the participants were randomly allocated to train with either the LiSN & Learn or another auditory training program - Earobics - for approximately 15 min per day for twelve weeks. There was a significant improvement post-training on the conditions of the LiSN-S that evaluate spatial processing ability for the LiSN & Learn group (P=0.03 to 0.0008, η 2=0.75 to 0.95, n=5, but not for the Earobics group (P=0.5 to 0.7, η 2=0.1 to 0.04, n=5. Results from questionnaires completed by the participants and their parents and teachers revealed improvements in real-world listening performance post-training were greater in the LiSN & Learn group than the Earobics group. LiSN & Learn training improved binaural processing ability in children with SPD, enhancing their ability to understand speech in noise. Exposure to non-spatialized auditory training does not produce similar outcomes, emphasizing the importance of deficit-specific remediation.
Full Text Available Background: Children with a spatial processing disorder (SPD require a more favorable signal-to-noise ratio in the classroom because they have difficulty perceiving sound source location cues. Previous research has shown that a novel training program - LiSN & Learn - employing spatialized sound, overcomes this deficit. Here we investigate whether improvements in spatial processing ability are specific to the LiSN & Learn training program. Materials and methods: Participants were ten children (aged between 6;0 [years;months] and 9;9 with normal peripheral hearing who were diagnosed as having SPD using the Listening in Spatialized Noise – Sentences Test (LISN-S. In a blinded controlled study, the participants were randomly allocated to train with either the LiSN & Learn or another auditory training program – Earobics - for approximately 15 minutes per day for twelve weeks. Results: There was a significant improvement post-training on the conditions of the LiSN-S that evaluate spatial processing ability for the LiSN & Learn group (p=0.03 to 0.0008, η2=0.75 to 0.95, n=5, but not for the Earobics group (p=0.5 to 0.7, η2=0.1 to 0.04, n=5. Results from questionnaires completed by the participants and their parents and teachers revealed improvements in real-world listening performance post-training were greater in the LiSN & Learn group than the Earobics group. Conclusions: LiSN & Learn training improved binaural processing ability in children with SPD, enhancing their ability to understand speech in noise. Exposure to non-spatialized auditory training does not produce similar outcomes, emphasizing the importance of deficit-specific remediation.
Burnham, Denis; Dodd, Barbara
The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [(delta)a] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [(delta)a], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [(delta)a] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.
Dougherty, D M; Lewis, P
Using horses, we investigated the control of operant behavior by a tactile stimulus (the training stimulus) and the generalization of behavior to six other similar test stimuli. In a stall, the experimenters mounted a response panel in the doorway. Located on this panel were a response lever and a grain dispenser. The experimenters secured a tactile-stimulus belt to the horse's back. The stimulus belt was constructed by mounting seven solenoids along a piece of burlap in a manner that allowed...
Rominger, Christian; Bleier, Angelika; Fitz, Werner; Marksteiner, Josef; Fink, Andreas; Papousek, Ilona; Weiss, Elisabeth M
Social cognitive impairments may represent a core feature of schizophrenia and above all are a strong predictor of positive psychotic symptoms. Previous studies could show that reduced inhibitory top-down control contributes to deficits in theory of mind abilities and is involved in the genesis of hallucinations. The current study aimed to investigate the relationship between auditory inhibition, affective theory of mind and the experience of hallucinations in patients with schizophrenia. In the present study, 20 in-patients with schizophrenia and 20 healthy controls completed a social cognition task (the Reading the Mind in the Eyes Test) and an inhibitory top-down Dichotic Listening Test. Schizophrenia patients with greater severity of hallucinations showed impaired affective theory of mind as well as impaired inhibitory top-down control. More dysfunctional top-down inhibition was associated with poorer affective theory of mind performance, and seemed to mediate the association between impairment to affective theory of mind and severity of hallucinations. The findings support the idea of impaired theory of mind as a trait marker of schizophrenia. In addition, dysfunctional top-down inhibition may give rise to hallucinations and may further impair affective theory of mind skills in schizophrenia. Copyright © 2016 Elsevier B.V. All rights reserved.
Yoder, Kathleen M; Lu, Kai; Vicario, David S
Estradiol (E2) has recently been shown to modulate sensory processing in an auditory area of the songbird forebrain, the caudomedial nidopallium (NCM). When a bird hears conspecific song, E2 increases locally in NCM, where neurons express both the aromatase enzyme that synthesizes E2 from precursors and estrogen receptors. Auditory responses in NCM show a form of neuronal memory: repeated playback of the unique learned vocalizations of conspecific individuals induces long-lasting stimulus-specific adaptation of neural responses to each vocalization. To test the role of E2 in this auditory memory, we treated adult male zebra finches (n=16) with either the aromatase inhibitor fadrozole (FAD) or saline for 8 days. We then exposed them to 'training' songs and, 6 h later, recorded multiunit auditory responses with an array of 16 microelectrodes in NCM. Adaptation rates (a measure of stimulus-specific adaptation) to playbacks of training and novel songs were computed, using established methods, to provide a measure of neuronal memory. Recordings from the FAD-treated birds showed a significantly reduced memory for the training songs compared with saline-treated controls, whereas auditory processing for novel songs did not differ between treatment groups. In addition, FAD did not change the response bias in favor of conspecific over heterospecific song stimuli. Our results show that E2 depletion affects the neuronal memory for vocalizations in songbird NCM, and suggest that E2 plays a necessary role in auditory processing and memory for communication signals.
Julie M. Bugg
Full Text Available Cognitive control is by now a large umbrella term referring collectively to multiple processes that plan and coordinate actions to meet task goals. A common feature of paradigms that engage cognitive control is the task requirement to select relevant information despite a habitual tendency (or bias to select goal-irrelevant information. At least since the 70s, researchers have employed proportion congruent manipulations to experimentally establish selection biases and evaluate the mechanisms used to control attention. Proportion congruent manipulations vary the frequency with which irrelevant information conflicts (i.e., is incongruent with relevant information. The purpose of this review is to summarize the growing body of literature on proportion congruent effects across selective attention paradigms, beginning first with Stroop, and then describing parallel effects in flanker and task-switching paradigms. The review chronologically tracks the expansion of the proportion congruent manipulation from its initial implementation at the list-wide level, to more recent implementations at the item-specific and context-specific levels. An important theoretical aim is demonstrating that proportion congruent effects at different levels (e.g., list-wide vs. item or context-specific support a distinction between voluntary forms of cognitive control, which operate based on anticipatory information, and relatively automatic or reflexive forms of cognitive control, which are rapidly triggered by the processing of particular stimuli or stimulus features. A further aim is to highlight those proportion congruent manipulations that allow researchers to dissociate stimulus-driven control from other stimulus-driven processes (e.g., S-R responding; episodic retrieval. We conclude by discussing the utility of proportion congruent manipulations for exploring the distinction between voluntary control and stimulus-driven control in other relevant paradigms.
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Gromann, Paula M; Tracy, Derek K; Giampietro, Vincent; Brammer, Michael J; Krabbendam, Lydia; Shergill, Sukhwinder S
Repetitive transcranial magnetic stimulation (rTMS) has been shown to have clinically beneficial effects in altering the perception of auditory hallucinations (AH) in patients with schizophrenia. However, the mode of action is not clear. Recent neuroimaging findings indicate that rTMS has the potential to induce not only local effects but also changes in remote, functionally connected brain regions. Frontotemporal dysconnectivity has been proposed as a mechanism leading to psychotic symptoms in schizophrenia. The current study examines functional connectivity between temporal and frontal brain regions after rTMS and the implications for AH in schizophrenia. A connectivity analysis was conducted on the fMRI data of 11 healthy controls receiving rTMS, compared with 11 matched subjects receiving sham TMS, to the temporoparietal junction, before engaging in a task associated with robust frontotemporal activation. Compared to the control group, the rTMS group showed an altered frontotemporal connectivity with stronger connectivity between the right temporoparietal cortex and the dorsolateral prefrontal cortex and the angular gyrus. This finding provides preliminary evidence for the hypothesis that normalizing the functional connectivity between the temporoparietal and frontal brain regions may underlie the therapeutic effect of rTMS on AH in schizophrenia.
Cantiani, Chiara; Lorusso, Maria Luisa; Valnegri, Camilla; Molteni, Massimo
Auditory temporal processing deficits have been proposed as the underlying cause of phonological difficulties in Developmental Dyslexia. The hypothesis was tested in a sample of 20 Italian dyslexic children aged 8-14, and 20 matched control children. Three tasks of auditory processing of non-verbal stimuli, involving discrimination and reproduction of sequences of rapidly presented short sounds were expressly created. Dyslexic subjects performed more poorly than control children, suggesting the presence of a deficit only partially influenced by the duration of the stimuli and of inter-stimulus intervals (ISIs).
Vera Lawo; Iring Koch
Objectives. Using a novel task-switching variant of dichotic selective listening, we examined age-related differences in the ability to intentionally switch auditory attention between 2 speakers defined by their sex.
Rattat, Anne-Claire; Picard, Delphine
The present study sought to determine the format in which visual, auditory and auditory-visual durations ranging from 400 to 600 ms are encoded and maintained in short-term memory, using suppression conditions. Participants compared two stimulus durations separated by an interval of 8 s. During this time, they performed either an articulatory suppression task, a visuospatial tracking task or no specific task at all (control condition). The results showed that the articulatory suppression task decreased recognition performance for auditory durations but not for visual or bimodal ones, whereas the visuospatial task decreased recognition performance for visual durations but not for auditory or bimodal ones. These findings support the modality-specific account of short-term memory for durations.
Robbins, Lindsey; Margulis, Susan W
Several studies have demonstrated that auditory enrichment can reduce stereotypic behaviors in captive animals. The purpose of this study was to determine the relative effectiveness of three different types of auditory enrichment-naturalistic sounds, classical music, and rock music-in reducing stereotypic behavior displayed by Western lowland gorillas (Gorilla gorilla gorilla). Three gorillas (one adult male, two adult females) were observed at the Buffalo Zoo for a total of 24 hr per music trial. A control observation period, during which no sounds were presented, was also included. Each music trial consisted of a total of three weeks with a 1-week control period in between each music type. The results reveal a decrease in stereotypic behaviors from the control period to naturalistic sounds. The naturalistic sounds also affected patterns of several other behaviors including locomotion. In contrast, stereotypy increased in the presence of classical and rock music. These results suggest that auditory enrichment, which is not commonly used in zoos in a systematic way, can be easily utilized by keepers to help decrease stereotypic behavior, but the nature of the stimulus, as well as the differential responses of individual animals, need to be considered. © 2014 Wiley Periodicals, Inc.
Tjepkema-Cloostermans, Marleen C; Wijers, Elisabeth T; van Putten, Michel J A M
To report on a distinct effect of auditory and sensory stimuli on the EEG in comatose patients with severe postanoxic encephalopathy. In two comatose patients admitted to the Intensive Care Unit (ICU) with severe postanoxic encephalopathy and burst-suppression EEG, we studied the effect of external stimuli (sound and touch) on the occurrence of bursts. In patient A bursts could be induced by either auditory or sensory stimuli. In patient B bursts could only be induced by touching different facial regions (forehead, nose and chin). When stimuli were presented with relatively long intervals, bursts persistently followed the stimuli, while stimuli with short intervals (encephalopathy can be induced by external stimuli, resulting in stimulus-dependent burst-suppression. Stimulus induced bursts should not be interpreted as prognostic favourable EEG reactivity. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Yanaga, Ryuichiro; Kawahara, Hideki
A new parameter extraction procedure based on logarithmic transformation of the temporal axis was applied to investigate auditory effects on voice F0 control to overcome artifacts due to natural fluctuations and nonlinearities in speech production mechanisms. The proposed method may add complementary information to recent findings reported by using frequency shift feedback method [Burnett and Larson, J. Acoust. Soc. Am. 112 (2002)], in terms of dynamic aspects of F0 control. In a series of experiments, dependencies of system parameters in F0 control on subjects, F0 and style (musical expressions and speaking) were tested using six participants. They were three male and three female students specialized in musical education. They were asked to sustain a Japanese vowel /a/ for about 10 s repeatedly up to 2 min in total while hearing F0 modulated feedback speech, that was modulated using an M-sequence. The results replicated qualitatively the previous finding [Kawahara and Williams, Vocal Fold Physiology, (1995)] and provided more accurate estimates. Relations with designing an artificial singer also will be discussed. [Work partly supported by the grant in aids in scientific research (B) 14380165 and Wakayama University.
Menceloglu, Melisa; Grabowecky, Marcia; Suzuki, Satoru
Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory-visual interaction, using an auditory-visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.
Colzato, Lorenza S; Steenbergen, Laura; Hommel, Bernhard
The aim of the study was to throw more light on the relationship between rumination and cognitive-control processes. Seventy-eight adults were assessed with respect to rumination tendencies by means of the LEIDS-r before performing a Stroop task, an event-file task assessing the automatic retrieval of irrelevant information, an attentional set-shifting task, and the Attentional Network Task, which provided scores for alerting, orienting, and executive control functioning. The size of the Stroop effect and irrelevant retrieval in the event-five task were positively correlated with the tendency to ruminate, while all other scores did not correlate with any rumination scale. Controlling for depressive tendencies eliminated the Stroop-related finding (an observation that may account for previous failures to replicate), but not the event-file finding. Taken altogether, our results suggest that rumination does not affect attention, executive control, or response selection in general, but rather selectively impairs the control of stimulus-induced retrieval of irrelevant information.
Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer
Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.
Larry E Roberts
Full Text Available Sensory training therapies for tinnitus are based on the assumption that, notwithstanding neural changes related to tinnitus, auditory training can alter the response properties of neurons in auditory pathways. To address this question, we investigated whether brain changes induced by sensory training in tinnitus sufferers and measured by EEG are similar to those induced in age and hearing loss matched individuals without tinnitus trained on the same auditory task. Auditory training was given using a 5 kHz 40-Hz amplitude-modulated sound that was in the tinnitus frequency region of the tinnitus subjects and enabled extraction of the 40-Hz auditory steady-state response (ASSR and P2 transient response known to localize to primary and nonprimary auditory cortex, respectively. P2 amplitude increased with training equally in participants with tinnitus and in control subjects, suggesting normal remodeling of nonprimary auditory regions in tinnitus. However, training-induced changes in the ASSR differed between the tinnitus and control groups. In controls ASSR phase advanced toward the stimulus waveform by about ten degrees over training, in agreement with previous results obtained in young normal hearing individuals. However, ASSR phase did not change significantly with training in the tinnitus group, although some participants showed phase shifts resembling controls. On the other hand, ASSR amplitude increased with training in the tinnitus group, whereas in controls this response (which is difficult to remodel in young normal hearing subjects did not change with training. These results suggest that neural changes related to tinnitus altered how neural plasticity was expressed in the region of primary but not nonprimary auditory cortex. Auditory training did not reduce tinnitus loudness although a small effect on the tinnitus spectrum was detected.
1. Weakly electric fish generate around their bodies low-amplitude, AC electric fields which are used both for the detection of objects and intraspecific communication. The types of modulation in this signal of which the high-frequency wave-type gymnotiform, Apteronotus, is capable are relatively few and stereotyped. Chief among these is the chirp, a signal used in courtship and agonistic displays. Chirps are brief and rapid accelerations in the normally highly regular electric organ discharge (EOD) frequency. 2. Chirping can be elicited artificially in these animals by the use of a stimulus regime identical to that typically used to elicit another behavior, the jamming avoidance response (JAR). The neuronal basis for the JAR, a much slower and lesser alteration in EOD frequency, is well understood. Examination of the stimulus features which induce chirping show that, like the JAR, there is a region of frequency differences between the fish's EOD and the interfering signal that maximally elicits the response. Moreover, the response is sex-specific with regard to the sign of the frequency difference, with females chirping preferentially on the positive and most males on the negative Df. These features imply that the sensory mechanisms involved in the triggering of these communicatory behaviors are fundamentally similar to those explicated for the JAR. 3. Additionally, two other modulatory behaviors of unknown significance are described. The first is a non-selective rise in EOD frequency associated with a JAR stimulus, occurring regardless of the sign of the Df. This modulation shares many characteristics with the JAR. The second behavior, which we have termed a 'yodel', is distinct from and kinetically intermediate to chirping and the JAR. Moreover, unlike the other studied electromotor behaviors it is generally produced only after the termination of the eliciting stimulus.
Bender, Stephan; Bluschke, Annet; Dippel, Gabriel; Rupp, André; Weisbrod, Matthias; Thomas, Christine
To investigate whether automatic auditory post-processing is deficient in patients with Alzheimer's disease and is related to sensory gating. Event-related potentials were recorded during a passive listening task to examine the automatic transient storage of auditory information (short click pairs). Patients with Alzheimer's disease were compared to a healthy age-matched control group. A young healthy control group was included to assess effects of physiological aging. A bilateral frontal negativity in combination with deep temporal positivity occurring 500 ms after stimulus offset was reduced in patients with Alzheimer's disease, but was unaffected by physiological aging. Its amplitude correlated with short-term memory capacity, but was independent of sensory gating in healthy elderly controls. Source analysis revealed a dipole pair in the anterior temporal lobes. Results suggest that auditory post-processing is deficient in Alzheimer's disease, but is not typically related to sensory gating. The deficit could neither be explained by physiological aging nor by problems in earlier stages of auditory perception. Correlations with short-term memory capacity and executive control tasks suggested an association with memory encoding and/or overall cognitive control deficits. An auditory late negative wave could represent a marker of auditory working memory encoding deficits in Alzheimer's disease. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Scott, Brian H; Mishkin, Mortimer
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active ׳working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a ׳match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.
Slevc, L Robert; Shell, Alison R
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.
Full Text Available Auditory Scene Analysis provides a useful framework for understanding atypical auditory perception in autism. Specifically, a failure to segregate the incoming acoustic energy into distinct auditory objects might explain the aversive reaction autistic individuals have to certain auditory stimuli or environments. Previous research with non-autistic participants has demonstrated the presence of an Object Related Negativity (ORN in the auditory event related potential that indexes pre-attentive processes associated with auditory scene analysis. Also evident is a later P400 component that is attention dependent and thought to be related to decision-making about auditory objects. We sought to determine whether there are differences between individuals with and without autism in the levels of processing indexed by these components. Electroencephalography (EEG was used to measure brain responses from a group of 16 autistic adults, and 16 age- and verbal-IQ-matched typically-developing adults. Auditory responses were elicited using lateralized dichotic pitch stimuli in which inter-aural timing differences create the illusory perception of a pitch that is spatially separated from a carrier noise stimulus. As in previous studies, control participants produced an ORN in response to the pitch stimuli. However, this component was significantly reduced in the participants with autism. In contrast, processing differences were not observed between the groups at the attention-dependent level (P400. These findings suggest that autistic individuals have difficulty segregating auditory stimuli into distinct auditory objects, and that this difficulty arises at an early pre-attentive level of processing.
Savelsbergh, G J; Netelenbos, J B; Whiting, H T
From birth onwards, auditory stimulation directs and intensifies visual orientation behaviour. In deaf children, by definition, auditory perception cannot take place and cannot, therefore, make a contribution to visual orientation to objects approaching from outside the initial field of view. In experiment 1, a difference in catching ability is demonstrated between deaf and hearing children (10-13 years of age) when the ball approached from the periphery or from outside the field of view. No differences in catching ability between the two groups occurred when the ball approached from within the field of view. A second experiment was conducted in order to determine if differences in catching ability between deaf and hearing children could be attributed to execution of slow orientating movements and/or slow reaction time as a result of the auditory loss. The deaf children showed slower reaction times. No differences were found in movement times between deaf and hearing children. Overall, the findings suggest that a lack of auditory stimulation during development can lead to deficiencies in the coordination of actions such as catching which are both spatially and temporally constrained.
Granot, Michal; Weissman-Fogel, Irit; Crispel, Yonathan; Pud, Dorit; Granovsky, Yelena; Sprecher, Elliot; Yarnitsky, David
Descending modulation of pain can be demonstrated psychophysically by dual pain stimulation. This study evaluates, in 31 healthy subjects, the association between parameters of the conditioning stimulus, gender, and personality, and the extent of endogenous analgesia (EA) assessed by the diffuse noxious inhibitory control (DNIC) paradigm. Contact heat pain was applied as the test stimulus to the non-dominant forearm, with the stimulation temperature set to a psychophysical intensity score of 60 on a 0-100 numerical pain scale. The conditioning stimulus was a 60-s immersion of the dominant hand in cold (12, 15, or 18 degrees C), hot (44 or 46.5 degrees C), or skin-temperature (33 degrees C) water. The test stimulus was repeated on the non-dominant hand during the last 30 s of the conditioning immersion. EA extent was calculated as the difference between the pain scores of the two test stimuli. State and trait anxiety and pain catastrophizing scores were assessed prior to stimulation. EA was induced only for the pain-generating conditioning stimuli at 46.5 degrees C (p=0.011) and 12 degrees C (p=0.003). EA was independent of conditioning pain modality and personality, but a significant gender effect was found, with a greater EA response in males. Importantly, pain scores of the conditioning stimuli were not correlated with EA extent. The latter finding is based both on our study population and on an additional 82 patients who participated in another study in which EA was induced by immersion at 46.5 degrees C. DNIC testing thus seems to be relatively independent of the stimulation conditions, making it an easy-to-apply tool suitable for a wide range of applications in pain psychophysics.
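The EA computation described in this abstract is a simple difference score between two ratings. A minimal sketch in Python, using entirely hypothetical 0-100 ratings (the function name, sign convention, and values are illustrative, not taken from the study):

```python
def ea_extent(test_alone: float, test_during_conditioning: float) -> float:
    """Endogenous analgesia (EA) extent: the difference between the pain
    rating of the test stimulus presented alone and the rating of the same
    stimulus repeated during the conditioning immersion. Under this sign
    convention, positive values indicate pain inhibition (analgesia)."""
    return test_alone - test_during_conditioning

# Hypothetical ratings on the 0-100 numerical pain scale:
inhibition = ea_extent(60, 48)   # rating dropped 12 points -> EA induced
no_effect = ea_extent(60, 60)    # no change -> no EA
```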
Seibold, Julia C; Nolden, Sophie; Oberem, Josefa; Fels, Janina; Koch, Iring
In an auditory attention-switching paradigm, participants heard two simultaneously spoken number-words, each presented to one ear, and decided whether the target number was smaller or larger than 5 by pressing a left or right key. An instructional cue in each trial indicated which feature had to be used to identify the target number (e.g., female voice). Auditory attention-switch costs were found when this feature changed compared to when it repeated in two consecutive trials. Earlier studies employing this paradigm showed mixed results when they examined whether such cued auditory attention-switches can be prepared actively during the cue-stimulus interval. This study systematically assessed which preconditions are necessary for the advance preparation of auditory attention-switches. Three experiments were conducted that controlled for cue-repetition benefits, modality switches between cue and stimuli, and predictability of the switch-sequence. Only in the third experiment, in which predictability of an attention-switch was maximal due to a pre-instructed switch-sequence and predictable stimulus onsets, was active switch-specific preparation found. These results suggest that the cognitive system can prepare auditory attention-switches, and that this preparation is triggered primarily by the memorised switching-sequence and valid expectations about the time of target onset.
Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania
A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children and adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level than the control group with regard to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re
Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory J
Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.
Menning, Hans; Ackermann, Hermann; Hertrich, Ingo; Mathiak, Klaus
Previous studies have shown that cross-modal processing affects perception at a variety of neuronal levels. In this study, event-related brain responses were recorded via whole-head magnetoencephalography (MEG). Spatial auditory attention was directed via tactile pre-cues (primes) to one of four locations in the peripersonal space (left and right hand versus face). Auditory stimuli were white noise bursts, convoluted with head-related transfer functions, which ensured spatial perception of the four locations. Tactile primes (200-300 ms prior to acoustic onset) were applied randomly to one of these locations. Attentional load was controlled by three different visual distraction tasks. The auditory P50m (about 50 ms after stimulus onset) showed a significant "proximity" effect (larger responses to face stimulation), as well as a "contralaterality" effect between side of stimulation and hemisphere. The tactile primes essentially reduced both the P50m and N100m components. However, facial tactile pre-stimulation yielded an enhanced ipsilateral N100m. These results show that earlier responses are mainly governed by exogenous stimulus properties, whereas cross-sensory interaction is spatially selective at a later (endogenous) processing stage.
Hazell, J W; Jastreboff, P J
A model is proposed for tinnitus and sensorineural hearing loss involving cochlear pathology. As tinnitus is defined as a cortical perception of sound in the absence of an appropriate external stimulus, it must result from a generator in the auditory system whose signal undergoes extensive auditory processing before it is perceived. The concept of spatial nonlinearity in the cochlea is presented as a cause of tinnitus generation controlled by the efferents. Various clinical presentations of tinnitus, and the way in which they respond to changes in the environment, are discussed with respect to this control mechanism. The concept of auditory retraining as part of the habituation process, and its interaction with the prefrontal cortex and limbic system, is presented as a central model which emphasizes the importance of the emotional significance and meaning of tinnitus.
Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C
Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition.
Vanhanen, M; Karhu, J; Koivisto, K; Pääkkönen, A; Partanen, J; Laakso, M; Riekkinen, P
We compared auditory event-related potentials (ERPs) and neuropsychological test scores in nine patients with non-insulin-dependent diabetes mellitus (NIDDM) and in nine control subjects. The measures of automatic stimulus processing, habituation of auditory N100 and mismatch negativity (MMN) were impaired in patients. No differences were observed in the N2b and P3 components, which presumably reflect conscious cognitive analysis of the stimuli. A trend towards impaired performance in the Digit Span backward was found in diabetic subjects, but in the tests of secondary or long-term memory the groups were comparable. Patients with NIDDM may have defects in arousal and in the automatic ability to redirect attention, which can affect their cognitive performance.
Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already show adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of stimuli. However, it is at higher levels in the auditory hierarchy that more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis, and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
Dougherty, D M; Lewis, P
Using horses, we investigated the control of operant behavior by a tactile stimulus (the training stimulus) and the generalization of behavior to six other similar test stimuli. In a stall, the experimenters mounted a response panel in the doorway. Located on this panel were a response lever and a grain dispenser. The experimenters secured a tactile-stimulus belt to the horse's back. The stimulus belt was constructed by mounting seven solenoids along a piece of burlap in a manner that allowed each to provide the delivery of a tactile stimulus, a repetitive light tapping, at different locations (spaced 10.0 cm apart) along the horse's back. Two preliminary steps were necessary before generalization testing: training a measurable response (lip pressing) and training on several reinforcement schedules in the presence of a training stimulus (tapping by one of the solenoids). We then gave each horse two generalization test sessions. Results indicated that the horses' behavior was effectively controlled by the training stimulus. Horses made the greatest number of responses to the training stimulus, and the tendency to respond to the other test stimuli diminished as the stimuli became farther away from the training stimulus. These findings are discussed in the context of behavioral principles and their relevance to the training of horses.
Coelho, Cesar A. O.; Dunsmoor, Joseph E.; Phelps, Elizabeth A.
Fear-related behaviors are prone to relapse following extinction. We tested in humans a compound extinction design ("deepened extinction") shown in animal studies to reduce post-extinction fear recovery. Adult subjects underwent fear conditioning to a visual and an auditory conditioned stimulus (CSA and CSB, respectively) separately…
Full Text Available Abstract Background: Polychlorinated biphenyls (PCBs) are a class of organic compounds that bioaccumulate due to their chemical stability and lipophilic properties. Humans are prenatally exposed via trans-placental transfer, through breast milk as infants, and through fish, seafood and fatty foods as adolescents and adults. Exposure has several reported effects ranging from developmental abnormalities to cognitive and motor deficiencies. In the present study, three experimental groups of rats were orally exposed to PCBs typically found in human breast milk and then behaviorally tested for changes in measures of stimulus control (percentage of lever-presses on the reinforcer-producing lever), activity level (responses with IRTs > 0.67 s), and responses with short IRTs. Methods: Male offspring from Wistar Kyoto (WKY/NTac) dams purchased pregnant from Taconic Farms (Germantown, NY) were orally given PCB at around postnatal day 8, 14, and 20 at a dose of 10 mg/kg body weight at each exposure. Three experimental groups were exposed either to PCB 52, PCB 153, or PCB 180. A fourth group fed corn oil only served as controls. From postnatal day 25, for 33 days, the animals were tested for behavioral changes using an operant procedure. Results: PCB exposure did not produce behavioral changes during training when responding was frequently reinforced using a variable interval 3 s schedule. When correct responses were reinforced on a variable interval 180 s schedule, animals exposed to PCB 153 or PCB 180 were less active than controls and animals exposed to PCB 52. Stimulus control was better in animals exposed to PCB 180 than in controls and in the PCB 52 group. Also, the PCB 153 and PCB 180 groups had fewer responses with short IRTs than the PCB 52 group. No effects of exposure to PCB 52 were found when compared to controls. Conclusions: Exposure to PCBs 153 and 180 produced hypoactivity that continued at least five weeks after the last exposure. No effects of
Hui, Isabel R; Hui, Gabriel K; Roozendaal, Benno; McGaugh, James L; Weinberger, Norman M
A large number of studies have indicated that stress exposure or the administration of stress hormones and other neuroactive drugs immediately after a learning experience modulates the consolidation of long-term memory. However, there has been little investigation into how arousal induced by handling of the animals in order to administer these drugs affects memory. Therefore, the present study examined whether the posttraining injection or handling procedure per se affects memory of auditory-cue classical fear conditioning. Male Sprague-Dawley rats, which had been pre-handled on three days for 1 min each prior to conditioning, received three pairings of a single-frequency auditory stimulus and footshock, followed immediately by either a subcutaneous injection of a vehicle solution or brief handling without injection. A control group was placed back into their home cages without receiving any posttraining treatment. Retention was tested 24 h later in a novel chamber and suppression of ongoing motor behavior during a 10-s presentation of the auditory-cue served as the measure of conditioned fear. Animals that received posttraining injection or handling did not differ from each other but showed significantly less stimulus-induced movement compared to the non-handled control group. These findings thus indicate that the posttraining injection or handling procedure is sufficiently arousing or stressful to facilitate memory consolidation of auditory-cue classical fear conditioning.
Sininger, Yvonne S; Bhatara, Anjali
Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000, and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was that processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow-band noise (NBN) of 450-Hz bandwidth was evaluated using intensity discrimination, and (2) stimulus duration, 200, 500, and 1000 ms duration tones were evaluated using frequency discrimination. A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments, but an expected REA for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterised as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex, which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli.
Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander
Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.
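The core manipulation in this study, two targets embedded in one ongoing stimulus and differing only in their AM frequency, can be sketched with sinusoidal amplitude modulation. All numbers below (carrier frequency, AM rates, sample rate) are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def am_stream(carrier_hz, am_hz, duration=2.0, fs=16000, depth=1.0):
    """A tone whose amplitude is sinusoidally modulated at am_hz."""
    t = np.arange(int(duration * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * am_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# Two competing targets sharing the same carrier and differing only in
# AM frequency, summed into a single ongoing stimulus:
fs = 16000
mix = am_stream(500, 4, fs=fs) + am_stream(500, 7, fs=fs)
```

In the spectrum of `mix`, each AM rate appears only as sidebands at carrier ± AM frequency; because carrier and location are shared, the AM frequency is the sole feature distinguishing the two targets.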
Christopher J Plack
Full Text Available Many natural sounds fluctuate over time. The detectability of sounds in a sequence can be reduced by prior stimulation in a process known as forward masking. Forward masking is thought to reflect neural adaptation or neural persistence in the auditory nervous system, but it has been unclear where in the auditory pathway this processing occurs. To address this issue, the present study used a "Huggins pitch" stimulus, the perceptual effects of which depend on central auditory processing. Huggins pitch is an illusory tonal sensation produced when the same noise is presented to the two ears except for a narrow frequency band that is different (decorrelated) between the ears. The pitch sensation depends on the combination of the inputs to the two ears, a process that first occurs at the level of the superior olivary complex in the brainstem. Here it is shown that a Huggins pitch stimulus produces more forward masking in the frequency region of the decorrelation than a noise stimulus identical to the Huggins-pitch stimulus except with perfect correlation between the ears. This control stimulus has a peripheral neural representation that is identical to that of the Huggins-pitch stimulus. The results show that processing in, or central to, the superior olivary complex can contribute to forward masking in human listeners.
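The Huggins-pitch construction described above is easy to state in signal-processing terms: present the same noise to both ears, but invert the phase of a narrow frequency band in one ear. A minimal sketch, with illustrative parameter values (the band center and width are assumptions, not taken from the study):

```python
import numpy as np

def huggins_pitch(duration=1.0, fs=44100, f0=600.0, bw_frac=0.06, seed=0):
    """Dichotic Huggins-pitch stimulus: identical noise in both ears except
    for a narrow band around f0 that is phase-inverted (decorrelated) in
    one ear, which evokes an illusory tone near f0."""
    rng = np.random.default_rng(seed)
    n = int(duration * fs)
    noise = rng.standard_normal(n)
    spectrum = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = (freqs > f0 * (1 - bw_frac)) & (freqs < f0 * (1 + bw_frac))
    spectrum_right = spectrum.copy()
    spectrum_right[band] *= -1  # 180-degree phase shift inside the band only
    left = np.fft.irfft(spectrum, n)
    right = np.fft.irfft(spectrum_right, n)
    return left, right

# The correlated control is simply the same noise in both ears, i.e. the
# left channel duplicated; monaurally it is indistinguishable from the
# Huggins stimulus, since only interaural phase differs.
```

Because only a tiny fraction of the spectrum is inverted, each ear signal on its own is statistically identical to the control noise; the decorrelated band is revealed only when the two ears are combined binaurally, which is what localizes the effect centrally.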
Renata Aparecida Leite
Full Text Available Abstract Introduction: The electrophysiological responses obtained with the complex auditory brainstem response (cABR) provide objective measures of subcortical processing of speech and other complex stimuli. The cABR has also been used to verify plasticity in the subcortical regions of the auditory pathway. Objective: To compare the results of cABR obtained in children using hearing aids before and after 9 months of adaptation, as well as to compare the results of these children with those obtained in children with normal hearing. Methods: Fourteen children with normal hearing (Control Group, CG) and 18 children with mild to moderate bilateral sensorineural hearing loss (Study Group, SG), aged 7-12 years, were evaluated. The children were submitted to pure tone and vocal audiometry, acoustic immittance measurements, and ABR with speech stimulus, being submitted to the evaluations at three different moments: initial evaluation (M0), 3 months after the initial evaluation (M3), and 9 months after the evaluation (M9); at M0, the children assessed in the study group did not yet use hearing aids. Results: When comparing the CG and the SG, it was observed that the SG had a lower median for the V-A amplitude at M0 and M3, a lower median for the latency of component V at M9, and a higher median for the latency of component O at M3 and M9. A reduction in the latency of component A at M9 was observed in the SG. Conclusion: Children with mild to moderate hearing loss showed speech stimulus processing deficits, and the main impairment is related to the decoding of the transient portion of the stimulus spectrum. It was demonstrated that the use of hearing aids promoted neuronal plasticity of the Central Auditory Nervous System after an extended time of sensory stimulation.
Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.
The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, an...
Parving, A; Salomon, G; Elberling, Claus
An investigation of the middle components of the auditory evoked response (10-50 ms post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements...
Easton, R. D.; Greene, A. J.; DiZio, P.; Lackner, J. R.
This study assessed whether stationary auditory information could affect body and head sway (as does visual and haptic information) in sighted and congenitally blind people. Two speakers, one placed adjacent to each ear, significantly stabilized center-of-foot-pressure sway in a tandem Romberg stance, while neither a single speaker in front of subjects nor a head-mounted sonar device reduced center-of-pressure sway. Center-of-pressure sway was reduced to the same level in the two-speaker condition for sighted and blind subjects. Both groups also evidenced reduced head sway in the two-speaker condition, although blind subjects' head sway was significantly larger than that of sighted subjects. The advantage of the two-speaker condition was probably attributable to the nature of distance compared with directional auditory information. The results rule out a deficit model of spatial hearing in blind people and are consistent with one version of a compensation model. Analysis of maximum cross-correlations between center-of-pressure and head sway, and associated time lags suggest that blind and sighted people may use different sensorimotor strategies to achieve stability.
Halverson, Hunter E.; Poremba, Amy; Freeman, John H.
Associative learning tasks commonly involve an auditory stimulus, which must be projected through the auditory system to the sites of memory induction for learning to occur. The cochlear nucleus (CN) projection to the pontine nuclei has been posited as the necessary auditory pathway for cerebellar learning, including eyeblink conditioning.…
van der Aa, J.; Honing, H.; ten Cate, C.
Perceiving temporal regularity in an auditory stimulus is considered one of the basic features of musicality. Here we examine whether zebra finches can detect regularity in an isochronous stimulus. Using a go/no go paradigm we show that zebra finches are able to distinguish between an isochronous
Blom, Jan Dirk
Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.
Gulberti, A; Hamel, W; Buhmann, C; Boelmans, K; Zittel, S; Gerloff, C; Westphal, M; Engel, A K; Schneider, T R; Moll, C K E
While motor effects of dopaminergic medication and subthalamic nucleus deep brain stimulation (STN-DBS) in Parkinson's disease (PD) patients are well explored, their effects on sensory processing are less well understood. Here, we studied the impact of levodopa and STN-DBS on auditory processing. Rhythmic auditory stimulation (RAS) was presented at frequencies between 1 and 6 Hz in a passive listening paradigm. High-density EEG recordings were obtained before (levodopa ON/OFF) and 5 months following STN surgery (ON/OFF STN-DBS). We compared auditory evoked potentials (AEPs) elicited by RAS in 12 PD patients to those in age-matched controls. Tempo-dependent amplitude suppression of the auditory P1/N1-complex was used as an indicator of auditory gating. Parkinsonian patients showed significantly larger AEP-amplitudes (P1, N1) and longer AEP-latencies (N1) compared to controls. Neither interruption of dopaminergic medication nor of STN-DBS had an immediate effect on these AEPs. However, chronic STN-DBS had a significant effect on abnormal auditory gating characteristics of parkinsonian patients and restored a physiological P1/N1-amplitude attenuation profile in response to RAS with increasing stimulus rates. This differential treatment effect suggests a divergent mode of action of levodopa and STN-DBS on auditory processing. STN-DBS may improve early attentive filtering processes of redundant auditory stimuli, possibly at the level of the frontal cortex. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Tonelli, Alessia; Cuturi, Luigi F; Gori, Monica
Size perception can be influenced by several visual cues, such as spatial (e.g., depth or vergence) and temporal contextual cues (e.g., adaptation to steady visual stimulation). Nevertheless, perception is generally multisensory, and other sensory modalities, such as audition, can contribute to the functional estimation of the size of objects. In this study, we investigate whether auditory stimuli at different sound pitches can influence visual size perception after visual adaptation. To this aim, we used an adaptation paradigm (Pooresmaeili et al., 2013) in three experimental conditions: visual-only, visual-sound at 100 Hz, and visual-sound at 9,000 Hz. We asked participants to judge the size of a test stimulus in a size discrimination task. First, we obtained a baseline for all conditions. In the visual-sound conditions, the auditory stimulus was concurrent to the test stimulus. Secondly, we repeated the task by presenting an adapter (twice as big as the reference stimulus) before the test stimulus. We replicated the size aftereffect in the visual-only condition: the test stimulus was perceived as smaller than its physical size. The new finding is that the auditory stimuli had an effect on the perceived size of the test stimulus after visual adaptation: the low-frequency sound decreased the effect of visual adaptation, making the stimulus appear bigger than in the visual-only condition, whereas the high-frequency sound had the opposite effect, making the test stimulus appear even smaller.
Hames, Elizabeth C.; Murphy, Brandi; Rajmohan, Ravi; Anderson, Ronald C.; Baker, Mary; Zupancic, Stephen; O’Boyle, Michael; Richman, David
Electroencephalography (EEG) and blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) assessed the neurocorrelates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20 and 28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate, with observable decreased EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions, reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT when they were presented with auditory stimuli, as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all-auditory block vs. the second presentation of a visual stimulus in an all-visual block (AA2-VV2). We conclude that in ASD, combined audiovisual processing is more similar than unimodal processing to NTs. PMID:27148020
Stekelenburg, Jeroen J; Vroomen, Jean
The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials are also attenuated when a sound is accompanied by a video of anticipatory visual motion that reliably predicts the sound. Here, we investigated whether the neural underpinnings of prediction of upcoming auditory stimuli are similar for motor-auditory (MA) and visual-auditory (VA) events using a stimulus omission paradigm. In the MA condition, a finger tap triggered the sound of a handclap, whereas in the VA condition the same sound was accompanied by a video showing the handclap. In both conditions, the auditory stimulus was omitted in either 50% or 12% of the trials. These auditory omissions induced early and mid-latency ERP components (oN1 and oN2, presumably reflecting prediction and prediction error), and subsequent higher-order error evaluation processes. The oN1 and oN2 of MA and VA were alike in amplitude, topography, and neural sources, despite the origin of the prediction stemming from different brain areas (motor versus visual cortex). This suggests that MA and VA predictions activate a sensory template of the sound in auditory cortex. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015 Elsevier B.V. All rights reserved.
Christofek, L.; Rapidis, P.; Reinhard, A.; Fermilab
The Stimulus Test Stand was originally constructed and assembled for testing the SVX2 ASIC readout and then upgraded for SVX3 ASIC prototyping and testing. We have modified this system for SVX4 ASIC prototype testing. We describe the individual components below. Additional details on other hardware for SVX4 testing can be found in reference . We provide a description of the Stimulus Test Stand used for prototype testing of the SVX4 chip.
van Wouwe, N.C.; van den Wildenberg, W.P.M.; Ridderinkhof, K.R.; Claassen, D.O.; Neimat, J.S.; Wylie, S.A.
The inhibition of impulsive response tendencies that conflict with goal-directed action is a key component of executive control. An emerging literature reveals that the proficiency of inhibitory control is modulated by expected or unexpected opportunities to earn reward or avoid punishment. However,
Ron-Angevin, Ricardo; Velasco-Álvarez, Francisco; Fernández-Rodríguez, Álvaro; Díaz-Estrella, Antonio; Blanca-Mena, María José; Vizcaíno-Martín, Francisco Javier
Certain diseases affect brain areas that control the movements of the patients' body, thereby limiting their autonomy and communication capacity. Research in the field of Brain-Computer Interfaces aims to provide patients with an alternative communication channel not based on muscular activity, but on the processing of brain signals. Through these systems, subjects can control external devices such as spellers to communicate, robotic prostheses to restore limb movements, or domotic systems. The present work focuses on the non-muscular control of a robotic wheelchair. A proposal to control a wheelchair through a Brain-Computer Interface based on the discrimination of only two mental tasks is presented in this study. The wheelchair displacement is performed with discrete movements. The control signals used are sensorimotor rhythms modulated through a right-hand motor imagery task or mental idle state. The peculiarity of the control system is that it is based on a serial auditory interface that provides the user with four navigation commands. The use of two mental tasks to select commands may facilitate control and reduce error rates compared to other endogenous control systems for wheelchairs. Seventeen subjects initially participated in the study; nine of them completed the three sessions of the proposed protocol. After the first calibration session, seven subjects were discarded due to poor control of their electroencephalographic signals; nine out of ten subjects controlled a virtual wheelchair during the second session; these same nine subjects achieved a mean accuracy level above 0.83 in the real wheelchair control session. The results suggest that more extensive training with the proposed control system can be an effective and safe option that will allow the displacement of a wheelchair in a controlled environment for potential users suffering from some types of motor neuron diseases.
Ramirez, Luz Angela; Arenas, Angela Maria; Henao, Gloria Cecilia
Introduction: This investigation describes and compares characteristics of visual, semantic and auditory memory in a group of children diagnosed with combined-type attention deficit with hyperactivity, attention deficit predominating, and a control group. Method: 107 boys and girls were selected, from 7 to 11 years of age, all residents in the…
It has been previously demonstrated by our group that a visual stimulus made of dynamically changing luminance evokes an echo or reverberation at ~10 Hz, lasting up to a second. In this study we aimed to reveal whether similar echoes also exist in the auditory modality. A dynamically changing auditory stimulus equivalent to the visual stimulus was designed and employed in two separate series of experiments, and the presence of reverberations was analyzed based on reverse correlations between stimulus sequences and EEG epochs. The first experiment directly compared visual and auditory stimuli: while previous findings of ~10 Hz visual echoes were verified, no similar echo was found in the auditory modality regardless of frequency. In the second experiment, we tested whether auditory sequences would influence the visual echoes when they were congruent or incongruent with the visual sequences. However, the results in that case similarly did not reveal any auditory echoes, nor any change in the characteristics of visual echoes as a function of audio-visual congruence. The negative findings from these experiments suggest that brain oscillations do not equivalently affect early sensory processes in the visual and auditory modalities, and that alpha (8-13 Hz) oscillations play a special role in vision.
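The reverse-correlation analysis described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' pipeline: the synthetic "EEG" here simply embeds a damped ~10 Hz echo of a white-noise stimulus plus noise, and every name, sampling rate, and parameter value is an assumption.

```python
import numpy as np

def echo_function(stim, eeg, max_lag):
    """Estimate the stimulus-to-EEG impulse response by reverse correlation.

    stim : (n_trials, n_samples) zero-mean random stimulus sequences
    eeg  : (n_trials, n_samples) simultaneously recorded EEG epochs
    max_lag : number of EEG lags (in samples) to estimate
    """
    n_trials, n_samples = stim.shape
    irf = np.zeros(max_lag)
    for lag in range(max_lag):
        # average product of the stimulus at time t and the EEG at
        # time t + lag, over all time points and trials
        irf[lag] = np.mean(stim[:, :n_samples - lag] * eeg[:, lag:])
    return irf

# Toy data: the "EEG" is the stimulus convolved with a damped ~10 Hz
# oscillation (the echo to be recovered) plus additive noise.
rng = np.random.default_rng(0)
fs = 250                                    # sampling rate in Hz (assumed)
stim = rng.standard_normal((100, 2 * fs))   # 100 trials, 2 s each
t = np.arange(fs) / fs
kernel = np.exp(-t / 0.3) * np.cos(2 * np.pi * 10 * t)
eeg = np.apply_along_axis(lambda x: np.convolve(x, kernel)[:x.size], 1, stim)
eeg += 0.5 * rng.standard_normal(eeg.shape)

irf = echo_function(stim, eeg, max_lag=fs)
peak_bin = np.argmax(np.abs(np.fft.rfft(irf)))  # dominant frequency of the echo
```

On synthetic data like this, the spectrum of the recovered impulse response peaks near the 10 Hz echo that was built in; the study's negative result corresponds to finding no such peak when the same logic is applied to real auditory sequences.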
Ulanovsky, Nachum; Las, Liora; Farkas, Dina; Nelken, Israel
Neurons in primary auditory cortex (A1) of cats show strong stimulus-specific adaptation (SSA). In probabilistic settings, in which one stimulus is common and another is rare, responses to common sounds adapt more strongly than responses to rare sounds. This SSA could be a correlate of auditory sensory memory at the level of single A1 neurons. Here we studied adaptation in A1 neurons, using three different probabilistic designs. We showed that SSA has several time scales concurrently, spanning many orders of magnitude, from hundreds of milliseconds to tens of seconds. Similar time scales are known for the auditory memory span of humans, as measured both psychophysically and using evoked potentials. A simple model, with linear dependence on both short-term and long-term stimulus history, provided a good fit to A1 responses. Auditory thalamus neurons did not show SSA, and their responses were poorly fitted by the same model. In addition, SSA increased the proportion of failures in the responses of A1 neurons to the adapting stimulus. Finally, SSA caused a bias in the neuronal responses to unbiased stimuli, enhancing the responses to eccentric stimuli. Therefore, we propose that a major function of SSA in A1 neurons is to encode auditory sensory memory on multiple time scales. This SSA might play a role in stream segregation and in binding of auditory objects over many time scales, a property that is crucial for processing of natural auditory scenes in cats and of speech and music in humans.
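The "simple model, with linear dependence on both short-term and long-term stimulus history" could be sketched along the following lines. The time constants, weights, and inter-stimulus interval below are illustrative assumptions, not the fitted values from the study, and all names are hypothetical.

```python
import numpy as np

def adapted_response(stim_seq, r0=1.0, w_short=0.2, w_long=0.005,
                     tau_short=1.0, tau_long=30.0, dt=0.375):
    """Predict trial-by-trial responses to one stimulus in a sequence.

    The response to each presentation is the unadapted response r0 minus
    terms proportional to leaky integrals of that stimulus' presentation
    history at a short (~1 s) and a long (~30 s) time scale.

    stim_seq : boolean array, True where this stimulus was presented
    dt       : inter-stimulus interval in seconds (assumed)
    """
    decay_short = np.exp(-dt / tau_short)
    decay_long = np.exp(-dt / tau_long)
    h_short = h_long = 0.0
    responses = []
    for presented in stim_seq:
        if presented:
            responses.append(r0 - w_short * h_short - w_long * h_long)
        # leaky integration of the presentation history at two time scales
        h_short = h_short * decay_short + float(presented)
        h_long = h_long * decay_long + float(presented)
    return np.array(responses)

# A common stimulus (90% of trials) adapts more strongly than a rare one
# (10%), reproducing the qualitative SSA effect described above.
rng = np.random.default_rng(1)
common = rng.random(400) < 0.9
r_common = adapted_response(common).mean()
r_rare = adapted_response(~common).mean()
```

Because the adaptation terms are linear in the stimulus history, a model of this form can be fitted to recorded firing rates by ordinary least squares once the two leaky-integrated history regressors are computed.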
Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T
Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization source of these signals as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention, remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlations were localized to cortical regions associated with the default mode network, and positive correlations to ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation of theta and BOLD activity was observed in visual cortex during auditory attention. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down theta-related cognitive control in cross-modal visual attention. On the other hand, in sensory cortices there are opposing effects of theta activity during cross-modal auditory attention.
Pacheco-Unguetti, Antonia Pilar; Parmentier, Fabrice B R
Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such an effect may not be limited to negative emotions but may reflect a general depletion of attentional resources, by examining whether a positive emotion (happiness) would increase deviance distraction too. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction. © 2015 The British Psychological Society.
Nees, Michael A.
Researchers have shown increased interest in mechanisms of working memory for nonverbal sounds such as music and environmental sounds. These studies often have used two-stimulus comparison tasks: two sounds separated by a brief retention interval (often 3 to 5 s) are compared, and a same or different judgment is recorded. Researchers seem to have assumed that sensory memory has a negligible impact on performance in auditory two-stimulus comparison tasks. This assumption is examined in detai...
Wang, Kai; Li, Qi; Zheng, Ya; Wang, Hongbin; Liu, Xun
The ability to detect and resolve conflict is an essential function of cognitive control. Laboratory studies often use stimulus-response-compatibility (SRC) tasks to examine conflict processing in order to elucidate the mechanism and modular organization of cognitive control. Inspired by two influential theories regarding cognitive control, the conflict monitoring theory (Botvinick, Braver, Barch, Carter, & Cohen, 2001) and dimensional overlap taxonomy (Kornblum, Hasbroucq, & Osman, 1990), we explored the temporal and spectral similarities and differences between processing of stimulus-stimulus (S-S) and stimulus-response (S-R) conflicts with event related potential (ERP) and time-frequency measures. We predicted that processing of S-S conflict starts earlier than that of S-R conflict and that the two types of conflict may involve different frequency bands. Participants were asked to perform two parallel SRC tasks, both combining the Stroop task (involving S-S conflict) and Simon task (involving S-R conflict). ERP results showed pronounced SRC effects (incongruent vs. congruent) on N2 and P3 components for both S-S and S-R conflicts. In both tasks, SRC effects of S-S conflict took place earlier than those of S-R conflict. Time-frequency analysis revealed that both types of SRC effects modulated theta and alpha bands, while S-R conflict effects additionally modulated power in the beta band. These results indicated that although S-S and S-R conflict processing shared considerable ERP and time-frequency properties, they differed in temporal and spectral dynamics. We suggest that the modular organization of cognitive control should take both commonality and distinction of S-S and S-R conflict processing into consideration. Copyright © 2013 Elsevier Inc. All rights reserved.
In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in the human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect being located in the vicinity of the left auditory cortex. For the right auditory cortex, a task-invariant suppression effect was observed in both DMS and control tasks. Furthermore, analysis of coherence revealed a beta-band (12-20 Hz) DMS-specific enhanced functional interaction between the sources in left auditory cortex and those in left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.
Seyed Kazem Mousavi-Sadati
Objective: This research investigated the multiple-resource and central-resource theories of attention via the effect of a secondary task, talking on two types of cell phone, on driving performance. Materials & Methods: Using convenience sampling, 25 male participants were selected, and their reactions to an auditory stimulus in three different driving conditions (no phone conversation, conversation with a handheld phone, and conversation with a hands-free phone) were recorded. The order of driving conditions was varied from one participant to another in order to control for test sequence and participants' familiarity with the test conditions. Results: Analysis of the data with descriptive statistics, Mauchly's test of sphericity, one-factor repeated-measures ANOVA, and paired-samples t-tests showed that different driving conditions can affect reaction time (P < 0.001). Conversation with a hands-free phone increases drivers' simple reaction time to an auditory stimulus (P < 0.001). Using a handheld phone does not increase drivers' reaction time to an auditory stimulus over a hands-free phone (P < 0.001). Conclusion: The results confirmed that the performance quality of dual and multiple tasks can be predicted by the four-dimensional multiple-resource model of attention, and that traffic laws concerning handheld phones should be extended to the use of hands-free phones.
Bluell, Alexandra M.; Montgomery, Derek E.
The day-night paradigm, where children respond to a pair of pictures with opposite labels for a series of trials, is a widely used measure of interference control. Recent research has shown that a happy-sad variant of the day-night task was significantly more difficult than the standard day-night task. The present research examined whether the…
Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.
Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new
Heimler, Benedetta; Pavani, Francesco; Donk, Mieke; van Zoest, Wieske
Action videogame players (AVGPs) have been shown to outperform nongamers (NVGPs) in covert visual attention tasks. These advantages have been attributed to improved top-down control in this population. The time course of visual selection, which permits researchers to highlight when top-down strategies start to control performance, has rarely been investigated in AVGPs. Here, we addressed specifically this issue through an oculomotor additional-singleton paradigm. Participants were instructed to make a saccadic eye movement to a unique orientation singleton. The target was presented among homogeneous nontargets and one additional orientation singleton that was more, equally, or less salient than the target. Saliency was manipulated in the color dimension. Our results showed similar patterns of performance for both AVGPs and NVGPs: Fast-initiated saccades were saliency-driven, whereas later-initiated saccades were more goal-driven. However, although AVGPs were faster than NVGPs, they were also less accurate. Importantly, a multinomial model applied to the data revealed comparable underlying saliency-driven and goal-driven functions for the two groups. Taken together, the observed differences in performance are compatible with the presence of a lower decision bound for releasing saccades in AVGPs than in NVGPs, in the context of comparable temporal interplay between the underlying attentional mechanisms. In sum, the present findings show that in both AVGPs and NVGPs, the implementation of top-down control in visual selection takes time to come about, and they argue against the idea of a general enhancement of top-down control in AVGPs.
Jennifer L. O’Brien
Auditory cognitive training (ACT) improves attention in older adults; however, the underlying neurophysiological mechanisms are still unknown. The present study examined the effects of ACT on the P3b event-related potential, reflecting attention allocation (amplitude) and speed of processing (latency) during stimulus categorization, and on the P1-N1-P2 complex, reflecting perceptual processing (amplitude and latency). Participants completed an auditory oddball task before and after 10 weeks of ACT (n = 9) or a no-contact control period (n = 15). Parietal P3b amplitudes to oddball stimuli decreased at post-test in the trained group as compared to those in the control group, and frontal P3b amplitudes showed a similar trend, potentially reflecting more efficient attentional allocation after ACT. No advantages for the ACT group were evident for auditory perceptual processing or speed of processing in this small sample. Our results provide preliminary evidence that ACT may enhance the efficiency of attention allocation, which may account for the positive impact of ACT on the everyday functioning of older adults.
Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar
We report the results of oddball experiments in which an irrelevant stimulus (standard, deviant) was presented before a target stimulus and the modality of these stimuli was manipulated orthogonally (visual/auditory). Experiment 1 showed that auditory deviants yielded distraction irrespective of the target's modality while visual deviants did not impact on performance. When participants were forced to attend the distractors in order to detect a rare target ("target-distractor"), auditory deviants yielded distraction irrespective of the target's modality and visual deviants yielded a small distraction effect when targets were auditory (Experiments 2 & 3). Visual deviants only produced distraction for visual targets when deviant stimuli were not visually distinct from the other distractors (Experiment 4). Our results indicate that while auditory deviants yield distraction irrespective of the targets' modality, visual deviants only do so when attended and under selective conditions, at least when irrelevant and target stimuli are temporally and perceptually decoupled.
Rokem, Ariel; Ahissar, Merav
Congenitally blind individuals have been found to show superior performance in perceptual and memory tasks. In the present study, we asked whether superior stimulus encoding could account for performance in memory tasks. We characterized the performance of a group of congenitally blind individuals on a series of auditory, memory and executive cognitive tasks and compared their performance to that of sighted controls matched for age, education and musical training. As expected, we found superior verbal spans among congenitally blind individuals. Moreover, we found superior speech perception, measured by resilience to noise, and superior auditory frequency discrimination. However, when memory span was measured under conditions of equivalent speech perception, by adjusting the signal to noise ratio for each individual to the same level of perceptual difficulty (80% correct), the advantage in memory span was completely eliminated. Moreover, blind individuals did not possess any advantage in cognitive executive functions, such as manipulation of items in memory and math abilities. We propose that the short-term memory advantage of blind individuals results from better stimulus encoding, rather than from superiority at subsequent processing stages.
Fetterman, J Gregor; Killeen, P Richard
Pigeons pecked on three keys, responses to one of which could be reinforced after 3 flashes of the houselight, to a second key after 6, and to a third key after 12. The flashes were arranged according to variable-interval schedules. Response allocation among the keys was a function of the number of flashes. When flashes were omitted, transitions occurred very late. Increasing flash duration produced a leftward shift in the transitions along a number axis. Increasing reinforcement probability produced a leftward shift, and decreasing reinforcement probability produced a rightward shift. Intermixing different flash rates within sessions separated allocations: Faster flash rates shifted the functions sooner in real time, but later in terms of flash count, and conversely for slower flash rates. A model of control by fading memories of number and time was proposed.
Marks, Kendra L; Martel, David T; Wu, Calvin; Basura, Gregory J; Roberts, Larry E; Schvartz-Leyzac, Kara C; Shore, Susan E
The dorsal cochlear nucleus is the first site of multisensory convergence in mammalian auditory pathways. Principal output neurons, the fusiform cells, integrate auditory nerve inputs from the cochlea with somatosensory inputs from the head and neck. In previous work, we developed a guinea pig model of tinnitus induced by noise exposure and showed that the fusiform cells in these animals exhibited increased spontaneous activity and cross-unit synchrony, which are physiological correlates of tinnitus. We delivered repeated bimodal auditory-somatosensory stimulation to the dorsal cochlear nucleus of guinea pigs with tinnitus, choosing a stimulus interval known to induce long-term depression (LTD). Twenty minutes per day of LTD-inducing bimodal (but not unimodal) stimulation reduced physiological and behavioral evidence of tinnitus in the guinea pigs after 25 days. Next, we applied the same bimodal treatment to 20 human subjects with tinnitus using a double-blinded, sham-controlled, crossover study. Twenty-eight days of LTD-inducing bimodal stimulation reduced tinnitus loudness and intrusiveness. Unimodal auditory stimulation did not deliver either benefit. Bimodal auditory-somatosensory stimulation that induces LTD in the dorsal cochlear nucleus may hold promise for suppressing chronic tinnitus, which reduces quality of life for millions of tinnitus sufferers worldwide. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D
The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance-the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.
Tsang, William W N; Lam, Nazca K Y; Lau, Kit N L; Leung, Harry C H; Tsang, Crystal M S; Lu, Xi
To investigate the effects of aging on postural control and cognitive performance in single- and dual-tasking, a cross-sectional comparative study was conducted in a university motion analysis laboratory. Young adults (n = 30; age 21.9 ± 2.4 years) and older adults (n = 30; age 71.9 ± 6.4 years) were recruited. Postural control after stepping down was measured with and without performing a concurrent auditory response task. Measurements included: (1) reaction time and (2) error rate in performing the cognitive task; (3) total sway path and (4) total sway area after stepping down. Our findings showed that the older adults had significantly longer reaction times and higher error rates than the younger subjects in both the single-tasking and dual-tasking conditions. The older adults had significantly longer reaction times and higher error rates when dual-tasking compared with single-tasking, but the younger adults did not. The older adults demonstrated significantly less total sway path, but larger total sway area, in single-leg stance after stepping down than the young adults. The older adults showed no significant change in total sway path and area between the dual-tasking and single-tasking conditions, while the younger adults showed significant decreases in sway. Older adults prioritize postural control by sacrificing cognitive performance when faced with dual-tasking.
Gandhi, Pritesh Hariprasad; Gokhale, Pradnya A; Mehta, H B; Shah, C J
Reaction time is the time interval between the application of a stimulus and the appearance of an appropriate voluntary response by a subject. It involves stimulus processing, decision making, and response programming. Reaction time studies have been popular because of their implications in sports physiology. Reaction time has also been widely studied because its practical implications may be of great consequence; for example, a slower-than-normal reaction time while driving can have grave results. The aims were to study simple auditory reaction time in congenitally blind subjects and in age- and sex-matched sighted subjects, and to compare the simple auditory reaction time between congenitally blind subjects and healthy control subjects. The study was carried out in two groups: the first comprised 50 congenitally blind subjects and the second 50 healthy controls. It was carried out on a Multiple Choice Reaction Time Apparatus, Inco Ambala Ltd. (accuracy ±0.001 s), in a sitting position at Government Medical College and Hospital, Bhavnagar, and at a Blind School, PNR campus, Bhavnagar, Gujarat, India. Simple auditory reaction time in response to four different types of sound (horn, bell, ring, and whistle) was recorded in both groups. According to our study, there is no significant difference in reaction time between congenitally blind and normal healthy persons. Blind individuals commonly rely on tactual and auditory cues for information and orientation; this reliance on touch and audition, together with more practice in using these modalities to guide behavior, is often reflected in better performance of blind relative to sighted participants in tactile or auditory discrimination tasks, but there is no difference in reaction time between congenitally blind and sighted people.
Kostal, Lubomir; Lansky, Petr; Pilarski, Stevan
One of the primary goals of neuroscience is to understand how neurons encode and process information about their environment. The problem is often approached indirectly by examining the degree to which the neuronal response reflects the stimulus feature of interest. In this context, the methods of signal estimation and detection theory provide the theoretical limits on the decoding accuracy with which the stimulus can be identified. The Cramér-Rao lower bound on the decoding precision is widely used, since it can be evaluated easily once the mathematical model of the stimulus-response relationship is determined. However, little is known about the behavior of different decoding schemes with respect to the bound if the neuronal population size is limited. We show that under broad conditions the optimal decoding displays a threshold-like shift in performance in dependence on the population size. The onset of the threshold determines a critical range where a small increment in size, signal-to-noise ratio or observation time yields a dramatic gain in the decoding precision. We demonstrate the existence of such threshold regions in early auditory and olfactory information coding. We discuss the origin of the threshold effect and its impact on the design of effective coding approaches in terms of relevant population size.
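The Cramér-Rao bound is indeed easy to evaluate once a stimulus-response model is fixed. As a hedged illustration (a generic Gaussian-tuning Poisson population, not the specific models analyzed in the paper), the sketch below shows how Fisher information sums across independent neurons, so the bound on decoding variance shrinks as population size grows:

```python
import numpy as np

def fisher_information(s, centers, gain=10.0, width=1.0, T=1.0):
    """Fisher information about stimulus s carried by independent Poisson
    neurons with Gaussian tuning curves (illustrative model, invented params)."""
    rates = gain * np.exp(-0.5 * ((s - centers) / width) ** 2)   # f_i(s)
    drates = rates * (centers - s) / width ** 2                  # f_i'(s)
    # For Poisson spiking observed over time T, information adds across neurons:
    # I(s) = T * sum_i f_i'(s)^2 / f_i(s)
    return T * np.sum(drates ** 2 / rates)

# Cramer-Rao lower bound on the variance of any unbiased decoder is 1 / I(s).
for n in (5, 50, 500):
    I = fisher_information(0.0, np.linspace(-5.0, 5.0, n))
    print(n, 1.0 / I)    # the bound shrinks as the population grows
```

Note that this simple formula only gives the theoretical limit; the abstract's point is that actual decoders may sit far from it until the population crosses a critical size.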
Coleman, A Rand; Williams, J Michael
This study examined the effects of implicit semantic and rhyming cues on the perception of auditory stimuli among nonaphasic participants who had suffered a lesion of the right cerebral hemisphere and auditory neglect of sound perceived by the left ear. Because language represents an elaborate processing of auditory stimuli and the language centers were intact among these patients, it was hypothesized that interactive verbal stimuli presented in a dichotic manner would attenuate neglect. The selected participants were administered an experimental dichotic listening test composed of six types of word pairs: unrelated words, synonyms, antonyms, categorically related words, compound words, and rhyming words. Presentation of word pairs that were semantically related resulted in a dramatic reduction of auditory neglect. Dichotic presentations of rhyming words exacerbated auditory neglect. These findings suggest that the perception of auditory information is strongly affected by the specific content conveyed by the auditory system. Language centers will process a degraded stimulus that contains salient language content. A degraded auditory stimulus is neglected if it is devoid of content that activates the language centers or other cognitive systems. In general, these findings suggest that auditory neglect involves a complex interaction of intact and impaired cerebral processing centers with content that is selectively processed by these centers.
Moossavi, Abdollah; Mehrkian, Saeideh; Lotfi, Yones; Faghih Zadeh, Soghrat; Adjedi, Hamed
Objectives: This study investigated the efficacy of working memory training for improving working memory capacity and related auditory stream segregation in children with auditory processing disorder. Methods: Fifteen subjects (9-11 years), clinically diagnosed with auditory processing disorder, participated in this non-randomized case-controlled trial. Working memory abilities and auditory stream segregation were evaluated prior to beginning and six weeks after completing the training program...
Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J
Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.
Cate, Anthony D
BACKGROUND: Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear. METHODOLOGY/PRINCIPAL FINDINGS: We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency. CONCLUSIONS/SIGNIFICANCE: Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.
Hames, Elizabeth C
Electroencephalography (EEG) and blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) assessed the neurocorrelates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20 and 28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate, with observably decreased EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions, reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT when participants were presented with auditory stimuli, as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all-auditory block versus the second presentation of a visual stimulus in an all-visual block (AA2VV2). We conclude that in ASD, combined audiovisual processing is more similar than unimodal processing to NTs.
Coelho, Cesar A.O.; Dunsmoor, Joseph E.; Phelps, Elizabeth A.
Fear-related behaviors are prone to relapse following extinction. We tested in humans a compound extinction design (“deepened extinction”) shown in animal studies to reduce post-extinction fear recovery. Adult subjects underwent fear conditioning to a visual and an auditory conditioned stimulus (CSA and CSB, respectively) separately paired with an electric shock. The target CS (CSA) was extinguished alone followed by compound presentations of the extinguished CSA and nonextinguished CSB. Reco...
Lu, Xi; Siu, Ka-Chun; Fu, Siu N; Hui-Chan, Christina W Y; Tsang, William W N
To compare the performance of older experienced Tai Chi practitioners and healthy controls in dual-task versus single-task paradigms, namely stepping down with and without performing an auditory response task, a cross-sectional study was conducted in the Center for East-meets-West in Rehabilitation Sciences at The Hong Kong Polytechnic University, Hong Kong. Twenty-eight Tai Chi practitioners (73.6 ± 4.2 years) and 30 healthy control subjects (72.4 ± 6.1 years) were recruited. Participants were asked to step down from a 19-cm-high platform and maintain a single-leg stance for 10 s with and without a concurrent cognitive task. The cognitive task was an auditory Stroop test in which the participants were required to respond to different tones of voice regardless of their word meanings. Postural stability after stepping down under single- and dual-task paradigms, in terms of excursion of the subject's center of pressure (COP) and cognitive performance, was measured for comparison between the two groups. Our findings demonstrated significant between-group differences in more outcome measures during dual-task than single-task performance. Thus, the auditory Stroop test showed that Tai Chi practitioners achieved not only a significantly lower error rate in single-tasking, but also a significantly faster reaction time in dual-tasking, when compared with healthy controls similar in age and other relevant demographics. Similarly, the stepping-down task showed that Tai Chi practitioners displayed not only significantly less COP sway area in single-tasking, but also significantly less COP sway path than healthy controls in dual-tasking. These results showed that Tai Chi practitioners achieved better postural stability after stepping down, as well as better performance in the auditory response task, than healthy controls. The improvement, magnified under dual motor-cognitive task conditions, may point to the benefits of Tai Chi as a mind-and-body exercise.
Bernard-Demanze, Laurence
Posture control is based on central integration of multisensory inputs and on an internal representation of body orientation in space. This multisensory feedback regulates posture control and continuously updates the internal model of the body's position, which in turn forwards motor commands adapted to the environmental context and constraints. The peripheral localization of the vestibular system, close to the cochlea, makes vestibular damage possible following cochlear implant (CI) surgery. Impaired vestibular function in CI patients, if any, may have a strong impact on posture stability. The simple postural task of quiet standing is generally paired with cognitive activity in most daily-life conditions, leading to competition for attentional resources in dual-tasking and an increased risk of falls, particularly in patients with impaired vestibular function. This study was aimed at evaluating the effects of post-lingual cochlear implantation on posture control in adult deaf patients. Possible impairment of vestibular function was assessed by comparing the postural performance of patients to that of age-matched healthy subjects during a simple postural task performed in static and dynamic conditions, and during dual-tasking with a visual or auditory memory task. Postural tests were done in eyes open (EO) and eyes closed (EC) conditions, with the cochlear implant activated (ON) or not (OFF). Results showed that the CI patients had significantly reduced limits of stability and increased postural instability in static conditions. In dynamic conditions, they spent considerably more energy to maintain equilibrium, and their head was stabilized neither in space nor on the trunk, while the controls showed a whole-body rigidification strategy. Hearing (prosthesis on) as well as dual-tasking did not really improve the dynamic postural performance of the CI patients. We conclude that CI patients become strongly visually dependent, mainly in challenging postural conditions.
BACKGROUND: Repetitive transcranial magnetic stimulation (rTMS) of the left temporo-parietal junction area has been studied as a treatment option for auditory verbal hallucinations. Although the right temporo-parietal junction area has also shown involvement in the genesis of auditory verbal hallucinations, no studies have used bilateral stimulation. Moreover, little is known about the durability of effects. We studied the short- and long-term effects of 1 Hz treatment of the left temporo-parietal junction area in schizophrenia patients with persistent auditory verbal hallucinations, compared to sham stimulation, and added an extra treatment arm of bilateral TPJ area stimulation. METHODS: In this randomized controlled trial, 51 patients diagnosed with schizophrenia and persistent auditory verbal hallucinations were randomly allocated to treatment of the left or bilateral temporo-parietal junction area or sham treatment. Patients were treated for six days, twice daily for 20 minutes. Short-term efficacy was measured with the Positive and Negative Syndrome Scale (PANSS), the Auditory Hallucinations Rating Scale (AHRS), and the Positive and Negative Affect Scale (PANAS). We included follow-up measures with the AHRS and PANAS at four weeks and three months. RESULTS: The interaction between time and treatment for Hallucination item P3 of the PANSS showed a trend toward significance, caused by a small reduction of scores in the left group. Although self-reported hallucination scores, as measured with the AHRS and PANAS, decreased significantly during the trial period, there were no differences between the three treatment groups. CONCLUSION: We did not find convincing evidence for the efficacy of left-sided rTMS compared to sham rTMS. Moreover, bilateral rTMS was not superior to left rTMS or sham in improving AVH. Optimizing treatment parameters may result in stronger evidence for the efficacy of rTMS treatment of AVH. Moreover, future research should consider
Zokoll, Melanie A; Klump, Georg M; Langemann, Ulrike
This study evaluates auditory memory for variations in the rate of sinusoidal amplitude modulation (SAM) of noise bursts in the European starling (Sturnus vulgaris). To estimate the extent of the starling's auditory short-term memory store, a delayed non-matching-to-sample paradigm was applied. The birds were trained to discriminate between a series of identical "sample stimuli" and a single "test stimulus". The birds classified SAM rates of sample and test stimuli as being either the same or different. Memory performance of the birds was measured as the percentage of correct classifications. Auditory memory persistence time was estimated as a function of the delay between sample and test stimuli. Memory performance was significantly affected by the delay between sample and test and by the number of sample stimuli presented before the test stimulus, but was not affected by the difference in SAM rate between sample and test stimuli. The individuals' auditory memory persistence times varied between 2 and 13 s. The starlings' auditory memory persistence in the present study for signals varying in the temporal domain was significantly shorter compared to that of a previous study (Zokoll et al. in J Acoust Soc Am 121:2842, 2007) applying tonal stimuli varying in the spectral domain.
Buchholz, Jörg; Kerketsos, P
When an early wall reflection is added to a direct sound, a spectral modulation is introduced to the signal's power spectrum. This spectral modulation typically produces an auditory sensation of coloration or pitch. Throughout this study, auditory spectral-integration effects involved in coloration detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA…]. The filterbank was designed to approximate auditory filter shapes measured by Oxenham and Shera [JARO, 2003, 541-554], derived from forward masking data. The results of the present study demonstrate that a "purely" spectrum-based model approach can successfully describe auditory coloration detection even at high…
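The spectral modulation referred to here is comb filtering: adding a copy of the signal delayed by τ produces power-spectrum ripples with period 1/τ. A minimal numerical sketch (the sample rate, delay, and gain are illustrative assumptions, not the study's stimulus parameters):

```python
import numpy as np

fs = 48000                 # assumed sample rate (Hz), for illustration only
delay_s = 0.002            # reflection delay tau = 2 ms
gain = 1.0                 # reflection amplitude relative to the direct sound

# Direct sound plus one delayed copy: y[n] = x[n] + gain * x[n - d]
d = int(fs * delay_s)
x = np.random.default_rng(0).standard_normal(fs)   # 1 s of noise
y = x.copy()
y[d:] += gain * x[:-d]

# The resulting comb filter |1 + gain*exp(-j*2*pi*f*tau)|^2 has peaks at
# multiples of 1/tau = 500 Hz and notches halfway between them.
f = np.fft.rfftfreq(len(y), 1.0 / fs)              # 1 Hz bin spacing
H = np.abs(1 + gain * np.exp(-2j * np.pi * f * delay_s)) ** 2
print(f[np.argmin(H[:500])])                       # → 250.0 (first notch)
```

Longer delays pack the ripples more densely in frequency, which is why detection thresholds vary with reflection delay and stimulus bandwidth.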
Drawing on theoretical and computational work with the localist Dual Route reading model and results from behavioral studies, Besner, Moroz, and O'Malley (2011) proposed that tasks that require overriding stimulus-specific defaults (e.g., semantics when naming Arabic numerals, and phonology when evaluating the parity of number words) necessitate the ability to modulate the strength of connections between cognitive modules for lexical representation, semantics, and phonology on a task- and stimulus-specific basis. We used fMRI to evaluate this account by assessing changes in functional connectivity while participants performed tasks that did and did not require such stimulus-task default overrides. The occipital region showing the greatest modulation of BOLD signal strength for the two stimulus types was used as the seed region for Granger Causality Mapping (GCM). Our GCM analysis revealed a region of rostromedial frontal cortex with a crossover interaction. When participants performed tasks that required overriding stimulus-type defaults (i.e., parity judgments of number words and naming Arabic numerals), functional connectivity between the occipital region and rostromedial frontal cortex was present. Statistically significant functional connectivity was absent when the tasks were the default for the stimulus type (i.e., parity judgments of Arabic numerals and reading number words). This frontal region (BA 10) has previously been shown to be involved in goal-directed behaviour and maintenance of a specific task set. We conclude that overriding stimulus-task defaults requires a modulation of connection strengths between cognitive modules, and that the override mechanism predicted from cognitive theory is instantiated by frontal modulation of neural activity in brain regions specialized for sensory processing.
Badcock, Johanna C.
The National Institute of Mental Health initiative called the Research Domain Criteria (RDoC) project aims to provide a new approach to understanding mental illness grounded in the fundamental domains of human behaviour and psychological functioning. To this end, the RDoC framework encourages researchers and clinicians to think outside the [diagnostic] box, by studying symptoms, behaviours or biomarkers that cut across traditional mental illness categories. In this article we examine and discuss how the RDoC framework can improve our understanding of psychopathology by zeroing in on hallucinations, now widely recognized as a symptom that occurs in a range of clinical and non-clinical groups. We focus on a single domain of functioning, namely cognitive [inhibitory] control, and assimilate key findings structured around the basic RDoC units of analysis, which span the range from observable behaviour to molecular genetics. Our synthesis and critique of the literature provides a deeper understanding of the mechanisms involved in the emergence of auditory hallucinations, linked to the individual dynamics of inhibitory development before and after puberty; favours separate developmental trajectories for clinical and non-clinical hallucinations; yields new insights into co-occurring emotional and behavioural problems; and suggests some novel avenues for treatment.
Kara, Inci; Apiliogullari, Seza; Bagcı Taylan, Sengal; Bariskaner, Hulagu; Celik, Jale Bengi
This study was designed to investigate whether dexketoprofen added perineurally or subcutaneously alters the effects of levobupivacaine in a rat model of sciatic nerve blockade. Thirty-six rats received unilateral sciatic nerve blocks along with a subcutaneous injection by a blinded investigator, assigned at random. Combinations were as follows: Group 1 (sham), perineural and subcutaneous saline; Group 2, perineural levobupivacaine alone and subcutaneous saline; Group 3, perineural levobupivacaine plus dexketoprofen and subcutaneous saline; Group 4, perineural levobupivacaine and subcutaneous dexketoprofen; Group 5, perineural dexketoprofen and subcutaneous saline; and Group 6, perineural saline and subcutaneous dexketoprofen. The levobupivacaine concentration was fixed at 0.05%, and the dose of dexketoprofen was 1 mg kg(-1). Sensory analgesia was assessed by paw withdrawal latency to a thermal stimulus every 30 min. The unblocked paw served as the control for the assessment of systemic, centrally mediated analgesia. Perineural and subcutaneous dexketoprofen coadministered with perineural levobupivacaine did not enhance the duration of sensory blockade when compared with levobupivacaine alone. There were significant differences between the operative and control paws at time points 30-90 min in the levobupivacaine-alone, levobupivacaine plus perineural dexketoprofen, and levobupivacaine plus subcutaneous dexketoprofen groups. No significant differences were found between the levobupivacaine-alone group and the dexketoprofen-added groups in the operative paw. The effects of perineural administration of dexketoprofen are unknown. There is no significant difference between the analgesic effects of peripheral nerve blocks using levobupivacaine alone and levobupivacaine plus subcutaneous or perineural dexketoprofen. © 2012 The Authors. Fundamental and Clinical Pharmacology © 2012 Société Française de Pharmacologie et de Thérapeutique.
Corey, Jason Andrew
An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to
Background: Parkinson's disease is a progressive neurological disorder resulting from a degeneration of dopamine-producing cells in the substantia nigra. Clinical symptoms typically affect gait pattern and motor performance. Evidence suggests that individual auditory cueing devices may be used effectively for the management of gait and freezing in people with Parkinson's disease. The primary aim of the randomised controlled trial is to evaluate the effect of an individual auditory cueing device on freezing and gait speed in people with Parkinson's disease. Methods: A prospective multi-centre randomised crossover design trial will be conducted. Forty-seven subjects will be randomised into either Group A or Group B, each with a control and intervention phase. Baseline measurements will be recorded using the Freezing of Gait Questionnaire as the primary outcome measure and three secondary outcome measures: the 10 m Walk Test, the Timed "Up & Go" Test and the Modified Falls Efficacy Scale. Assessments are taken three times over a 3-week period. A follow-up assessment will be completed after three months. A secondary aim of the study is to evaluate the impact of such a device on the quality of life of people with Parkinson's disease using a qualitative methodology. Conclusion: The Apple iPod Shuffle™ and similar devices provide a cost-effective and innovative platform for integration of individual auditory cueing devices into clinical, social and home environments and are shown to have an immediate effect on gait, with improvements in walking speed, stride length and freezing. It is evident that individual auditory cueing devices are of benefit to people with Parkinson's disease, and the aim of this randomised controlled trial is to maximise the benefits by allowing the individual to use devices in both a clinical and social setting, with minimal disruption to their daily routine. Trial registration: The protocol for this study is registered
Manoonpong, Poramate; Pasemann, Frank; Fischer, Joern
…a neural preprocessing system together with a modular neural controller is used to generate a sound tropism of a four-legged walking machine. The neural preprocessing network acts as a low-pass filter and is followed by a network which discerns between signals coming from the left or the right. The parameters of these networks are optimized by an evolutionary algorithm. In addition, a simple modular neural controller then generates the desired different walking patterns such that the machine walks straight, then turns towards a switched-on sound source, and then stops near to it.
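The preprocessing chain described (low-pass smoothing of each microphone channel, then a left/right decision) can be sketched as follows. The filter coefficient and decision threshold here are invented for illustration; the paper's actual parameters were evolved, not hand-set:

```python
import numpy as np

def lowpass(signal, alpha=0.1):
    """First-order low-pass filter: y[n] = y[n-1] + alpha*(x[n] - y[n-1])."""
    y, out = 0.0, []
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return np.array(out)

def steer(left_mic, right_mic, threshold=0.05):
    """Turn toward the louder (smoothed) channel; go straight when balanced."""
    l = lowpass(np.abs(left_mic)).mean()
    r = lowpass(np.abs(right_mic)).mean()
    if abs(l - r) < threshold * max(l, r):
        return "straight"
    return "left" if l > r else "right"

# A sound source on the right puts more energy into the right channel.
t = np.linspace(0.0, 1.0, 1000)
tone = np.sin(2 * np.pi * 5 * t)
print(steer(0.3 * tone, 1.0 * tone))   # → right
```

A real controller would feed this decision into the gait generator to select the straight, turning, or stopping pattern.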
Pluta, Scott R; Rowland, Benjamin A; Stanford, Terrence R; Stein, Barry E
In environments containing sensory events at competing locations, selecting a target for orienting requires prioritization of stimulus values. Although the superior colliculus (SC) is causally linked to the stimulus selection process, the manner in which SC multisensory integration operates in a competitive stimulus environment is unknown. Here we examined how the activity of visual-auditory SC neurons is affected by placement of a competing target in the opposite hemifield, a stimulus configuration that would, in principle, promote interhemispheric competition for access to downstream motor circuitry. Competitive interactions between the targets were evident in how they altered unisensory and multisensory responses of individual neurons. Responses elicited by a cross-modal stimulus (multisensory responses) proved to be substantially more resistant to competitor-induced depression than were unisensory responses (evoked by the component modality-specific stimuli). Similarly, when a cross-modal stimulus served as the competitor, it exerted considerably more depression than did its individual component stimuli, in some cases producing more depression than predicted by their linear sum. These findings suggest that multisensory integration can help resolve competition among multiple targets by enhancing orientation to the location of cross-modal events while simultaneously suppressing orientation to events at alternate locations.
Wittenbach, Jason D.
Sequential behaviors are an important part of the behavioral repertoire of many animals, and understanding how neural circuits encode and generate such sequences is a long-standing question in neuroscience. The Bengalese finch is a useful model system for studying variable action sequences. The songs of these birds consist of well-defined vocal elements (syllables) that are strung together to form sequences. The ordering of the syllables within the sequence is variable but not random: it shows complex statistical patterns (syntax). While often thought to be first-order, the syntax of the Bengalese finch song shows a distinct form of history dependence in which the probability of repeating a syllable decreases as a function of the number of repetitions that have already occurred. Current models of the Bengalese finch song-control circuitry offer no explanation for this repetition adaptation. The Bengalese finch also uses real-time auditory feedback to control the song syntax. Considering these facts, we hypothesize that repetition adaptation in the Bengalese finch syntax may be caused by stimulus-specific adaptation, a widespread phenomenon in which neural responses to a specific stimulus become weaker with repeated presentations of the same stimulus. We begin by proposing a computational model for the song-control circuit in which an auditory feedback signal that undergoes stimulus-specific adaptation helps drive repeated syllables. We show that this model does indeed capture the repetition adaptation observed in Bengalese finch syntax; along the way, we derive a new probabilistic model for repetition adaptation. Key predictions of our model are analyzed in light of experiments performed by collaborators. Next, we extend the model in order to predict how the syntax will change as a function of brain temperature. These predictions are compared to experimental results from collaborators in which portions of the Bengalese finch song circuit are cooled in awake and behaving birds.
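The repetition-adaptation idea in this abstract can be sketched as a toy model (not the authors' actual circuit model): a hypothetical auditory feedback drive adapts with each repetition, so the probability of repeating a syllable decays with the number of repetitions already produced. All function names, parameter names, and values below are illustrative assumptions.

```python
import math
import random

def repeat_probability(n_repeats, p0=0.8, adapt_rate=0.35):
    """Probability of repeating a syllable after n_repeats prior repetitions.

    The feedback drive adapts exponentially with each repetition
    (stimulus-specific adaptation), so the repeat probability decays.
    p0 and adapt_rate are illustrative, not fitted values.
    """
    return p0 * math.exp(-adapt_rate * n_repeats)

def sample_bout(rng, p0=0.8, adapt_rate=0.35, max_len=50):
    """Sample one run length of a repeated syllable under the toy model."""
    n = 1  # the syllable is produced at least once
    while n < max_len and rng.random() < repeat_probability(n - 1, p0, adapt_rate):
        n += 1
    return n
```

Sampling many bouts from this model yields run-length distributions whose tails fall off faster than the geometric distribution implied by a fixed (first-order) repeat probability, which is the qualitative signature of repetition adaptation described above.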
Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G
We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation-two consecutive intervals of streams of visual letters-and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower-that is, auditory sensitivity was improved-for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
Brasileiro, A; Gama, G; Trigueiro, L; Ribeiro, T; Silva, E; Galvão, É; Lindquist, A
Stroke is an important cause of disability and functional dependence worldwide. To determine the immediate effects of visual and auditory biofeedback, combined with partial body-weight-supported (PBWS) treadmill training, on the gait of individuals with chronic hemiparesis. Randomized controlled trial. Outpatient rehabilitation hospital. Thirty subjects with chronic hemiparesis and the ability to walk with some help. Participants were randomized to a control group that underwent only PBWS treadmill training; an experimental group I with visual biofeedback from a display monitor, in the form of symbolic feet as the subject took a step; or an experimental group II with auditory biofeedback associated with the display, using a metronome at 115% of the individual's preferred cadence. They trained for 20 minutes and were evaluated before and after training. Spatio-temporal and angular gait variables were obtained by kinematics from the Qualisys Motion Analysis system. Increases in speed and stride length were observed for all groups over time (speed: F=25.63; P<…) …, with no immediate additional benefit of biofeedback for individuals with chronic hemiparesis in the short term. Additional studies are needed to determine whether, in the long term, biofeedback will promote additional benefit to PBWS treadmill training. The findings of this study indicate that visual and auditory biofeedback do not bring immediate benefits to PBWS treadmill training of individuals with chronic hemiparesis. This suggests that, for additional benefits to be achieved with biofeedback, effects should be investigated after long-term training, which may determine whether some kind of biofeedback is superior to another in improving hemiparetic gait.
Oray, Serkan; Lu, Zhong-Lin; Dawson, Michael E
To investigate the cross-modal nature of the exogenous attention system, we studied how involuntary attention in the visual modality affects ERPs elicited by sudden onset of events in the auditory modality. Relatively loud auditory white-noise bursts were presented to subjects with random and long inter-trial intervals. The noise bursts were either presented alone or paired with a visual stimulus with a visual-to-auditory onset asynchrony of 120 ms. In a third condition, the visual stimuli were shown alone. All three conditions, auditory alone, visual alone, and paired visual/auditory, were randomly intermixed and presented with equal probabilities. Subjects were instructed to fixate on a point in front of them without task instructions concerning either the auditory or visual stimuli. ERPs were recorded from 28 scalp sites throughout every experimental session. Compared to ERPs in the auditory-alone condition, pairing the auditory noise bursts with the visual stimulus reduced the amplitude of the auditory N100 component at Cz by 40% and the auditory P200/P300 component at Cz by 25%. No significant topographical change was observed in the scalp distributions of the N100 and P200/P300. Our results suggest that involuntary attention to visual stimuli suppresses early sensory (N100) as well as late cognitive (P200/P300) processing of sudden auditory events. The activation of the exogenous attention system by sudden auditory onset can be modified by involuntary visual attention in a cross-modal, passive prepulse inhibition paradigm.
Profant, Oliver; Tintěra, Jaroslav; Balogová, Zuzana; Ibrahim, Ibrahim; Jilek, Milan; Syka, Josef
Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim to explore the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects. The elderly group with expressed presbycusis (EP) differed from the elderly group with mild presbycusis (MP) in hearing thresholds measured by pure-tone audiometry, in the presence and amplitudes of transient otoacoustic emissions (TEOAE) and distortion-product otoacoustic emissions (DPOAE), and in speech understanding under noisy conditions. Acoustically evoked activity (pink noise centered around 350 Hz, 700 Hz, 1.5 kHz, 3 kHz, 8 kHz), recorded by BOLD fMRI from an area centered on Heschl's gyrus, was used to determine age-related changes at the level of the auditory cortex. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more expressed than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing. PMID:25734519
Chen, Yu-Han; Edgar, J Christopher; Huang, Mingxiong; Hunter, Michael A; Epstein, Emerson; Howell, Breannan; Lu, Brett Y; Bustillo, Juan; Miller, Gregory A; Cañive, José M
Although magnetoencephalography (MEG) studies show superior temporal gyrus (STG) auditory processing abnormalities in schizophrenia at 50 and 100 ms, EEG and corticography studies suggest involvement of additional brain areas (e.g., frontal areas) during this interval. Study goals were to identify 30 to 130 ms auditory encoding processes in schizophrenia (SZ) and healthy controls (HC) and group differences throughout the cortex. The standard paired-click task was administered to 19 SZ and 21 HC subjects during MEG recording. Vector-based Spatial-temporal Analysis using L1-minimum-norm (VESTAL) provided 4D maps of activity from 30 to 130 ms. Within-group t-tests compared post-stimulus 50 ms and 100 ms activity to baseline. Between-group t-tests examined 50 and 100 ms group differences. Bilateral 50 and 100 ms STG activity was observed in both groups. HC had stronger bilateral 50 and 100 ms STG activity than SZ. In addition to the STG group difference, non-STG activity was also observed in both groups. For example, whereas HC had stronger left and right inferior frontal gyrus activity than SZ, SZ had stronger right superior frontal gyrus and left supramarginal gyrus activity than HC. Less STG activity was observed in SZ than HC, indicating encoding problems in SZ. Yet auditory encoding abnormalities are not specific to STG, as group differences were observed in frontal and SMG areas. Thus, present findings indicate that individuals with SZ show abnormalities in multiple nodes of a concurrently activated auditory network.
Overgaard, Morten; Lindeløv, Jonas Kristoffer; Svejstrup, Stinna
This paper reports an experiment intended to test a particular hypothesis derived from blindsight research, which we name the "source misidentification hypothesis." According to this hypothesis, a subject may be correct about a stimulus without being correct about how she had access … to this knowledge (whether the stimulus was visual, auditory, or something else). We test this hypothesis in healthy subjects, asking them to report whether a masked stimulus was presented auditorily or visually, what the stimulus was, and how clearly they experienced the stimulus using the Perceptual Awareness … experience of the stimulus. To demonstrate that particular levels of reporting accuracy are obtained, we employ a statistical strategy, which operationally tests the hypothesis of non-equality, such that the usual rejection of the null-hypothesis admits the conclusion of equivalence. …
van der Aa, Jeroen; Honing, Henkjan; ten Cate, Carel
Perceiving temporal regularity in an auditory stimulus is considered one of the basic features of musicality. Here we examine whether zebra finches can detect regularity in an isochronous stimulus. Using a go/no go paradigm we show that zebra finches are able to distinguish between an isochronous and an irregular stimulus. However, when the tempo of the isochronous stimulus is changed, it is no longer treated as similar to the training stimulus. Training with three isochronous and three irregular stimuli did not result in improvement of the generalization. In contrast, humans, exposed to the same stimuli, readily generalized across tempo changes. Our results suggest that zebra finches distinguish the different stimuli by learning specific local temporal features of each individual stimulus rather than attending to the global structure of the stimuli, i.e., to the temporal regularity. Copyright © 2015 Elsevier B.V. All rights reserved.
Bernard-Demanze, Laurence; Léonard, Jacques; Dumitrescu, Michel; Meller, Renaud; Magnan, Jacques; Lacour, Michel
Posture control is based on central integration of multisensory inputs, and on internal representation of body orientation in space. This multisensory feedback regulates posture control and continuously updates the internal model of body's position which in turn forwards motor commands adapted to the environmental context and constraints. The peripheral localization of the vestibular system, close to the cochlea, makes vestibular damage possible following cochlear implant (CI) surgery. Impaired vestibular function in CI patients, if any, may have a strong impact on posture stability. The simple postural task of quiet standing is generally paired with cognitive activity in most day life conditions, leading therefore to competition for attentional resources in dual-tasking, and increased risk of fall particularly in patients with impaired vestibular function. This study was aimed at evaluating the effects of postlingual cochlear implantation on posture control in adult deaf patients. Possible impairment of vestibular function was assessed by comparing the postural performance of patients to that of age-matched healthy subjects during a simple postural task performed in static (stable platform) and dynamic (platform in translation) conditions, and during dual-tasking with a visual or auditory memory task. Postural tests were done in eyes open (EO) and eyes closed (EC) conditions, with the CI activated (ON) or not (OFF). Results showed that the postural performance of the CI patients strongly differed from the controls, mainly in the EC condition. The CI patients showed significantly reduced limits of stability and increased postural instability in static conditions. In dynamic conditions, they spent considerably more energy to maintain equilibrium, and their head was stabilized neither in space nor on trunk: they behaved dynamically without vision like an inverted pendulum while the controls showed a whole body rigidification strategy. Hearing (prosthesis on) as well
Stekelenburg, Jeroen J.; Keetels, Mirjam
The Colavita effect refers to the phenomenon that when confronted with an audiovisual stimulus, observers report more often to have perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we ex...
Jones, L.A.; Hills, P.J.; Dick, K.M.; Jones, S.P.; Bright, P.
Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification. PMID:26716891
Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M.
Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration and electromyography from the pectoralis and triceps muscles were recorded. The SS condition induced an increase in the RFD and peak velocity and a reduction in the movement onset and duration, in comparison with the VS and AS conditions. The onset of activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training. PMID:24489967
Murgia, Mauro; Pili, Roberta; Corona, Federica; Sors, Fabrizio; Agostini, Tiziano A; Bernardis, Paolo; Casula, Carlo; Cossu, Giovanni; Guicciardi, Marco; Pau, Massimiliano
The use of rhythmic auditory stimulation (RAS) has been proven useful in the management of gait disturbances associated with Parkinson's disease (PD). Typically, the RAS consists of metronome or music-based sounds (artificial RAS), while ecological footstep sounds (ecological RAS) have never been used for rehabilitation programs. The aim of this study was to compare the effects of a rehabilitation program integrated either with ecological or with artificial RAS. An observer-blind, randomized controlled trial was conducted to investigate the effects of 5 weeks of supervised rehabilitation integrated with RAS. Thirty-eight individuals affected by PD were randomly assigned to one of the two conditions (ecological vs. artificial RAS); thirty-two of them (age 68.2 ± 10.5, Hoehn and Yahr 1.5-3) concluded all phases of the study. Spatio-temporal parameters of gait and clinical variables were assessed before the rehabilitation period, at its end, and after a 3-month follow-up. Thirty-two participants were analyzed. The results revealed that both groups improved in the majority of biomechanical and clinical measures, independently of the type of sound. Moreover, exploratory analyses for separate groups were conducted, revealing improvements on spatio-temporal parameters only in the ecological RAS group. Overall, our results suggest that ecological RAS is equally effective compared to artificial RAS. Future studies should further investigate the role of ecological RAS, on the basis of information revealed by our exploratory analyses. Theoretical, methodological, and practical issues concerning the implementation of ecological sounds in the rehabilitation of PD patients are discussed. www.ClinicalTrials.gov, identifier NCT03228888.
Ward, Ryan D; Gallistel, C R; Jensen, Greg; Richards, Vanessa L; Fairhurst, Stephen; Balsam, Peter D
In a conditioning protocol, the onset of the conditioned stimulus ([CS]) provides information about when to expect reinforcement (unconditioned stimulus [US]). There are two sources of information from the CS in a delay conditioning paradigm in which the CS-US interval is fixed. The first depends on the informativeness, the degree to which CS onset reduces the average expected time to onset of the next US. The second depends only on how precisely a subject can represent a fixed-duration interval (the temporal Weber fraction). In three experiments with mice, we tested the differential impact of these two sources of information on rate of acquisition of conditioned responding (CS-US associability). In Experiment 1, we showed that associability (the inverse of trials to acquisition) increased in proportion to informativeness. In Experiment 2, we showed that fixing the duration of the US-US interval or the CS-US interval or both had no effect on associability. In Experiment 3, we equated the increase in information produced by varying the C/T ratio with the increase produced by fixing the duration of the CS-US interval. Associability increased with increased informativeness, but, as in Experiment 2, fixing the CS-US duration had no effect on associability. These results are consistent with the view that CS-US associability depends on the increased rate of reward signaled by CS onset. The results also provide further evidence that conditioned responding is temporally controlled when it emerges.
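The two sources of information in this account can be made concrete with a small numeric sketch. Informativeness is the ratio of the average US-US interval (C) to the CS-US interval (T); on this view, associability rises in proportion to C/T, so trials-to-acquisition falls inversely with it. The scaling constant k below is a hypothetical illustration, not a value fitted in the study.

```python
def informativeness(us_us_interval, cs_us_interval):
    """C/T ratio: how much CS onset shortens the expected wait to the US.

    us_us_interval (C) is the average time between USs in the session;
    cs_us_interval (T) is the fixed CS-US delay.
    """
    return us_us_interval / cs_us_interval

def predicted_trials_to_acquisition(us_us_interval, cs_us_interval, k=300.0):
    """Illustrative inverse relation: associability (1 / trials-to-acquisition)
    is proportional to informativeness, so predicted trials-to-acquisition
    scale as k * T / C. k is a hypothetical scaling constant."""
    return k / informativeness(us_us_interval, cs_us_interval)
```

For example, doubling C while holding T fixed doubles informativeness and halves the predicted trials to acquisition, which is the proportionality the first experiment reports.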
Passow, Susanne; Müller, Maike; Westerhausen, René; Hugdahl, Kenneth; Wartenburger, Isabell; Heekeren, Hauke R.; Lindenberger, Ulman; Li, Shu-Chen
Multitalker situations confront listeners with a plethora of competing auditory inputs, and hence require selective attention to relevant information, especially when the perceptual saliency of distracting inputs is high. This study augmented the classical forced-attention dichotic listening paradigm by adding an interaural intensity manipulation…
van Vugt, F T; Kafczyk, T; Kuhn, W; Rollnik, J D; Tillmann, B; Altenmüller, E
Learning to play musical instruments such as the piano was previously shown to benefit post-stroke motor rehabilitation. Previous work hypothesised that the mechanism of this rehabilitation is that patients use auditory feedback to correct their movements and therefore show motor learning. We tested this hypothesis by manipulating the auditory feedback timing in a way that should disrupt such error-based learning. We contrasted a patient group undergoing music-supported therapy on a piano that emits sounds immediately (as in previous studies) with a group whose sounds are presented after a jittered delay. The delay was not noticeable to patients. Thirty-four patients in early stroke rehabilitation with moderate motor impairment and no previous musical background learned to play the piano using simple finger exercises and familiar children's songs. Rehabilitation outcome was not impaired in the jitter group relative to the normal group. On the contrary, some clinical tests suggest that the jitter group outperformed the normal group. Auditory feedback-based motor learning is therefore not the beneficial mechanism of music-supported therapy, and immediate auditory feedback therapy may be suboptimal. Jittered delay may increase the efficacy of the proposed therapy and allow patients to fully benefit from the motivational factors of music training. Our study shows a novel way to test hypotheses concerning music training in a single-blinded way, which is an important improvement over existing unblinded tests of music interventions.
Varnhagen, Connie K.; And Others
Auditory and visual memory span were examined with 13 Down Syndrome and 15 other trainable mentally retarded young adults. Although all subjects demonstrated relatively poor auditory memory span, Down Syndrome subjects were especially poor at long-term memory access for visual stimulus identification and short-term storage and processing of…
Vaviļina, E.; Gaigals, G.
The present paper proposes a highly reconfigurable beamformer stimulus generator of radar antenna array, which includes three main blocks: settings of antenna array, settings of objects (signal sources) and a beamforming simulator. Following from the configuration of antenna array and object settings, different stimulus can be generated as the input signal for a beamformer. This stimulus generator is developed under a greater concept with two utterly independent paths where one is the stimulus generator and the other is the hardware beamformer. Both paths can be complemented in final and in intermediate steps as well to check and improve system performance. This way the technology development process is promoted by making each of the future hardware steps more substantive. Stimulus generator configuration capabilities and test results are presented proving the application of the stimulus generator for FPGA based beamforming unit development and tuning as an alternative to an actual antenna system.
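The stimulus-generator idea, synthesizing array input signals from antenna-array and source settings, can be sketched for the simplest case: a narrowband plane wave arriving at a uniform linear array. This is a generic textbook model, not the authors' FPGA implementation; all names and parameter values are illustrative assumptions.

```python
import numpy as np

def ula_stimulus(n_elements, spacing_wl, angle_deg, n_samples=64, f_norm=0.05):
    """Narrowband plane-wave test stimulus for a uniform linear array.

    Each element sees the same unit-amplitude tone, phase-shifted according
    to its position (in wavelengths) and the source direction. Returns an
    (n_elements, n_samples) complex array of element signals.
    """
    t = np.arange(n_samples)
    tone = np.exp(2j * np.pi * f_norm * t)  # baseband test tone
    elem = np.arange(n_elements)
    # steering phases: inter-element delay for a source at angle_deg
    steer = np.exp(-2j * np.pi * spacing_wl * elem * np.sin(np.deg2rad(angle_deg)))
    return steer[:, None] * tone[None, :]

def delay_and_sum(x, spacing_wl, look_deg):
    """Delay-and-sum beamformer: coherently combine the element signals
    for a chosen look direction."""
    n = x.shape[0]
    elem = np.arange(n)
    w = np.exp(-2j * np.pi * spacing_wl * elem * np.sin(np.deg2rad(look_deg)))
    return np.conj(w) @ x / n
```

Steering the beamformer at the stimulus angle recovers the tone at full amplitude, while mismatched look directions attenuate it; this is exactly the kind of known input/output pair against which a hardware beamformer path can be checked and tuned.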
Atilgan, Huriye; Town, Stephen M; Wood, Katherine C; Jones, Gareth P; Maddox, Ross K; Lee, Adrian K C; Bizley, Jennifer K
How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
Background: Prepulse inhibition (PPI) of the startle response is an important tool to investigate the biology of schizophrenia. PPI is usually observed by use of a startle reflex, such as blinking following an intense sound. A similar phenomenon has not been reported for cortical responses. Results: In 12 healthy subjects, change-related cortical activity in response to an abrupt increase of sound pressure by 5 dB above the background of 65 dB SPL (test stimulus) was measured using magnetoencephalography. The test stimulus evoked a clear cortical response peaking at around 130 ms (Change-N1m). In Experiment 1, effects of the intensity of a prepulse (0.5 ~ 5 dB) on the test response were examined using a paired stimulation paradigm. In Experiment 2, effects of the interval between the prepulse and test stimulus were examined using interstimulus intervals (ISIs) of 50 ~ 350 ms. When the test stimulus was preceded by the prepulse, the Change-N1m was more strongly inhibited by a stronger prepulse (Experiment 1) and a shorter-ISI prepulse (Experiment 2). In addition, the amplitude of the test Change-N1m correlated positively with both the amplitude of the prepulse-evoked response and the degree of inhibition, suggesting that subjects who are more sensitive to auditory change are more strongly inhibited by the prepulse. Conclusions: Since the Change-N1m is easy to measure and control, it would be a valuable tool to investigate mechanisms of sensory gating or the biology of certain mental diseases such as schizophrenia.
Soskey, Laura N; Allen, Paul D; Bennetto, Loisa
One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to the increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
Moraes, Michele M; Rabelo, Patrícia C R; Pinto, Valéria A; Pires, Washington; Wanner, Samuel P; Szawka, Raphael E; Soares, Danusa D
Listening to melodic music is regarded as a non-pharmacological intervention that ameliorates various disease symptoms, likely by changing the activity of brain monoaminergic systems. Here, we investigated the effects of exposure to melodic music on the concentrations of dopamine (DA), serotonin (5-HT) and their respective metabolites in the caudate-putamen (CPu) and nucleus accumbens (NAcc), areas linked to reward and motor control. Male adult Wistar rats were randomly assigned to a control group or a group exposed to music. The music group was submitted to 8 music sessions [Mozart's sonata for two pianos (K. 448) at an average sound pressure of 65 dB]. The control rats were handled in the same way but were not exposed to music. Immediately after the last exposure or control session, the rats were euthanized, and their brains were quickly removed to analyze the concentrations of 5-HT, DA, 5-hydroxyindoleacetic acid (5-HIAA) and 3,4-dihydroxyphenylacetic acid (DOPAC) in the CPu and NAcc. Auditory stimuli affected the monoaminergic system in these two brain structures. In the CPu, auditory stimuli increased the concentrations of DA and 5-HIAA but did not change the DOPAC or 5-HT levels. In the NAcc, music markedly increased the DOPAC/DA ratio, suggesting an increase in DA turnover. Our data indicate that auditory stimuli, such as exposure to melodic music, increase DA levels and the release of 5-HT in the CPu as well as DA turnover in the NAcc, suggesting that the music had a direct impact on monoamine activity in these brain areas. Copyright © 2018 Elsevier B.V. All rights reserved.
Shrem, Talia; Murray, Micah M; Deouell, Leon Y
Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.
Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer
Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole.
Paris, Tim; Kim, Jeesun; Davis, Chris
Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on predictive rather than multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.
Mullen, Stuart; Dixon, Mark R.; Belisle, Jordan; Stanley, Caleb
The current study sought to evaluate the efficacy of a stimulus equivalence training procedure in establishing auditory-tactile-visual stimulus classes with 2 children with autism and developmental delays. Participants were exposed to vocal-tactile (A-B) and tactile-picture (B-C) conditional discrimination training and were tested for the…
Stekelenburg, J.J.; Keetels, M.N.
The Colavita effect refers to the phenomenon that when confronted with an audiovisual stimulus, observers report more often to have perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal
Bareham, Corinne A; Georgieva, Stanimira D; Kamke, Marc R; Lloyd, David; Bekinschtein, Tristan A; Mattingley, Jason B
Selective attention is the process of directing limited capacity resources to behaviourally relevant stimuli while ignoring competing stimuli that are currently irrelevant. Studies in healthy human participants and in individuals with focal brain lesions have suggested that the right parietal cortex is crucial for resolving competition for attention. Following right-hemisphere damage, for example, patients may have difficulty reporting a brief, left-sided stimulus if it occurs with a competitor on the right, even though the same left stimulus is reported normally when it occurs alone. Such "extinction" of contralesional stimuli has been documented for all the major sense modalities, but it remains unclear whether its occurrence reflects involvement of one or more specific subregions of the temporo-parietal cortex. Here we employed repetitive transcranial magnetic stimulation (rTMS) over the right hemisphere to examine the effect of disruption of two candidate regions - the supramarginal gyrus (SMG) and the superior temporal gyrus (STG) - on auditory selective attention. Eighteen neurologically normal, right-handed participants performed an auditory task, in which they had to detect target digits presented within simultaneous dichotic streams of spoken distractor letters in the left and right channels, both before and after 20 min of 1 Hz rTMS over the SMG, STG or a somatosensory control site (S1). Across blocks, participants were asked to report on auditory streams in the left, right, or both channels, which yielded focused and divided attention conditions. Performance was unchanged for the two focused attention conditions, regardless of stimulation site, but was selectively impaired for contralateral left-sided targets in the divided attention condition following stimulation of the right SMG, but not the STG or S1. Our findings suggest a causal role for the right inferior parietal cortex in auditory selective attention. Copyright © 2017 Elsevier Ltd. All rights
Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed
Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITDs) and inter-aural intensity differences (IIDs) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and localization tests in the two groups. Children in the APD group had consistently lower scores than typically developing subjects in lateralization and working memory capacity measures. The results showed that working memory capacity had a significantly negative correlation with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia
Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the ratio of the peak amplitude of the event related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream; resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). The results of a mixed model ANOVA revealed an overall effect for AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically-defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
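The gating ratio described above is a simple quotient of ERP peak amplitudes; a minimal sketch, with hypothetical peak values (the helper name and numbers are illustrative only):

```python
# Auditory sensory gating ratio: peak amplitude of the ERP to the
# second click (S2) divided by the peak amplitude to the first (S1).
# Higher ratios indicate weaker suppression of the S2 response.
def gating_ratio(s2_peak: float, s1_peak: float) -> float:
    return s2_peak / s1_peak

# Hypothetical P50 peak amplitudes (µV): S1 = 2.0, S2 = 1.0.
print(gating_ratio(1.0, 2.0))  # → 0.5
```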
Tabur, S; Korkmaz, H; Baysal, E; Hatipoglu, E; Aytac, I; Akarsu, E
The aim of this study was to determine the changes involving the auditory system in cases with acromegaly. Otological examinations of 41 cases with acromegaly (uncontrolled n = 22, controlled n = 19) were compared with those of 24 age- and gender-matched healthy subjects. Whereas the cases with acromegaly underwent examination with pure tone audiometry (PTA), speech audiometry for speech discrimination (SD), tympanometry, stapedius reflex evaluation and otoacoustic emission tests, the control group underwent only otological examination and PTA. Additionally, previously performed paranasal sinus computed tomography scans of all cases with acromegaly and control subjects were obtained to measure the length of the internal acoustic canal (IAC). PTA values were higher in the acromegaly group, and the IAC in the acromegaly group was narrower compared to that in the control group (p = 0.03 for right ears and p = 0.02 for left ears). When only cases with acromegaly were taken into consideration, PTA values in left ears had a positive correlation with growth hormone and insulin-like growth factor-1 levels (r = 0.4, p = 0.02 and r = 0.3, p = 0.03). Of all cases with acromegaly, 13 (32%) had hearing loss in at least one ear; 7 (54%) had sensorineural and 6 (46%) had conductive hearing loss. Acromegaly may cause certain changes in the auditory system. These changes may be multifactorial, causing both conductive and sensorineural defects.
Gibson, Brett M; Wasserman, Edward A
The authors taught pigeons to discriminate displays of 16 identical items from displays of 16 nonidentical items. Unlike most same-different discrimination studies--where only stimulus relations could serve a discriminative function--both the identity of the items and the relations among the items were discriminative features of the displays. The pigeons learned about both stimulus identity and stimulus relations when these 2 sources of information served as redundant, relevant cues. In tests of associative competition, identity cues exerted greater stimulus control than relational cues. These results suggest that the pigeon can respond to both specific stimuli and general relations in the environment.
Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S
Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. Listeners were presented with sounds drawn from
Golob, Edward J; Lewald, Jörg; Jungilligens, Johannes; Getzmann, Stephan
The interplay of perception and memory is very evident when we perceive and then recognize familiar stimuli. Conversely, information in long-term memory may also influence how a stimulus is perceived. Prior work on number cognition in the visual modality has shown that in Western number systems long-term memory for the magnitude of smaller numbers can influence performance involving the left side of space, while larger numbers have an influence toward the right. Here, we investigated in the auditory modality whether a related effect may bias the perception of sound location. Subjects (n = 28) used a swivel pointer to localize noise bursts presented from various azimuth positions. The noise bursts were preceded by a spoken number (1-9) or, as a nonsemantic control condition, numbers that were played in reverse. The relative constant error in noise localization (forward minus reversed speech) indicated a systematic shift in localization toward more central locations when the number was smaller and toward more peripheral positions when the preceding number magnitude was larger. These findings do not support the traditional left-right number mapping. Instead, the results may reflect an overlap between codes for number magnitude and codes for sound location as implemented by two channel models of sound localization, or possibly a categorical mapping stage of small versus large magnitudes. © The Author(s) 2015.
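The relative constant error used above is the signed localization error in the forward-speech condition minus that in the reversed-speech control. A minimal sketch with hypothetical values (names and numbers are illustrative, not from the study):

```python
# Relative constant error of sound localization: signed error after a
# spoken number minus signed error after the same number played in
# reverse (the nonsemantic control). Positive values indicate a shift
# toward the periphery relative to the control; negative, toward center.
def relative_constant_error(err_forward_deg: float, err_reversed_deg: float) -> float:
    return err_forward_deg - err_reversed_deg

# Hypothetical errors (degrees azimuth) following the number "nine".
print(relative_constant_error(2.5, 0.5))  # → 2.0
```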
Background and Purpose. Training in a virtual environment is being established as a new approach in post-stroke neurorehabilitation; specifically, ReoTherapy (REO) is a robot-assisted virtual training device. Trunk stabilization strapping has been part of the concept with this device, and literature is lacking to support this for long-term functional changes in individuals after stroke. The purpose of this case series was to measure the feasibility of auditory trunk sensor feedback during REO therapy in moderately to severely impaired individuals after stroke. Case Description. Using an open-label crossover comparison design, 3 chronic stroke subjects were trained for 12 sessions over six weeks on either the REO or the control condition of task-related training (TRT); after a washout period of 4 weeks, the alternative therapy was given. Outcomes. With both interventions, clinically relevant improvements were found in measures of body function and structure, as well as of activity, for two participants. Providing auditory feedback for trunk control during REO training was found to be feasible. Discussion. The degree of change varied per protocol and may depend on the appropriateness of the technique chosen, as well as on each patient's impaired arm motor control.
Diamantis, Dimitrios A; Ramesova, Sarka; Chatzigiannis, Christos M; Degano, Ilaria; Gerogianni, Paraskevi S; Karadima, Constantina; Perikleous, Sonia; Rekkas, Dimitrios; Gerothanassis, Ioannis P; Galaris, Dimitrios; Mavromoustakos, Thomas; Valsami, Georgia; Sokolova, Romana; Tzakos, Andreas G
Flavonoids possess a rich polypharmacological profile, and their biological role is linked to their oxidation state, protecting DNA from oxidative stress damage. However, their bioavailability is hampered by their poor aqueous solubility. This can be surpassed through encapsulation in supramolecular carriers such as cyclodextrins (CDs). A quercetin-2HP-β-CD complex has been formerly reported by us. However, once the flavonoid is in its 2HP-β-CD-encapsulated state, its oxidation potential, its decomplexation mechanism, and its potential to protect against DNA damage from oxidative stress remained elusive. To unveil this, an array of biophysical techniques was used. The quercetin-2HP-β-CD complex was evaluated through solubility and dissolution experiments, electrochemical and spectroelectrochemical studies (cyclic voltammetry), UV-Vis spectroscopy, HPLC-ESI-MS/MS and HPLC-DAD, fluorescence spectroscopy, NMR spectroscopy, theoretical calculations (density functional theory, DFT) and biological evaluation of the protection offered against H2O2-induced DNA damage. Encapsulation of quercetin inside the supramolecule's cavity enhanced its solubility, and its oxidation profile was retained in the encapsulated state. Although the protective ability of the quercetin-2HP-β-CD complex against H2O2 was diminished, iron serves as a chemical stimulus to dissociate the complex and release quercetin. We found that in a quercetin-2HP-β-CD inclusion complex quercetin retains its oxidation profile similarly to its native state, while iron can operate as a chemical stimulus to release quercetin from its host cavity. Thus, the oxidation profile of a natural product encapsulated in a supramolecular cyclodextrin carrier was characterized, and it was discovered that decomplexation can be triggered by a chemical stimulus. Copyright © 2018. Published by Elsevier B.V.
Kirkwood, Brent Christopher
Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length...
van Kesteren, Marlieke T. R.; Wiersinga-Post, J. Esther C.
Purpose: Several studies on auditory temporal-order processing showed gender differences. Women needed longer inter-stimulus intervals than men when indicating the temporal order of two clicks presented to the left and right ear. In this study, we examined whether we could reproduce these results in
Eggermont, J.J.; Aertsen, A.M.H.J.; Hermes, D.J.; Johannesma, P.I.M.
For neurons in the auditory midbrain of the grass frog the use of a combined spectro-temporal characterization has been evaluated against the separate characterizations of frequency-sensitivity and temporal response properties. By factoring the joint density function of stimulus intensity, I(f, t),
While it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudowords and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from subthreshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest correct recognition of words in the absence of detection, consistent with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudowords) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudowords was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.
Sanfratello, Lori; Aine, Cheryl; Stephen, Julia
Impairments in auditory and visual processing are common in schizophrenia (SP). In the unisensory realm, visual deficits are primarily noted for the dorsal visual stream. In addition, insensitivity to timing offsets between stimuli is widely reported for SP. The aim of the present study was to test, at the physiological level, differences in dorsal/ventral stream visual processing and timing sensitivity between SP and healthy controls (HC) using MEG and a simple auditory/visual task utilizing a variety of multisensory conditions. The paradigm included all combinations of synchronous/asynchronous and central/peripheral stimuli, yielding 4 task conditions. Both HC and SP groups showed activation in parietal areas (dorsal visual stream) during all multisensory conditions, with parietal areas showing decreased activation for SP relative to HC, and a significantly delayed peak of activation for SP in the intraparietal sulcus (IPS). We also observed a differential effect of stimulus synchrony on HC and SP parietal responses. Furthermore, a (negative) correlation was found between SP positive symptoms and activity in IPS. Taken together, our results provide evidence of impairment of the dorsal visual stream in SP during a multisensory task, along with an altered response to timing offsets between presented multisensory stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.
Martinson, Eric; Brock, Derek
.... From this knowledge of another's auditory perspective, a conversational partner can then adapt his or her auditory output to overcome a variety of environmental challenges and ensure that what is said is intelligible...
To compare the development of the auditory system in hearing and completely acoustically deprived animals, naive congenitally deaf white cats (CDCs) and hearing controls (HCs) were investigated at different developmental stages from birth till adulthood. The CDCs had no hearing experience before the acute experiment. In both groups of animals, responses to cochlear implant stimulation were acutely assessed. Electrically evoked auditory brainstem responses (E-ABRs) were recorded with monopolar stimulation at different current levels. CDCs demonstrated extensive development of E-ABRs, from first signs of responses at postnatal (p.n.) day 3, through appearance of all waves of the brainstem response at day 8 p.n., to mature responses around day 90 p.n. Wave I of E-ABRs could not be distinguished from the artifact in the majority of CDCs, whereas in HCs it was clearly separated from the stimulus artifact. Waves II, III, and IV demonstrated higher thresholds in CDCs, whereas this difference was not found for wave V. Amplitudes of wave III were significantly higher in HCs, whereas wave V amplitudes were significantly higher in CDCs. No differences in latencies were observed between the animal groups. These data demonstrate significant postnatal subcortical development in the absence of hearing, and also divergent effects of deafness on early waves II–IV and wave V of the E-ABR.
Joos, Kathleen; Gilles, Annick; Van de Heyning, Paul; De Ridder, Dirk; Vanneste, Sven
An external auditory stimulus induces an auditory sensation which may lead to a conscious auditory perception. Although the sensory aspect is well known, it remains an open question how an auditory stimulus results in an individual's conscious percept. To unravel the uncertainties concerning the neural correlates of a conscious auditory percept, event-related potentials may serve as a useful tool. In the current review, we wanted to shed light on the perceptual aspects of auditory processing and therefore focused mainly on the auditory late-latency responses. Moreover, there is increasing evidence that perception is an active process in which the brain searches for the information it expects to be present, suggesting that auditory perception requires the presence of both bottom-up (i.e., sensory) and top-down (i.e., prediction-driven) processing. Therefore, the auditory evoked potentials will be interpreted in the context of the Bayesian brain model, in which the brain predicts which information it expects and when this will happen. The internal representation of the auditory environment is verified by sensory samples of the environment (P50, N100). When incoming information violates this expectation, it induces the emission of a prediction error signal (Mismatch Negativity), activating higher-order neural networks and inducing the update of prior internal representations of the environment (P300). Copyright © 2014 Elsevier Ltd. All rights reserved.
Asger Emil Munch Schrøder
Echolocating animals reduce their output level and hearing sensitivity with decreasing echo delays, presumably to stabilize the perceived echo intensity during target approaches. In bats, this variation in hearing sensitivity is formed by a call-induced stapedial reflex that tapers off over time after the call. Here, we test the hypothesis that a similar mechanism exists in toothed whales by subjecting a trained harbour porpoise to a series of double sound pulses varying in delay and frequency, while measuring the magnitudes of the evoked auditory brainstem responses (ABRs). We find that the recovery of the ABR to the second pulse is frequency dependent, and that a stapedial reflex therefore cannot account for the reduced hearing sensitivity at short pulse delays. We propose that toothed whale auditory time-varying gain control during echolocation is not enabled by the middle ear as in bats, but rather by frequency-dependent mechanisms such as forward masking and perhaps higher-order control of efferent feedback to the outer hair cells.
Karla M. I. Freiria Elias
Objective To investigate central auditory processing in children with unilateral stroke and to verify whether the hemisphere affected by the lesion influenced auditory competence. Method 23 children (13 male) between 7 and 16 years old were evaluated through speech-in-noise tests (auditory closure), the dichotic digit test and staggered spondaic word test (selective attention), and pitch pattern and duration pattern sequence tests (temporal processing), and their results were compared with those of control children. Auditory competence was established according to performance in auditory analysis ability. Results Similar performance between groups was verified in auditory closure ability, with pronounced deficits in selective attention and temporal processing abilities. Most children with stroke showed auditory ability impaired to a moderate degree. Conclusion Children with stroke showed deficits in auditory processing, and the degree of impairment was not related to the hemisphere affected by the lesion.
A series of computer simulations using variants of a formal model of attention (Melara & Algom, 2003) probed the role of rejection positivity (RP), a slow-wave electroencephalographic (EEG) component, in the inhibitory control of distraction. Behavioral and EEG data were recorded as participants performed auditory selective attention tasks. Simulations that modulated processes of distractor inhibition accounted well for reaction-time (RT) performance, whereas those that modulated target excitation did not. A model that incorporated RP from actual EEG recordings in estimating distractor inhibition was superior in predicting changes in RT as a function of distractor salience across conditions. A model that additionally incorporated momentary fluctuations in EEG as the source of trial-to-trial variation in performance precisely predicted individual RTs within each condition. The results lend support to the linking proposition that RP controls the speed of responding to targets through the inhibitory control of distractors.
Abstract Background In daily life, we are exposed to different sound inputs simultaneously. During neural encoding in the auditory pathway, the neural activities elicited by these different sounds interact with each other. In the present study, we investigated neural interactions elicited by a masker and an amplitude-modulated test stimulus in primary and non-primary human auditory cortex during ipsi-lateral and contra-lateral masking by means of magnetoencephalography (MEG). Results We observed significant decrements of auditory evoked responses and a significant inter-hemispheric difference for the N1m response during both ipsi- and contra-lateral masking. Conclusion The decrements of auditory evoked neural activities during simultaneous masking can be explained by neural interactions evoked by the masker and test stimulus in the peripheral and central auditory systems. The inter-hemispheric differences of N1m decrements during ipsi- and contra-lateral masking reflect a basic hemispheric specialization contributing to the processing of complex auditory stimuli such as speech signals in noisy environments.
McCourt, Mark E; Leone, Lynnette M
We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.
Grosso, A; Cambiaghi, M; Concina, G; Sacco, T; Sacchetti, B
Emotional memories represent the core of human and animal life and drive future choices and behaviors. Early research involving brain lesion studies in animals led to the idea that the auditory cortex participates in emotional learning by processing the sensory features of auditory stimuli paired with emotional consequences and by transmitting this information to the amygdala. Nevertheless, electrophysiological and imaging studies revealed that, following emotional experiences, the auditory cortex undergoes learning-induced changes that are highly specific, associative and long lasting. These studies suggested that the role played by the auditory cortex goes beyond stimulus elaboration and transmission. Here, we discuss three major perspectives created by these data. In particular, we analyze the possible roles of the auditory cortex in emotional learning, we examine the recruitment of the auditory cortex during early and late memory trace encoding, and finally we consider the functional interplay between the auditory cortex and subcortical nuclei, such as the amygdala, that process affective information. We conclude that, starting from the early phase of memory encoding, the auditory cortex has a more prominent role in emotional learning, through its connections with subcortical nuclei, than is typically acknowledged. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Cambiaghi, Marco; Grosso, Anna; Renna, Annamaria; Sacchetti, Benedetto
Memories of frightening events require a protracted consolidation process. Sensory cortex, such as the auditory cortex, is involved in the formation of fearful memories with a more complex sensory stimulus pattern. It remains controversial, however, whether the auditory cortex is also required for fearful memories related to simple sensory stimuli. In the present study, we found that, 1 d after training, the temporary inactivation of either the most anterior region of the auditory cortex, including the primary (Te1) cortex, or the most posterior region, which included the secondary (Te2) component, did not affect the retention of recent memories, which is consistent with the current literature. However, at this time point, the inactivation of the entire auditory cortices completely prevented the formation of new memories. Amnesia was site specific, was not due to deficits in the perception or processing of auditory stimuli, and was strictly related to interference with memory consolidation processes. Strikingly, at a late time interval 4 d after training, blocking the posterior part (encompassing the Te2) alone impaired memory retention, whereas the inactivation of the anterior part (encompassing the Te1) left memory unaffected. Together, these data show that the auditory cortex is necessary for the consolidation of auditory fearful memories related to simple tones in rats. Moreover, these results suggest that, at early time intervals, memory information is processed in a distributed network composed of both the anterior and the posterior auditory cortical regions, whereas, at late time intervals, memory processing is concentrated in the most posterior part containing the Te2 region. Memories of threatening experiences undergo a prolonged process of "consolidation" to be maintained for a long time. The dynamic of fearful memory consolidation is poorly understood. Here, we show that 1 d after learning, memory is processed in a distributed network composed of both primary Te1 and…
Sequences of higher frequency A and lower frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory feature dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant a larger fraction of time, more than those of the weaker percept, a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition.
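A minimal rate-model sketch of the percept competition described above, with mutual inhibition, adaptation and noise, might look as follows. All parameter values, the sigmoidal gain, and the Euler-Maruyama integration scheme are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

def simulate_competition(T=60.0, dt=1e-3, drive=0.5, beta=1.1,
                         phi=0.6, tau_a=2.0, sigma=0.03, seed=0):
    """Two percept units with mutual inhibition, adaptation and noise.

    Returns an (n_steps, 2) array of firing rates; parameters are
    illustrative assumptions, not fitted values.
    """
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    r = np.zeros((n, 2))            # rates of the two competing percept units
    a = np.zeros(2)                 # slow adaptation variables
    r[0] = [0.6, 0.4]
    gain = lambda x: 1.0 / (1.0 + np.exp(-8.0 * (x - 0.2)))  # sigmoidal gain
    for t in range(1, n):
        # input = common drive - cross inhibition - adaptation
        inp = drive - beta * r[t - 1][::-1] - phi * a
        r[t] = (r[t - 1] + dt * (gain(inp) - r[t - 1])
                + sigma * np.sqrt(dt) * rng.standard_normal(2))
        a += dt * (r[t - 1] - a) / tau_a
    return r

r = simulate_competition()
dominant = np.argmax(r, axis=1)     # which percept dominates at each time step
```

Dominance durations can then be read off from runs of constant values in `dominant`, mirroring how percepts are identified in the firing patterns of the model's units.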
Weinberger, Norman M.
Standard beliefs that the function of the primary auditory cortex (A1) is the analysis of sound have proven to be incorrect. Its involvement in learning, memory and other complex processes in both animals and humans is now well-established, although often not appreciated. Auditory coding is strongly modified by associative learning, evident as associative representational plasticity (ARP) in which the representation of an acoustic dimension, like frequency, is re-organized to emphasize a sound that has become behaviorally important. For example, the frequency tuning of a cortical neuron can be shifted to match that of a significant sound and the representational area of sounds that acquire behavioral importance can be increased. ARP depends on the learning strategy used to solve an auditory problem, and the increased cortical area confers greater strength of auditory memory. Thus, primary auditory cortex is involved in cognitive processes, transcending its assumed function of auditory stimulus analysis. The implications for basic neuroscience and clinical auditory neuroscience are presented and suggestions for remediation of auditory processing disorders are introduced. PMID:25356375
Grau, C; Polo, M D; Yago, E; Gual, A; Escera, C
A pre-conscious auditory sensory (echoic) memory of about 10 s duration can be studied with the event-related brain potential mismatch negativity (MMN). Previous work indicates that this memory is preserved in abstinent chronic alcoholics for a duration of up to 2 s. The authors' aim was to determine the integrity of auditory sensory memory as indexed by MMN in chronic alcoholism, when this memory has to be functionally active for a longer period of time. The presence of MMN for stimuli that differ in duration was tested at memory probe intervals (MPIs) of 0.4 and 5.0 s in 17 abstinent chronic alcoholic patients and in 17 healthy age-matched control subjects. MMN was similar in alcoholics and controls when the MPI was 0.4 s, whereas MMN could not be observed in the patients when the MPI was increased to 5.0 s. These results provide evidence of an impairment of auditory sensory memory in abstinent chronic alcoholics, whereas the automatic stimulus-change detector mechanism, involved in MMN generation, is preserved.
Puvvada, Krishna C; Simon, Jonathan Z
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory
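The stimulus-reconstruction approach mentioned above is commonly implemented as a linear time-lagged decoder fit by regularized (ridge) least squares. The sketch below uses synthetic data standing in for MEG recordings; the decoder shape, lag count and regularization value are our assumptions, not the authors' pipeline:

```python
import numpy as np

def fit_decoder(R, s, n_lags=10, lam=1e-2):
    """Fit a time-lagged linear decoder g mapping neural data back to the
    stimulus envelope, by ridge regression.

    R: (T, C) multichannel neural data; s: (T,) stimulus envelope.
    """
    T, C = R.shape
    X = np.zeros((T, C * n_lags))          # lagged design matrix
    for k in range(n_lags):
        X[k:, k * C:(k + 1) * C] = R[:T - k]
    # Ridge solution: g = (X'X + lam*I)^-1 X's
    g = np.linalg.solve(X.T @ X + lam * np.eye(C * n_lags), X.T @ s)
    return g, X

rng = np.random.default_rng(1)
T, C = 2000, 8
s = np.convolve(rng.standard_normal(T), np.ones(20) / 20, mode="same")
W = rng.standard_normal(C)                 # per-channel weights (toy forward model)
R = s[:, None] * W + 0.1 * rng.standard_normal((T, C))  # synthetic "MEG" data
g, X = fit_decoder(R, s)
s_hat = X @ g
fidelity = np.corrcoef(s, s_hat)[0, 1]     # reconstruction fidelity
```

Comparing such fidelities for attended versus ignored streams is the kind of contrast the study uses to characterize representations at different cortical stages.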
Yang, Ming-Tao; Hsu, Chun-Hsien; Yeh, Pei-Wen; Lee, Wang-Tso; Liang, Jao-Shwann; Fu, Wen-Mei; Lee, Chia-Ying
Inattention (IA) has been a major problem in children with attention deficit/hyperactivity disorder (ADHD), accounting for their behavioral and cognitive dysfunctions. However, there are at least three processing steps underlying attentional control for auditory change detection, namely pre-attentive change detection, involuntary attention orienting, and attention reorienting for further evaluation. This study aimed to examine whether children with ADHD would show deficits in any of these subcomponents by using mismatch negativity (MMN), P3a, and late discriminative negativity (LDN) as event-related potential (ERP) markers, under the passive auditory oddball paradigm. Two types of stimuli, pure tones and Mandarin lexical tones, were used to examine whether the deficits were general across linguistic and non-linguistic domains. Participants included 15 native Mandarin-speaking children with ADHD and 16 age-matched controls (across groups, age ranged between 6 and 15 years). Two passive auditory oddball paradigms (lexical tones and pure tones) were applied. The pure tone oddball paradigm included a standard stimulus (1000 Hz, 80%) and two deviant stimuli (1015 and 1090 Hz, 10% each). The Mandarin lexical tone oddball paradigm's standard stimulus was /yi3/ (80%) and its two deviant stimuli were /yi1/ and /yi2/ (10% each). The results showed no MMN difference, but did show attenuated P3a and enhanced LDN to the large deviants for both pure and lexical tone changes in the ADHD group. Correlation analysis showed that children with higher ADHD tendency, as indexed by parents' and teachers' ratings on ADHD symptoms, showed less positive P3a amplitudes when responding to large lexical tone deviants. Thus, children with ADHD showed impaired auditory change detection for both pure tones and lexical tones in both involuntary attention switching and attention reorienting for further evaluation. These ERP markers may therefore be used for the evaluation of anti-ADHD drugs that aim to…
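As a concrete illustration of the pure-tone paradigm above (1000 Hz standard at 80%, 1015 and 1090 Hz deviants at 10% each), an oddball trial sequence could be drawn like this; the trial count and seeding are our assumptions:

```python
import numpy as np

def oddball_sequence(n_trials=1000, seed=0):
    """Draw a passive oddball tone sequence: 1000 Hz standard (80%),
    1015 Hz and 1090 Hz deviants (10% each)."""
    rng = np.random.default_rng(seed)
    freqs = [1000.0, 1015.0, 1090.0]
    return rng.choice(freqs, size=n_trials, p=[0.8, 0.1, 0.1])

seq = oddball_sequence()
# Empirical presentation rates should approximate the design probabilities.
rates = {f: float(np.mean(seq == f)) for f in (1000.0, 1015.0, 1090.0)}
```

Real paradigms usually add constraints (e.g. a minimum number of standards between deviants); those are omitted here for brevity.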
Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D
Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias, however the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Steinmann, Tobias P.; Andrew, Colin M.; Thomsen, Carsten E.
Abstract—In this study event-related potentials (ERPs) were used to investigate the effects of prenatal alcohol exposure on response inhibition identified during task performance. ERPs were recorded during an auditory Go/No-Go task in two groups of children with a mean age of 12:8 years (11 years to 14:7 years): one diagnosed with fetal alcohol syndrome (FAS) or partial FAS (FAS/PFAS; n = 12) and a control group of children of the same age whose mothers abstained from alcohol or drank minimally during pregnancy (n = 11). The children were instructed to push a button in response to the Go stimulus … trials, suggesting a less efficient early classification of the stimulus. P3 showed larger amplitudes to No-Go vs. Go in both groups. The study has provided new evidence for inhibition deficits in FAS/PFAS subjects identified by ERPs.
Katz, Phyllis A.
The most significant finding is that stimulus-predifferentiation training elicited lower prejudice scores for children on two indices of ethnic attitudes than did a no-label control condition. (Author)
Schwarz, D W F; Taylor, P
Binaural beat sensations depend upon a central combination of two different temporally encoded tones, separately presented to the two ears. We tested the feasibility of recording an auditory steady state evoked response (ASSR) at the binaural beat frequency in order to find a measure for temporal coding of sound in the human EEG. We stimulated each ear with a distinct tone, the two differing in frequency by 40 Hz, to record a binaural beat ASSR. As control, we evoked a beat ASSR in response to both tones in the same ear. We band-pass filtered the EEG at 40 Hz, averaged with respect to stimulus onset and compared ASSR amplitudes and phases, extracted from a sinusoidal non-linear regression fit to a 40 Hz period average. A 40 Hz binaural beat ASSR was evoked at a low mean stimulus frequency (400 Hz) but became undetectable beyond 3 kHz. Its amplitude was smaller than that of the acoustic beat ASSR, which was evoked at low and high frequencies. Both ASSR types had maxima at fronto-central leads and displayed a fronto-occipital phase delay of several ms. The dependence of the 40 Hz binaural beat ASSR on stimuli at low, temporally coded tone frequencies suggests that it may objectively assess temporal sound coding ability. The phase shift across the electrode array is evidence for more than one origin of the 40 Hz oscillations. The binaural beat ASSR is an evoked response, with novel diagnostic potential, to a signal that is not present in the stimulus, but generated within the brain.
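The amplitude and phase extraction described, a sinusoidal regression fit to a 40 Hz period average, can be sketched as a two-parameter least-squares fit. The synthetic 40 Hz signal below stands in for filtered EEG and is an assumption for illustration:

```python
import numpy as np

def fit_assr(period_avg, fs, f=40.0):
    """Least-squares fit of a*cos(2*pi*f*t) + b*sin(2*pi*f*t) to one
    averaged stimulus period; returns amplitude and phase of the
    equivalent A*cos(2*pi*f*t + phase)."""
    t = np.arange(len(period_avg)) / fs
    X = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t)])
    a, b = np.linalg.lstsq(X, period_avg, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(-b, a)

fs = 1000.0
t = np.arange(1000) / fs                   # 1 s of synthetic "filtered EEG"
rng = np.random.default_rng(0)
eeg = 2.0 * np.cos(2 * np.pi * 40 * t + 0.5) + rng.standard_normal(t.size)
# One 40 Hz cycle spans fs/40 = 25 samples; average across the 40 cycles.
period_avg = eeg.reshape(40, 25).mean(axis=0)
amp, ph = fit_assr(period_avg, fs)         # recovers amplitude ~2, phase ~0.5 rad
```

Because the fit is linear in `a` and `b`, amplitude and phase come directly from the coefficient estimates without iterative optimization.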
Lakatos, Peter; Musacchia, Gabriella; O’Connell, Monica N.; Falchier, Arnaud Y.; Javitt, Daniel C.; Schroeder, Charles E.
SUMMARY While we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, while the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli. PMID:23439126
Roberts Larry E
Abstract Background Under natural circumstances, attention plays an important role in extracting relevant auditory signals from simultaneously present, irrelevant noises. Excitatory and inhibitory neural activity, enhanced by attentional processes, seems to sharpen frequency tuning, contributing to improved auditory performance especially in noisy environments. In the present study, we investigated auditory magnetic fields in humans that were evoked by pure tones embedded in band-eliminated noises during two different stimulus sequencing conditions (constant vs. random) under auditory focused attention, by means of magnetoencephalography (MEG). Results In total, we used identical auditory stimuli between conditions, but presented them in a different order, thereby manipulating the neural processing and the auditory performance of the listeners. Constant stimulus sequencing blocks were characterized by the simultaneous presentation of pure tones of identical frequency with band-eliminated noises, whereas random sequencing blocks were characterized by the simultaneous presentation of pure tones of random frequencies and band-eliminated noises. We demonstrated that auditory evoked neural responses were larger in the constant sequencing compared to the random sequencing condition, particularly when the simultaneously presented noises contained narrow stop-bands. Conclusion The present study confirmed that population-level frequency tuning in human auditory cortex can be sharpened in a frequency-specific manner. This frequency-specific sharpening may contribute to improved auditory performance during detection and processing of relevant sound inputs characterized by specific frequency distributions in noisy environments.
Sharma, Vishnu; McCreery, Douglas B; Han, Martin; Pikov, Victor
We present a versatile multifunctional programmable controller with bidirectional data telemetry, implemented using existing commercial microchips and the standard Bluetooth protocol, which adds convenience, reliability, and ease-of-use to neuroprosthetic devices. The controller, weighing 190 g, is placed on the animal's back and provides a sustained bidirectional telemetry rate of 500 kb/s, allowing real-time control of stimulation parameters and viewing of acquired data. In the continuously active state, the controller consumes approximately 420 mW and operates without recharge for 8 h. It features independent 16-channel current-controlled stimulation, allowing current steering; customizable stimulus current waveforms; and recording of stimulus voltage waveforms and evoked neuronal responses with stimulus artifact blanking circuitry. The flexibility, scalability, cost-efficiency, and user-friendly computer interface of this device allow its use in animal testing for a variety of neuroprosthetic applications. Initial testing of the controller has been done in a feline model of a brainstem auditory prosthesis. In this model, electrical stimulation is applied to an array of microelectrodes implanted in the ventral cochlear nucleus, while the evoked neuronal activity is recorded with an electrode implanted in the contralateral inferior colliculus. Stimulus voltage waveforms to monitor the access impedance of the electrodes were acquired at a rate of 312 kilosamples/s. Evoked neuronal activity in the inferior colliculus was recorded after the blanking (transient silencing) of the recording amplifier during the stimulus pulse, allowing the detection of neuronal responses within 100 μs after the end of the stimulus pulse applied in the cochlear nucleus.
Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile-feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal-feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially coherent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.
Andreas L. Schulz
Goal-directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task-relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they are still poorly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttlebox to discriminate between falling and rising frequency-modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed.
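Field-field coherence of the kind analyzed above can be sketched as magnitude-squared coherence computed Welch-style over segments; the shared 8 Hz component injected into both synthetic channels below is our stand-in for task-related coupling, not the authors' data:

```python
import numpy as np

def msc(x, y, fs, nperseg=256):
    """Magnitude-squared coherence |Sxy|^2 / (Sxx * Syy), with cross- and
    auto-spectra averaged over Hann-windowed segments."""
    n = (len(x) // nperseg) * nperseg
    xs = x[:n].reshape(-1, nperseg)
    ys = y[:n].reshape(-1, nperseg)
    w = np.hanning(nperseg)
    X = np.fft.rfft(xs * w, axis=1)
    Y = np.fft.rfft(ys * w, axis=1)
    Sxy = (X * np.conj(Y)).mean(axis=0)
    Sxx = (np.abs(X) ** 2).mean(axis=0)
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    f = np.fft.rfftfreq(nperseg, d=1 / fs)
    return f, np.abs(Sxy) ** 2 / (Sxx * Syy)

fs = 256.0
rng = np.random.default_rng(2)
t = np.arange(60 * int(fs)) / fs
shared = np.sin(2 * np.pi * 8 * t)                 # common 8 Hz drive
x = shared + rng.standard_normal(t.size)           # e.g. "auditory cortex" LFP
y = 0.5 * shared + rng.standard_normal(t.size)     # e.g. "ventral striatum" LFP
f, C = msc(x, y, fs)
coh_8hz = C[np.argmin(np.abs(f - 8.0))]            # high coherence at 8 Hz only
```

Averaging over many segments is essential: single-segment coherence is identically 1, so the quantity is only meaningful as a segment-averaged estimate.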
Morrill, Ryan J; Hasenstaub, Andrea R
The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information is integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors.
Kim, Soo Ji; Kwak, Eunmi E; Park, Eun Sook; Cho, Sung-Rae
To investigate the effects of rhythmic auditory stimulation (RAS) on gait patterns in comparison with changes after neurodevelopmental treatment (NDT/Bobath) in adults with cerebral palsy. A repeated-measures analysis between the pretreatment and posttreatment tests and a comparison study between groups. Human gait analysis laboratory. Twenty-eight cerebral palsy patients with bilateral spasticity participated in this study. The subjects were randomly allocated to either neurodevelopmental treatment (n = 13) or rhythmic auditory stimulation (n = 15). Gait training with rhythmic auditory stimulation or neurodevelopmental treatment was performed three sessions per week for three weeks. Temporal and kinematic data were analysed before and after the intervention. Rhythmic auditory stimulation was provided using a combination of a metronome beat set to the individual's cadence and rhythmic cueing from a live keyboard, while neurodevelopmental treatment was implemented following the traditional method. Temporal data, kinematic parameters and the gait deviation index, as a measure of overall gait pathology, were assessed. Temporal gait measures revealed that rhythmic auditory stimulation significantly increased cadence, walking velocity, stride length, and step length, whereas kinematic analysis showed that the group receiving rhythmic auditory stimulation had aggravated maximal internal rotation in the transverse plane. In conclusion, rhythmic auditory stimulation and neurodevelopmental treatment elicited differential effects on gait patterns in adults with cerebral palsy.
Nielsen, Lars Bramsløw
An auditory model based on the psychophysics of hearing has been developed and tested. The model simulates the normal ear or an impaired ear with a given hearing loss. Based on reviews of the current literature, the frequency selectivity and loudness growth as functions of threshold and stimulus level have been found and implemented in the model. The auditory model was verified against selected results from the literature, and it was confirmed that the normal spread of masking and loudness growth could be simulated in the model. The effects of hearing loss on these parameters were also in qualitative agreement with recent findings. The temporal properties of the ear have currently not been included in the model. As an example of a real-world application of the model, loudness spectrograms for a speech utterance were presented. By introducing hearing loss, the speech sounds became less audible…
Kouni, Sophia N; Giannopoulos, Sotirios; Ziavra, Nausika; Koutsojannis, Constantinos
Acoustic signals are transmitted through the external and middle ear mechanically to the cochlea, where they are transduced into electrical impulses for further transmission via the auditory nerve. The auditory nerve encodes the acoustic sounds that are conveyed to the auditory brainstem. Multiple brainstem nuclei, the cochlea, the midbrain, the thalamus, and the cortex constitute the central auditory system. In clinical practice, auditory brainstem responses (ABRs) to simple stimuli such as clicks or tones are widely used. Recently, complex stimuli and the resulting complex auditory brainstem responses (cABRs), such as responses to monosyllabic speech stimuli and music, have been used as a tool to study the brainstem processing of speech sounds. We used the classic 'click' as well as, for the first time, the artificial successive complex stimulus 'ba', which, repeated, constitutes the Greek word 'baba', corresponding to the English 'daddy'. Twenty young adults institutionally diagnosed as dyslexic (10 subjects) or mildly dyslexic (10 subjects) comprised the dyslexic group. Twenty sex-, age-, education-, hearing sensitivity-, and IQ-matched normal subjects comprised the control group. Measurements included the absolute latencies of waves I through V and the interpeak latencies elicited by the classical acoustic click, as well as the negative peak latencies of the A and C waves and the A-C interpeak latencies elicited by the verbal stimulus 'baba', created on a digital speech synthesizer. The absolute peak latencies of waves I, III, and V in response to monaural rarefaction clicks, as well as the interpeak latencies I-III, III-V, and I-V, although increased in the dyslexic subjects in comparison with normal subjects, did not reach the level of a significant difference. The negative peak latencies of wave C and the A-C interpeak latencies elicited by verbal stimuli were found to be increased in the dyslexic group in comparison with the control group (p=0.0004 and p=0.045, respectively). In the subgroup consisting of 10 patients suffering from…
Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T
In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of how systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair affect behavioral performance. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and a simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when they occurred in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence, and the interaction between these two factors, and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.
Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku
The auditory illusory perception "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. Resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears separated in higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed have not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced scale illusion (ILL) and those that mimicked the percept of scale illusion (PCP), and we compared the activation responses evoked by those stimuli by region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation by the illusion process. Results of the activation difference of the two stimuli, measured at varied tempi of tone presentation, in the superior temporal auditory cortex were not explained by adaptation. Instead, excess activation of the ILL stimulus from the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in the illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation. Copyright © 2016 Elsevier B.V. All rights reserved.
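The dichotic construction behind the scale illusion can be sketched directly from the description above: ascending and descending scales sound simultaneously, and at each instant the higher tone is routed alternately to the right and left ear, leaving a zigzag sequence in one ear and its mirror in the other. A minimal sketch; the C-major MIDI note numbers and the exact alternation rule are illustrative assumptions, not the study's stimulus set:

```python
# Ascending and descending C-major scales sounded simultaneously (MIDI numbers).
ascending = [60, 62, 64, 65, 67, 69, 71, 72]
descending = [72, 71, 69, 67, 65, 64, 62, 60]

right, left = [], []
for i, (up, down) in enumerate(zip(ascending, descending)):
    hi, lo = max(up, down), min(up, down)
    # The higher tone at each instant goes alternately to the right and left ear.
    if i % 2 == 0:
        right.append(hi)
        left.append(lo)
    else:
        right.append(lo)
        left.append(hi)

print(right)  # zigzag pitch sequence in one ear
print(left)   # reversed ("zagzig") sequence in the other ear
```

Listeners typically do not hear these zigzag sequences; instead they perceptually regroup the tones into smooth up-down and down-up streams, which is the illusion the ILL stimulus induces.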
Li, Qi; Yang, Huamin; Sun, Fang; Wu, Jinglong
Sensory information is multimodal; through audiovisual interaction, task-irrelevant auditory stimuli tend to speed response times and increase visual perception accuracy. However, the mechanisms underlying these performance enhancements have remained unclear. We hypothesized that task-irrelevant auditory stimuli might provide reliable temporal and spatial cues for visual target discrimination and behavioral response enhancement. Using signal detection theory, the present study investigated the effects of spatiotemporal relationships on auditory facilitation of visual target discrimination. Three experiments were conducted in which an auditory stimulus maintained reliable temporal and/or spatial relationships with visual target stimuli. Results showed that perceptual sensitivity (d') to visual target stimuli was enhanced only when a task-irrelevant auditory stimulus maintained reliable spatiotemporal relationships with a visual target stimulus. When only reliable spatial or only reliable temporal information was provided, perceptual sensitivity was not enhanced. These results suggest that reliable spatiotemporal relationships between visual and auditory signals are required for audiovisual integration during a visual discrimination task, most likely due to a spread of attention. They also indicate that auditory facilitation of visual target discrimination follows from late-stage cognitive processes rather than early-stage sensory processes. © 2015 SAGE Publications.
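The sensitivity measure d' used in this study comes from signal detection theory: d' = z(hit rate) − z(false-alarm rate), where z is the inverse of the standard normal CDF. A minimal sketch; the function name and the example rates are illustrative, not the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative rates only: more hits at the same false-alarm rate -> higher d'.
print(round(d_prime(0.85, 0.20), 2))
```

Because d' separates sensitivity from response bias, an increase in d' (as reported for spatiotemporally reliable sounds) reflects genuinely better discrimination rather than a shifted decision criterion.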
Yuasa, Kenichi; Yotsumoto, Yuko
When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval times of visually and aurally presented objects share a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations for auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects cancelled each other out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.
de Cheveigné, Alain; Wong, Daniel D E; Di Liberto, Giovanni M
The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) … higher classification scores. CCA (canonical correlation analysis) strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response.
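The core of CCA is finding paired linear projections of the stimulus representation and the brain response that are maximally correlated. A NumPy-only sketch under stated assumptions: the QR-then-SVD route is one standard way to compute canonical correlations, and the toy stimulus/response data are fabricated for illustration (a 4-feature "stimulus" linearly mixed into 8 noisy "EEG" channels):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between column-centered X and Y:
    the singular values of Qx.T @ Qy, where Qx, Qy come from QR
    decompositions of the centered data matrices."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)

# Toy data: 4 stimulus features mixed into 8 response channels plus noise.
rng = np.random.default_rng(0)
stim = rng.standard_normal((500, 4))
resp = stim @ rng.standard_normal((4, 8)) + 0.5 * rng.standard_normal((500, 8))

r = canonical_correlations(stim, resp)
print(r[0])  # first canonical correlation: high, since shared variance dominates
```

The first canonical pair captures the stimulus-driven variance; directions with low canonical correlation correspond to the "stripped" variance the abstract describes.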
Fernández, Thalía; Bosch-Bayard, Jorge; Harmony, Thalía; Caballero, María I; Díaz-Comas, Lourdes; Galán, Lídice; Ricardo-Garcell, Josefina; Aubert, Eduardo; Otero-Ojeda, Gloria
Children with learning disabilities (LD) frequently have an EEG characterized by an excess of theta and a deficit of alpha activity. Neurofeedback (NFB) using an auditory stimulus as reinforcer has proven to be a useful tool to treat LD children by positively reinforcing decreases of the theta/alpha ratio. The aim of the present study was to optimize the NFB procedure by comparing the efficacy of visual (with eyes open) versus auditory (with eyes closed) reinforcers. Twenty LD children with an abnormally high theta/alpha ratio were randomly assigned to the Auditory or the Visual group, where a 500 Hz tone or a visual stimulus (a white square), respectively, was used as a positive reinforcer when the value of the theta/alpha ratio was reduced. Both groups showed signs consistent with EEG maturation, but only the Auditory group showed behavioral/cognitive improvements. In conclusion, the auditory reinforcer was more efficacious in reducing the theta/alpha ratio, and it improved cognitive abilities more than the visual reinforcer.
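The reinforcement rule described above hinges on the theta/alpha band-power ratio dropping below a criterion. A hedged sketch of that computation; the band edges (theta 4-8 Hz, alpha 8-12 Hz), epoch length, sampling rate, and threshold of 1.0 are my assumptions, not the study's protocol:

```python
import numpy as np

def theta_alpha_ratio(eeg: np.ndarray, fs: float) -> float:
    """Theta (4-8 Hz) over alpha (8-12 Hz) band power for one EEG epoch."""
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg - eeg.mean())) ** 2
    theta = power[(freqs >= 4) & (freqs < 8)].sum()
    alpha = power[(freqs >= 8) & (freqs < 12)].sum()
    return theta / alpha

# Synthetic 2-s epoch: strong 10 Hz alpha plus weak 6 Hz theta -> ratio < 1,
# so the (hypothetical) reinforcer tone or white square would be delivered.
fs = 250.0
t = np.arange(0, 2, 1 / fs)
epoch = 0.3 * np.sin(2 * np.pi * 6 * t) + 1.0 * np.sin(2 * np.pi * 10 * t)
reinforce = theta_alpha_ratio(epoch, fs) < 1.0
print(reinforce)
```

In a real NFB loop this check would run continuously on short sliding epochs, delivering the reinforcer whenever the ratio falls below the child's criterion.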
Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude
Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on (1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and (2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory-guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention-to-memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
Auditory cohesion problems: this is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels.
Fuhrman, Susan I; Redfern, Mark S; Jennings, J Richard; Furman, Joseph M
This study investigated whether spatial aspects of an information processing task influence dual-task interference. Two groups (older/young) of healthy adults participated in dual-task experiments. Two auditory information processing tasks were used: a frequency discrimination choice reaction time task (non-spatial task) and a lateralization choice reaction time task (spatial task). Postural tasks included combinations of standing with eyes open or eyes closed on either a fixed floor or a sway-referenced floor. Reaction times and postural sway via center of pressure were recorded. Baseline measures of reaction time and sway were subtracted from the corresponding dual-task results to calculate reaction time task costs and postural task costs. Reaction time task cost increased with eye closure (p = 0.01) and with sway-referenced flooring (p < …). A vision × age interaction indicated that older subjects had a significant vision × task interaction whereas young subjects did not. However, when analyzed by age group, the young group showed minimal differences in interference between the spatial and non-spatial tasks with eyes open, but showed increased interference on the spatial relative to the non-spatial task with eyes closed. In contrast, older subjects demonstrated increased interference on the spatial relative to the non-spatial task with eyes open, but not with eyes closed. These findings suggest that visual-spatial interference may occur in older subjects when vision is used to maintain posture.
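The task-cost measure described above is a simple baseline subtraction: performance under the dual task minus performance on the same task alone. A sketch with made-up numbers (the study reports costs, not these values):

```python
def dual_task_cost(dual: float, baseline: float) -> float:
    """Dual-task interference: dual-task score minus single-task baseline."""
    return dual - baseline

# Hypothetical values for illustration only.
rt_cost = dual_task_cost(520.0, 450.0)    # choice reaction time, ms
sway_cost = dual_task_cost(12.4, 11.1)    # center-of-pressure path length, cm
print(rt_cost, sway_cost)
```

Subtracting the baseline isolates the interference attributable to performing the two tasks concurrently, so costs can be compared across conditions that differ in baseline difficulty.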
Davison, Michael; Baum, William M.
Four pigeons were trained in a procedure in which concurrent-schedule food ratios changed unpredictably across seven unsignaled components after 10 food deliveries. Additional green-key stimulus presentations also occurred on the two alternatives, sometimes in the same ratio as the component food ratio, and sometimes in the inverse ratio. In eight…
Zapata Rodriguez, Valentina; Laugesen, Søren; Jeong, Cheol-Ho
…is considered. Instead of using insert earphones to deliver the stimuli, as is customary, the auditory signals are reproduced from a loudspeaker placed in front of the subject, so as to include the hearing aid in the transmission path. Loudspeaker presentation of the stimulus can lower its effective modulation … properties of the measurement room has not been considered. The present work explores the relation between the stimulus modulation power and the auditory steady-state response (ASSR) amplitude in a simulated sound-field ASSR data set with varying reverberation time. Three rooms were simulated using the Green's function approach…
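The stimulus at issue is an amplitude-modulated tone, and the quantity the room degrades is its modulation at the listener's ear: reverberation smears the envelope, lowering the effective modulation power that drives the ASSR. A sketch of the clean stimulus and its modulation depth; the carrier frequency, 40-Hz modulation rate, and full depth are illustrative assumptions, not the study's parameters:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                       # 1 s of signal
carrier, fm, depth = 1000.0, 40.0, 1.0       # a common 40-Hz ASSR-style stimulus

envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
stimulus = envelope * np.sin(2 * np.pi * carrier * t)

# Modulation depth recovered from the envelope: (max - min) / (max + min).
m = (envelope.max() - envelope.min()) / (envelope.max() + envelope.min())
print(round(m, 3))
```

After convolution with a room impulse response, the recovered depth m would fall below this anechoic value, which is the dependence on reverberation time that the simulated data set probes.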
Bigelow, James; Poremba, Amy
Proactive interference (PI) has traditionally been understood as an adverse consequence of stimulus repetition during memory tasks. Herein, we present data that emphasize costs as well as benefits of PI for monkeys performing an auditory delayed matching-to-sample (DMTS) task. The animals made same/different judgments for a variety of simple and complex sounds separated by a 5-s memory delay. Each session used a stimulus set that included eight sounds; thus, each sound was repeated multiple times per session for match trials and for nonmatch trials as the sample (Cue 1) or test (Cue 2) stimulus. For nonmatch trials, performance was substantially diminished when the test stimulus had been previously presented on a recent trial. However, when the sample stimulus had been recently presented, performance was significantly improved. We also observed a marginal performance benefit when stimuli for match trials had been recently presented. The costs of PI for nonmatch test stimuli were greater than the combined benefits of PI for nonmatch sample stimuli and match trials, indicating that the net influence of PI is detrimental. For all three manifestations of PI, the effects are shown to extend beyond the immediately subsequent trial. Our data suggest that PI in auditory DMTS is best understood as an enduring influence that can be both detrimental and beneficial to memory-task performance. © 2012 Wiley Periodicals, Inc.
Entrepreneurship training aims to prepare participants for entrepreneurship. This training is important because starting a business is not easy. Training is a way to instill an entrepreneurial mentality: the determination to start a business, to face risks, and to be tenacious. For the training to succeed, instructors, as its spearheads, must be skilled in conveying the material and even in inspiring the participants. Stimulus variation is one form of instructor skill. Stimulus variation keeps the learning process running well and makes the training fun, so that participants follow it comfortably and willingly. Training need not be a monotonous activity: the instructor can be an inspiration in the classroom, no longer just a transmitter of learning materials.
Elbert, Sarah Pietertje; Dijkstra, Arie; Oenema, Anke
Mobile phone apps are increasingly used to deliver health interventions, which provides the opportunity to present health information via different communication modes. However, scientific evidence regarding the effects of such health apps is scarce. In a randomized controlled trial, we tested the efficacy of a 6-month intervention delivered via a mobile phone app that communicated either textual or auditory tailored health information aimed at stimulating fruit and vegetable intake. A control condition in which no health information was given was added. Perceived own health and health literacy were included as moderators to assess for which groups the interventions could possibly lead to health behavior change. After downloading the mobile phone app, respondents were exposed monthly to either text-based or audio-based tailored health information and feedback over a period of 6 months via the mobile phone app. Respondents in the control condition only completed the baseline and posttest measures. Within a community sample (online recruitment), self-reported fruit and vegetable intake at 6-month follow-up was our primary outcome measure. In total, 146 respondents (ranging from 40 to 58 per condition) completed the study (attrition rate 55%). A significant main effect of condition was found on fruit intake (P=.049, partial η²=0.04). A higher fruit intake was found after exposure to the auditory information, especially in recipients with a poor perceived own health (P=.003, partial η²=0.08). In addition, health literacy moderated the effect of condition on vegetable intake 6 months later (P < …). … mobile health app. The app seems to have the potential to change fruit and vegetable intake up to 6 months later, at least for specific groups. We found different effects for fruit and vegetable intake, respectively, suggesting that different underlying psychological mechanisms are associated with these specific behaviors. Based on our results, it seems worthwhile…
In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment, sounds were presented consecutively at four speaker locations, inducing left- or rightward auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left- or rightward) more often as moving in the same direction than in the opposite direction of auditory apparent motion. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate, without affecting eye movements.
Objectives: Rehabilitation strategies play a pivotal role in relieving inappropriate behaviors and improving children's performance in school. Concentration and visual and auditory comprehension in children are crucial to effective learning and have drawn interest from researchers and clinicians. Vestibular function deficits usually cause high levels of alertness and vigilance, problems in maintaining focus and paying selective attention, and altered precision and attention to the stimulus. The aim of this study is to investigate the correlation between vestibular stimulation and auditory perception in children with attention deficit hyperactivity disorder. Methods: In total, 30 children aged 7 to 12 years with attention deficit hyperactivity disorder participated in this study. They were assessed based on the criteria of the Diagnostic and Statistical Manual of Mental Disorders. After obtaining guardian and parental consent, they were enrolled and randomly assigned, matched on age, to intervention and control groups. The Integrated Visual and Auditory Continuous Performance Test was carried out as a pre-test. Those in the intervention group received vestibular stimulation during the therapy sessions, twice a week for 10 weeks. At the end, the test was administered to both groups as a post-test. Results: The pre- and post-test scores were measured and the differences between the group means compared. Statistical analyses found a significant difference in the mean differences regarding auditory comprehension improvement. Discussion: The findings suggest that vestibular training is a reliable and powerful treatment option for attention deficit hyperactivity disorder, especially alongside other trainings, meaning that stimulating the sense of balance highlights the importance of the interaction between inhibition and cognition.
Schwent, V. L.; Hillyard, S. A.
Ten subjects were presented with random, rapid sequences of four auditory tones which were separated in pitch and apparent spatial position. The N1 component of the auditory vertex evoked potential (EP) measured relative to a baseline was observed to increase with attention. It was concluded that the N1 enhancement reflects a finely tuned selective attention to one stimulus channel among several concurrent, competing channels. This EP enhancement probably increases with increased information load on the subject.
Stekelenburg, Jeroen J; Keetels, Mirjam
The Colavita effect refers to the phenomenon that when confronted with an audiovisual stimulus, observers report more often to have perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we expect a larger Colavita effect for synesthetically congruent size/pitch combinations (large visual stimulus/low-pitched tone; small visual stimulus/high-pitched tone) than for synesthetically incongruent ones (large visual stimulus/high-pitched tone; small visual stimulus/low-pitched tone). Participants had to identify the stimulus type (visual, auditory or audiovisual). The study replicated the Colavita effect: participants reported the visual component of the audiovisual stimuli more often than the auditory one. Synesthetic congruency had, however, no effect on the magnitude of the Colavita effect. EEG recordings to congruent and incongruent audiovisual pairings showed a late frontal congruency effect at 400-550 ms and an occipitoparietal effect at 690-800 ms, with neural sources in the anterior cingulate and premotor cortex for the 400- to 550-ms window, and in the premotor cortex, inferior parietal lobule and posterior middle temporal gyrus for the 690- to 800-ms window. The electrophysiological data show that synesthetic congruency was probably detected in a processing stage subsequent to the Colavita effect. We conclude that, in a modality detection task, the Colavita effect can be modulated by low-level structural factors but not by higher-order associations between auditory and visual inputs.
Brown, Rachel M; Palmer, Caroline
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
Antonio-Santos, Aileen; Vedula, Satyanarayana S; Hatt, Sarah R; Powell, Christine
Stimulus deprivation amblyopia (SDA) develops due to an obstruction to the passage of light secondary to a condition such as cataract. The obstruction prevents formation of a clear image on the retina. SDA can be resistant to treatment, leading to poor visual prognosis. SDA probably constitutes less than 3% of all amblyopia cases, although precise estimates of prevalence are unknown. In developed countries, most patients present under the age of one year; in less developed parts of the world patients are likely to be older at the time of presentation. The mainstay of treatment is removal of the cataract and then occlusion of the better-seeing eye, but regimens vary, can be difficult to execute, and traditionally are believed to lead to disappointing results. Our objective was to evaluate the effectiveness of occlusion therapy for SDA in an attempt to establish realistic treatment outcomes. Where data were available, we also planned to examine evidence of any dose response effect and to assess the effect of the duration, severity, and causative factor on the size and direction of the treatment effect. We searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (The Cochrane Library 2013, Issue 9), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to October 2013), EMBASE (January 1980 to October 2013), the Latin American and Caribbean Literature on Health Sciences (LILACS) (January 1982 to October 2013), PubMed (January 1946 to October 2013), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrials.gov) and the WHO International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 28 October 2013. We planned to include randomized and quasi-randomized controlled trials.
Chemtob, C M; Roitblat, H L; Hamada, R S; Carlson, J G; Muraoka, M Y; Bauer, G B
We present word and picture stimuli constituting a validated stimulus set appropriate for cognitive investigations of posttraumatic stress disorder (PTSD). Combat related and neutral words and pictures were rated by Vietnam veterans with PTSD and by three comparison groups along four dimensions: unpleasantness, Vietnam relevance, stressfulness, and memorability. There were distinctive patterns of responses by the PTSD group which efficiently discriminated the individuals in this group from those in the control groups. These stimuli have the potential to be developed as a diagnostic instrument.
Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen
A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395
Chen, Xi; Guo, Yiping; Feng, Jingyu; Liao, Zhengli; Li, Xinjian; Wang, Haitao; Li, Xiao; He, Jufang
Damage to the medial temporal lobe impairs the encoding of new memories and the retrieval of memories acquired immediately before the damage in humans. In this study, we demonstrated that artificial visuoauditory memory traces can be established in the rat auditory cortex and that their encoding and retrieval depend on the entorhinal cortex of the medial temporal lobe. We trained rats to associate a visual stimulus with electrical stimulation of the auditory cortex using a classical conditioning protocol. After conditioning, we examined the associative memory traces electrophysiologically (i.e., visual stimulus-evoked responses of auditory cortical neurons) and behaviorally (i.e., visual stimulus-induced freezing and visual stimulus-guided reward retrieval). The establishment of a visuoauditory memory trace in the auditory cortex, which was detectable by electrophysiological recordings, was achieved over 20-30 conditioning trials and was blocked by unilateral, temporary inactivation of the entorhinal cortex. Retrieval of a previously established visuoauditory memory was also affected by unilateral entorhinal cortex inactivation. These findings suggest that the entorhinal cortex is necessary for the encoding, and involved in the retrieval, of artificial visuoauditory memory in the auditory cortex, at least during the early stages of memory consolidation.
Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond
Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.
Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.
Paul Wallace Anderson
Past research has shown that auditory distance estimation improves when listeners are given the opportunity to see all possible sound sources, compared to no visual input. It has also been established that distance estimation is more accurate in vision than in audition. The present study investigates the degree to which auditory distance estimation is improved when matched with a congruent visual stimulus. Virtual sound sources based on binaural room impulse response (BRIR) measurements made from distances ranging from approximately 0.3 to 9.8 m in a concert hall were used as auditory stimuli. Visual stimuli were photographs taken from the listener's perspective at each distance in the impulse response measurement setup, presented on a large HDTV monitor. Listeners were asked to estimate egocentric distance to the sound source in each of three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Each condition was presented within its own block. Sixty-two listeners were tested in order to quantify the response variability inherent in auditory distance perception. Distance estimates from both the V and A+V conditions were found to be considerably more accurate and less variable than estimates from the A condition.
Lanuza, E; Moncho-Bogani, J; Ledoux, J E
The lateral nucleus of the amygdala (LA) is a site of convergence for auditory (conditioned stimulus) and foot-shock (unconditioned stimulus) inputs during fear conditioning. The auditory pathways to LA are well characterized, but less is known about the pathways through which foot shock is transmitted. Anatomical tracing and physiological recording studies suggest that the posterior intralaminar thalamic nucleus, which projects to LA, receives both auditory and somatosensory inputs. In the present study we examined the expression of the immediate-early gene c-fos in the LA in rats in response to foot-shock stimulation. We then determined the effects of posterior intralaminar thalamic lesions on foot-shock-induced c-Fos expression in the LA. Foot-shock stimulation led to an increase in the density of c-Fos-positive cells in all LA subnuclei in comparison to controls exposed to the conditioning box but not shocked. However, some differences among the dorsolateral, ventrolateral and ventromedial subnuclei were observed. The ventrolateral subnucleus showed a homogeneous activation throughout its antero-posterior extension. In contrast, only the rostral aspect of the ventromedial subnucleus and the central aspect of the dorsolateral subnucleus showed a significant increment in c-Fos expression. The density of c-Fos-labeled cells in all LA subnuclei was also increased in animals placed in the box in comparison to untreated animals. Unilateral electrolytic lesions of the posterior intralaminar thalamic nucleus and the medial division of the medial geniculate body reduced foot-shock-induced c-Fos activation in the LA ipsilateral to the lesion. The number of c-Fos labeled cells on the lesioned side was reduced to the levels observed in the animals exposed only to the box. These results indicate that the LA is involved in processing information about the foot-shock unconditioned stimulus and receives this kind of somatosensory information from the posterior intralaminar thalamic nucleus.
Akhoun, Idrick; Moulin, Annie; Jeanvoine, Arnaud; Ménard, Mikael; Buret, François; Vollaire, Christian; Scorretti, Riccardo; Veuillet, Evelyne; Berger-Vachon, Christian; Collet, Lionel; Thai-Van, Hung
Speech-elicited auditory brainstem responses (Speech ABR) have been shown to be an objective measurement of speech processing in the brainstem. Given the simultaneous stimulation and recording, and the similarities between the recording and the speech stimulus envelope, there is a great risk of artefactual recordings. This study sought to systematically investigate the source of artefactual contamination in Speech ABR responses. In a first part, we measured the sound level thresholds over which artefactual responses were obtained, for different types of transducers and experimental setup parameters. A watermelon model was used to model the human head's susceptibility to electromagnetic artefact. It was found that impedances between the electrodes had a great effect on electromagnetic susceptibility and that the most prominent artefact is due to the transducer's electromagnetic leakage. The only artefact-free condition was obtained with insert earphones shielded in a Faraday cage linked to common ground. In a second part of the study, using the previously defined artefact-free condition, we recorded Speech ABR in unilaterally deaf subjects and bilaterally normal-hearing subjects. In an additional control condition, Speech ABR was recorded with the insert earphones used to deliver the stimulation unplugged from the ears, so that the subjects did not perceive the stimulus. No responses were obtained from the deaf ear of unilaterally hearing-impaired subjects, nor in the insert-out-of-the-ear condition in any of the subjects, showing that Speech ABR reflects the functioning of the auditory pathways.
DiMattina, Christopher; Zhang, Kechen
In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system...
Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.
Various studies have highlighted plasticity of the auditory system driven by visual stimuli, which restricts training to the visual field of perception. The aim of the present study is to investigate auditory system adaptation using an audio-kinesthetic platform. Participants were placed in a Virtual Auditory Environment allowing the association of the physical position of a virtual sound source with an alternate set of acoustic spectral cues, or Head-Related Transfer Functions (HRTFs), through the use of a tracked ball manipulated by the subject. This set-up has the advantage of not being limited to the visual field, while also offering a natural perception-action coupling through the constant awareness of one's hand position. Adaptation to non-individualized HRTFs was carried out through a spatial search game application. A total of 25 subjects participated: subjects presented with modified cues using non-individualized HRTFs, and a control group using individually measured HRTFs to account for any learning effect due to the game itself. The training game lasted 12 minutes and was repeated over 3 consecutive days. Adaptation effects were measured with repeated localization tests. Results showed a significant performance improvement for vertical localization and a significant reduction in the front/back confusion rate after 3 sessions.
Ali Akbar Tahaei
Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and their abnormal timing may underlie their disfluency.
Heather L Chapin
The aim of this study was to explore the role of attention in pulse and meter perception using complex rhythms. We used a selective attention paradigm in which participants attended to either a complex auditory rhythm or a visually presented word list. Performance on a reproduction task was used to gauge whether participants were attending to the appropriate stimulus. We hypothesized that attention to complex rhythms – which contain no energy at the pulse frequency – would lead to activations in motor areas involved in pulse perception. Moreover, because multiple repetitions of a complex rhythm are needed to perceive a pulse, activations in pulse-related areas would be seen only after sufficient time had elapsed for pulse perception to develop. Selective attention was also expected to modulate activity in sensory areas specific to the modality. We found that selective attention to rhythms led to increased BOLD responses in the basal ganglia, and basal ganglia activity was observed only after the rhythms had cycled enough times for a stable pulse percept to develop. These observations suggest that attention is needed to recruit motor activations associated with the perception of pulse in complex rhythms. Moreover, attention to the auditory stimulus enhanced activity in an attentional sensory network including primary auditory cortex, insula, anterior cingulate, and prefrontal cortex, and suppressed activity in sensory areas associated with attending to the visual stimulus.
Lim, Sung-Joo; Wöstmann, Malte; Obleser, Jonas
Selective attention to a task-relevant stimulus facilitates encoding of that stimulus into a working memory representation. It is less clear whether selective attention also improves the precision of a stimulus already represented in memory. Here, we investigate the behavioral and neural dynamics of selective attention to representations in auditory working memory (i.e., auditory objects) using psychophysical modeling and model-based analysis of electroencephalographic signals. Human listeners performed a syllable pitch discrimination task where two syllables served as to-be-encoded auditory objects. Valid (vs neutral) retroactive cues were presented during retention to allow listeners to selectively attend to the to-be-probed auditory object in memory. Behaviorally, listeners represented auditory objects in memory more precisely (expressed by steeper slopes of a psychometric curve) and made faster perceptual decisions when valid compared to neutral retrocues were presented. Neurally, valid compared to neutral retrocues elicited a larger frontocentral sustained negativity in the evoked potential as well as enhanced parietal alpha/low-beta oscillatory power (9-18 Hz) during memory retention. Critically, individual magnitudes of alpha oscillatory power (7-11 Hz) modulation predicted the degree to which valid retrocues benefitted individuals' behavior. Our results indicate that selective attention to a specific object in auditory memory does benefit human performance not by simply reducing memory load, but by actively engaging complementary neural resources to sharpen the precision of the task-relevant object in memory. Can selective attention improve the representational precision with which objects are held in memory? And if so, what are the neural mechanisms that support such improvement? These issues have been rarely examined within the auditory modality, in which acoustic signals change and vanish on a milliseconds time scale. Introducing a new auditory memory
Martins, H R; Romao, M; Placido, D; Provenzano, F; Tierra-Criollo, C J
Technological improvements benefit many medical areas. Audiometric exams involving auditory evoked potentials can improve the diagnosis of auditory disorders. This paper proposes the development of a stimulator based on a digital signal processor. This stimulator is the first step of an auditory evoked potential system based on the ADSP-BF533 EZ KIT LITE (Analog Devices Company - USA). The stimulator can generate arbitrary waveforms such as sine waves, amplitude-modulated tones, pulses, bursts, and pips. The waveforms are generated through a graphical interface programmed in C++, in which the user can define the parameters of the waveform. Furthermore, the user can set the exam parameters, such as the number of stimuli, time with stimulation (Time ON), and time without stimulus (Time OFF). Future work will implement the remaining parts of the system, including electroencephalogram acquisition and signal processing to estimate and analyze the evoked potential.
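The waveform families such a stimulator generates can be sketched in a few lines. This is an illustrative sketch only; the parameter names and sampling rate are assumptions, not the authors' C++ interface:

```python
import numpy as np

FS = 48_000  # assumed sampling rate (Hz)

def sine(freq, dur):
    """Pure sine tone of the given frequency (Hz) and duration (s)."""
    t = np.arange(int(FS * dur)) / FS
    return np.sin(2 * np.pi * freq * t)

def am_tone(carrier, mod, dur, depth=1.0):
    """Amplitude-modulated tone: carrier multiplied by a 0..1 modulation envelope."""
    t = np.arange(int(FS * dur)) / FS
    env = (1 + depth * np.sin(2 * np.pi * mod * t)) / 2
    return env * np.sin(2 * np.pi * carrier * t)

def pip(freq, dur, rise=0.002):
    """Short tone pip with linear rise/fall ramps to avoid spectral splatter."""
    s = sine(freq, dur)
    n = int(FS * rise)
    ramp = np.linspace(0, 1, n)
    s[:n] *= ramp
    s[-n:] *= ramp[::-1]
    return s

stim = pip(1000, 0.01)  # 10 ms, 1 kHz pip with 2 ms ramps
print(stim.size, round(abs(stim).max(), 2))
```

A stimulation run would then alternate Time ON blocks of such waveforms with Time OFF silences at the exam parameters the user sets.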
Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju
It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity.
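The described translation of the tonal receptive field can be illustrated with a toy model. All functions and numbers below are assumptions for illustration, not the recorded data: a V-shaped TRF whose threshold at every frequency rises by the noise level above a critical value:

```python
import numpy as np

def trf_threshold(freq_khz, cf=8.0, base=20.0, slope=15.0):
    """Threshold (dB SPL) at each frequency for a V-shaped TRF centred on CF."""
    return base + slope * np.abs(np.log2(freq_khz / cf))

def noisy_threshold(freq_khz, noise_db, critical_db=30.0):
    """Above the critical noise level, the whole TRF translates upward linearly."""
    return trf_threshold(freq_khz) + max(0.0, noise_db - critical_db)

freqs = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
quiet = trf_threshold(freqs)
in_noise = noisy_threshold(freqs, noise_db=50.0)

# CF (the TRF minimum) is preserved, but fewer frequencies respond to a
# fixed-level test tone, i.e. the frequency responding range shrinks.
print(freqs[np.argmin(quiet)], freqs[np.argmin(in_noise)])
print((quiet <= 60).sum(), (in_noise <= 60).sum())
```

This reproduces the abstract's qualitative point: the shift preserves CF and TRF shape while narrowing the responding range at a given stimulus intensity.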
Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
Sheinin, Anton; Lavi, Ayal; Michaelevski, Izhak
An electrical stimulus isolator is a widely used device in electrophysiology. The timing of stimulus application is usually automated and controlled by an external device or acquisition software; however, the intensity of the stimulus is adjusted manually. Inaccuracy, lack of reproducibility, and lack of automation of the experimental protocol are disadvantages of manual adjustment. To overcome these shortcomings, we developed StimDuino, an inexpensive Arduino-controlled stimulus isolator allowing highly accurate, reproducible, automated setting of the stimulation current. The intensity of the stimulation current delivered by StimDuino is controlled by Arduino, an open-source microcontroller development platform. The automatic stimulation patterns are software-controlled and the parameters are set from a Matlab-coded, simple, intuitive, and user-friendly graphical user interface. The software also allows remote control of the device over the network. Electrical current measurements showed that StimDuino produces the requested current output with high accuracy. In both hippocampal slice and in vivo recordings, fEPSP measurements obtained with StimDuino and commercial stimulus isolators showed high correlation. Commercial stimulus isolators are manually managed, whereas StimDuino generates automatic stimulation patterns with increasing current intensity. This pattern is utilized for input-output relationship analysis, necessary for the assessment of excitability. In contrast to StimDuino, not all commercial devices are capable of remote control of the parameters and stimulation process. StimDuino-generated automation of the input-output relationship assessment eliminates the need for manual adjustment of current intensity, improves stimulation reproducibility and accuracy, and allows on-site and remote control of the stimulation parameters.
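An input-output (I/O) protocol of the kind described, with stimuli delivered at automatically increasing current intensity, can be sketched as a simple parameter sweep. The function name, current range, and repetition count below are illustrative assumptions, not StimDuino's actual interface:

```python
def io_protocol(i_min_ua=10, i_max_ua=100, step_ua=10, pulses_per_step=3):
    """Yield (intensity_uA, repetition) pairs for an ascending I/O sweep.

    Several pulses per intensity step allow the evoked fEPSP slope to be
    averaged at each current level when building the input-output curve.
    """
    for amp in range(i_min_ua, i_max_ua + 1, step_ua):
        for rep in range(pulses_per_step):
            yield amp, rep

protocol = list(io_protocol())
intensities = sorted({amp for amp, _ in protocol})
print(len(protocol), intensities[0], intensities[-1])
```

Plotting mean response amplitude against each intensity in such a sweep yields the I/O curve used to assess excitability.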
Strait, Dana L; Kraus, Nina; Parbery-Clark, Alexandra; Ashley, Richard
A growing body of research suggests that cognitive functions, such as attention and memory, drive perception by tuning sensory mechanisms to relevant acoustic features. Long-term musical experience also modulates lower-level auditory function, although the mechanisms by which this occurs remain uncertain. In order to tease apart the mechanisms that drive perceptual enhancements in musicians, we posed the question: do well-developed cognitive abilities fine-tune auditory perception in a top-down fashion? We administered a standardized battery of perceptual and cognitive tests to adult musicians and non-musicians, including tasks either more or less susceptible to cognitive control (e.g., backward versus simultaneous masking) and more or less dependent on auditory or visual processing (e.g., auditory versus visual attention). Outcomes indicate lower perceptual thresholds in musicians specifically for auditory tasks that relate with cognitive abilities, such as backward masking and auditory attention. These enhancements were observed in the absence of group differences for the simultaneous masking and visual attention tasks. Our results suggest that long-term musical practice strengthens cognitive functions and that these functions benefit auditory skills. Musical training bolsters higher-level mechanisms that, when impaired, relate to language and literacy deficits. Thus, musical training may serve to lessen the impact of these deficits by strengthening the corticofugal system for hearing.
Ker, Ming-Dou; Lin, Chun-Yu; Chen, Wei-Ling
A stimulus driver circuit for a micro-stimulator used in an implantable device is presented in this paper. For epileptic seizure control, the target of the driver was to output 30 µA stimulus currents when the electrode impedance varied between 20 and 200 kΩ. The driver, which consisted of the output stage, control block and adaptor, was integrated in a single chip. The averaged power consumption of the stimulus driver was 0.24-0.56 mW at 800 Hz stimulation rate. Fabricated in a 0.35 µm 3.3 V/24 V CMOS process and applied to a closed-loop epileptic seizure monitoring and controlling system, the proposed design has been successfully verified in the experimental results of Long-Evans rats with epileptic seizures.
Nees, Michael A
Researchers have shown increased interest in mechanisms of working memory for nonverbal sounds such as music and environmental sounds. These studies often have used two-stimulus comparison tasks: two sounds separated by a brief retention interval (often 3-5 s) are compared, and a "same" or "different" judgment is recorded. Researchers seem to have assumed that sensory memory has a negligible impact on performance in auditory two-stimulus comparison tasks. This assumption is examined in detail in this comment. According to seminal texts and recent research reports, sensory memory persists in parallel with working memory for a period of time following hearing a stimulus and can influence behavioral responses on memory tasks. Unlike verbal working memory studies that use serial recall tasks, research paradigms for exploring nonverbal working memory-especially two-stimulus comparison tasks-may not be differentiating working memory from sensory memory processes in analyses of behavioral responses, because retention interval durations have not excluded the possibility that the sensory memory trace drives task performance. This conflation of different constructs may be one contributor to discrepant research findings and the resulting proliferation of theoretical conjectures regarding mechanisms of working memory for nonverbal sounds.
Kawasaki, Masahiro; Kitajo, Keiichi; Yamaguchi, Yoko
In humans, theta phase (4-8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from subjects who were performing auditory-verbal and visual WM tasks; we compared the theta synchronizations when subjects performed either auditory-verbal or visual manipulations in separate WM tasks, or performed both two manipulations in the same WM task. The auditory-verbal WM task required subjects to calculate numbers presented by an auditory-verbal stimulus, whereas the visual WM task required subjects to move a spatial location in a mental representation in response to a visual stimulus. The dual WM task required subjects to manipulate auditory-verbal, visual, or both auditory-verbal and visual representations while maintaining auditory-verbal and visual representations. Our time-frequency EEG analyses revealed significant fronto-temporal theta phase synchronization during auditory-verbal manipulation in both auditory-verbal and auditory-verbal/visual WM tasks, but not during visual manipulation tasks. Similarly, we observed significant fronto-parietal theta phase synchronization during visual manipulation tasks, but not during auditory-verbal manipulation tasks. Moreover, we observed significant synchronization in both the fronto-temporal and fronto-parietal theta signals during simultaneous auditory-verbal/visual manipulations. These findings suggest that theta synchronization seems to flexibly connect the brain areas that manipulate WM.
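The fronto-temporal and fronto-parietal theta synchronization analyzed above is commonly quantified as a phase-locking value (PLV) between two channels. A minimal sketch on synthetic signals (the channel names and parameters are illustrative assumptions, not the study's data):

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value: magnitude of the mean unit phasor of the phase difference."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

fs, f = 250, 6  # sampling rate (Hz), theta frequency (Hz)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)
frontal = np.sin(2 * np.pi * f * t) + 0.3 * rng.standard_normal(t.size)
parietal = np.sin(2 * np.pi * f * t + 0.5) + 0.3 * rng.standard_normal(t.size)
print(f"theta PLV: {plv(frontal, parietal):.2f}")
```

A PLV near 1 indicates a stable phase relationship between the two channels (even with a nonzero phase lag, as here); values near 0 indicate no consistent relationship. In practice the signals would first be band-pass filtered to the theta band.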
Delorme, Arnaud; Polich, John
Long-term Vipassana meditators sat in meditation and control (instructed mind-wandering) states for 25 min each; electroencephalography (EEG) was recorded, with condition order counterbalanced. For the last 4 min, a three-stimulus auditory oddball series was presented through headphones during both meditation and control periods, with no task imposed. Time-frequency analysis demonstrated that meditation, relative to the control condition, evinced decreased evoked delta (2–4 Hz) power to distracter stimuli concomitantly with a greater event-related reduction of late (500–900 ms) alpha-1 (8–10 Hz) activity, which indexed altered dynamics of attentional engagement to distracters. Additionally, standard stimuli were associated with increased early event-related alpha phase synchrony (inter-trial coherence) and evoked theta (4–8 Hz) phase synchrony, suggesting enhanced processing of the habituated standard background stimuli. Finally, during meditation, there was a greater differential early-evoked gamma power to the different stimulus classes. Correlation analysis indicated that this effect stemmed from a meditation state-related increase in early distracter-evoked gamma power and phase synchrony specific to longer-term expert practitioners. The findings suggest that Vipassana meditation evokes a brain state of enhanced perceptual clarity and decreased automated reactivity. PMID:22648958
Robyn S Kim
BACKGROUND: Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained over five days on a visual motion coherence detection task with either congruent audiovisual or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with only visual stimuli. CONCLUSIONS/SIGNIFICANCE: This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.
Max F K Happel
Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex.
Wass, Sam V; Clackson, Kaili; Georgieva, Stanimira D; Brightman, Laura; Nutbrown, Rebecca; Leong, Victoria
Previous research has suggested that when a social partner, such as a parent, pays attention to an object, this increases the attention that infants pay to that object during spontaneous, naturalistic play. There are two contrasting reasons why this might be: first, social context may influence increases in infants' endogenous (voluntary) attention control; second, social settings may offer increased opportunities for exogenous attentional capture. To differentiate these possibilities, we compared 12-month-old infants' naturalistic attention patterns in two settings: Solo Play and Joint Play with a social partner (the parent). Consistent with previous research, we found that infants' look durations toward play objects were longer during Joint Play, and that moments of inattentiveness were fewer, and shorter. Follow-up analyses, conducted to differentiate the two above-proposed hypotheses, were more consistent with the latter hypothesis. We found that infants' rate of change of attentiveness was faster during Joint Play than Solo Play, suggesting that internal attention factors, such as attentional inertia, may influence looking behaviour less during Joint Play. We also found that adults' attention forwards-predicted infants' subsequent attention more than vice versa, suggesting that adults' behaviour may drive infants' behaviour. Finally, we found that mutual gaze did not directly facilitate infant attentiveness. Overall, our results suggest that infants spend more time attending to objects during Joint Play than Solo Play, but that these differences are more likely attributable to increased exogenous attentional scaffolding from the parent during social play, rather than to increased endogenous attention control from the infant. © 2018 John Wiley & Sons Ltd.
Rodríguez, Gabriel; Márquez, Raúl; Gil, Marta; Alonso, Gumersinda; Hall, Geoffrey
According to a recent theory (Hall & Rodriguez, 2010), the latent inhibition produced by nonreinforced exposure to a target stimulus (B) will be deepened by subsequent exposure to that stimulus in compound with another (AB). This effect of compound exposure is taken to depend on the addition of a novel A to the familiar B and is not predicted for equivalent preexposure in which AB trials precede the B trials. This prediction was tested in 2 experiments using rats. Experiment 1 used an aversive procedure with flavors as the stimuli; Experiment 2 used an appetitive procedure with visual and auditory stimuli. In both, we found that conditioning with B as the conditioned stimulus proceeded more slowly (i.e., latent inhibition was greater) in subjects given the B-AB sequence in preexposure than in subjects given the AB-B sequence.
Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg
Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.
Mustovic, Henrietta; Scheffler, Klaus; Di Salle, Francesco; Esposito, Fabrizio; Neuhoff, John G; Hennig, Jürgen; Seifritz, Erich
Temporal integration is a fundamental process that the brain carries out to construct coherent percepts from serial sensory events. This process critically depends on the formation of memory traces reconciling past with present events and is particularly important in the auditory domain where sensory information is received both serially and in parallel. It has been suggested that buffers for transient auditory memory traces reside in the auditory cortex. However, previous studies investigating "echoic memory" did not distinguish between brain response to novel auditory stimulus characteristics on the level of basic sound processing and a higher level involving matching of present with stored information. Here we used functional magnetic resonance imaging in combination with a regular pattern of sounds repeated every 100 ms and deviant interspersed stimuli of 100-ms duration, which were either brief presentations of louder sounds or brief periods of silence, to probe the formation of auditory memory traces. To avoid interaction with scanner noise, the auditory stimulation sequence was implemented into the image acquisition scheme. Compared to increased loudness events, silent periods produced specific neural activation in the right planum temporale and temporoparietal junction. Our findings suggest that this area posterior to the auditory cortex plays a critical role in integrating sequential auditory events and is involved in the formation of short-term auditory memory traces. This function of the planum temporale appears to be fundamental in the segregation of simultaneous sound sources.
Lisa L Matragrano
Catecholaminergic (CA) neurons innervate sensory areas and affect the processing of sensory signals. For example, in birds, CA fibers innervate the auditory pathway at each level, including the midbrain, thalamus, and forebrain. We have shown previously that in female European starlings, CA activity in the auditory forebrain can be enhanced by exposure to attractive male song for one week. It is not known, however, whether hearing song can initiate that activity more rapidly. Here, we exposed estrogen-primed, female white-throated sparrows to conspecific male song and looked for evidence of rapid synthesis of catecholamines in auditory areas. In one hemisphere of the brain, we used immunohistochemistry to detect the phosphorylation of tyrosine hydroxylase (TH), a rate-limiting enzyme in the CA synthetic pathway. We found that immunoreactivity for TH phosphorylated at serine 40 increased dramatically in the auditory forebrain, but not the auditory thalamus and midbrain, after 15 min of song exposure. In the other hemisphere, we used high-pressure liquid chromatography to measure catecholamines and their metabolites. We found that two dopamine metabolites, dihydroxyphenylacetic acid and homovanillic acid, increased in the auditory forebrain but not the auditory midbrain after 30 min of exposure to conspecific song. Our results are consistent with the hypothesis that exposure to a behaviorally relevant auditory stimulus rapidly induces CA activity, which may play a role in auditory responses.
Jørgensen, M B; Christensen-Dalsgaard, J
We studied the directionality of spike timing in the responses of single auditory nerve fibers of the grass frog, Rana temporaria, to tone burst stimulation. Both the latency of the first spike after stimulus onset and the preferred firing phase during the stimulus were studied. In addition, the ...
Namazi, Hamidreza; Khosrowabadi, Reza; Hussaini, Jamal; Habibi, Shaghayegh; Farid, Ali Akhavan; Kulish, Vladimir V
One of the major challenges in brain research is to relate the structural features of an auditory stimulus to structural features of the electroencephalogram (EEG) signal. Memory content is an important feature of the EEG signal and, accordingly, of the brain. Memory content can also be considered for the stimulus itself. Despite the many studies analyzing the effects of stimuli on the human EEG and brain memory, no work has addressed the memory content of the stimulus, or the relationship that may exist between the memory content of the stimulus and the memory content of the EEG signal. For this purpose, we considered the Hurst exponent as the measure of memory. This study reveals the plasticity of human EEG signals in relation to auditory stimuli. For the first time, we demonstrated that the memory content of an EEG signal shifts towards the memory content of the auditory stimulus used. The results of this analysis showed that an auditory stimulus with higher memory content causes a larger increment in the memory content of the EEG signal. To verify this result, we used approximate entropy as an indicator of time-series randomness. The capability observed in this research can be further investigated in relation to human memory.
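The Hurst exponent used above as a memory measure can be estimated by classical rescaled-range (R/S) analysis: the slope of log(R/S) against log(window size). A minimal sketch; the window sizes and test signals are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate the Hurst exponent of a 1-D series via rescaled-range analysis."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation from the mean
            r = dev.max() - dev.min()           # range of cumulative deviations
            s = seg.std()                       # standard deviation of the segment
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    # Slope of log(R/S) vs. log(n) approximates the Hurst exponent
    return np.polyfit(log_n, log_rs, 1)[0]

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)   # memoryless series: H near 0.5
walk = np.cumsum(white)             # strongly persistent series: H near 1
print(hurst_rs(white), hurst_rs(walk))
```

H above 0.5 indicates long-range persistence ("memory" in the sense used by the abstract), H near 0.5 indicates no memory; small-sample R/S estimates carry a known upward bias for short windows, so corrected estimators are preferred in serious use.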
Dignath, David; Eder, Andreas B
According to a recent extension of the conflict-monitoring theory, conflict between two competing response tendencies is registered as an aversive event and triggers a motivation to avoid the source of conflict. In the present study, we tested this assumption. Over five experiments, we examined whether conflict is associated with an avoidance motivation and whether stimulus conflict or response conflict triggers an avoidance tendency. Participants first performed a color Stroop task. In a subsequent motivation test, participants responded to Stroop stimuli with approach- and avoidance-related lever movements. These results showed that Stroop-conflict stimuli increased the frequency of avoidance responses in a free-choice motivation test, and also increased the speed of avoidance relative to approach responses in a forced-choice test. High and low proportions of response conflict in the Stroop task had no effect on avoidance in the motivation test. Avoidance of conflict was, however, obtained even with new conflict stimuli that had not been presented before in a Stroop task, and when the Stroop task was replaced with an unrelated filler task. Taken together, these results suggest that stimulus conflict is sufficient to trigger avoidance.
Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen
This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We assessed the analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
Aghamolaei, Maryam; Zarnowiec, Katarzyna; Grimm, Sabine; Escera, Carles
Auditory deviance detection based on regularity encoding appears as one of the basic functional properties of the auditory system. It has traditionally been assessed with the mismatch negativity (MMN) long-latency component of the auditory evoked potential (AEP). Recent studies have found earlier correlates of deviance detection based on regularity encoding. They occur in humans in the first 50 ms after sound onset, at the level of the middle-latency response of the AEP, and parallel findings of stimulus-specific adaptation observed in animal studies. However, the functional relationship between these different levels of regularity encoding and deviance detection along the auditory hierarchy has not yet been clarified. Here we addressed this issue by examining deviant-related responses at different levels of the auditory hierarchy to stimulus changes varying in their degree of deviation regarding the spatial location of a repeated standard stimulus. Auditory stimuli were presented randomly from five loudspeakers at azimuthal angles of 0°, 12°, 24°, 36° and 48° during oddball and reversed-oddball conditions. Middle-latency responses and MMN were measured. Our results revealed that middle-latency responses were sensitive to deviance but not the degree of deviation, whereas the MMN amplitude increased as a function of deviance magnitude. These findings indicated that acoustic regularity can be encoded at the level of the middle-latency response but that it takes a higher step in the auditory hierarchy for deviance magnitude to be encoded, thus providing a functional dissociation between regularity encoding and deviance detection along the auditory hierarchy. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Javitt, D C; Grochowski, S; Shelley, A M; Ritter, W
Schizophrenia is a severe mental disorder associated with disturbances in perception and cognition. Event-related potentials (ERP) provide a mechanism for evaluating potential mechanisms underlying neurophysiological dysfunction in schizophrenia. Mismatch negativity (MMN) is a short-duration auditory cognitive ERP component that indexes operation of the auditory sensory ('echoic') memory system. Prior studies have demonstrated impaired MMN generation in schizophrenia along with deficits in auditory sensory memory performance. MMN is elicited in an auditory oddball paradigm in which a sequence of repetitive standard tones is interrupted infrequently by a physically deviant ('oddball') stimulus. The present study evaluates MMN generation as a function of deviant stimulus probability, interstimulus interval, interdeviant interval and the degree of pitch separation between the standard and deviant stimuli. The major findings of the present study are first, that MMN amplitude is decreased in schizophrenia across a broad range of stimulus conditions, and second, that the degree of deficit in schizophrenia is largest under conditions when MMN is normally largest. The pattern of deficit observed in schizophrenia differs from the pattern observed in other conditions associated with MMN dysfunction, including Alzheimer's disease, stroke, and alcohol intoxication.
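The auditory oddball paradigm described above is straightforward to sketch as a stimulus-sequence generator: a stream of repeated standards interrupted by rare, physically deviant tones. The probabilities, tone frequencies, durations, and minimum spacing below are illustrative assumptions, not the study's exact parameters:

```python
import numpy as np

def oddball_sequence(n_trials, p_deviant=0.1, min_gap=2, seed=0):
    """Pseudo-random standard/deviant sequence; deviants are kept at least
    `min_gap` trials apart (a common constraint in MMN designs)."""
    rng = np.random.default_rng(seed)
    seq = np.zeros(n_trials, dtype=int)   # 0 = standard, 1 = deviant
    last_dev = -min_gap
    for i in range(n_trials):
        if i - last_dev >= min_gap and rng.random() < p_deviant:
            seq[i] = 1
            last_dev = i
    return seq

def tone(freq, dur=0.075, fs=44100.0):
    """Sine tone with 5 ms raised-cosine on/off ramps to avoid onset clicks."""
    t = np.arange(int(dur * fs)) / fs
    y = np.sin(2 * np.pi * freq * t)
    ramp = int(0.005 * fs)
    env = np.ones_like(y)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return y * env

seq = oddball_sequence(1000)
stimuli = {0: tone(1000.0), 1: tone(1200.0)}   # standard vs. pitch-deviant tone
print(seq.mean())   # realized deviant rate, near the nominal 10%
```

Manipulations such as those in the study, deviant probability, interstimulus interval, and the pitch separation between standard and deviant, correspond to changing `p_deviant`, the gap between successive `tone` presentations, and the two frequencies.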
Jacks, Adam; Haley, Katarina L.
Purpose: To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking in noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not…
Hall, M.; Smeele, P.M.T.; Kuhl, P.K.
The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual
Scheerer, Nichole E; Jones, Jeffery A
Speech production requires the combined effort of a feedback control system driven by sensory feedback, and a feedforward control system driven by internal models. However, the factors that dictate the relative weighting of these feedback and feedforward control systems are unclear. In this event-related potential (ERP) study, participants produced vocalisations while being exposed to blocks of frequency-altered feedback (FAF) perturbations that were either predictable in magnitude (consistently either 50 or 100 cents) or unpredictable in magnitude (50- and 100-cent perturbations varying randomly within each vocalisation). Vocal and P1-N1-P2 ERP responses revealed decreases in the magnitude and trial-to-trial variability of vocal responses, smaller N1 amplitudes, and shorter vocal, P1 and N1 response latencies following predictable FAF perturbation magnitudes. In addition, vocal response magnitudes correlated with N1 amplitudes, vocal response latencies, and P2 latencies. This pattern of results suggests that after repeated exposure to predictable FAF perturbations, the contribution of the feedforward control system increases. Examination of the presentation order of the FAF perturbations revealed smaller compensatory responses, smaller P1 and P2 amplitudes, and shorter N1 latencies when the block of predictable 100-cent perturbations occurred prior to the block of predictable 50-cent perturbations. These results suggest that exposure to large perturbations modulates responses to subsequent perturbations of equal or smaller size. Similarly, exposure to a 100-cent perturbation prior to a 50-cent perturbation within a vocalisation decreased the magnitude of vocal and N1 responses, but increased P1 and P2 latencies. Thus, exposure to a single perturbation can affect responses to subsequent perturbations. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
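The frequency-altered feedback perturbations above are specified in cents, i.e. hundredths of a semitone; the conversion to a multiplicative frequency shift is a fixed formula (the 220 Hz example voice pitch is an illustrative assumption):

```python
def cents_to_ratio(cents):
    """A shift of `cents` multiplies frequency by 2**(cents/1200)
    (1200 cents = 1 octave, 100 cents = 1 semitone)."""
    return 2.0 ** (cents / 1200.0)

# A 100-cent (one-semitone) upward perturbation applied to a 220 Hz voice:
print(220.0 * cents_to_ratio(100))   # about 233.08 Hz
# The study's smaller 50-cent perturbation is roughly a 2.9% frequency change:
print(cents_to_ratio(50))            # about 1.0293
```

Because the scale is logarithmic, a 50-cent perturbation is perceptually comparable across speakers regardless of their baseline fundamental frequency, which is why FAF studies specify perturbations in cents rather than in hertz.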
Jastreboff, P J; Brennan, J F; Coleman, J K; Sasaki, C T
In order to measure tinnitus induced by sodium salicylate injections, 84 pigmented rats, distributed among 14 groups in five experiments, were used in a conditioned suppression paradigm. In Experiment 1, all groups were trained with a conditioned stimulus (CS) consisting of the offset of a continuous background noise. One group began salicylate injections before Pavlovian training, a second group started injections after training, and a control group received daily saline injections. Resistance to extinction was profound when injections started before training, but minimal when initiated after training, which suggests that salicylate-induced effects acquired differential conditioned value. In Experiment 2 we mimicked the salicylate treatments by substituting a 7 kHz tone in place of respective injections, resulting in effects equivalent to salicylate-induced behavior. In a third experiment we included a 3 kHz CS, and again replicated the salicylate findings. In Experiment 4 we decreased the motivational level, and the sequential relation between salicylate-induced effects and suppression training was retained. Finally, no salicylate effects emerged when the visual modality was used. These findings support the demonstration of phantom auditory sensations in animals.
Yahata, Izumi; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
The effects of visual speech (the moving image of the speaker’s face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions, the moving image of the same speaker’s face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from speaker’s face using a strong Gaussian filter: control condition). On average, latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between right and left hemisphere. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process which does not depend on the congruency of the visual information. PMID:28141836
Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin
Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine the activation in the visual and auditory cortices of each macaque while being tested using pure tones as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with the fact that there were more c-Fos-positive cells in the bilateral visual and auditory cortices of group B compared with group A. These results suggest that the visual cortices of binocularly blind macaques can be reorganized to process auditory stimuli after visual deprivation, and that this effect is more obvious in the right than in the left visual cortex. These results indicate the establishment of cross-modal plasticity within the visual and auditory cortices. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Brosch, Michael; Selezneva, Elena; Scheich, Henning
This study aimed at a deeper understanding of which cognitive and motivational aspects of tasks affect auditory cortical activity. To this end we trained two macaque monkeys to perform two different tasks on the same audiovisual stimulus and to do this with two different sizes of water rewards. The monkeys had to touch a bar after a tone had been turned on together with an LED, and to hold the bar until either the tone (auditory task) or the LED (visual task) was turned off. In 399 multiunits recorded from core fields of auditory cortex we confirmed that during task engagement neurons responded to auditory and non-auditory stimuli that were task-relevant, such as light and water. We also confirmed that firing rates slowly increased or decreased for several seconds during various phases of the tasks. Responses to non-auditory stimuli and slow firing changes were observed during both the auditory and the visual task, with some differences between them. There was also a weak task-dependent modulation of the responses to auditory stimuli. In contrast to these cognitive aspects, motivational aspects of the tasks were not reflected in the firing, except during delivery of the water reward. In conclusion, the present study supports our previous proposal that there are two response types in the auditory cortex that represent the timing and type of auditory and non-auditory elements of auditory tasks, as well as the association between elements. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Berwick, Robert C; Pietroski, Paul; Yankama, Beracah; Chomsky, Noam
A central goal of modern generative grammar has been to discover invariant properties of human languages that reflect "the innate schematism of mind that is applied to the data of experience" and that "might reasonably be attributed to the organism itself as its contribution to the task of the acquisition of knowledge" (Chomsky, 1971). Candidates for such invariances include the structure dependence of grammatical rules, and in particular, certain constraints on question formation. Various "poverty of stimulus" (POS) arguments suggest that these invariances reflect an innate human endowment, as opposed to common experience: Such experience warrants selection of the grammars acquired only if humans assume, a priori, that selectable grammars respect substantive constraints. Recently, several researchers have tried to rebut these POS arguments. In response, we illustrate why POS arguments remain an important source of support for appeal to a priori structure-dependent constraints on the grammars that humans naturally acquire. Copyright © 2011 Cognitive Science Society, Inc.
Beckers, Gabriël J L; Gahr, Manfred
Auditory systems bias responses to sounds that are unexpected on the basis of recent stimulus history, a phenomenon that has been widely studied using sequences of unmodulated tones (mismatch negativity; stimulus-specific adaptation). Such a paradigm, however, does not directly reflect problems that neural systems normally solve for adaptive behavior. We recorded multiunit responses in the caudomedial auditory forebrain of anesthetized zebra finches (Taeniopygia guttata) at 32 sites simultaneously, to contact calls that recur probabilistically at a rate that is used in communication. Neurons in secondary, but not primary, auditory areas respond preferentially to calls when they are unexpected (deviant) compared with the same calls when they are expected (standard). This response bias is predominantly due to sites more often not responding to standard events than to deviant events. When two call stimuli alternate between standard and deviant roles, most sites exhibit a response bias to deviant events of both stimuli. This suggests that biases are not based on a use-dependent decrease in response strength but involve a more complex mechanism that is sensitive to auditory deviance per se. Furthermore, between many secondary sites, responses are tightly synchronized, a phenomenon that is driven by internal neuronal interactions rather than by the timing of stimulus acoustic features. We hypothesize that this deviance-sensitive, internally synchronized network of neurons is involved in the involuntary capturing of attention by unexpected and behaviorally potentially relevant events in natural auditory scenes.
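The deviant-versus-standard response bias described above is often quantified in the stimulus-specific adaptation literature with an index of the form (d - s)/(d + s). The helper below is a generic sketch of that index, not the authors' actual analysis:

```python
def ssa_index(resp_deviant, resp_standard):
    """Common stimulus-specific adaptation index: (d - s) / (d + s).

    resp_deviant / resp_standard: mean response strengths (e.g., spike
    counts) to the same stimulus in its deviant vs. standard roles.
    Positive values indicate a bias toward deviant (unexpected) events;
    the index ranges from -1 to +1.
    """
    return (resp_deviant - resp_standard) / (resp_deviant + resp_standard)

# Example: 12 spikes/trial as deviant vs. 8 as standard
print(ssa_index(12.0, 8.0))  # → 0.2
```

A value near +1 would correspond to sites that respond almost exclusively when the call is unexpected, the pattern the study reports for secondary auditory areas.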
Davidson, Gray D; Pitts, Michael A
Previous event-related potential (ERP) experiments have consistently identified two components associated with perceptual transitions of bistable visual stimuli, the "reversal negativity" (RN) and the "late positive complex" (LPC). The RN (~200 ms post-stimulus, bilateral occipital-parietal distribution) is thought to reflect transitions between neural representations that form the moment-to-moment contents of conscious perception, while the LPC (~400 ms, central-parietal) is considered an index of post-perceptual processing related to accessing and reporting one's percept. To explore the generality of these components across sensory modalities, the present experiment utilized a novel bistable auditory stimulus. Pairs of complex tones with ambiguous pitch relationships were presented sequentially while subjects reported whether they perceived the tone pairs as ascending or descending in pitch. ERPs elicited by the tones were compared according to whether perceived pitch motion changed direction or remained the same across successive trials. An auditory reversal negativity (aRN) component was evident at ~170 ms post-stimulus over bilateral fronto-central scalp locations. An auditory LPC component (aLPC) was evident at subsequent latencies (~350 ms, fronto-central distribution). These two components may be auditory analogs of the visual RN and LPC, suggesting functionally equivalent but anatomically distinct processes in auditory vs. visual bistable perception.
内野, 八潮; 箱田, 裕司
This article reviewed a number of studies which revealed a superiority of addition over deletion. Such an asymmetric effect was found in picture recognition memory, discrimination learning, proofreading for misspellings, and so on. However, few studies have controlled the typicality of the original stimulus or the effect of addition and deletion on the typicality of the changed stimulus. Therefore this article focused particularly on the studies in which addition and deletion applied to the original stimulus was d...
Matas, Carla Gentile; Samelli, Alessandra Giannella; Angrisani, Rosanna Giaffredo; Magliaro, Fernanda Cristina Leite; Segurado, Aluísio C
To characterize the findings of brainstem auditory evoked potential in HIV-positive individuals exposed and not exposed to antiretroviral treatment. This research was a cross-sectional, observational, and descriptive study. Forty-five HIV-positive individuals (18 not exposed and 27 exposed to antiretroviral treatment; research groups I and II, respectively) and 30 control-group individuals were assessed through brainstem auditory evoked potential. There were no significant between-group differences regarding wave latencies. A higher percentage of altered brainstem auditory evoked potentials was observed in the HIV-positive groups when compared to the control group. The most common alteration was in the low brainstem. HIV-positive individuals have a higher percentage of altered brainstem auditory evoked potentials, which suggests central auditory pathway impairment when compared to HIV-negative individuals. There was no significant difference between individuals exposed and not exposed to antiretroviral treatment.
Pondé, Pedro H; de Sena, Eduardo P; Camprodon, Joan A; de Araújo, Arão Nogueira; Neto, Mário F; DiBiasi, Melany; Baptista, Abrahão Fontes; Moura, Lidia MVR; Cosmo, Camila
Introduction: Auditory hallucinations are defined as experiences of auditory perception in the absence of a provoking external stimulus. They are the most prevalent symptoms of schizophrenia, with a high capacity for chronicity and refractoriness during the course of the disease. Transcranial direct current stimulation (tDCS), a safe, portable, and inexpensive neuromodulation technique, has emerged as a promising treatment for the management of auditory hallucinations. Objective: The aim of this study is to analyze the level of evidence available in the literature for the use of tDCS as a treatment for auditory hallucinations in schizophrenia. Methods: A systematic review was performed
Park, Myoung-Ok; Lee, Sang-Heon
Preservation and enhancement of cognitive function are essential for the restoration of functional abilities and independence following stroke. While cognitive-motor dual-task training (CMDT) has been utilized in rehabilitation settings, many patients with stroke experience impairments in cognitive function that can interfere with dual-task performance. In the present study, we investigated the effects of CMDT combined with auditory motor synchronization training (AMST) utilizing rhythmic cues on cognitive function in patients with stroke. The present randomized controlled trial was conducted at a single rehabilitation hospital. Thirty patients with chronic stroke were randomly divided into an experimental group (n = 15) and a control group (n = 15). The experimental group received 3 CMDT + AMST sessions per week for 6 weeks, whereas the control group received CMDT only 3 times per week for 6 weeks. Changes in cognitive function were evaluated using the trail making test (TMT), digit span test (DST), and Stroop test (ST). Significant differences in TMT-A and B (P = .001, P = .001), DST-forward (P = .001, P = .001), DST-backward (P < .001, P = .001), ST-word (P = .001, P = .001), and ST-color (P = .002, P = .001) scores were observed in both the control and experimental groups, respectively. Significant differences in TMT-A (P = .001), DST-forward (P = .027), DST-backward (P = .002), and ST-word (P = .025) scores were observed between the 2 groups. Performance speed on the TMT-A was faster in the CMDT + AMST group than in the CMDT group. Moreover, DST-forward and DST-backward scores were higher in the CMDT + AMST group than in the CMDT group. Although ST-color results were similar in the 2 groups, ST-word scores were higher in the CMDT + AMST group than in the CMDT group. This finding indicates that the combined therapy of CMDT and AMST can be used to increase attention, memory, and executive
Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
Ikeda, Kazunari; Sekiguchi, Takahiro; Hayashi, Akiko
This study examined the notion that auditory discrimination is a prerequisite for attention-related modulation of the auditory brainstem response (ABR) during contralateral noise exposure. While the right ear was continuously exposed to white noise at an intensity of 60-80 dB sound pressure level, tone pips at 80 dB sound pressure level were delivered to the left ear through either single-stimulus or oddball procedures. Participants read (ignore task) or counted target tones (attend task) during stimulation. The oddball, but not the single-stimulus, procedure elicited task-related modulations simultaneously in both early (ABR) and late (processing negativity) event-related potentials. The attention-related ABR modulation during contralateral noise exposure thus appears to require auditory discrimination and to be corticofugal in nature.
Background and Aim: Blocking of adenosine receptors in the central nervous system by caffeine can increase the level of neurotransmitters such as glutamate. Because adenosine receptors are present in almost all brain areas, including the central auditory pathway, caffeine may alter conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2, and 3 mg/kg body weight of caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed with Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared to the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after consumption of 2 and 3 mg/kg body weight of caffeine. Wave I latency decreased significantly after consumption of 3 mg/kg body weight of caffeine (p<0.01). Conclusion: The increase in glutamate level resulting from adenosine receptor blockade brings about changes in conduction in the central auditory pathway.
Clement, Sylvain; Moroni, Christine; Samson, Séverine
The goal of this paper was to review various experimental and neuropsychological studies that support a modular conception of auditory sensory memory, or auditory short-term memory. Based on initial findings demonstrating that the verbal sensory memory system can be dissociated from a general auditory memory store at both the functional and anatomical levels, we report a series of studies that provide evidence in favor of multiple auditory sensory stores specialized in retaining eit...
San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory
Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom
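Connectivity "measured between all channel-pairs", as described above, typically amounts to pairwise correlation of the channels' hemodynamic time series within an analysis window. A minimal sketch of that computation (illustrative only; the study's actual fNIRS pipeline is not specified in the abstract):

```python
import numpy as np

def connectivity_matrix(data):
    """Pairwise Pearson correlation between channels.

    data: array of shape (n_channels, n_samples) holding, e.g., fNIRS
    oxygenated-hemoglobin time series for one baseline window.
    Returns a symmetric (n_channels, n_channels) correlation matrix
    with ones on the diagonal.
    """
    return np.corrcoef(data)

# Toy usage: 4 channels, 600 samples of simulated signal
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 600))
c = connectivity_matrix(x)
```

Comparing such matrices computed before and after a sound stimulus (and between groups) is one common way to operationalize the "enhanced" or "decreased" connectivity contrasts the abstract reports.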
Fostick, Leah; Babkoff, Harvey; Zukerman, Gil
Purpose: To test the effects of 24 hr of sleep deprivation on auditory and linguistic perception and to assess the magnitude of this effect by comparing such performance with that of aging adults on speech perception and with that of dyslexic readers on phonological awareness. Method: Fifty-five sleep-deprived young adults were compared with 29…
Chen, Ling-Chia; Sandmann, Pascale; Thorne, Jeremy D; Herrmann, Christoph S; Debener, Stefan
Functional near-infrared spectroscopy (fNIRS) has been proven reliable for investigation of low-level visual processing in both infants and adults. Similar investigation of fundamental auditory processes with fNIRS, however, remains only partially complete. Here we employed a systematic three-level validation approach to investigate whether fNIRS could capture fundamental aspects of bottom-up acoustic processing. We performed a simultaneous fNIRS-EEG experiment with visual and auditory stimulation in 24 participants, which allowed the relationship between changes in neural activity and hemoglobin concentrations to be studied. In the first level, the fNIRS results showed a clear distinction between visual and auditory sensory modalities. Specifically, the results demonstrated area specificity, that is, maximal fNIRS responses in visual and auditory areas for the visual and auditory stimuli respectively, and stimulus selectivity, whereby the visual and auditory areas responded mainly toward their respective stimuli. In the second level, a stimulus-dependent modulation of the fNIRS signal was observed in the visual area, as well as a loudness modulation in the auditory area. Finally in the last level, we observed significant correlations between simultaneously-recorded visual evoked potentials and deoxygenated hemoglobin (DeoxyHb) concentration, and between late auditory evoked potentials and oxygenated hemoglobin (OxyHb) concentration. In sum, these results suggest good sensitivity of fNIRS to low-level sensory processing in both the visual and the auditory domain, and provide further evidence of the neurovascular coupling between hemoglobin concentration changes and non-invasive brain electrical activity.
Ponnath, Abhilash; Farris, Hamilton E
Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single-unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), the experiments addressed whether the form of modulation depends on the temporal structure of the electrical stimulus. Following long-duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32% but recovered over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms electrical pulse 25 ms before each noise stimulus caused faster and more varied forms of modulation, which differed between acoustic stimuli, including between different male calls, suggesting that the modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.
We have recently demonstrated that alternating left-right sound sources induce motion perception of static visual stimuli along the horizontal plane (SIVM: sound-induced visual motion perception; Hidaka et al., 2009). The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in a situation where auditory positional information would have little influence on the perceived position of the visual stimuli; the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals from multiple visual stimuli were combined to produce a coherent visual motion percept, so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.
Karen V. Chenausky
We tested the effect of Auditory-Motor Mapping Training (AMMT), a novel, intonation-based treatment for spoken language originally developed for minimally verbal (MV) children with autism, on a more-verbal child with autism. We compared this child's performance after 25 therapy sessions with that of: (1) a child matched on age, autism severity, and expressive language level who received 25 sessions of a non-intonation-based control treatment, Speech Repetition Therapy (SRT); and (2) a matched pair of MV children (one of whom received AMMT; the other, SRT). We found a significant Time × Treatment effect in favor of AMMT for the number of Syllables Correct and Consonants Correct per stimulus for both pairs of children, as well as a significant Time × Treatment effect in favor of AMMT for the number of Vowels Correct per stimulus for the more-verbal pair. Magnitudes of the difference in post-treatment performance between AMMT and SRT, adjusted for baseline differences, were: (a) larger for the more-verbal pair than for the MV pair; and (b) associated with very large effect sizes (Cohen's d > 1.3) in the more-verbal pair. The results hold promise for the efficacy of AMMT for improving spoken language production in more-verbal children with autism as well as their MV peers, and suggest hypotheses about brain function that are testable in both correlational and causal behavioral-imaging studies.
Rutherford, Kimberley D; Kavanagh, Katherine; Parham, Kourosh
To determine whether mupirocin (440 µg/mL) and vancomycin (25 mg/mL) otic drops show evidence of ototoxicity in CBA/J mice immediately following a 7-day course of daily intratympanic (IT) injections and 1 month following treatment. Nonrandomized controlled trial. Academic hospital laboratory. Twenty CBA/J mice. Mean auditory brainstem response (ABR) thresholds increased in all drug- and saline-treated ears immediately after 7 days of IT injections but returned to baseline for most stimulus frequencies by 30 days later. This finding appeared to be correlated with the presence and subsequent resolution of tympanic membrane (TM) perforations and granulation tissue at the injection sites. Mupirocin-treated ears showed no significant difference in ABR thresholds compared to saline-treated ears. No significant differences were noted between vancomycin- and saline-treated ears, but there was a significant interaction between testing day and stimulus frequency (95% confidence interval, -13.5 to -5.5). Whereas application of mupirocin solution (440 µg/mL) caused no significant change in ABR thresholds in a murine model, vancomycin solution (25 mg/mL) resulted in high-frequency threshold elevations in both the directly injected ear and the contralateral ear. Mupirocin solution may be beneficial in managing otitis externa and otitis media caused by resistant pathogens. Further studies of ototopical vancomycin are needed to define parameters governing its safe use.
For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and enhance BCI performance, we designed a visual-auditory speller. We investigated the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy, and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms, without sacrificing spelling performance. In addition, shorter latencies, lower amplitudes, and a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller; these differences should inspire future studies to investigate their causes. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent of each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level <3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel, and it brings new insight into truly multisensory stimulus paradigms. The novel approaches for combining two sensory modalities designed here are valuable for the development of ERP-based BCI paradigms.
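The reported chance level (<3%) and spelling speed follow directly from the selection setup. Assuming a hypothetical 36-symbol speller matrix (the exact symbol count is not stated in the abstract), the arithmetic can be sketched as:

```python
def chance_level(n_symbols):
    """Accuracy expected from random selection among n_symbols."""
    return 1.0 / n_symbols

def symbols_per_minute(n_spelled, total_seconds):
    """Spelling speed given the number of symbols and elapsed time."""
    return 60.0 * n_spelled / total_seconds

# A 36-symbol matrix gives a chance level just under 3%
print(round(chance_level(36) * 100, 2))  # → 2.78

# e.g., 33 symbols spelled in 20 minutes ≈ the reported 1.65 symbols/min
print(symbols_per_minute(33, 20 * 60))  # → 1.65
```

The 36-symbol grid and the 20-minute example are illustrative assumptions chosen to match the abstract's reported figures, not values taken from the study.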
Sergent, Claire; Ruff, Christian C; Barbot, Antoine; Driver, Jon; Rees, Geraint
Modulations of sensory processing in early visual areas are thought to play an important role in conscious perception. To date, most empirical studies focused on effects occurring before or during visual presentation. By contrast, several emerging theories postulate that sensory processing and conscious visual perception may also crucially depend on late top-down influences, potentially arising after a visual display. To provide a direct test of this, we performed an fMRI study using a postcued report procedure. The ability to report a target at a specific spatial location in a visual display can be enhanced behaviorally by symbolic auditory postcues presented shortly after that display. Here we showed that such auditory postcues can enhance target-specific signals in early human visual cortex (V1 and V2). For postcues presented 200 msec after stimulus termination, this target-specific enhancement in visual cortex was specifically associated with correct conscious report. The strength of this modulation predicted individual levels of performance in behavior. By contrast, although later postcues presented 1000 msec after stimulus termination had some impact on activity in early visual cortex, this modulation no longer related to conscious report. These results demonstrate that within a critical time window of a few hundred milliseconds after a visual stimulus has disappeared, successful conscious report of that stimulus still relates to the strength of top-down modulation in early visual cortex. We suggest that, within this critical time window, sensory representation of a visual stimulus is still under construction and so can still be flexibly influenced by top-down modulatory processes.
Ohyama, Masashi; Kitamura, Shin; Terashi, Akiro; Senda, Michio.
In order to investigate the relation between auditory cognitive function and regional brain activation, we measured changes in regional cerebral blood flow (CBF) using positron emission tomography (PET) during the 'odd-ball' paradigm in ten normal healthy volunteers. The subjects underwent 3 tasks, twice each, while the evoked potential was recorded. In these tasks, the auditory stimulus was a series of pure tones delivered every 1.5 sec binaurally at 75 dB through earphones. Task A: the stimulus was a series of tones at 1000 Hz only, and the subject was instructed only to listen. Task B: the stimulus was a series of tones at 1000 Hz only, and the subject was instructed to push a button on detecting a tone. Task C: the stimulus was a series of pure tones delivered every 1.5 sec binaurally at 75 dB with a frequency of 1000 Hz (non-target) in 80% and 2000 Hz (target) in 20% of trials at random, and the subject was instructed to push the button on detecting a target tone. The event-related potential (P300) was observed in task C (Pz: 334.3±19.6 msec). During each task, CBF was measured using PET with i.v. injection of 1.5 GBq of O-15 water. The changes in CBF associated with auditory cognition were evaluated as the difference between the CBF images in tasks C and B. Localized increases were observed in the anterior cingulate cortex (in all subjects), the bilateral associative auditory cortex, the prefrontal cortex, and the parietal cortex; the latter three areas showed large individual variation in the location of foci. These results suggest a role for these cortical areas in auditory cognition. The anterior cingulate was most activated (15.0±2.24% of global CBF); this region was not activated in the task B minus task A comparison. The anterior cingulate is part of the Papez circuit, which is related to memory and other higher cortical functions. These results suggest that this area may play an important role in cognition as well as in attention. (author)
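Oddball designs like the one in Task C (80% non-target, 20% target) are typically realized as pseudorandom tone sequences with a fixed deviant probability and a minimum spacing between targets. A minimal Python sketch (the `min_gap` constraint and seed are illustrative assumptions, not details from any of the studies above):

```python
import random

def oddball_sequence(n_trials, p_deviant=0.2, standard=1000, deviant=2000,
                     min_gap=2, seed=0):
    """Pseudorandom oddball sequence of tone frequencies (Hz).

    Enforces at least `min_gap` standards between successive deviants,
    a common constraint in ERP oddball designs (assumed here).
    """
    rng = random.Random(seed)
    seq, since_last = [], min_gap  # allow a deviant from the start
    for _ in range(n_trials):
        if since_last >= min_gap and rng.random() < p_deviant:
            seq.append(deviant)
            since_last = 0
        else:
            seq.append(standard)
            since_last += 1
    return seq

tones = oddball_sequence(400)
# Realized deviant fraction falls somewhat below p_deviant because of
# the enforced gap after each deviant.
print(sum(t == 2000 for t in tones) / len(tones))
```

In an actual experiment, each entry of such a sequence would be mapped to a tone-pip presentation at the specified stimulus-onset asynchrony (1.5 s in Task C above).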
Kabella, Danielle M; Flynn, Lucinda; Peters, Amanda; Kodituwakku, Piyadasa; Stephen, Julia M
Prior studies indicate that the auditory mismatch response is sensitive to early alterations in brain development in multiple developmental disorders, and prenatal alcohol exposure is known to impact early auditory processing. The current study hypothesized alterations in the mismatch response in young children with fetal alcohol spectrum disorder (FASD). Participants were 9 children with FASD and 17 control children (Control) aged 3 to 6 years, who underwent MEG and structural MRI scans separately. We compared groups on neurophysiological mismatch negativity (MMN) responses to auditory stimuli measured using the auditory oddball paradigm. Frequent (1000 Hz) and rare (1200 Hz) tones were presented at 72 dB. There was no significant group difference in MMN response latency or amplitude, represented by the peak located ~200 ms after stimulus presentation in the difference timecourse between frequent and infrequent tones. Examining the timecourses to the frequent and infrequent tones separately, an RM-ANOVA with condition (frequent vs. rare), peak (N100m and N200m), and hemisphere as within-subject factors and diagnosis and sex as between-subject factors showed a significant peak-by-diagnosis interaction (p = 0.001), with a pattern of decreased amplitude from N100m to N200m in Control children and the opposite pattern in children with FASD. However, no significant difference was found in the simple-effects comparisons. No group differences were found in the response latencies of the rare auditory evoked fields (AEFs). The results indicate that there was no detectable effect of alcohol exposure on the amplitude or latency of the MMNm response to simple tones modulated by frequency change in preschool-age children with FASD. However, while discrimination abilities for simple tones may be intact, early auditory sensory processing revealed by the interaction between N100m and N200m amplitude indicates that auditory sensory processing may be altered in…
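The MMN difference timecourse described above is conventionally computed as the mean response to deviant (rare) tones minus the mean response to standard (frequent) tones, with the peak then located inside a latency window around 200 ms. A minimal pure-Python sketch of that computation follows; the epoch lists and window bounds are hypothetical, and this is not the authors' MEG pipeline.

```python
def average_epochs(epochs):
    """Point-by-point mean across epochs (equal-length lists of samples)."""
    n = len(epochs)
    return [sum(vals) / n for vals in zip(*epochs)]

def mismatch_wave(deviant_epochs, standard_epochs):
    """Difference timecourse: mean deviant response minus mean standard response."""
    dev = average_epochs(deviant_epochs)
    std = average_epochs(standard_epochs)
    return [d - s for d, s in zip(dev, std)]

def peak_latency(diff, times, t_min=0.15, t_max=0.25):
    """Latency (s) of the largest-magnitude deflection inside a search window."""
    window = [(t, v) for t, v in zip(times, diff) if t_min <= t <= t_max]
    return max(window, key=lambda tv: abs(tv[1]))[0]
```

In practice the two averages are baseline-corrected and the deviant/standard trial counts are very unequal (10-20% deviants), which this sketch leaves implicit.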
Background: Previous studies have shown that spatio-tactile acuity is influenced by the clarity of the cortical response in primary somatosensory cortex (SI). Stimulus characteristics such as frequency, amplitude, and location of tactile stimuli presented to the skin have been shown to have a significant effect on the response in SI. The present study observes the effect of changing stimulus parameters of 25 Hz sinusoidal vertical skin displacement stimulation ("flutter") on a human subject's ability to discriminate between two adjacent or near-adjacent skin sites. Based on results obtained from recent neurophysiological studies of the SI response to different conditions of vibrotactile stimulation, we predicted that the addition of 200 Hz vibration to the same site at which a two-point flutter stimulus was delivered on the skin would improve a subject's spatio-tactile acuity over that measured with flutter alone. Additionally, similar neurophysiological studies predict that the presence of either a 25 Hz flutter or 200 Hz vibration stimulus on the unattended hand (on the opposite side of the body from the site of two-point limen testing; the condition of bilateral stimulation, which has been shown to evoke less SI cortical activity than the contralateral-only stimulus condition) would decrease a subject's ability to discriminate between two points on the skin. Results: A Bekesy tracking method was employed to track a subject's ability to discriminate between two-point stimuli delivered to the skin. The distance between the two points of stimulation was varied on a trial-by-trial basis, and several different stimulus conditions were examined: (1) the "control" condition, in which 25 Hz flutter stimuli were delivered simultaneously to the two points on the skin of the attended hand; (2) the "complex" condition, in which a combination of 25 Hz flutter and 200 Hz vibration stimuli were delivered to the two points on the attended hand; and (3) a…
McKeown, Denis; Wellsted, David
Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…
Marcella de Castro Campos Velten
Spatial region concepts such as front, back, left and right reflect our typical interaction with space, and the corresponding surrounding regions have different statuses in memory. We examined the representation of spatial directions in auditory space, specifically the extent to which natural response actions, such as orientation movements towards a sound source, affect the categorization of egocentric auditory space. While standing in the middle of a circle of 16 loudspeakers, participants were presented with acoustic stimuli from the loudspeakers in randomized order, and verbally described their directions using the concept labels front, back, left, right, front-right, front-left, back-right and back-left. Response actions varied across three blocked conditions: (1) facing front, (2) turning the head and upper body to face the stimulus, and (3) turning the head and upper body plus pointing with the hand and outstretched arm towards the stimulus. In addition to a protocol of the verbal utterances, motion capture and video recording generated a detailed corpus for subsequent analysis of the participants' behavior. Chi-square tests revealed an effect of response condition for directions within the left and right sides. We conclude that movement-based response actions influence the representation of auditory space, especially within the lateral regions.
Kang, Su Jin; Kim, Jae Hyoung; Shin, Tae Min
To obtain preliminary data for understanding the central auditory neural pathway by means of functional MR imaging (fMRI) of the cerebral auditory cortex during linguistic and non-linguistic auditory stimulation. In three right-handed volunteers we conducted fMRI of auditory cortex stimulation at 1.5 T using a conventional gradient-echo technique (TR/TE/flip angle: 80/60/40 deg). Using a pulsed tone of 1000 Hz and speech as non-linguistic and linguistic auditory stimuli, respectively, images, including those of the superior temporal gyrus of both hemispheres, were obtained in sagittal planes. Both stimuli were delivered separately, binaurally or monaurally, through a plastic earphone. Activation maps were computed with in-house software. In order to analyze patterns of auditory cortex activation according to the type of stimulus and the side of ear stimulated, the number and extent of activated pixels were compared between the two temporal lobes. Binaural stimulation led to bilateral activation of the superior temporal gyrus, while monaural stimulation led to more activation in the contralateral temporal lobe than in the ipsilateral. A trend toward slight activation of the left (dominant) temporal lobe in ipsilateral stimulation, particularly with a linguistic stimulus, was observed. During both binaural and monaural stimulation, a linguistic stimulus produced more widespread activation than did a non-linguistic one. The superior temporal gyri of both temporal lobes are associated with acoustic-phonetic analysis, and the left (dominant) superior temporal gyrus is likely to play a dominant role in this processing. For better understanding of physiological and pathological central auditory pathways, further investigation is needed.
Corina, David P; Blau, Shane; LaMarr, Todd; Lawyer, Laurel A; Coffey-Corina, Sharon
Deaf children who receive a cochlear implant early in life and engage in intensive oral/aural therapy often make great strides in spoken language acquisition. However, despite clinicians' best efforts, there is a great deal of variability in language outcomes. One concern is that cortical regions which normally support auditory processing may become reorganized for visual function, leaving fewer available resources for auditory language acquisition. The conditions under which these changes occur are not well understood, but we may begin investigating this phenomenon by looking for interactions between auditory and visual evoked cortical potentials in deaf children. If children with abnormal auditory responses show increased sensitivity to visual stimuli, this may indicate the presence of maladaptive cortical plasticity. We recorded evoked potentials, using both auditory and visual paradigms, from 25 typical hearing children and 26 deaf children (ages 2-8 years) with cochlear implants. An auditory oddball paradigm was used (85% /ba/ syllables vs. 15% frequency-modulated tone sweeps) to elicit an auditory P1 component. Visual evoked potentials (VEPs) were recorded during presentation of an intermittent peripheral radial checkerboard while children watched a silent cartoon, eliciting a P1-N1 response. We observed reduced auditory P1 amplitudes and a lack of the latency shift associated with normative aging in our deaf sample. We also observed shorter latencies in N1 VEPs to visual stimulus offset in deaf participants. While these data demonstrate cortical changes associated with auditory deprivation, we did not find evidence for a relationship between the cortical auditory evoked potentials and the VEPs. This is consistent with descriptions of intra-modal plasticity within visual systems of deaf children, but does not provide evidence for cross-modal plasticity. In addition, we note that sign language experience had no effect on deaf children's early auditory and visual ERP…
Besle, Julien; Fort, Alexandra; Giard, Marie-Hélène
The mismatch negativity (MMN) component of auditory event-related brain potentials can be used as a probe to study the representation of sounds in auditory sensory memory (ASM). Yet it has been shown that an auditory MMN can also be elicited by an illusory auditory deviance induced by visual changes. This suggests that some visual information may be encoded in ASM and is accessible to the auditory MMN process. It is not known, however, whether visual information affects ASM representation for any audiovisual event or whether this phenomenon is limited to specific domains in which strong audiovisual illusions occur. To highlight this issue, we have compared the topographies of MMNs elicited by non-speech audiovisual stimuli deviating from audiovisual standards on the visual, the auditory, or both dimensions. Contrary to what occurs with audiovisual illusions, each unimodal deviant elicited sensory-specific MMNs, and the MMN to audiovisual deviants included both sensory components. The visual MMN was, however, different from a genuine visual MMN obtained in a visual-only control oddball paradigm, suggesting that auditory and visual information interacts before the MMN process occurs. Furthermore, the MMN to audiovisual deviants was significantly different from the sum of the two sensory-specific MMNs, showing that the processes of visual and auditory change detection are not completely independent.
Kent, Christopher; Lamberts, Koen
This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…
Sale, Martin V; Nydam, Abbey S; Mattingley, Jason B
Plasticity can be induced in human cortex using paired associative stimulation (PAS), which repeatedly and predictably pairs a peripheral electrical stimulus with transcranial magnetic stimulation (TMS) to the contralateral motor region. Many studies have reported small or inconsistent effects of PAS. Given that uncertain stimuli can promote learning, the predictable nature of the stimulation in conventional PAS paradigms might serve to attenuate plasticity induction. Here, we introduced stimulus uncertainty into the PAS paradigm to investigate whether it can boost plasticity induction. Across two experimental sessions, participants (n = 28) received a modified PAS paradigm consisting of a random combination of 90 paired stimuli and 90 unpaired (TMS-only) stimuli. Prior to each of these stimuli, participants also received an auditory cue which either reliably predicted whether the upcoming stimulus was paired or unpaired (no-uncertainty condition) or did not predict the upcoming stimulus (maximum-uncertainty condition). Motor evoked potentials (MEPs) evoked from the abductor pollicis brevis (APB) muscle quantified cortical excitability before and after PAS. MEP amplitude increased significantly 15 min following PAS in the maximum-uncertainty condition. There was no reliable change in MEP amplitude in the no-uncertainty condition, nor any difference in post-PAS MEP amplitudes between the two conditions. These results suggest that stimulus uncertainty may provide a novel means to enhance plasticity induction with the PAS paradigm in human motor cortex. To provide further support to the notion that stimulus uncertainty and prediction error promote plasticity, future studies should further explore the time course of these changes, and investigate what aspects of stimulus uncertainty are critical in boosting plasticity.
Formby, Craig; Hawley, Monica L.; Sherlock, LaGuinn P.; Gold, Susan; Payne, JoAnne; Brooks, Rebecca; Parton, Jason M.; Juneau, Roger; Desporte, Edward J.; Siegle, Gregory R.
The primary aim of this research was to evaluate the validity, efficacy, and generalization of principles underlying a sound therapy–based treatment for promoting expansion of the auditory dynamic range (DR) for loudness. The basic sound therapy principles, originally devised for treatment of hyperacusis among patients with tinnitus, were evaluated in this study in a target sample of unsuccessfully fit and/or problematic prospective hearing aid users with diminished DRs (owing to their elevated audiometric thresholds and reduced sound tolerance). Secondary aims included: (1) delineation of the treatment contributions from the counseling and sound therapy components to the full-treatment protocol and, in turn, the isolated treatment effects from each of these individual components to intervention success; and (2) characterization of the respective dynamics for full, partial, and control treatments. Thirty-six participants with bilateral sensorineural hearing losses and reduced DRs, which affected their actual or perceived ability to use hearing aids, were enrolled in and completed a placebo-controlled (for sound therapy) randomized clinical trial. The 2 × 2 factorial trial design was implemented with or without various assignments of counseling and sound therapy. Specifically, participants were assigned randomly to one of four treatment groups (nine participants per group), including: (1) group 1—full treatment achieved with scripted counseling plus sound therapy implemented with binaural sound generators; (2) group 2—partial treatment achieved with counseling and placebo sound generators (PSGs); (3) group 3—partial treatment achieved with binaural sound generators alone; and (4) group 4—a neutral control treatment implemented with the PSGs alone. Repeated measurements of categorical loudness judgments served as the primary outcome measure. The full-treatment categorical-loudness judgments for group 1, measured at treatment termination, were…
Rincover, Arnold; Ducharme, Joseph M.
Three variables (diagnosis, location of cues, and mental age of learners) influencing stimulus control and stimulus overselectivity were assessed with eight autistic children (mean age 12 years) and eight average children matched for mean age. Among results were that autistic subjects tended to respond overselectively only in the extra-stimulus…
Singer, Bryan F.; Bryan, Myranda A.; Popov, Pavlo; Scarff, Raymond; Carter, Cody; Wright, Erin; Aragona, Brandon J.; Robinson, Terry E.
The sensory properties of a reward-paired cue (a conditioned stimulus; CS) may impact the motivational value attributed to the cue, and in turn influence the form of the conditioned response (CR) that develops. A cue with multiple sensory qualities, such as a moving lever-CS, may activate numerous neural pathways that process auditory and visual…
Hironori Kuga, M.D.
We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80-Hz frequencies from 15 patients with acute-episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute-episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general-linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute-state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.
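The click trains driving the auditory steady-state response (ASSR) are, at their simplest, periodic impulses at the desired repetition rate. A minimal sketch of constructing one follows; the sampling rate and unit amplitude are illustrative choices, not parameters from the study.

```python
def click_train(rate_hz, duration_s, fs=44100):
    """Unit-impulse click train at rate_hz, sampled at fs Hz."""
    n = int(duration_s * fs)
    period = round(fs / rate_hz)  # samples between click onsets
    sig = [0.0] * n
    for i in range(0, n, period):
        sig[i] = 1.0
    return sig

# e.g. a 1-second 40-Hz train, the rate classically used to probe gamma-band ASSR
train_40 = click_train(40, 1.0)
```

Real stimuli typically use brief clicks of a few samples' duration and windowed onsets/offsets; rounding fs/rate to an integer period also introduces a small rate error at non-divisor rates, which this sketch accepts.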
Weinberger, Norman M
Primary ("early") sensory cortices have been viewed as stimulus analyzers devoid of function in learning, memory, and cognition. However, studies combining sensory neurophysiology and learning protocols have revealed that associative learning systematically modifies the encoding of stimulus dimensions in the primary auditory cortex (A1) to accentuate behaviorally important sounds. This "representational plasticity" (RP) is manifest at different levels. The sensitivity and selectivity of signal tones increase near threshold, tuning above threshold shifts toward the frequency of acoustic signals, and their area of representation can increase within the tonotopic map of A1. The magnitude of area gain encodes the level of behavioral stimulus importance and serves as a substrate of memory strength. RP has the same characteristics as behavioral memory: it is associative, specific, develops rapidly, consolidates, and can last indefinitely. Pairing tone with stimulation of the cholinergic nucleus basalis induces RP and implants specific behavioral memory, while directly increasing the representational area of a tone in A1 produces matching behavioral memory. Thus, RP satisfies key criteria for serving as a substrate of auditory memory. The findings suggest a basis for posttraumatic stress disorder in abnormally augmented cortical representations and emphasize the need for a new model of the cerebral cortex.
Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli comprising a visual stimulus plus an auditory stimulus from one of the four locations were presented simultaneously. These stimuli were presented randomly with equal probability; participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an effect over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced an effect over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirm that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but that no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.
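A common way to quantify audiovisual interaction in ERP data of this kind is the additive model: compare the audiovisual (AV) response against the sum of the unimodal auditory (A) and visual (V) responses, then average the residual inside a latency window such as 160-200 ms. The sketch below is a minimal pure-Python illustration with hypothetical waveform lists; it is not the authors' analysis code.

```python
def interaction_wave(av, a, v):
    """Pointwise AV - (A + V); departures from zero suggest multisensory interaction."""
    return [x - (y + z) for x, y, z in zip(av, a, v)]

def window_mean(wave, times, t_min, t_max):
    """Mean amplitude inside a latency window (times in seconds)."""
    vals = [v for t, v in zip(times, wave) if t_min <= t <= t_max]
    return sum(vals) / len(vals)
```

In practice the comparison is run per electrode and tested statistically across participants; the pointwise subtraction itself is all the additive model requires.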
Allison, Robert S; Howard, Ian P; Fang, Xueping
Over what region of space are horizontal disparities integrated to form the stimulus for vergence? The vergence system might be expected to respond to disparities within a small area of interest to bring them into the range of precise stereoscopic processing. However, the literature suggests that disparities are integrated over a fairly large parafoveal area. We report the results of six experiments designed to explore the spatial characteristics of the stimulus for vergence. Binocular eye movements were recorded using magnetic search coils. Each dichoptic display consisted of a central target stimulus that the subject attempted to fuse, and a competing stimulus with conflicting disparity. In some conditions the target was stationary, providing a fixation stimulus. In other conditions, the disparity of the target changed to provide a vergence-tracking stimulus. The target and competing stimulus were combined in a variety of conditions including those in which (1) a transparent textured-disc target was superimposed on a competing textured background, (2) a textured-disc target filled the centre of a competing annular background, and (3) a small target was presented within the centre of a competing annular background of various inner diameters. In some conditions the target and competing stimulus were separated in stereoscopic depth. The results are consistent with a disparity integration area with a diameter of about 5 degrees. Stimuli beyond this integration area can drive vergence in their own right, but they do not appear to be summed or averaged with a central stimulus to form a combined disparity signal. A competing stimulus had less effect on vergence when separated from the target by a disparity pedestal. As a result, we propose that it may be more useful to think in terms of an integration volume for vergence rather than a two-dimensional retinal integration area.
Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava
Selective mutism (SM) is a psychiatric disorder of childhood characterized by consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is compromised in selectively mute children. Participants were 16 children with selective mutism and 16 normally developing control children matched for age and gender. All children were tested for pure-tone audiometry, speech reception thresholds, speech discrimination, middle-ear acoustic reflex thresholds and decay function, transient evoked otoacoustic emission, suppression of transient evoked otoacoustic emission, and auditory brainstem response. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appear alongside normal pure-tone and speech audiometry and normal brainstem transmission as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with SM may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimal amount possible in contexts that require complex auditory processing.
Arie, Miri; Henkin, Yael; Lamy, Dominique; Tetin-Schneider, Simona; Apter, Alan; Sadeh, Avi; Bar-Haim, Yair
Because abnormal auditory efferent activity (AEA) is associated with auditory distortions during vocalization, we tested whether auditory processing is impaired during vocalization in children with Selective Mutism (SM). Participants were children with SM and abnormal AEA, children with SM and normal AEA, and normally speaking controls, who had to detect aurally presented target words embedded within word lists under two conditions: silence (single task), and while vocalizing (dual task). To ascertain the specificity of the auditory-vocal deficit, effects of concurrent vocalizing were also examined during a visual task. Children with SM and abnormal AEA showed impaired auditory processing during vocalization relative to children with SM and normal AEA, and relative to control children. This impairment is specific to the auditory modality and does not reflect difficulties with dual tasks per se. The data extend previous findings suggesting that deficient auditory processing is involved in speech selectivity in SM.
Nosch, Daniela S; Pult, Heiko; Albon, Julie; Purslow, Christine; Murphy, Paul J
Belmonte Ocular Pain Meter (OPM) air jet aesthesiometry overcomes some of the limitations of the Cochet-Bonnet aesthesiometer. However, for true mechanical corneal sensitivity measurement, the airflow stimulus temperature of the aesthesiometer must equal ocular surface temperature (OST), to avoid additional response from temperature-sensitive nerves. The aim of this study was to determine: (A) the stimulus temperature inducing no or least change in OST; and (B) to evaluate if OST remains unchanged with different stimulus durations and airflow rates. A total of 14 subjects (mean age 25.14 ± 2.18 years; seven women) participated in this clinical cohort study: (A) OST was recorded using an infrared camera (FLIR A310) during the presentation of airflow stimuli, at five temperatures, ambient temperature (AT) +5°C, +10°C, +15°C, +20°C and +30°C, using the OPM aesthesiometer (duration three seconds; over a four millimetre distance; airflow rate 60 ml/min); and (B) OST measurements were repeated with two stimulus temperatures (AT +10°C and +15°C) while varying stimulus durations (three seconds and five seconds) and airflow rates (30, 60, 80 and 100 ml/min). Inclusion criteria were age measures (analysis of variance) and appropriate post-hoc t-tests were applied. (A) Stimulus temperatures of AT +10°C and +15°C induced the least changes in OST (-0.20 ± 0.13°C and 0.08 ± 0.05°C). (B) OST changes were statistically significant with both stimulus temperatures and increased with increasing airflow rates (p air stimulus of the Belmonte OPM because its air jet stimulus with mechanical setting is likely to have a thermal component. Appropriate stimulus selection for an air jet aesthesiometer must incorporate stimulus temperature control that can vary with stimulus duration and airflow rate.
Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao
Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461
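Decoding stimulus location from multivoxel activity patterns, as described above, is often done with simple classifiers trained on labeled patterns. The sketch below uses a nearest-centroid rule on toy two-voxel patterns; all data, labels, and dimensions are illustrative, and the study's actual decoding method may differ.

```python
def centroids(patterns, labels):
    """Mean activity pattern per class label."""
    sums, counts = {}, {}
    for p, y in zip(patterns, labels):
        acc = sums.setdefault(y, [0.0] * len(p))
        for i, v in enumerate(p):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def classify(pattern, cents):
    """Label of the nearest centroid (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(pattern, c))
    return min(cents, key=lambda y: dist(cents[y]))
```

Above-chance classification of held-out patterns is what licenses the claim that location information is present in the region; real analyses cross-validate across fMRI runs rather than testing on training data.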
Plaud, J J; Gaither, G A; Weller, L A; Bigwood, S J; Barth, J; von Duvillard, S P
Stimulus equivalence is a behavioral approach to analyzing the "meaning" of stimulus sets and has implications for clinical psychology. The formation of three-member (A --> B --> C) stimulus equivalence classes was used to investigate the effects of three different sets of sample and comparison stimuli on emergent behavior. The three stimulus sets were composed of Rational-Emotive Behavior Therapy (REBT)-related words, non-REBT emotionally charged words, and a third category of neutral words composed of flower labels. Sixty-two women and men participated in a modified matching-to-sample experiment. Using a mixed cross-over design, and controlling for serial order effects, participants received conditional training and emergent relationship training in the three stimulus set conditions. Results revealed a significant interaction between the formation of stimulus equivalence classes and stimulus meaning, indicating a consistent bias toward reaching criterion responding more slowly for REBT-related and non-REBT emotionally charged words. Results were examined in the context of an analysis of the importance of stimulus meaning on behavior and the relation of stimulus meaning to behavioral and cognitive theories, with special appraisal given to the influence of fear-related discriminative stimuli on behavior.
von Trapp, Gardiner; Buran, Bradley N; Sen, Kamal; Semple, Malcolm N; Sanes, Dan H
The detection of a sensory stimulus arises from a significant change in neural activity, but a sensory neuron's response is rarely identical across successive presentations of the same stimulus. Large trial-to-trial variability would limit the central nervous system's ability to reliably detect a stimulus, presumably affecting perceptual performance. However, if response variability were to decrease while firing rate remained constant, then neural sensitivity could improve. Here, we asked whether engagement in an auditory detection task can modulate response variability, thereby increasing neural sensitivity. We recorded telemetrically from the core auditory cortex of gerbils, both while they engaged in an amplitude-modulation detection task and while they sat quietly listening to the identical stimuli. Using a signal detection theory framework, we found that neural sensitivity was improved during task performance, and this improvement was closely associated with a decrease in response variability. Moreover, units with the greatest change in response variability had absolute neural thresholds most closely aligned with simultaneously measured perceptual thresholds. Our findings suggest that the limitations imposed by response variability diminish during task performance, thereby improving the sensitivity of neural encoding and potentially leading to better perceptual sensitivity. If the neural response to a stimulus is quite variable, then the response on a given trial could be confused with the pattern of neural activity generated when the stimulus is absent. Therefore, a neural mechanism that served to reduce response variability would allow for better stimulus detection. By recording from the cortex of freely moving animals engaged in an auditory detection task, we found that variability…
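In the signal detection theory framework referenced above, sensitivity is commonly summarized as d', the separation between hit and false-alarm rates expressed in z units; larger d' means the stimulus-present and stimulus-absent response distributions are better separated relative to their variability. A minimal sketch follows; the 1/(2N) correction for extreme rates is one common convention, not necessarily the one the authors used.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    with a 1/(2N) correction to keep rates away from 0 and 1."""
    z = NormalDist().inv_cdf
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hr = min(max(hits / n_signal, 1 / (2 * n_signal)), 1 - 1 / (2 * n_signal))
    fa = min(max(false_alarms / n_noise, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))
    return z(hr) - z(fa)

# e.g. 45 hits / 5 misses vs. 10 false alarms / 40 correct rejections
print(round(d_prime(45, 5, 10, 40), 2))  # ~2.12
```

The same formula applies whether "hits" come from button presses or from a neural response exceeding a criterion, which is what allows neural and perceptual thresholds to be compared on a common scale.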
Emberson, Lauren L.; Cannon, Grace; Palmeri, Holly; Richards, John E.; Aslin, Richard N.
How does the developing brain respond to recent experience? Repetition suppression (RS) is a robust and well-characterized response to recent experience, found predominantly in the perceptual cortices of the adult brain. We use functional near-infrared spectroscopy (fNIRS) to investigate how perceptual (temporal and occipital) and frontal cortices in the infant brain respond to auditory and visual stimulus repetitions (spoken words and faces). In Experiment 1, we find strong evidence of repetition suppression in the frontal cortex, but only for auditory stimuli. In perceptual cortices, we find only suggestive evidence of auditory RS in the temporal cortex and no evidence of visual RS in any ROI. In Experiments 2 and 3, we replicate and extend these findings. Overall, we provide the first evidence that infant and adult brains respond differently to stimulus repetition. We suggest that the frontal lobe may support the development of RS in perceptual cortices. PMID:28012401
INTRODUCTION: Individuals with sensorineural hearing loss are often able to regain some lost auditory function with the help of hearing aids. However, hearing aids are not able to overcome auditory distortions such as impaired frequency resolution and poor speech understanding in noisy environments. The coexistence of peripheral hearing loss and a central auditory deficit may contribute to patient dissatisfaction with amplification, even when audiological tests indicate nearly normal hearing thresholds. OBJECTIVE: This study was designed to validate the effects of a formal auditory training program in adult hearing aid users with mild to moderate sensorineural hearing loss. METHODS: Fourteen bilateral hearing aid users were divided into two groups: seven who received auditory training and seven who did not. The training program was designed to improve auditory closure, figure-to-ground for verbal and nonverbal sounds, and temporal processing (frequency and duration of sounds). Pre- and post-training evaluations included electrophysiological and behavioral auditory processing measures and administration of the Abbreviated Profile of Hearing Aid Benefit (APHAB) self-report scale. RESULTS: The post-training evaluation of the experimental group demonstrated a statistically significant reduction in P3 latency, improved performance on some of the behavioral auditory processing tests, and greater hearing aid benefit in noisy situations (p < 0.05). No changes were noted for the control group (p > 0.05). CONCLUSION: The results demonstrated that auditory training in adult hearing aid users can lead to a reduction in P3 latency; improvements in sound localization, memory for nonverbal sounds in sequence, auditory closure, and figure-to-ground for verbal sounds; and greater benefits in reverberant and noisy environments.
Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262
Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044022
DiMattina, Christopher; Zhang, Kechen
In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system identification paradigm where the experimental goal is to estimate and possibly compare sensory processing models. We discuss various theoretical and practical aspects of adaptive firing rate optimization, including optimization with stimulus space constraints, firing rate adaptation, and possible network constraints on the optimal stimulus. We consider the problem of system identification, and show how accurate estimation of non-linear models can be highly dependent on the stimulus set used to probe the network. We suggest that optimizing stimuli for accurate model estimation may make it possible to successfully identify non-linear models which are otherwise intractable, and summarize several recent studies of this type. Finally, we present a two-stage stimulus design procedure which combines the dual goals of model estimation and model comparison and may be especially useful for system identification experiments where the appropriate model is unknown beforehand. We propose that fast, on-line stimulus optimization enabled by increasing computer power can make it practical to move sensory neuroscience away from a descriptive paradigm and toward a new paradigm of real-time model estimation and comparison.
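As one concrete (and deliberately simplified) illustration of the classical optimal-stimulus paradigm reviewed above, the sketch below hill-climbs along a single stimulus dimension to find the value that maximizes an observed firing rate. The tuning curve, step sizes, and function names are hypothetical, and real adaptive experiments must additionally contend with response noise, adaptation, and stimulus-space constraints discussed in the review.

```python
import math

def hill_climb_stimulus(firing_rate, x0, step=0.5, n_iters=200):
    """Greedy on-line search: probe the neighbors of the current stimulus
    value and move toward whichever evokes the higher response; shrink the
    probe distance once neither neighbor beats the current stimulus."""
    x = x0
    for _ in range(n_iters):
        left, right = firing_rate(x - step), firing_rate(x + step)
        if max(left, right) <= firing_rate(x):
            step *= 0.5          # near the peak: refine the search
            if step < 1e-4:
                break
        elif left > right:
            x -= step
        else:
            x += step
    return x

# Hypothetical Gaussian tuning curve peaking at 4.0 (arbitrary units).
tuning = lambda x: math.exp(-(x - 4.0) ** 2)
best = hill_climb_stimulus(tuning, x0=1.0)
assert abs(best - 4.0) < 0.01
```

The point of the sketch is the closed loop: each stimulus choice depends on the responses already measured, which is what distinguishes adaptive on-line stimulus generation from a fixed, pre-planned stimulus set.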
Chen, Ling-Chia; Sandmann, Pascale; Thorne, Jeremy D; Bleichner, Martin G; Debener, Stefan
Cochlear implant (CI) users show higher auditory-evoked activations in visual cortex and higher visual-evoked activation in auditory cortex compared to normal hearing (NH) controls, reflecting functional reorganization of both visual and auditory modalities. Visual-evoked activation in auditory cortex is a maladaptive functional reorganization whereas auditory-evoked activation in visual cortex is beneficial for speech recognition in CI users. We investigated their joint influence on CI users' speech recognition, by testing 20 postlingually deafened CI users and 20 NH controls with functional near-infrared spectroscopy (fNIRS). Optodes were placed over occipital and temporal areas to measure visual and auditory responses when presenting visual checkerboard and auditory word stimuli. Higher cross-modal activations were confirmed in both auditory and visual cortex for CI users compared to NH controls, demonstrating that functional reorganization of both auditory and visual cortex can be identified with fNIRS. Additionally, the combined reorganization of auditory and visual cortex was found to be associated with speech recognition performance. Speech performance was good as long as the beneficial auditory-evoked activation in visual cortex was higher than the visual-evoked activation in the auditory cortex. These results indicate the importance of considering cross-modal activations in both visual and auditory cortex for potential clinical outcome estimation.
Bayat, Arash; Farhadi, Mohammad; Emamdjomeh, Hesam; Saki, Nader; Mirmomeni, Golshan; Rahim, Fakher
It has been demonstrated that long-term Conductive Hearing Loss (CHL) may influence the precise detection of the temporal features of acoustic signals, or Auditory Temporal Processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities, such as speech comprehension or sound localization. Little is known about the consequences of CHL on temporal aspects of central auditory processing. This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. During this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (control), aged between 18 and 45 years old, were recruited. To evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. The average GIN threshold was significantly smaller for the control group than for the CHL group in both ears (right: p = 0.004; left ear likewise significant), and the percentage of correct responses was significantly higher in the normal-hearing listeners for both sides. GIN performance was not related to the degree of hearing loss in either group (p > 0.05). The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal-hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
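GIN scoring of the kind described above can be illustrated with a small helper. The criterion used here (shortest gap detected on at least 4 of 6 presentations, a convention commonly reported for the GIN approximate threshold) and the per-listener detection counts are illustrative assumptions, not data from this study.

```python
def gin_threshold(detections, criterion=4):
    """Approximate gap-detection threshold from a {gap_ms: n_detected} dict.

    Returns the shortest gap duration detected at least `criterion` times,
    or None if no gap duration reaches criterion.
    """
    for gap_ms in sorted(detections):
        if detections[gap_ms] >= criterion:
            return gap_ms
    return None

# Hypothetical detection counts (out of 6 presentations per gap duration).
listener = {2: 0, 3: 1, 4: 3, 5: 4, 6: 6, 8: 6, 10: 6, 12: 6, 15: 6, 20: 6}
assert gin_threshold(listener) == 5  # 5 ms is the shortest gap at criterion
```

A larger returned value corresponds to poorer temporal resolution, which is the direction of the group difference reported for the CHL listeners.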
Walsh, Kyle P; Pasanen, Edward G; McFadden, Dennis
Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing-the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2-3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater.
Zhang, Dian; Cui, Jianguo; Tang, Yezhong
In anurans, reproductive behavior is strongly seasonal. During the spring, frogs emerge from hibernation and males vocalize for mating or to advertise territories. Female frogs are able to evaluate the quality of the males' resources on the basis of these vocalizations. Although studies have revealed that single neurons in the central torus semicircularis of frogs exhibit seasonal plasticity, the plasticity of peripheral auditory sensitivity in frogs is unknown. In this study, the seasonal plasticity of peripheral auditory sensitivity was tested in the Emei music frog Babina daunchina by comparing thresholds and latencies of auditory brainstem responses (ABRs) evoked by tone pips and clicks in the reproductive and non-reproductive seasons. The results show that both ABR thresholds and latencies differ significantly between the reproductive and non-reproductive seasons. Thresholds of tone-pip-evoked ABRs in the non-reproductive season were approximately 10 dB higher than those in the reproductive season for frequencies from 1 kHz to 6 kHz. ABR latencies to waveform valley values for tone pips at the same frequencies, using appropriate threshold stimulus levels, were longer than those in the reproductive season for frequencies from 1.5 to 6 kHz, although from 0.2 to 1.5 kHz they were shorter in the non-reproductive season. These results demonstrate that peripheral auditory frequency sensitivity exhibits seasonal plasticity, which may be adaptive to seasonal reproductive behavior in frogs.
Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas
Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties. © 2013 Published by Elsevier Ltd.
Wilsch, Anna; Obleser, Jonas
Working memory is a limited resource: brains can only maintain small amounts of sensory input (memory load) over a brief period of time (memory decay). The dynamics of slow neural oscillations as recorded using magneto- and electroencephalography (M/EEG) provide a window into the neural mechanics of these limitations. Oscillations in the alpha range (8-13 Hz) in particular are a sensitive marker of memory load. Moreover, according to current models, the resultant working memory load is determined by the relative noise in the neural representation of maintained information. The auditory domain allows memory researchers to apply and test the concept of noise quite literally: employing degraded stimulus acoustics increases memory load and, at the same time, allows assessment of the cognitive resources required to process speech in noise in an ecologically valid and clinically relevant way. The present review first summarizes recent findings on neural oscillations, especially alpha power, and how they reflect memory load and memory decay in auditory working memory. The focus is specifically on memory load resulting from acoustic degradation. These findings are then contrasted with contextual factors that benefit neural as well as behavioral markers of memory performance by reducing representational noise. We end by discussing the functional role of alpha power in auditory working memory and suggest extensions of the current methodological toolkit. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.
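Alpha power of the kind this review tracks is typically estimated by band-limiting the spectrum of an M/EEG epoch. The sketch below uses a plain FFT band-power estimate on a synthetic epoch; it is a minimal illustration of the measure, not the authors' pipeline, and the sampling rate and signal are invented for the example.

```python
import numpy as np

def band_power(signal, fs, f_lo=8.0, f_hi=13.0):
    """Mean spectral power of `signal` within [f_lo, f_hi] Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[band].mean()

# Synthetic 1-s epoch: a 10 Hz "alpha" oscillation plus weak noise.
fs = 250
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)

# Alpha-band power dominates power in a control band (20-25 Hz here).
assert band_power(epoch, fs) > band_power(epoch, fs, f_lo=20.0, f_hi=25.0)
```

In practice this computation is run per trial and per sensor, so that changes in alpha power can be related to memory load and acoustic degradation conditions.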
Sharma, Vishnu; McCreery, Douglas B.; Han, Martin; Pikov, Victor
We present a versatile, multifunctional programmable controller with bidirectional data telemetry, implemented using existing commercial microchips and the standard Bluetooth protocol, which adds convenience, reliability, and ease of use to neuroprosthetic devices. The controller, weighing 190 g, is placed on the animal's back and provides a sustained bidirectional telemetry rate of 500 kb/s, allowing real-time control of stimulation parameters and viewing of acquired data. In the continuously active state, the controller consumes ∼420 mW and operates without recharge for 8 h. It features independent 16-channel current-controlled stimulation, allowing current steering; customizable stimulus current waveforms; and recording of stimulus voltage waveforms and evoked neuronal responses with stimulus-artifact blanking circuitry. The flexibility, scalability, cost-efficiency, and user-friendly computer interface of this device allow its use in animal testing for a variety of neuroprosthetic applications. Initial testing of the controller has been done in a feline model of a brainstem auditory prosthesis. In this model, electrical stimulation is applied to an array of microelectrodes implanted in the ventral cochlear nucleus, while the evoked neuronal activity is recorded with an electrode implanted in the contralateral inferior colliculus. Stimulus voltage waveforms to monitor the access impedance of the electrodes were acquired at a rate of 312 kilosamples/s. Evoked neuronal activity in the inferior colliculus was recorded after the blanking (transient silencing) of the recording amplifier during the stimulus pulse, allowing the detection of neuronal responses within 100 μs after the end of the stimulus pulse applied in the cochlear nucleus. PMID:19933010
Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of the pitch-relevant cues that are necessary for the auditory system to retrieve the pitch of complex sounds. The existence of different pitch-coding mechanisms for low-numbered (spectrally resolved) and high-numbered (unresolved) harmonics was investigated by comparing pitch-discrimination performance across different cohorts of listeners… In listeners with SNHL, it is likely that hearing-impaired (HI) listeners rely on the enhanced envelope cues to retrieve the pitch of unresolved harmonics. Hence, the relative importance of pitch cues may be altered in HI listeners, whereby envelope cues may be used instead of TFS cues to obtain a similar performance in pitch…
Gadzella, B M; Whitehead, D A
Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of data showed the auditory modality was superior to visual (pictures) ones but was not significantly different from visual (printed words) modality. In visual modalities, printed words were superior to colored pictures. Generally, conditions with multiple modes of representation of stimuli were significantly higher than for conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.
Matusz, Pawel J; Thelen, Antonia; Amrein, Sarah; Geiser, Eveline; Anken, Jacques; Murray, Micah M
Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
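Discrimination ability d′ for the old/new task above is conventionally derived from hit and false-alarm rates via the standard z-transform. The sketch below shows that computation with entirely hypothetical trial counts; the log-linear correction is one common convention and is not taken from this study.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a standard
    log-linear correction to avoid rates of exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: sounds first encountered with a congruent image
# vs. sounds first encountered alone (same false-alarm behavior).
congruent = d_prime(hits=45, misses=5, false_alarms=10, correct_rejections=40)
unisensory = d_prime(hits=38, misses=12, false_alarms=10, correct_rejections=40)
assert congruent > unisensory
```

Because d′ separates sensitivity from response bias, a higher d′ for the congruent-context sounds reflects genuinely better old/new discrimination rather than a shift in willingness to respond "old".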
Pratt, Hillel; Starr, Arnold; Michalewski, Henry J; Dimitrijevic, Andrew; Bleich, Naomi; Mittelman, Nomi
To define brain activity corresponding to an auditory illusion of 3 and 6 Hz binaural beats at base frequencies of 250 Hz or 1000 Hz, and to compare it to the sound onset response. Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000 Hz to one ear and 3 or 6 Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3 Hz and 6 Hz at base frequencies of 250 Hz and 1000 Hz. Tones were 2000 ms in duration and presented at approximately 1 s intervals. Latency, amplitude, and source current density estimates of ERP components to tone onset and subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies. All stimuli evoked tone-onset P50, N100 and P200 components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and the low beat frequency. Sources of the beats-evoked oscillations located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components were not different across stimulus conditions; P50 had significantly different sources than the beats-evoked oscillations; and N100 and P200 sources located to the same temporal lobe regions as the beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions. Neural activity with slightly different volley frequencies from the left and right ears converges and interacts in the central auditory brainstem pathways to generate beats of neural activity that modulate activity in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. Brain activity corresponding to an auditory illusion of low-frequency beats can be recorded from the scalp.
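The dichotic stimuli described above are simple to sketch: a pure tone at the base frequency to one ear and a tone a few hertz higher to the other, so that neither channel alone contains an acoustic beat. The sample rate and duration below are illustrative choices, not the study's stimulus parameters.

```python
import numpy as np

def binaural_beat(base_hz, beat_hz, duration_s=2.0, fs=44100):
    """Return (left, right) channels: base_hz to one ear and
    base_hz + beat_hz to the other. The beat percept arises only from
    binaural interaction in the central auditory pathway, since no
    amplitude modulation exists in either channel by itself."""
    t = np.arange(int(duration_s * fs)) / fs
    left = np.sin(2 * np.pi * base_hz * t)
    right = np.sin(2 * np.pi * (base_hz + beat_hz) * t)
    return left, right

# 250 Hz vs. 253 Hz -> illusory 3 Hz beat, as in the low-base-frequency condition.
left, right = binaural_beat(250, 3)
```

Presenting the two channels mixed to one ear instead would produce a real (monaural) acoustic beat, which is the physical control condition against which the illusion is usually contrasted.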
Höhne, Johannes; Tangermann, Michael
By decoding brain signals into control commands, brain-computer interfaces (BCIs) aim to establish an alternative communication pathway for locked-in patients. In contrast to most visual BCI approaches, which use event-related potentials (ERPs) of the electroencephalogram, auditory BCI systems are challenged with ERP responses that are less class-discriminant between attended and unattended stimuli. Furthermore, these auditory approaches have more complex interfaces, which impose a substantial workload on their users. Aiming for a maximally user-friendly spelling interface, this study introduces a novel auditory paradigm: “CharStreamer”. The speller can be used with an instruction as simple as “please attend to what you want to spell”. The stimuli of CharStreamer comprise 30 spoken sounds of letters and actions. As each of them is represented by the sound of itself and not by an artificial substitute, it can be selected in a one-step procedure. The mental mapping effort (sound stimuli to actions) is thus minimized. Usability is further accounted for by an alphabetical stimulus presentation: contrary to random presentation orders, the user can foresee the presentation time of the target letter sound. Healthy, normal-hearing users (n = 10) of the CharStreamer paradigm displayed ERP responses that systematically differed between target and non-target sounds. Class-discriminant features, however, varied individually from the typical N1-P2 complex and P3 ERP components found in control conditions with random sequences. To fully exploit the sequential presentation structure of CharStreamer, novel data analysis approaches and classification methods were introduced. The results of online spelling tests showed that a competitive spelling speed can be achieved with CharStreamer. With respect to user rating, it clearly outperforms a control setup with random presentation sequences. PMID:24886978
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood of motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increase vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation, and two of the six stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as "auditorily induced motion sickness".
Bigelow, James; Poremba, Amy
Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne
The question of whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to detect implicitly statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
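Statistical-learning streams of the kind described above are typically built from fixed "triplets" whose internal transition probability is 1.0, while transitions across triplet boundaries are much weaker; segmentation ability is then inferred from sensitivity to this contrast. The sketch below illustrates the construction under stated assumptions: the triplet inventory is invented, and the no-immediate-repeat constraint is a common convention, not a detail from the study.

```python
import random

# Hypothetical triplet inventory ("words"); real studies use tones or syllables.
TRIPLETS = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I"), ("J", "K", "L")]

def make_stream(n_words=100, seed=None):
    """Concatenate randomly chosen triplets, disallowing immediate repeats."""
    rng = random.Random(seed)
    stream, prev = [], None
    for _ in range(n_words):
        word = rng.choice([t for t in TRIPLETS if t is not prev])
        stream.extend(word)
        prev = word
    return stream

def transition_probability(stream, a, b):
    """Empirical P(next element == b | current element == a)."""
    follows = [stream[i + 1] for i in range(len(stream) - 1) if stream[i] == a]
    return follows.count(b) / len(follows) if follows else 0.0
```

Within-triplet transitions (e.g. "A" to "B") come out at probability 1.0, whereas a triplet-final element is followed by any of the other triplets' initial elements, so boundary transitions are diluted; that asymmetry is the regularity learners are thought to pick up.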
Harris, Jill; Kamke, Marc R
Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
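The oddball paradigm used in the study above interleaves frequent "standard" and infrequent "deviant" stimuli within each modality. A minimal sketch of such a sequence is given below; the 10% deviant rate and the constraint that two deviants never occur back to back are common conventions assumed here, not parameters reported in the abstract.

```python
import random

def oddball_sequence(n_trials=200, p_deviant=0.1, seed=None):
    """Generate a standard/deviant trial sequence with no adjacent deviants.

    Note: forcing a standard after each deviant pulls the effective deviant
    rate slightly below p_deviant; this is acceptable for a sketch.
    """
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "deviant":
            seq.append("standard")  # enforce non-adjacent deviants
        else:
            seq.append("deviant" if rng.random() < p_deviant else "standard")
    return seq
```

In an ERP experiment, responses time-locked to the rare deviants are then averaged separately from standards, which is how components such as the P3 to deviant stimuli are isolated.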
Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M
Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task, but rather compared auditory cortex receptive fields before and after conditioning. Here we present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward-predicting and nonreward-predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.