WorldWideScience

Sample records for auditory hierarchical stimuli

  1. Auditory attention to frequency and time: an analogy to visual local–global stimuli

    OpenAIRE

    Justus, Timothy; List, Alexandra

    2005-01-01

    Two priming experiments demonstrated exogenous attentional persistence to the fundamental auditory dimensions of frequency (Experiment 1) and time (Experiment 2). In a divided-attention task, participants responded to an independent dimension, the identification of three-tone sequence patterns, for both prime and probe stimuli. The stimuli were specifically designed to parallel the local–global hierarchical letter stimuli of [Navon D. (1977). Forest before trees: The precedence of global feat...

  2. Discrimination of auditory stimuli during isoflurane anesthesia.

    Science.gov (United States)

    Rojas, Manuel J; Navas, Jinna A; Greene, Stephen A; Rector, David M

    2008-10-01

    Deep isoflurane anesthesia initiates a burst suppression pattern in which high-amplitude bursts are preceded by periods of nearly silent electroencephalogram. The burst suppression ratio (BSR) is the percentage of suppression (silent electroencephalogram) during the burst suppression pattern and is one parameter used to assess anesthesia depth. We investigated cortical burst activity in rats in response to different auditory stimuli presented during the burst suppression state. We noted a rapid appearance of bursts and a significant decrease in the BSR during stimulation. The BSR changes were distinctive for the different stimuli applied, and the BSR decreased significantly more when stimulated with a voice familiar to the rat as compared with an unfamiliar voice. These results show that the cortex can show differential sensory responses during deep isoflurane anesthesia.
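As a concrete illustration of the BSR defined above (the percentage of suppressed, near-silent EEG within the burst suppression pattern), here is a minimal sketch. It assumes the signal has already been reduced to a per-sample boolean suppression mask; the thresholding step and all names are illustrative, not the authors' implementation.

```python
def burst_suppression_ratio(suppressed_mask):
    """Percentage of time the EEG is suppressed (silent).

    `suppressed_mask` is a per-sample boolean sequence: True where the
    signal amplitude lies below some suppression threshold.
    """
    if not suppressed_mask:
        raise ValueError("empty mask")
    return 100.0 * sum(suppressed_mask) / len(suppressed_mask)

# Example: 6 of 10 samples suppressed -> BSR = 60%
mask = [True, True, False, True, True, False, False, True, True, False]
print(burst_suppression_ratio(mask))  # 60.0
```

A stimulus-driven decrease in this value, as reported above, means bursts occupy more of the recording during stimulation.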

  3. Happiness increases distraction by auditory deviant stimuli.

    Science.gov (United States)

    Pacheco-Unguetti, Antonia Pilar; Parmentier, Fabrice B R

    2016-08-01

    Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such an effect may not be limited to negative emotions but may reflect a general depletion of attentional resources, by examining whether a positive emotion (happiness) would increase deviance distraction too. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction. PMID:26302716
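The oddball stream described above (rare deviants embedded in a repeated standard) can be sketched as a sequence generator. The deviant probability and the minimum-gap constraint below are illustrative assumptions, not the authors' actual parameters.

```python
import random

def oddball_sequence(n_trials, p_deviant=0.1, min_gap=2, seed=1):
    """Task-irrelevant sound stream for an oddball design: rare 'deviant'
    sounds embedded among repeated 'standard' sounds, with at least
    `min_gap` standards after each deviant (a common design constraint;
    all parameters here are hypothetical)."""
    random.seed(seed)
    seq, since_last = [], min_gap
    for _ in range(n_trials):
        if since_last >= min_gap and random.random() < p_deviant:
            seq.append("deviant")
            since_last = 0
        else:
            seq.append("standard")
            since_last += 1
    return seq

seq = oddball_sequence(200)
print(seq.count("deviant"), "deviants in", len(seq), "trials")
```

The gap constraint matters for the post-deviance analysis mentioned above: it guarantees each deviant is followed by at least one standard trial on which recovery from distraction can be measured.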

  4. Relationship between Sympathetic Skin Responses and Auditory Hypersensitivity to Different Auditory Stimuli.

    Science.gov (United States)

    Kato, Fumi; Iwanaga, Ryoichiro; Chono, Mami; Fujihara, Saori; Tokunaga, Akiko; Murata, Jun; Tanaka, Koji; Nakane, Hideyuki; Tanaka, Goro

    2014-07-01

    [Purpose] Auditory hypersensitivity has been widely reported in patients with autism spectrum disorders. However, the neurological background of auditory hypersensitivity is currently not clear. The present study examined the relationship between sympathetic nervous system responses and auditory hypersensitivity induced by different types of auditory stimuli. [Methods] We exposed 20 healthy young adults to six different types of auditory stimuli. The amounts of palmar sweating resulting from the auditory stimuli were compared between groups with (hypersensitive) and without (non-hypersensitive) auditory hypersensitivity. [Results] Although no group × type of stimulus × first stimulus interaction was observed for the extent of reaction, a significant type of stimulus × first stimulus interaction was noted. For an 80 dB-6,000 Hz stimulus, the trends for palmar sweating differed between the groups. For the first stimulus, the variance was larger in the hypersensitive group than in the non-hypersensitive group. [Conclusion] Subjects who regularly experienced excessive reactions to auditory stimuli tended to have excessive sympathetic responses to repeated loud noises compared with subjects who did not. People with auditory hypersensitivity may be classified into several subtypes depending on their reaction patterns to auditory stimuli.

  5. Affective priming with auditory speech stimuli

    NARCIS (Netherlands)

    J. Degner

    2011-01-01

    Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In

  6. Startle auditory stimuli enhance the performance of fast dynamic contractions.

    Science.gov (United States)

    Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M

    2014-01-01

    Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration and electromyography from the pectoralis and triceps muscles were recorded. The SS condition induced an increase in RFD and peak velocity and a reduction in movement onset and duration, in comparison with the VS and AS conditions. The onset of activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance, since startle stimuli could be used to explore neural adaptations to resistance training.

  7. 40 Hz auditory steady state response to linguistic features of stimuli during auditory hallucinations.

    Science.gov (United States)

    Ying, Jun; Yan, Zheng; Gao, Xiao-rong

    2013-10-01

    The auditory steady state response (ASSR) may reflect activity from different regions of the brain, depending on the modulation frequency used. In general, responses induced by low rates (≤40 Hz) emanate mostly from central structures of the brain, whereas responses to high rates (≥80 Hz) emanate mostly from the peripheral auditory nerve or brainstem structures. Moreover, the gamma band ASSR (30-90 Hz) has been reported to play an important role in working memory, speech understanding and recognition. This paper investigated the 40 Hz ASSR evoked by modulated speech and reversed speech. The speech stimulus was a Chinese phrase, and the noise-like reversed speech was obtained by temporally reversing it. Both auditory stimuli were modulated at a frequency of 40 Hz. Ten healthy subjects and 5 patients with auditory hallucinations participated in the experiment. Results showed a reduction in left auditory cortex response when healthy subjects listened to the reversed speech compared with the speech. In contrast, when the patients who experienced auditory hallucinations listened to the reversed speech, the left-hemisphere auditory cortex responded more actively. The ASSR results were consistent with the behavioral results of the patients. Therefore, the gamma band ASSR is expected to be helpful for rapid and objective diagnosis of hallucinations in the clinic. PMID:24142731
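Measuring a 40 Hz ASSR amplitude amounts to measuring the signal's strength at the modulation frequency. A minimal stdlib sketch, correlating the signal with a sine/cosine pair at one frequency (equivalent to evaluating a single DFT bin); the synthetic signal is illustrative, not the study's data.

```python
import math

def tone_amplitude(signal, fs, freq):
    """Amplitude of the `freq` Hz component of `signal` (sampled at
    `fs` Hz), via correlation with a cosine/sine pair: one DFT bin."""
    n = len(signal)
    c = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    s = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return 2.0 * math.hypot(c, s) / n

# Synthetic 1 s "response": a 40 Hz component of amplitude 1.0
# plus a 7 Hz distractor of amplitude 0.5
fs = 1000
sig = [math.sin(2 * math.pi * 40 * i / fs) + 0.5 * math.sin(2 * math.pi * 7 * i / fs)
       for i in range(fs)]
print(round(tone_amplitude(sig, fs, 40), 3))
```

Because the record contains an integer number of 40 Hz cycles, the estimate recovers the 40 Hz amplitude exactly while rejecting the 7 Hz component.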

  8. EEG Responses to Auditory Stimuli for Automatic Affect Recognition

    Science.gov (United States)

    Hettich, Dirk T.; Bolinger, Elaina; Matuz, Tamara; Birbaumer, Niels; Rosenstiel, Wolfgang; Spüler, Martin

    2016-01-01

    Brain state classification for communication and control has been well established in the area of brain-computer interfaces over the last decades. Recently, the passive and automatic extraction of additional information regarding the psychological state of users from neurophysiological signals has gained increased attention in the interdisciplinary field of affective computing. We investigated how well specific emotional reactions, induced by auditory stimuli, can be detected in EEG recordings. We introduce an auditory emotion induction paradigm based on the International Affective Digitized Sounds 2nd Edition (IADS-2) database that is also suitable for disabled individuals. Stimuli are grouped into three valence categories: unpleasant, neutral, and pleasant. Significant differences in time-domain event-related potentials are found in the electroencephalogram (EEG) between unpleasant and neutral, as well as pleasant and neutral, conditions over midline electrodes. Time-domain data were classified in three binary classification problems using a linear support vector machine (SVM) classifier. We discuss three classification performance measures in the context of affective computing and outline some strategies for conducting and reporting affect classification studies. PMID:27375410
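The binary classification outcomes described above are typically summarized with measures derived from the confusion matrix. The abstract does not name its three measures; the sketch below computes three common choices (accuracy, balanced accuracy, F1) purely as an illustration, with invented labels.

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, balanced accuracy, and F1 for one binary problem
    (labels 0/1, e.g. neutral vs. unpleasant trials)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)                     # overall correctness
    bal = 0.5 * (tp / (tp + fn) + tn / (tn + fp))     # mean of the two recalls
    f1 = 2 * tp / (2 * tp + fp + fn)                  # harmonic mean of P and R
    return acc, bal, f1

# Hypothetical labels for 8 trials of an unpleasant-vs-neutral problem
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]
print(binary_metrics(y_true, y_pred))
```

Balanced accuracy is the measure least distorted when the valence classes have unequal trial counts, which is why affect classification reports often prefer it over plain accuracy.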

  9. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's gyrus (HG), containing the primary auditory cortex, the planum temporale (PT), and the superior temporal sulcus (STS), and about the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS, and an increase of the HG-to-PT connection strength during spectral envelope analysis. The work supports a computational model of auditory object processing based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  10. Auditory Preferences of Young Children with and without Hearing Loss for Meaningful Auditory-Visual Compound Stimuli

    Science.gov (United States)

    Zupan, Barbra; Sussman, Joan E.

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…

  11. Hierarchical auditory processing directed rostrally along the monkey's supratemporal plane.

    Science.gov (United States)

    Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer

    2010-09-29

    Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole. PMID:20881120

  12. Event-related potentials in response to 3-D auditory stimuli.

    Science.gov (United States)

    Fuchigami, Tatsuo; Okubo, Osami; Fujita, Yukihiko; Kohira, Ryutaro; Arakawa, Chikako; Endo, Ayumi; Haruyama, Wakako; Imai, Yuki; Mugishima, Hideo

    2009-09-01

    To evaluate auditory spatial cognitive function, age correlations for event-related potentials (ERPs) in response to auditory stimuli with a Doppler effect were studied in normal children. A sound with a Doppler effect is perceived as a moving audio image. A total of 99 normal subjects (age range, 4-21 years) were tested. In the task-relevant oddball paradigm, P300 and key-press reaction time were elicited using auditory stimuli (1000 Hz fixed and enlarged tones with a Doppler effect). From the age of 4 years, the P300 latency for the enlarged tone with a Doppler effect shortened more rapidly with age than did the P300 latency for tone-pips, and the latencies for the different conditions became similar towards the late teens. The P300 of auditory stimuli with a Doppler effect may be used to evaluate auditory spatial cognitive function in children.

  13. Long-latency auditory evoked potentials with verbal and nonverbal stimuli,

    OpenAIRE

    Sheila Jacques Oppitz; Dayane Domeneghini Didoné; Débora Durigon da Silva; Marjana Gois; Jordana Folgearini; Geise Corrêa Ferreira; Michele Vargas Garcia

    2015-01-01

    ABSTRACT INTRODUCTION: Long-latency auditory evoked potentials represent the cortical activity related to attention, memory, and auditory discrimination skills. Acoustic signal processing occurs differently between verbal and nonverbal stimuli, influencing the latency and amplitude patterns. OBJECTIVE: To describe the latencies of the cortical potentials P1, N1, P2, N2, and P3, as well as P3 amplitude, with different speech stimuli and tone bursts, and to classify them in the presence and...

  14. Modeling auditory evoked brainstem responses to transient stimuli

    DEFF Research Database (Denmark)

    Rønne, Filip Munch; Dau, Torsten; Harte, James

    2012-01-01

    A quantitative model is presented that describes the formation of auditory brainstem responses (ABR) to tone pulses, clicks and rising chirps as a function of stimulation level. The model computes the convolution of the instantaneous discharge rates using the “humanized” nonlinear auditory-nerve ...

  15. Cerebral processing of auditory stimuli in patients with irritable bowel syndrome

    Institute of Scientific and Technical Information of China (English)

    Viola Andresen; Peter Kobelt; Claus Zimmer; Bertram Wiedenmann; Burghard F Klapp; Hubert Monnikes; Alexander Poellinger; Chedwa Tsrouya; Dominik Bach; Albrecht Stroh; Annette Foerschler; Petra Georgiewa; Marco Schmidtmann; Ivo R van der Voort

    2006-01-01

    AIM: To determine by brain functional magnetic resonance imaging (fMRI) whether cerebral processing of non-visceral stimuli is altered in irritable bowel syndrome (IBS) patients compared with healthy subjects. To circumvent spinal viscerosomatic convergence mechanisms, we used auditory stimulation, and to identify a possible influence of psychological factors, the stimuli differed in their emotional quality. METHODS: In 8 IBS patients and 8 controls, fMRI measurements were performed using a block design of 4 auditory stimuli of different emotional quality (pleasant sounds of chimes, an unpleasant peep (2000 Hz), neutral words, and emotional words). A gradient echo T2*-weighted sequence was used for the functional scans. Statistical maps were constructed using the general linear model. RESULTS: To emotional auditory stimuli, IBS patients relative to controls responded with stronger deactivations in a greater variety of emotional processing regions, while their response patterns, unlike those of controls, did not differentiate between distressing and pleasant sounds. To neutral auditory stimuli, by contrast, only IBS patients responded with large significant activations. CONCLUSION: Altered cerebral response patterns to auditory stimuli in emotional stimulus-processing regions suggest that altered sensory processing in IBS may not be specific to visceral sensation, but might reflect generalized changes in emotional sensitivity and affective reactivity, possibly associated with the psychological comorbidity often found in IBS patients.

  16. Effects of passive tactile and auditory stimuli on left visual neglect.

    Science.gov (United States)

    Hommel, M; Peres, B; Pollak, P; Memin, B; Besson, G; Gaio, J M; Perret, J

    1990-05-01

    Patients with left-sided visual neglect fail to copy the left part of drawings or the drawings on the left side of a sheet of paper. Our aim was to study the variations in copying drawings induced by passive stimulation in patients with left-sided visual neglect. No stimulation at all, tactile unilateral and bilateral, binaural auditory verbal, and nonverbal stimuli were randomly applied to 14 patients with right-hemisphere strokes. Only nonverbal stimuli decreased the neglect. As nonverbal stimuli mainly activate the right hemisphere, the decrease in neglect suggests right-hemispheric hypoactivity at rest in these patients. The absence of modification of neglect during verbal stimulation suggests a bilateral hemispheric activation and the persistence of interhemispheric imbalance. Our results showed that auditory pathways take part in the network involved with neglect. Passive nonverbal auditory stimuli may be of interest in the rehabilitation of patients with left visual neglect. PMID:2334306

  17. Auditory stimuli mimicking ambient sounds drive temporal "delta-brushes" in premature infants.

    Directory of Open Access Journals (Sweden)

    Mathilde Chipaux

    In the premature infant, somatosensory and visual stimuli trigger an immature electroencephalographic (EEG) pattern, "delta-brushes," in the corresponding sensory cortical areas. Whether auditory stimuli evoke delta-brushes in the premature auditory cortex has not been reported. Here, responses to auditory stimuli were studied in 46 premature infants without neurologic risk, aged 31 to 38 postmenstrual weeks (PMW), during routine EEG recording. Stimuli consisted of either low-volume technogenic "clicks" near the background noise level of the neonatal care unit, or a human voice at conversational sound level. Stimuli were administered pseudo-randomly during quiet and active sleep. In another protocol, the cortical response to a composite stimulus ("click" and voice) was manually triggered during EEG hypoactive periods of quiet sleep. Cortical responses were analyzed by event detection, power frequency analysis and stimulus-locked averaging. Before 34 PMW, both voice and "click" stimuli evoked cortical responses with similar frequency-power topographic characteristics, namely a temporal negative slow wave and rapid oscillations similar to spontaneous delta-brushes. Responses to composite stimuli also showed a maximal frequency-power increase in temporal areas before 35 PMW. From 34 PMW, the topography of responses in quiet sleep differed for "click" and voice stimuli: responses to "clicks" became diffuse, but responses to voice remained limited to temporal areas. After the age of 35 PMW, auditory evoked delta-brushes progressively disappeared and were replaced by a low-amplitude response in the same location. Our data show that auditory stimuli mimicking ambient sounds efficiently evoke delta-brushes in temporal areas in the premature infant before 35 PMW. Along with findings in other sensory modalities (visual and somatosensory), these findings suggest that sensory-driven delta-brushes represent a ubiquitous feature of the human sensory cortex.

  18. Natural stimuli improve auditory BCIs with respect to ergonomics and performance

    Science.gov (United States)

    Höhne, Johannes; Krenzlin, Konrad; Dähne, Sven; Tangermann, Michael

    2012-08-01

    Moving from well-controlled, brisk artificial stimuli to natural and less-controlled stimuli seems counter-intuitive for event-related potential (ERP) studies. As natural stimuli typically contain a richer internal structure, they might introduce higher levels of variance and jitter in the ERP responses. Both characteristics are unfavorable for good single-trial classification of ERPs in the context of a multi-class brain-computer interface (BCI) system, where the class-discriminant information between target stimuli and non-target stimuli must be maximized. For the application in an auditory BCI system, however, the transition from simple artificial tones to natural syllables can be useful despite the variance introduced. In the presented study, healthy users (N = 9) participated in an offline auditory nine-class BCI experiment with artificial and natural stimuli. It is shown that the use of syllables as natural stimuli not only improves the users' ergonomic ratings; it also increases classification performance. Moreover, natural stimuli obtain a better balance in multi-class decisions, such that the number of systematic confusions between the nine classes is reduced. We hope that our findings may contribute to making auditory BCI paradigms more user-friendly and applicable for patients.

  19. Visual and auditory stimuli associated with swallowing activate mirror neurons: a magnetoencephalography study.

    Science.gov (United States)

    Ushioda, Takashi; Watanabe, Yutaka; Sanjo, Yusuke; Yamane, Gen-Yuki; Abe, Shinichi; Tsuji, Yusuke; Ishiyama, Atushi

    2012-12-01

    In the present study, we evaluated activated areas of the cerebral cortex with regard to the mirror neuron system during swallowing. To identify the activated areas, we used magnetoencephalography. The subjects were ten consenting volunteers. Swallowing-related stimuli comprised an animated image of the left profile of a person swallowing water with laryngeal elevation, as a visual swallowing trigger stimulus, and a swallowing sound, as an auditory swallowing trigger stimulus. As control stimuli, a still-frame image of the left profile without an additional trigger was shown, and an artificial sound was provided as a false auditory trigger. Triggers were presented at 3,000 ms after the start of image presentation. The stimuli were presented in combination, and the activated areas were identified for each stimulus. With animation and still-frame stimuli, the visual association area (Brodmann area (BA) 18) was activated at the start of image presentation, while with the swallowing sound and artificial sound stimuli, the auditory areas BA 41 and BA 42 were activated at the time of trigger presentation. However, with animation stimuli (animation stimulus, animation + swallowing sound stimuli, and animation + artificial sound stimuli), activation in BA 6 and BA 40, corresponding to mirror neurons, was observed between 620 and 720 ms before the trigger. In addition, there were significant differences in latency and peak intensity between the animation stimulus and the animation + swallowing sound stimuli. Our results suggest that mirror neurons are activated by swallowing-related visual and auditory stimuli.

  20. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    Science.gov (United States)

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli combining the visual stimulus with an auditory stimulus from one of the four locations were presented simultaneously. These stimuli were presented randomly with equal probability; participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus or the visual target of an audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.

  21. Source analysis of bimodal event-related potentials with auditory-visual stimuli

    OpenAIRE

    Cui, H; Xie, X.; Yan, H; Feng, L; Xu, S; Hu, Y.

    2013-01-01

    Dipole source analysis is applied to model brain generators of surface-recorded evoked potentials, epileptiform activity, and event-related potentials (ERPs). The aim of this study was to explore the brain activity associated with interactions in bimodal sensory cognition. Seven healthy volunteers were recruited, and ERPs to these stimuli were recorded with a 64-electrode EEG recording system. Subjects were exposed to either the auditory and the visual stimulus alone or the combined auditory-visual...

  22. Auditory stimulus timing influences perceived duration of co-occurring visual stimuli

    Directory of Open Access Journals (Sweden)

    Vincenzo Romei

    2011-09-01

    There is increasing interest in multisensory influences upon sensory-specific judgements, such as when auditory stimuli affect visual perception. Here we studied whether the duration of an auditory event can objectively affect the perceived duration of a co-occurring visual event. On each trial, participants were presented with a pair of successive flashes and had to judge whether the first or second was longer. Two beeps were presented with the flashes. The order of short and long stimuli could be the same across audition and vision (audiovisual congruent) or reversed, so that the longer flash was accompanied by the shorter beep and vice versa (audiovisual incongruent); or the two beeps could have the same duration as each other. Beeps and flashes could onset synchronously or asynchronously. In a further control experiment, the beep durations were much longer (tripled) than the flashes. Results showed that visual duration-discrimination sensitivity (d') was significantly higher for congruent (and significantly lower for incongruent) audiovisual synchronous combinations, relative to the visual-only presentation. This effect was abolished when auditory and visual stimuli were presented asynchronously, or when sound durations tripled those of the flashes. We conclude that the temporal properties of co-occurring auditory stimuli influence the perceived duration of visual stimuli and that this can reflect genuine changes in visual sensitivity rather than mere response bias.
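The sensitivity measure d' used above comes from signal detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate. A stdlib sketch with a log-linear correction for extreme rates; the trial counts are invented for illustration, not the study's data.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (add 0.5 to each cell) avoids infinite
    z-scores when an observed rate is exactly 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: the congruent condition yields higher d'
print(round(d_prime(45, 5, 10, 40), 2))   # congruent
print(round(d_prime(30, 20, 20, 30), 2))  # incongruent
```

Because d' separates sensitivity from criterion, a congruent-vs-incongruent difference in d' (as reported above) reflects a genuine perceptual change rather than a shift in response bias.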

  23. Influence of auditory and audiovisual stimuli on the right-left prevalence effect

    DEFF Research Database (Denmark)

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate...... vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did...... not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension...

  24. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    Directory of Open Access Journals (Sweden)

    Mikkel Wallentin

    2016-01-01

    Klinefelter syndrome (47,XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and a low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response of KS relative to a group of Controls during basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network, with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that the auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect, as it is not seen in the visual system.

  5. Long-latency auditory evoked potentials with verbal and nonverbal stimuli

    Directory of Open Access Journals (Sweden)

    Sheila Jacques Oppitz

    2015-12-01

    Full Text Available ABSTRACT INTRODUCTION: Long-latency auditory evoked potentials represent the cortical activity related to attention, memory, and auditory discrimination skills. Acoustic signal processing occurs differently for verbal and nonverbal stimuli, influencing latency and amplitude patterns. OBJECTIVE: To describe the latencies of the cortical potentials P1, N1, P2, N2, and P3, as well as the P3 amplitude, with different speech stimuli and tone bursts, and to classify the responses by the presence or absence of these components. METHODS: A total of 30 subjects with normal hearing were assessed, aged 18-32 years, matched by gender. Nonverbal stimuli (tone bursts; 1000 Hz - frequent and 4000 Hz - rare) and verbal stimuli (/ba/ - frequent; /ga/, /da/, and /di/ - rare) were used. RESULTS: For the N2 component, across the tone burst and BA/DI stimuli, the lowest latency found was 217.45 ms and the highest was 256.5 ms. For the P3 component, across the tone burst and BA/GA stimuli, the shortest latency was 298.7 ms and the longest was 340 ms. For the P3 amplitude, there was no statistically significant difference among the different stimuli. For the latencies of components P1, N1, P2, N2, and P3, there were no statistical differences among them, regardless of the stimuli used. CONCLUSION: There was a difference in the latency of potentials N2 and P3 among the stimuli employed, but no difference was observed for the P3 amplitude.

  6. Data Collection and Analysis Techniques for Evaluating the Perceptual Qualities of Auditory Stimuli

    Energy Technology Data Exchange (ETDEWEB)

    Bonebright, T.L.; Caudell, T.P.; Goldsmith, T.E.; Miner, N.E.

    1998-11-17

    This paper describes a general methodological framework for evaluating the perceptual properties of auditory stimuli. The framework provides analysis techniques that can ensure the effective use of sound for a variety of applications, including virtual reality and data sonification systems. Specifically, we discuss data collection techniques for the perceptual qualities of single auditory stimuli, including identification tasks, context-based ratings, and attribute ratings. In addition, we present methods for comparing auditory stimuli, such as discrimination tasks, similarity ratings, and sorting tasks. Finally, we discuss statistical techniques that focus on the perceptual relations among stimuli, such as Multidimensional Scaling (MDS) and Pathfinder Analysis. These methods are presented as a starting point for an organized and systematic approach for non-experts in perceptual experimental methods, rather than as a complete manual for performing the statistical techniques and data collection methods. It is our hope that this paper will help foster further interdisciplinary collaboration among perceptual researchers, designers, engineers, and others in the development of effective auditory displays.
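    Of the statistical techniques this record mentions, MDS is easy to sketch. The snippet below is a minimal classical (Torgerson) MDS implementation applied to an invented dissimilarity matrix for three hypothetical sounds; it is an illustration of the general technique, not code from the paper.

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed n items in k dimensions from a
    symmetric n-by-n matrix of pairwise dissimilarities d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    b = -0.5 * j @ (d ** 2) @ j           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]      # keep the top-k dimensions
    l = np.maximum(vals[idx], 0.0)        # clip tiny negative eigenvalues
    return vecs[:, idx] * np.sqrt(l)

# Toy ratings: sounds A and B are judged similar, C distant from both.
d = np.array([[0.0, 1.0, 5.0],
              [1.0, 0.0, 5.0],
              [5.0, 5.0, 0.0]])
coords = classical_mds(d, k=2)
# Distances in the 2-D embedding reproduce the input dissimilarities,
# so the perceptual relations among stimuli can be read off a map.
```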

  7. Sensory Symptoms and Processing of Nonverbal Auditory and Visual Stimuli in Children with Autism Spectrum Disorder

    Science.gov (United States)

    Stewart, Claire R.; Sanchez, Sandra S.; Grenesko, Emily L.; Brown, Christine M.; Chen, Colleen P.; Keehn, Brandon; Velasquez, Francisco; Lincoln, Alan J.; Müller, Ralph-Axel

    2016-01-01

    Atypical sensory responses are common in autism spectrum disorder (ASD). While evidence suggests impaired auditory-visual integration for verbal information, findings for nonverbal stimuli are inconsistent. We tested for sensory symptoms in children with ASD (using the Adolescent/Adult Sensory Profile) and examined unisensory and bisensory…

  8. High gamma activity in response to deviant auditory stimuli recorded directly from human cortex.

    Science.gov (United States)

    Edwards, Erik; Soltani, Maryam; Deouell, Leon Y; Berger, Mitchel S; Knight, Robert T

    2005-12-01

    We recorded electrophysiological responses from the left frontal and temporal cortex of awake neurosurgical patients to both repetitive background and rare deviant auditory stimuli. Prominent sensory event-related potentials (ERPs) were recorded from auditory association cortex of the temporal lobe and adjacent regions surrounding the posterior Sylvian fissure. Deviant stimuli generated an additional longer-latency mismatch response, maximal at more anterior temporal lobe sites. We found low gamma (30-60 Hz) activity in auditory association cortex, and we also show the existence of high-frequency oscillations above the traditional gamma range (high gamma, 60-250 Hz). Sensory and mismatch potentials were not reliably observed at frontal recording sites. We suggest that the high gamma oscillations are sensory-induced neocortical ripples, similar in physiological origin to the well-studied ripples of the hippocampus. PMID:16093343

  9. Early influence of auditory stimuli on upper-limb movements in young human infants: an overview

    Directory of Open Access Journals (Sweden)

    Priscilla Augusta Monteiro Ferronato

    2014-09-01

    Full Text Available Given that the auditory system is rather well developed at the end of the third trimester of pregnancy, it is likely that couplings between acoustics and motor activity can be integrated as early as the beginning of postnatal life. The aim of the present mini-review was to summarize and discuss studies on early auditory-motor integration, focusing particularly on upper-limb movements (one of the most crucial means to interact with the environment) in association with auditory stimuli, to develop further understanding of their significance with regard to early infant development. Many studies have investigated the relationship between various infant behaviors (e.g., sucking, visual fixation, head turning) and auditory stimuli, and established that human infants can be observed displaying couplings between action and environmental sensory stimulation from just after birth, clearly indicating a propensity for intentional behavior. Surprisingly few studies, however, have investigated the associations between upper-limb movements and different auditory stimuli in newborns and young infants, infants born at risk for developmental disorders/delays in particular. Findings from studies of early auditory-motor interaction support the view that the developing integration of sensory and motor systems is a fundamental part of the process guiding the development of goal-directed action in infancy, of great importance for continued motor, perceptual and cognitive development. At-risk infants (e.g., those born preterm) may display increasing central auditory processing disorders, negatively affecting early sensory-motor integration and resulting in long-term consequences for gesturing, language development and social communication. Consequently, there is a need for more studies on such implications.

  10. SPET monitoring of perfusion changes in auditory cortex following mono- and multi-frequency stimuli

    Energy Technology Data Exchange (ETDEWEB)

    De Rossi, G. [Nuclear Medicine Inst., Policlinico A. Gemelli, Rome (Italy); Paludetti, G. [Otorhinolaryngology Inst., Policlinico A. Gemelli, Rome (Italy); Di Nardo, W. [Otorhinolaryngology Inst., Policlinico A. Gemelli, Rome (Italy); Calcagni, M.L. [Nuclear Medicine Inst., Policlinico A. Gemelli, Rome (Italy); Di Giuda, D. [Nuclear Medicine Inst., Policlinico A. Gemelli, Rome (Italy); Almadori, G. [Otorhinolaryngology Inst., Policlinico A. Gemelli, Rome (Italy); Galli, J. [Otorhinolaryngology Inst., Policlinico A. Gemelli, Rome (Italy)

    1996-08-01

    In order to assess the relationship between auditory cortex perfusion and the frequency of acoustic stimuli, twenty normally-hearing subjects underwent cerebral SPET. In 10 subjects a multi-frequency stimulus (250-4000 Hz at 40 dB SL) was delivered, while 10 subjects were stimulated with a 500 Hz pure tone at 40 dB SL. The prestimulation SPET was subtracted from the poststimulation study, and auditory cortex activation was expressed as percent increments. The contralateral cortex was the most active area with both multifrequency and monofrequency stimuli. A clear demonstration of a tonotopic distribution of acoustic stimuli in the auditory cortex was achieved. In addition, the accessory role played by homolateral acoustic areas was confirmed. The results of the present research support the hypothesis that brain SPET may be useful for obtaining reliable semiquantitative information on low-frequency hearing in profoundly deaf patients. This may be achieved by comparing the extension of the cortical areas activated by high-intensity multifrequency stimuli. (orig.)
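    The subtraction-and-percent-increment step described in this record is simple to sketch. The ROI values below are invented for illustration; they are not data from the study.

```python
import numpy as np

# Hypothetical perfusion values for an auditory-cortex ROI,
# pre- and post-stimulation (arbitrary units, invented numbers).
pre = np.array([100.0, 120.0, 90.0])
post = np.array([112.0, 138.0, 99.0])

# Subtract the prestimulation study from the poststimulation one and
# express auditory cortex activation as percent increments.
percent_increment = (post - pre) / pre * 100.0
# ≈ [12, 15, 10] percent
```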

  11. Category Variability Effect in Category Learning with Auditory Stimuli

    Directory of Open Access Journals (Sweden)

    Lee-Xieng Yang

    2014-10-01

    Full Text Available The category variability effect refers to the tendency to classify the midpoint item between two categories as a member of the more variable category. This effect is regarded as evidence against exemplar models, such as the GCM (Generalized Context Model), and as favoring rule models, such as GRT (i.e., the decision bound model). Although this effect has been found in conceptual category learning, it is not often observed in perceptual category learning. To figure out why the category variability effect is seldom reported in past studies, we propose two hypotheses. First, due to a sequence effect, the midpoint item would be classified into different categories when following different items. When we combine these inconsistent responses for the midpoint item, no category variability effect occurs. Second, instead of the combination of sequence effects in different categorization conditions, the combination of different categorization strategies conceals the category variability effect. One experiment was conducted with single tones of different frequencies as stimuli. The collected data reveal a sequence effect. However, the modeling results with the MAC model and the decision bound model support individual differences as the reason why no category variability effect occurs. Three groups were identified by their categorization strategy. Group 1 consists of rule users, who place the category boundary close to the low-variability category, hence inducing the category variability effect. Group 2 takes the MAC strategy and classifies the midpoint item into different categories depending on its preceding item. Group 3 classifies the midpoint item into the low-variability category, which is consistent with the prediction of the decision bound model as well as the GCM. Nonetheless, our conclusion is that the category variability effect can be found in perceptual category learning, but it might be concealed in averaged data.

  12. Age-related change in shifting attention between global and local levels of hierarchical stimuli

    NARCIS (Netherlands)

    M. Huizinga; J.A. Burack; M.W. van der Molen

    2010-01-01

    The focus of this study was the developmental pattern of the ability to shift attention between global and local levels of hierarchical stimuli. Children aged 7 years and 11 years and 21-year-old adults were administered a task (two experiments) that allowed for the examination of 1) the direction o

  13. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    Directory of Open Access Journals (Sweden)

    Jiagui Qu

    Full Text Available Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.

  14. Bio-inspired fabrication of stimuli-responsive photonic crystals with hierarchical structures and their applications

    Science.gov (United States)

    Lu, Tao; Peng, Wenhong; Zhu, Shenmin; Zhang, Di

    2016-03-01

    When the constitutive materials of photonic crystals (PCs) are stimuli-responsive, the resultant PCs exhibit optical properties that can be tuned by the stimuli. This can be exploited for promising applications in colour displays, biological and chemical sensors, inks and paints, and many optically active components. However, the preparation of the required photonic structures is the first issue to be solved. In the past two decades, approaches such as microfabrication and self-assembly have been developed to incorporate stimuli-responsive materials into existing periodic structures for the fabrication of PCs, either as the initial building blocks or as the surrounding matrix. Generally, the materials that respond to thermal, pH, chemical, optical, electrical, or magnetic stimuli are either soft or aggregate, which is why the manufacture of three-dimensional hierarchical photonic structures with responsive properties is a great challenge. Recently, inspired by biological PCs in nature which exhibit both flexible and responsive properties, researchers have developed various methods to synthesize metals and metal oxides with hierarchical structures by using a biological PC as the template. This review will focus on the recent developments in this field. In particular, PCs with biological hierarchical structures that can be tuned by external stimuli have recently been successfully fabricated. These findings offer innovative insights into the design of responsive PCs and should be of great importance for future applications of these materials.

  15. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    Science.gov (United States)

    Hill, N. J.; Schölkopf, B.

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.
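    The classification principle this record describes, contrasting ERPs evoked by stimuli in the attended stream against those in the unattended stream, using every stimulus rather than only rare oddballs, can be caricatured in a few lines. Everything below (templates, noise levels, the dot-product scoring) is an invented single-channel toy, far simpler than the authors' actual online pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-channel ERP templates: attended stimuli evoke a
# larger N1/P3-like deflection than unattended ones (invented shapes).
t = np.linspace(0.0, 0.5, 64)
attended_tpl = -np.exp(-((t - 0.1) / 0.03) ** 2) + 2.0 * np.exp(-((t - 0.3) / 0.05) ** 2)
unattended_tpl = 0.3 * attended_tpl

def classify_trial(left_epochs, right_epochs):
    """Decide which stream was attended by contrasting the mean ERP of
    each stream against the attended template (every stimulus is used)."""
    score_left = np.dot(left_epochs.mean(axis=0), attended_tpl)
    score_right = np.dot(right_epochs.mean(axis=0), attended_tpl)
    return "left" if score_left > score_right else "right"

# Simulate a trial in which the subject attends left: left-stream epochs
# follow the attended template, right-stream epochs the unattended one.
left = attended_tpl + rng.normal(0.0, 0.2, (10, 64))
right = unattended_tpl + rng.normal(0.0, 0.2, (10, 64))
print(classify_trial(left, right))  # → left
```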

  16. Hierarchical effects of task engagement on amplitude modulation encoding in auditory cortex.

    Science.gov (United States)

    Niwa, Mamiko; O'Connor, Kevin N; Engall, Elizabeth; Johnson, Jeffrey S; Sutter, M L

    2015-01-01

    We recorded from middle lateral belt (ML) and primary (A1) auditory cortical neurons while animals discriminated amplitude-modulated (AM) sounds and also while they sat passively. Engagement in AM discrimination improved ML and A1 neurons' ability to discriminate AM with both firing rate and phase-locking; however, task engagement affected neural AM discrimination differently in the two fields. The results suggest that these two areas utilize different AM coding schemes: a "single mode" in A1 that relies on increased activity for AM relative to unmodulated sounds and a "dual-polar mode" in ML that uses both increases and decreases in neural activity to encode modulation. In the dual-polar ML code, nonsynchronized responses might play a special role. The results are consistent with findings in the primary and secondary somatosensory cortices during discrimination of vibrotactile modulation frequency, implicating a common scheme in the hierarchical processing of temporal information among different modalities. The time course of activity differences between behaving and passive conditions was also distinct in A1 and ML and may have implications for auditory attention. At modulation depths ≥ 16% (approximately behavioral threshold), A1 neurons' improvement in distinguishing AM from unmodulated noise is relatively constant or improves slightly with increasing modulation depth. In ML, improvement during engagement is most pronounced near threshold and disappears at highly suprathreshold depths. This ML effect is evident later in the stimulus, and mainly in nonsynchronized responses. This suggests that attention-related increases in activity are stronger or longer-lasting for more difficult stimuli in ML.

  17. Effect of contingent auditory stimuli on concurrent schedule performance: an alternative punisher to electric shock.

    Science.gov (United States)

    Reed, Phil; Yoshino, Toshihiko

    2008-07-01

    This study explored whether loud auditory stimuli could be used as functional punishing stimuli in place of electric shock. Three experiments examined the effect of a loud auditory stimulus on rats' responding maintained by a concurrent reinforcement schedule. In Experiment 1, overall response rate decreased when a concurrent 1.5-s tone presentation schedule was superimposed on the concurrent variable interval (VI) 180-s, VI 180-s reinforcement schedule. In contrast, response rate increased when a click presentation schedule was added. In Experiment 2, the extent of the response suppression with a 1.5-s tone presentation varied as a function of the frequency of the reinforcement schedule maintaining responses; the leaner the schedule employed, the greater the response suppression. In Experiment 3, response suppression was observed to be inversely related to the duration of the tone; response facilitation was observed when a 3.0-s tone was used. In Experiments 1 and 2, a preference shift towards the alternative with richer reinforcement was observed when the tone schedule was added. In contrast, the preference shifted towards the leaner alternative when the click or longer-duration stimulus was used. These results imply that both the type and duration of a loud auditory stimulus, as well as the reinforcement schedule maintaining responses, have a critical role in determining the effect of the stimuli on responding. They also suggest that a loud auditory stimulus can be used as a positive punisher in a choice situation for rats when the duration of the tone is brief and the reinforcement schedule maintaining responses is lean. PMID:18406078

  18. Hierarchical computation in the canonical auditory cortical circuit

    OpenAIRE

    Atencio, Craig A.; Sharpee, Tatyana O.; Schreiner, Christoph E.

    2009-01-01

    Sensory cortical anatomy has identified a canonical microcircuit underlying computations between and within layers. This feed-forward circuit processes information serially from granular to supragranular and to infragranular layers. How this substrate correlates with an auditory cortical processing hierarchy is unclear. We recorded simultaneously from all layers in cat primary auditory cortex (AI) and estimated spectrotemporal receptive fields (STRFs) and associated nonlinearities. Spike-trig...

  19. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Xingwei An

    Full Text Available For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and enhance BCI performance, we designed a visual-auditory speller. We investigated the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms, without sacrificing spelling performance. In addition, shorter latencies, lower amplitudes, and a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller; these differences are important and should inspire future studies to investigate their causes. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent from each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level < 3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel. This brings new insight into truly multisensory stimulus paradigms. The novel approaches for combining two sensory modalities designed here are valuable for the development of ERP-based BCI paradigms.

  20. A novel method of brainstem auditory evoked potentials using complex verbal stimuli

    Directory of Open Access Journals (Sweden)

    Sophia N Kouni

    2014-01-01

    Full Text Available Background: The click- and tone-evoked auditory brainstem responses are widely used in clinical practice due to their consistency and predictability. More recently, speech-evoked responses have been used to evaluate subcortical processing of complex signals not revealed by responses to clicks and tones. Aims: Disyllabic stimuli corresponding to familiar words can induce a pattern of voltage fluctuations in the brainstem resulting in a familiar waveform, and they can yield better information about brainstem nuclei along the ascending central auditory pathway. Materials and Methods: We describe a new method using the disyllabic word "baba", corresponding to English "daddy", which is commonly used in many ethnic languages spanning from West Africa through the Eastern Mediterranean to East Asia. Results: The method was applied to 20 young adults institutionally diagnosed as dyslexic (10 subjects) or mildly dyslexic (10 subjects), who were matched with 20 normal subjects for sex, age, education, hearing sensitivity, and IQ. The absolute peak latencies of the negative wave C and the interpeak latencies of A-C elicited by the verbal stimulus "baba" were found to be significantly increased in the dyslexic group in comparison with the control group. Conclusions: The method is easy and helpful for diagnosing abnormalities affecting the auditory pathway, identifying subjects with early perception and cortical representation abnormalities, and applying suitable therapeutic and rehabilitation management.

  1. Learning of arbitrary association between visual and auditory novel stimuli in adults: the "bond effect" of haptic exploration.

    Directory of Open Access Journals (Sweden)

    Benjamin Fredembach

    Full Text Available BACKGROUND: It is well known that human beings are able to associate stimuli (novel or not) perceived in their environment. For example, this ability is used by children in reading acquisition, when arbitrary associations between visual and auditory stimuli must be learned. Studies tend to consider it an "implicit" process triggered by the learning of letter/sound correspondences. The study described in this paper examined whether the addition of visuo-haptic exploration would help adults learn the arbitrary associations between visual and auditory novel stimuli more effectively. METHODOLOGY/PRINCIPAL FINDINGS: Adults were asked to learn 15 new arbitrary associations between visual stimuli and their corresponding sounds using two learning methods, which differed according to the perceptual modalities involved in the exploration of the visual stimuli. Adults used their visual modality in the "classic" learning method and both their visual and haptic modalities in the "multisensory" one. After both learning methods, participants showed a similar above-chance ability to recognize the visual and auditory stimuli and the audio-visual associations. However, the ability to recognize the visual-auditory associations was better after the multisensory method than after the classic one. CONCLUSION/SIGNIFICANCE: This study revealed that adults learn the arbitrary associations between visual and auditory novel stimuli more efficiently when the visual stimuli are explored with both vision and touch. The results are discussed in terms of how they relate to the functional differences of the manual haptic modality and the hypothesis of a "haptic bond" between visual and auditory stimuli.

  2. Responses of mink to auditory stimuli: Prerequisites for applying the ‘cognitive bias’ approach

    DEFF Research Database (Denmark)

    Svendsen, Pernille Maj; Malmkvist, Jens; Halekoh, Ulrich;

    2012-01-01

    The aim of the study was to determine and validate prerequisites for applying a cognitive (judgement) bias approach to assessing welfare in farmed mink (Neovison vison). We investigated discrimination ability and associative learning ability using auditory cues. The mink (n = 15 females) were...... mink only showed habituation in experiment 2. Regardless of the frequency used (2 and 18 kHz), cues predicting the danger situation initially elicited slower responses compared to those predicting the safe situation but quickly became faster. Using auditory cues as discrimination stimuli for female...... farmed mink in a judgement bias approach would thus appear to be feasible. However, several specific issues need to be considered in order to successfully adapt a cognitive bias approach to mink, and these are discussed....

  3. Assessing Nonverbal Same/Different Judgments of Auditory Stimuli in Individuals with Intellectual Disabilities: A Methodological Investigation.

    Science.gov (United States)

    Serna, Richard W; Preston, Mark A; Thompson, G Brooks

    2009-01-01

    This methodological paper reports an initial attempt to evaluate the feasibility and utility of a nonverbal task for assessing generalized same/different judgments of auditory stimuli in individuals who have intellectual disabilities. Study 1 asked whether participants could readily acquire a baseline of auditory same/different, go-left/go-right performance with minimal prompting. Sample stimuli consisted of pairs of successively presented sine-wave tones. If the tones were identical, participants were reinforced for selections of a visual stimulus on the left side of the computer screen; if the two stimuli were different, selections of the visual stimulus on the right were reinforced. Two of five participants readily acquired the task, generalized performance to other stimuli and completed a rudimentary protocol for examining auditory discriminations that are potentially more difficult than those used to establish the initial task. In Study 2, two participants who could not perform the go-left/go-right task with tone stimuli, but could do so with spoken-word stimuli, successfully transferred control by spoken words to tones via an auditory superimposition-and-fading procedure. The findings support the feasibility of using the task as a general-purpose auditory discrimination assessment.

  4. Auditory detection of non-speech and speech stimuli in noise: Native speech advantage.

    Science.gov (United States)

    Huo, Shuting; Tao, Sha; Wang, Wenjing; Li, Mingshuang; Dong, Qi; Liu, Chang

    2016-05-01

    Detection thresholds of Chinese vowels, Korean vowels, and a complex tone, with harmonic and noise carriers were measured in noise for Mandarin Chinese-native listeners. The harmonic index was calculated as the difference between detection thresholds of the stimuli with harmonic carriers and those with noise carriers. The harmonic index for Chinese vowels was significantly greater than that for Korean vowels and the complex tone. Moreover, native speech sounds were rated significantly more native-like than non-native speech and non-speech sounds. The results indicate that native speech has an advantage over other sounds in simple auditory tasks like sound detection. PMID:27250202

  5. Voluntary movement affects simultaneous perception of auditory and tactile stimuli presented to a non-moving body part

    Science.gov (United States)

    Hao, Qiao; Ora, Hiroki; Ogawa, Ken-ichiro; Ogata, Taiki; Miyake, Yoshihiro

    2016-01-01

    The simultaneous perception of multimodal sensory information has a crucial role for effective reactions to the external environment. Voluntary movements are known to occasionally affect simultaneous perception of auditory and tactile stimuli presented to the moving body part. However, little is known about spatial limits on the effect of voluntary movements on simultaneous perception, especially when tactile stimuli are presented to a non-moving body part. We examined the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli presented to the non-moving body part. We considered the possible mechanism using a temporal order judgement task under three experimental conditions: voluntary movement, where participants voluntarily moved their right index finger and judged the temporal order of auditory and tactile stimuli presented to their non-moving left index finger; passive movement; and no movement. During voluntary movement, the auditory stimulus needed to be presented before the tactile stimulus so that they were perceived as occurring simultaneously. This subjective simultaneity differed significantly from the passive movement and no movement conditions. This finding indicates that the effect of voluntary movement on simultaneous perception of auditory and tactile stimuli extends to the non-moving body part. PMID:27622584

  7. Distributed functions of detection and discrimination of vibrotactile stimuli in the hierarchical human somatosensory system

    Directory of Open Access Journals (Sweden)

    Junsuk eKim

    2015-01-01

    According to the hierarchical view of the human somatosensory network, somatic sensory information is relayed from the thalamus to primary somatosensory cortex (S1) and then distributed to adjacent cortical regions for further perceptual and cognitive processing. Although a number of neuroimaging studies have examined neuronal activity correlated with tactile stimuli, comparatively little attention has been devoted to understanding how vibrotactile stimulus information is processed in the hierarchical somatosensory cortical network. To explore this hierarchical perspective, we studied two cases: (a) discrimination between the locations of finger stimulation, and (b) detection of stimulation against no stimulation on individual fingers, using both standard general linear model (GLM) and searchlight multi-voxel pattern analysis (MVPA) techniques. Both cases were studied on the same data set, from a passive vibrotactile stimulation experiment. Our results showed that vibrotactile stimulus locations on the fingers could be discriminated from human functional magnetic resonance imaging (fMRI) measurements. In particular, in case (a) we observed activity in contralateral posterior parietal cortex (PPC) and supramarginal gyrus (SMG) but not in S1, whereas in case (b) we found significant cortical activations in S1 but not in PPC or SMG. These discrepant observations suggest functional specialization with regard to vibrotactile stimulus locations and, in particular, hierarchical information processing in the human somatosensory cortical areas. Our findings moreover support the general understanding that S1 is the main sensory receptive area for the sense of touch, while adjacent cortical regions (i.e., PPC and SMG) are in charge of higher-level processing and may thus contribute most to the successful classification of stimulated finger locations.
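The searchlight MVPA mentioned above trains a classifier on the local voxel pattern around each voxel. As a toy illustration of the core classification step, here is a minimal nearest-centroid classifier over two-voxel patterns (all patterns and labels below are synthetic, not data from the study):

```python
def nearest_centroid_accuracy(train, test):
    """train/test: lists of (pattern, label). Classify each test pattern
    by squared Euclidean distance to the mean training pattern (centroid)
    of each label, and return the fraction classified correctly."""
    centroids = {}
    for pattern, label in train:
        centroids.setdefault(label, []).append(pattern)
    for label, patterns in centroids.items():
        # Mean across training patterns, voxel by voxel.
        centroids[label] = [sum(vals) / len(vals) for vals in zip(*patterns)]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    correct = 0
    for pattern, label in test:
        guess = min(centroids, key=lambda c: dist(pattern, centroids[c]))
        correct += guess == label
    return correct / len(test)

# Hypothetical two-voxel response patterns for two stimulated fingers.
train = [([1.0, 0.1], "index"), ([0.9, 0.0], "index"),
         ([0.1, 1.0], "middle"), ([0.0, 0.9], "middle")]
test = [([0.8, 0.2], "index"), ([0.2, 0.8], "middle")]
print(nearest_centroid_accuracy(train, test))  # → 1.0
```

In an actual searchlight analysis, a classification like this is repeated for the voxels inside a small sphere centred on each brain voxel, with cross-validation; above-chance accuracy then maps where stimulus location is encoded.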

  8. Effect of complex treatment using visual and auditory stimuli on the symptoms of attention deficit/hyperactivity disorder in children.

    Science.gov (United States)

    Park, Mi-Sook; Byun, Ki-Won; Park, Yong-Kyung; Kim, Mi-Han; Jung, Sung-Hwa; Kim, Hong

    2013-04-01

    We investigated the effects of complex treatment using visual and auditory stimuli on the symptoms of attention deficit/hyperactivity disorder (ADHD) in children. Forty-seven male children (7-13 yr old), who were clinically diagnosed with ADHD at the Balance Brain Center in Seoul, Korea, were included in this study. The complex treatment consisted of visual and auditory stimuli, core muscle exercise, targeting ball exercise, ocular motor exercise, and visual motor integration. All subjects completed the complex treatment for 60 min/day, 2-3 times/week for more than 12 weeks. Data regarding visual and auditory reaction time and cognitive function were obtained using the Neurosync program, Stroop Color-Word Test, and test of nonverbal intelligence (TONI) at pre- and post-treatment. The complex treatment significantly decreased the total reaction time and increased the number of combo actions on visual and auditory stimuli. Stroop color, word, and color-word scores were significantly increased at post-treatment compared with the scores at pretreatment, suggesting that complex treatment using visual and auditory stimuli may be an effective ADHD intervention. PMID:24278878

  9. A case report of an autistic boy. Selective responding to components of bidimensional visual and auditory stimuli.

    Science.gov (United States)

    Edwards, J L; Shigley, R H; Edwards, R P

    1976-06-01

    A case study was reported in which a 9-year-old autistic boy was initially trained to discriminate between two auditory stimuli and two visual stimuli. He was then tested for overselective responding to bidimensional combinations of the four stimuli. It was hypothesized that the overselectivity results reported in previous studies were partially a function of a procedure in which autistic children were reinforced for responding in the presence of a multidimensional stimulus complex and then tested with individual stimuli. With the alternative procedure, the child in the present investigation did not demonstrate overselective responding. Two interpretations of the results were presented, neither of which was consistent with an overselectivity theory. Future research should delineate the specific conditions that produce overselective responding and suggest methods to facilitate more adaptive responding in autistic children.

  10. Children with autism do not show sequence effects with auditory stimuli.

    Science.gov (United States)

    Molesworth, Catherine; Chevallier, Coralie; Happé, Francesca; Hampton, James A

    2015-02-01

    Categorization decisions that reflect constantly changing memory representations might be an important adaptive response to dynamic environments. We assessed one such influence from memory (i.e., sequence effects) on categorization decisions made by individuals with autism. A model of categorization (i.e., the memory and contrast model; Stewart, Brown, & Chater, 2002) assumes that contextual influences in the form of sequence effects drive categorization performance in individuals with typical development. Difficulties with contextual processing in autism, described by the weak central coherence account (Frith, 1989; Frith & Happé, 1994), imply reduced sequence effects for this participant group. The experiment reported in this article tested this implication. High-functioning children and adolescents with autism (ages 10 to 15 years), matched on age and IQ with typically developing children, completed a test that measures sequence effects (i.e., the category contrast effect task; Stewart et al., 2002) using auditory tones. Participants also completed a pitch discrimination task to measure any potential confound arising from possible enhanced discrimination sensitivity within the autism spectrum disorder group. Only the typically developing group demonstrated a category contrast effect. The data suggest that this finding cannot readily be attributed to participant group differences in discrimination sensitivity, perseveration, difficulties on the associated binary categorization task, or greater reliance on long-term memory. We discuss the broad methodological implication that comparisons between autism spectrum disorder and control group responses to sequential perceptual stimuli might be confounded by the influence of preceding trials. We also discuss implications for the weak central coherence account and for models of typical cognition. PMID:25365532

  11. Intact spectral but abnormal temporal processing of auditory stimuli in autism.

    NARCIS (Netherlands)

    Groen, W.B.; Orsouw, L. van; Huurne, N.; Swinkels, S.H.N.; Gaag, R.J. van der; Buitelaar, J.K.; Zwiers, M.P.

    2009-01-01

    The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. 23 children with h

  12. Sex differences in the representation of call stimuli in a songbird secondary auditory area.

    Science.gov (United States)

    Giret, Nicolas; Menardy, Fabien; Del Negro, Catherine

    2015-01-01

    Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas that are specialized for processing natural sounds and considered to play a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males, and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, and familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on analyses of both spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differs between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmit information about the self-generated calls in males and to storage of information about the

  13. Sex differences in the representation of call stimuli in a songbird secondary auditory area

    Directory of Open Access Journals (Sweden)

    Nicolas eGiret

    2015-10-01

    Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas that are specialized for processing natural sounds and considered to play a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males, and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, and familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on analyses of both spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differs between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmit information about the self-generated calls in males and to storage of

  14. The sensory channel of presentation alters subjective ratings and autonomic responses towards disgusting stimuli -Blood pressure, heart rate and skin conductance in response to visual, auditory, haptic and olfactory presented disgusting stimuli-

    OpenAIRE

    Ilona eCroy; Kerstin eLaqua; Frank eSuess; Peter eJoraschky; Tjalf eZiemssen; Thomas eHummel

    2013-01-01

    Disgust causes specific reaction patterns, observable in mimic responses and body reactions. Most research on disgust deals with visual stimuli. However, pictures may cause a different disgust experience than sounds, odors or tactile stimuli. Therefore, disgust experience evoked by four different sensory channels was compared. A total of 119 participants received 3 different disgusting and one control stimulus, each presented through the visual, auditory, tactile and olfactory channel. Ratings of e...

  15. The sensory channel of presentation alters subjective ratings and autonomic responses towards disgusting stimuli -Blood pressure, heart rate and skin conductance in response to visual, auditory, haptic and olfactory presented disgusting stimuli-

    Directory of Open Access Journals (Sweden)

    Ilona eCroy

    2013-09-01

    Disgust causes specific reaction patterns, observable in mimic responses and body reactions. Most research on disgust deals with visual stimuli. However, pictures may cause a different disgust experience than sounds, odors or tactile stimuli. Therefore, disgust experience evoked by four different sensory channels was compared. A total of 119 participants received 3 different disgusting and one control stimulus, each presented through the visual, auditory, tactile and olfactory channel. Ratings of evoked disgust as well as responses of the autonomic nervous system (heart rate, skin conductance level, systolic blood pressure) were recorded, and the effects of stimulus labeling and of repeated presentation were analyzed. Ratings suggested that disgust could be evoked through all senses; they were highest for visual stimuli. However, autonomic reactions towards disgusting stimuli differed according to the channel of presentation. In contrast to the others, olfactory disgust stimuli provoked a strong decrease in systolic blood pressure. Additionally, labeling enhanced disgust ratings and autonomic reactions for olfactory and tactile, but not for visual and auditory stimuli. Repeated presentation indicated that participants' disgust ratings diminish for all but olfactory disgust stimuli. Taken together, we argue that the sensory channel through which a disgust reaction is evoked matters.

  16. Musical Brains. A study of evoked musical sensations without external auditory stimuli. Preliminary report of three cases

    International Nuclear Information System (INIS)

    Background: There are individuals, usually musicians, who are seemingly able to evoke musical sensations without external auditory stimuli. However, to date there is no available evidence to determine whether it is feasible to have musical sensations without using external sensory receptors, or whether there is a biological substrate to these sensations. Study design: Two single photon emission computerized tomography (SPECT) evaluations with [99mTc]-HMPAO were conducted in each of three female musicians: one under basal conditions (without evoking) and the other while evoking these sensations. Results: In the NeuroSPECT studies of the musicians who were tested while evoking a musical composition, there was a significant increase in perfusion above the normal mean in the right and left hemispheres in Brodmann's areas 9 and 8 (frontal executive area) and in areas 39 and 40 on the left side (auditory center). However, under basal conditions there was no hyperperfusion of areas 9, 8, 39 and 40. In one case, hyperperfusion was found under basal conditions in area 45; however, it was less than when she was evoking. Conclusions: These findings are suggestive of a biological substrate to the process of evoking musical sensations.

  17. Effect of heroin-conditioned auditory stimuli on cerebral functional activity in rats

    Energy Technology Data Exchange (ETDEWEB)

    Trusk, T.C.; Stein, E.A.

    1988-08-01

    Cerebral functional activity was measured as changes in the distribution of the free fatty acid [1-14C]octanoate in autoradiograms obtained from rats during brief presentation of a tone previously paired with infusions of heroin or saline. Rats were trained in groups of three, consisting of one heroin self-administering animal and two animals receiving yoked infusions of heroin or saline. Behavioral experiments in separate groups of rats demonstrated that these training parameters impart secondary reinforcing properties to the tone for animals self-administering heroin, while the tone remains behaviorally neutral in yoked-infusion animals. The optical densities of thirty-seven brain regions were normalized to a relative index for comparisons between groups. Previous pairing of the tone with heroin infusions irrespective of behavior (yoked-heroin vs. yoked-saline groups) produced functional activity changes in fifteen brain areas. In addition, nineteen regional differences in octanoate labeling density were evident when comparison was made between animals previously trained to self-administer heroin and those receiving yoked heroin infusions, while twelve differences were noted between the yoked-saline and self-administration groups. These functional activity changes are presumed to be related to the secondary reinforcing capacity of the tone acquired by association with heroin, and may identify neural substrates involved in auditory-signalled conditioning of positive reinforcement to opiates.

  18. Pulse and entrainment to non-isochronous auditory stimuli: the case of north Indian alap.

    Science.gov (United States)

    Will, Udo; Clayton, Martin; Wertheim, Ira; Leante, Laura; Berg, Eric

    2015-01-01

    Pulse is often understood as a feature of a (quasi-) isochronous event sequence that is picked up by an entrained subject. However, entrainment does not only occur between quasi-periodic rhythms. This paper demonstrates the expression of pulse by subjects listening to non-periodic musical stimuli and investigates the processes behind this behaviour. The stimuli are extracts from the introductory sections of North Indian (Hindustani) classical music performances (alap, jor and jhala). The first of three experiments demonstrates regular motor responses to both irregular alap and more regular jor sections: responses to alap appear related to individual spontaneous tempi, while for jor they relate to the stimulus event rate. A second experiment investigated whether subjects respond to average periodicities of the alap section, and whether their responses show phase alignment to the musical events. In the third experiment we investigated responses to a broader sample of performances, testing their relationship to spontaneous tempo, and the effect of prior experience with this music. Our results suggest an entrainment model in which pulse is understood as the experience of one's internal periodicity: it is not necessarily linked to temporally regular, structured sensory input streams; it can arise spontaneously through the performance of repetitive motor actions, or on exposure to event sequences with rather irregular temporal structures. Greater regularity in the external event sequence leads to entrainment between motor responses and stimulus sequence, modifying subjects' internal periodicities in such a way that they are either identical or harmonically related to each other. This can be considered as the basis for shared (rhythmic) experience and may be an important process supporting 'social' effects of temporally regular music. PMID:25849357
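The conclusion above, that entrained internal periodicities become "either identical or harmonically related to each other", can be operationalized as a small-integer ratio test between a response period and a stimulus period. A minimal sketch (the harmonic limit and tolerance values are my assumptions, not parameters from the study):

```python
def harmonically_related(period_a, period_b, max_harmonic=4, tol=0.04):
    """Return True if the two periods (e.g., in seconds) stand in a
    small-integer ratio n:m, with n, m <= max_harmonic, within the
    relative tolerance tol."""
    ratio = period_a / period_b
    for n in range(1, max_harmonic + 1):
        for m in range(1, max_harmonic + 1):
            target = n / m
            if abs(ratio - target) / target <= tol:
                return True
    return False

print(harmonically_related(0.5, 1.0))   # 1:2 ratio → True
print(harmonically_related(0.5, 0.79))  # no small-integer ratio → False
```

Applied to tapping data, such a test would classify each response period as entrained (identical or harmonically related to the stimulus event period) or independent (e.g., reflecting the subject's spontaneous tempo).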

  19. Pulse and entrainment to non-isochronous auditory stimuli: the case of north Indian alap.

    Directory of Open Access Journals (Sweden)

    Udo Will

    Pulse is often understood as a feature of a (quasi-)isochronous event sequence that is picked up by an entrained subject. However, entrainment does not only occur between quasi-periodic rhythms. This paper demonstrates the expression of pulse by subjects listening to non-periodic musical stimuli and investigates the processes behind this behaviour. The stimuli are extracts from the introductory sections of North Indian (Hindustani) classical music performances (alap, jor and jhala). The first of three experiments demonstrates regular motor responses to both irregular alap and more regular jor sections: responses to alap appear related to individual spontaneous tempi, while for jor they relate to the stimulus event rate. A second experiment investigated whether subjects respond to average periodicities of the alap section, and whether their responses show phase alignment to the musical events. In the third experiment we investigated responses to a broader sample of performances, testing their relationship to spontaneous tempo, and the effect of prior experience with this music. Our results suggest an entrainment model in which pulse is understood as the experience of one's internal periodicity: it is not necessarily linked to temporally regular, structured sensory input streams; it can arise spontaneously through the performance of repetitive motor actions, or on exposure to event sequences with rather irregular temporal structures. Greater regularity in the external event sequence leads to entrainment between motor responses and stimulus sequence, modifying subjects' internal periodicities in such a way that they are either identical or harmonically related to each other. This can be considered as the basis for shared (rhythmic) experience and may be an important process supporting 'social' effects of temporally regular music.

  20. Hierarchical and serial processing in the spatial auditory cortical pathway is degraded by natural aging

    OpenAIRE

    Juarez-Salinas, Dina L.; Engle, James R.; Navarro, Xochi O.; Gregg H Recanzone

    2010-01-01

    The compromised abilities to localize sounds and to understand speech are two hallmark deficits in aged individuals. The auditory cortex is necessary for these processes, yet we know little about how normal aging affects these early cortical fields. In this study, we recorded the spatial tuning of single neurons in primary (area A1) and secondary (area CL) auditory cortical areas in young and aged alert rhesus macaques. We found that the neurons of aged animals had greater spontaneous and dri...

  1. Behavioral determination of stimulus pair discrimination of auditory acoustic and electrical stimuli using a classical conditioning and heart-rate approach.

    Science.gov (United States)

    Morgan, Simeon J; Paolini, Antonio G

    2012-06-06

    Acute animal preparations have been used in research prospectively investigating electrode designs and stimulation techniques for integration into neural auditory prostheses, such as auditory brainstem implants and auditory midbrain implants. While acute experiments can give initial insight into the effectiveness of the implant, testing chronically implanted, awake animals provides the advantage of examining the psychophysical properties of the sensations induced using implanted devices. Several techniques, such as reward-based operant conditioning, conditioned avoidance, or classical fear conditioning, have been used to provide behavioral confirmation of detection of a relevant stimulus attribute. Selection of a technique involves balancing aspects including time efficiency (often poor in reward-based approaches), the ability to test a plurality of stimulus attributes simultaneously (limited in conditioned avoidance), and the reliability of measurements across repeated stimuli (a potential constraint when physiological measures are employed). Here, a classical fear conditioning behavioral method is presented which may be used to simultaneously test both detection of a stimulus and discrimination between two stimuli. Heart rate is used as a measure of fear response, which reduces or eliminates the requirement for time-consuming video coding of freeze behaviour or other such measures (although such measures could be included to provide convergent evidence). Animals were conditioned using these techniques in three 2-hour conditioning sessions, each providing 48 stimulus trials. Subsequent 48-trial testing sessions were then used to test for detection of each stimulus in presented pairs, and to test discrimination between the member stimuli of each pair. This behavioral method is presented in the context of its utilisation in auditory prosthetic research. The implantation of electrocardiogram telemetry devices is shown. Subsequent implantation of brain electrodes into the Cochlear
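The heart-rate measure described above supports a simple discrimination criterion: two stimuli count as discriminated when they evoke reliably different heart-rate changes from the pre-stimulus baseline. A sketch of that logic (the 5 bpm criterion and all trial values are hypothetical placeholders, not the authors' analysis parameters):

```python
def heart_rate_change(baseline_bpm, stimulus_bpm):
    """Change in heart rate from the pre-stimulus baseline; a conditioned
    fear response appears as a consistent change during the stimulus."""
    return stimulus_bpm - baseline_bpm

def discriminated(baseline_bpm, stim_a_bpm, stim_b_bpm, criterion_bpm=5.0):
    """Crude pairwise test: the two stimuli count as discriminated when
    their evoked heart-rate changes differ by more than the criterion."""
    delta = abs(heart_rate_change(baseline_bpm, stim_a_bpm)
                - heart_rate_change(baseline_bpm, stim_b_bpm))
    return delta > criterion_bpm

# Hypothetical trial means: the conditioned stimulus slows the heart
# markedly, the comparison stimulus barely moves it.
print(discriminated(400.0, 380.0, 398.0))  # → True
```

In practice such a decision would be made statistically over many trials rather than with a fixed per-pair threshold; the sketch only illustrates why a telemetered heart-rate signal can replace video coding of freezing.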

  2. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2016-06-17

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group showed inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context.
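An isolation point, as defined above, is the shortest gate duration from which identification is correct; a minimal sketch of computing it from gated responses (the stability requirement, that responses stay correct at all longer gates, is my reading of the definition, and the trial data are hypothetical):

```python
def isolation_point(gate_durations_ms, responses, target):
    """Return the shortest gate duration from which the listener's
    responses are correct for every remaining (longer) gate, or None
    if identification never stabilizes on the target."""
    for i, duration in enumerate(gate_durations_ms):
        if all(r == target for r in responses[i:]):
            return duration
    return None

gates = [40, 80, 120, 160, 200]          # gate durations in ms
answers = ["b", "d", "d", "d", "d"]      # listener's response per gate
print(isolation_point(gates, answers, "d"))  # → 80
```

Group comparisons like EHA vs. ENH then reduce to comparing the distributions of these per-stimulus IPs across conditions (auditory-only vs. audiovisual).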

  3. Auditory Scene Analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    Directory of Open Access Journals (Sweden)

    David J Brown

    2015-10-01

    A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles, and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we do not yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated by the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process, and that the complexity of the signal and additional sensory information have limited effect on this.

  4. AROUSAL-RELATED P3a TO NOVEL AUDITORY STIMULI IS ABOLISHED BY MODERATELY LOW ALCOHOL DOSE

    OpenAIRE

    Marinkovic, Ksenija; Halgren, Eric; Maltzman, Irving

    2001-01-01

    Concurrent measures of event-related potentials (ERPs) and skin conductance responses were obtained in an auditory oddball task consisting of rare target, rare non-signal unique novel and frequent standard tones. Twelve right-handed male social drinkers participated in all four cells of the balanced placebo design in which effects of beverage and instructions as to the beverage content (expectancy) were independently manipulated. The beverage contained either juice only, or vodka mixed with j...

  5. The sensory channel of presentation alters subjective ratings and autonomic responses toward disgusting stimuli—Blood pressure, heart rate and skin conductance in response to visual, auditory, haptic and olfactory presented disgusting stimuli

    OpenAIRE

    Croy, Ilona; Laqua, Kerstin; Süß, Frank; Joraschky, Peter; Ziemssen, Tjalf; Hummel, Thomas

    2013-01-01

    Disgust causes specific reaction patterns, observable in mimic responses and body reactions. Most research on disgust deals with visual stimuli. However, pictures may evoke a different disgust experience than sounds, odors, or tactile stimuli. Therefore, disgust experience evoked by four different sensory channels was compared. A total of 119 participants received 3 different disgusting and one control stimulus, each presented through the visual, auditory, tactile, and olfactory channel. Ratings ...

  6. An auditory multiclass brain-computer interface with natural stimuli: usability evaluation with healthy participants and a motor impaired end user

    Directory of Open Access Journals (Sweden)

    Nadine Simon

    2015-01-01

    Full Text Available Brain-computer interfaces (BCIs) can serve as muscle-independent communication aids. Persons who are unable to control their eye muscles (e.g. in the completely locked-in state) or have severe visual impairments for other reasons need BCI systems that do not rely on the visual modality. For this reason, BCIs that employ auditory stimuli were suggested. In this study, a multiclass BCI spelling system was implemented that uses animal voices with directional cues to code rows and columns of a letter matrix. To reveal possible training effects with the system, 11 healthy participants performed spelling tasks on two consecutive days. In a second step, the system was tested by a participant with amyotrophic lateral sclerosis (ALS) in two sessions. On the first day, healthy participants spelled with an average accuracy of 76% (3.29 bits/min), which increased to 90% (4.23 bits/min) on the second day. Spelling accuracy by the participant with ALS was 20% in the first and 47% in the second session. The results indicate a strong training effect for both the healthy participants and the participant with ALS. While healthy participants reached high accuracies in both sessions, accuracies for the participant with ALS were not sufficient for satisfactory communication in either session. More training sessions might be needed to improve spelling accuracy. The study demonstrated the feasibility of the auditory BCI with healthy users and stresses the importance of training with auditory multiclass BCIs, especially for potential end-users of BCI with disease.
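    The bits/min figures reported for the speller are information transfer rates. A minimal sketch of how such a rate is commonly computed, using the standard Wolpaw formula; the class count (25) and selection rate (one selection per minute) are illustrative assumptions, not values taken from the study:

    ```python
    import math

    def wolpaw_itr(n_classes: int, accuracy: float, selections_per_min: float) -> float:
        """Information transfer rate in bits/min, per the standard Wolpaw formula."""
        if accuracy <= 1.0 / n_classes:
            return 0.0  # at or below chance: no information transferred
        if accuracy >= 1.0:
            return math.log2(n_classes) * selections_per_min
        bits_per_selection = (
            math.log2(n_classes)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1))
        )
        return bits_per_selection * selections_per_min

    # Hypothetical values: a 25-class speller at 90% accuracy, one selection/min
    print(round(wolpaw_itr(25, 0.90, 1.0), 2))  # ≈ 3.72 bits/min
    ```

    Note that the rate rises with both accuracy and selection speed, which is why the reported bits/min improve alongside accuracy across sessions.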

  7. A California sea lion (Zalophus californianus) can keep the beat: motor entrainment to rhythmic auditory stimuli in a non-vocal mimic.

    Science.gov (United States)

    Cook, Peter; Rouse, Andrew; Wilson, Margaret; Reichmuth, Colleen

    2013-11-01

    Is the ability to entrain motor activity to a rhythmic auditory stimulus, that is "keep a beat," dependent on neural adaptations supporting vocal mimicry? That is the premise of the vocal learning and synchronization hypothesis, recently advanced to explain the basis of this behavior (A. Patel, 2006, Musical Rhythm, Linguistic Rhythm, and Human Evolution, Music Perception, 24, 99-104). Prior to the current study, only vocal mimics, including humans, cockatoos, and budgerigars, have been shown to be capable of motoric entrainment. Here we demonstrate that a less vocally flexible animal, a California sea lion (Zalophus californianus), can learn to entrain head bobbing to an auditory rhythm meeting three criteria: a behavioral response that does not reproduce the stimulus; performance transfer to a range of novel tempos; and entrainment to complex, musical stimuli. These findings show that the capacity for entrainment of movement to rhythmic sounds does not depend on a capacity for vocal mimicry, and may be more widespread in the animal kingdom than previously hypothesized.

  8. Temporal-order judgment of visual and auditory stimuli: Modulations in situations with and without stimulus discrimination

    Directory of Open Access Journals (Sweden)

    Elisabeth Hendrich

    2012-08-01

    Full Text Available Temporal-order judgment (TOJ) tasks are an important paradigm for investigating processing times of information in different modalities. Many studies have examined how temporal-order decisions can be influenced by stimulus characteristics. However, so far it has not been investigated whether the addition of a choice reaction time task has an influence on temporal-order judgment. Moreover, it is not known when during processing the decision about the temporal order of two stimuli is made. We investigated the first of these two questions by comparing a regular TOJ task with a dual task. In both tasks, we manipulated different processing stages to investigate whether the manipulations have an influence on temporal-order judgment and thereby to determine the point in processing at which the decision about temporal order is made. The results show that the addition of a choice reaction time task does have an influence on the temporal-order judgment, but the influence seems to be linked to the kind of manipulation of the processing stages that is used. The results of the manipulations indicate that the temporal-order decision in the dual task paradigm is made after perceptual processing of the stimuli.

  9. Electrophysiological changes elicited by auditory stimuli given a positive or negative value: a study comparing anhedonic with hedonic subjects.

    Science.gov (United States)

    Pierson, A; Ragot, R; Ripoche, A; Lesevre, N

    1987-07-01

    The present experiment investigates in 'normal' subjects the relationship between personality characteristics (anhedonia versus hedonia) and the influence of the affective value of acoustic stimuli (positive, negative, neutral) on various electrophysiological indices reflecting either tonic activation or phasic arousal (EEG power spectra, contingent negative variation: CNV, heart rate, skin potential responses: SPR) as well as on behavioural indices (reaction time: RT). Eighteen subjects were divided into two groups according to their scores on two self-rating questionnaires, Chapman's Physical Anhedonia Scale (PAS) and the Beck-Weissman Dysfunctional Attitude Scale (DAS), which quantifies cognitive distortions presumed to constitute high risk for depression: 9 with high scores on both scales formed the A group (Anhedonic-dysfunctional), 9 with low scores on both scales, the H group (Hedonic-adapted). The electrophysiological indices were recorded in 3 situations: the first was a classical CNV paradigm with a motor reaction time task in which one of 3 tones of different pitch represented the warning stimulus S1; during the second, conditioning phase, two of these tones were associated with either a success (and reward) or a failure (and punishment) during a memory task in order to make them acquire either a positive or a negative affective value; the third situation consisted of repeating the first CNV paradigm in order to test the effect of the positive and the negative stimuli versus the neutral one on RTs and electrophysiological data. Significant between-group differences were found regarding tonic activation as well as phasic arousal indices from the very beginning of the experiment when all stimuli were neutral ones, the anhedonics exhibiting higher activation and arousal than the hedonics at the cortical (increased CNV amplitude, increased power in the beta frequency band), cardiovascular (higher heart rate habituating more slowly) and

  10. Alterations in attention capture to auditory emotional stimuli in job burnout: an event-related potential study.

    Science.gov (United States)

    Sokka, Laura; Huotilainen, Minna; Leinikka, Marianne; Korpela, Jussi; Henelius, Andreas; Alain, Claude; Müller, Kiti; Pakarinen, Satu

    2014-12-01

    Job burnout is a significant cause of work absenteeism. Evidence from behavioral studies and patient reports suggests that job burnout is associated with impairments of attention and decreased working capacity, and it has overlapping elements with depression, anxiety and sleep disturbances. Here, we examined the electrophysiological correlates of automatic sound change detection and involuntary attention allocation in job burnout using scalp recordings of event-related potentials (ERP). Volunteers with job burnout symptoms but without severe depression and anxiety disorders and their non-burnout controls were presented with natural speech sound stimuli (standard and nine deviants), as well as three rarely occurring speech sounds with strong emotional prosody. All stimuli elicited mismatch negativity (MMN) responses that were comparable in both groups. The groups differed with respect to the P3a, an ERP component reflecting involuntary shift of attention: the job burnout group showed a shorter P3a latency in response to the emotionally negative stimulus, and a longer latency in response to the positive stimulus. The results indicate that in job burnout, automatic speech sound discrimination is intact, but there is an attention capture tendency that is faster for negative and slower for positive information than in controls.

  11. Representations of modality-specific affective processing for visual and auditory stimuli derived from functional magnetic resonance imaging data.

    Science.gov (United States)

    Shinkareva, Svetlana V; Wang, Jing; Kim, Jongwan; Facciani, Matthew J; Baucom, Laura B; Wedell, Douglas H

    2014-07-01

    There is converging evidence that people rapidly and automatically encode affective dimensions of objects, events, and environments that they encounter in the normal course of their daily routines. An important research question is whether affective representations differ with sensory modality. This research examined the nature of the dependency of affect and sensory modality at a whole-brain level of analysis in an incidental affective processing paradigm. Participants were presented with picture and sound stimuli that differed in positive or negative valence in an event-related functional magnetic resonance imaging experiment. Global statistical tests, applied at a level of the individual, demonstrated significant sensitivity to valence within modality, but not valence across modalities. Modality-general and modality-specific valence hypotheses predict distinctly different multidimensional patterns of the stimulus conditions. Examination of lower dimensional representation of the data demonstrated separable dimensions for valence processing within each modality. These results provide support for modality-specific valence processing in an incidental affective processing paradigm at a whole-brain level of analysis. Future research should further investigate how stimulus-specific emotional decoding may be mediated by the physical properties of the stimuli.

  12. Influence of selective attention on implicit learning with auditory stimuli

    Institute of Scientific and Technical Information of China (English)

    李秀君; 石文典

    2016-01-01

    Implicit learning is considered an automatic process by which humans acquire complex rules without awareness or intention. Previous research has shown that, under the artificial grammar learning paradigm, visual implicit learning requires selective attention. To examine whether the influence of selective attention on implicit learning is channel-specific, this study recruited 90 college students, used artificial grammar as the learning task, and employed dichotic listening to present letter sequences and digit sequences governed by different rules simultaneously in the auditory channel, assessing participants' acquisition of the rules underlying the attended and unattended sequences. The results showed that only the rules of the attended sequence were acquired; the rules of the unattended sequence were not. The study indicates that, under the artificial grammar learning paradigm, implicit learning occurs only for the attended stimulus dimension, and that the influence of selective attention on implicit learning generalizes across channels, applying not only to visual but also to auditory stimuli. Implicit learning refers to people's tendency to acquire complex regularities or patterns without intention or awareness (Reber, 1989). Given that regularities are acquired without intention, and largely without consciousness, implicit learning is often considered to occur without attention. The processes responsible for such learning were once contrasted with a selective intentional "system" (Guo et al., 2013; Jiang & Leung, 2005). However, more recent research shows that implicit learning processes are actually highly selective (Eitam, Schul, & Hassin, 2009; Eitam et al., 2013; Tanaka, Kiyokawa, Yamada, Dienes, & Shigemasu, 2008; Weiermann & Meier, 2012). Therefore it is necessary to explore further the role of attention in implicit learning. So far, all previous Artificial Grammar Learning (AGL) studies have used visual stimuli. Thus, it remains unclear whether AGL may be due to the presence of a visual regularity. To investigate the generality of the effect of selective attention on AGL, we extended the experimental materials to auditory stimuli. 90 college students were recruited in two

  13. Signaled two-way avoidance learning using electrical stimulation of the inferior colliculus as negative reinforcement: effects of visual and auditory cues as warning stimuli

    Directory of Open Access Journals (Sweden)

    A.C. Troncoso

    1998-03-01

    Full Text Available The inferior colliculus is a primary relay for the processing of auditory information in the brainstem. The inferior colliculus is also part of the so-called brain aversion system, as animals learn to switch off the electrical stimulation of this structure. The purpose of the present study was to determine whether associative learning occurs between aversion induced by electrical stimulation of the inferior colliculus and visual and auditory warning stimuli. Rats implanted with electrodes in the central nucleus of the inferior colliculus were placed inside an open field and thresholds for the escape response to electrical stimulation of the inferior colliculus were determined. The rats were then placed inside a shuttle-box and submitted to a two-way avoidance paradigm. Electrical stimulation of the inferior colliculus at the escape threshold (98.12 ± 6.15 µA, peak-to-peak) was used as negative reinforcement and light or tone as the warning stimulus. Each session consisted of 50 trials and was divided into two segments of 25 trials in order to determine the learning rate of the animals during the sessions. The rats learned to avoid the inferior colliculus stimulation when light was used as the warning stimulus (13.25 ± 0.60 s and 8.63 ± 0.93 s for latencies and 12.5 ± 2.04 and 19.62 ± 1.65 for frequencies in the first and second halves of the sessions, respectively, P < 0.05 in both cases. Taken together, the present results suggest that rats learn to avoid the inferior colliculus stimulation when light is used as the warning stimulus. However, this learning process does not occur when the neutral stimulus used is an acoustic one. Electrical stimulation of the inferior colliculus may disturb the signal transmission of the stimulus to be conditioned from the inferior colliculus to higher brain structures such as amygdala

  14. Cortical Suppression to Delayed Self-Initiated Auditory Stimuli in Schizotypy: Neurophysiological Evidence for a Continuum of Psychosis.

    Science.gov (United States)

    Oestreich, Lena K L; Mifsud, Nathan G; Ford, Judith M; Roach, Brian J; Mathalon, Daniel H; Whitford, Thomas J

    2016-01-01

    Schizophrenia patients have been shown to exhibit subnormal levels of electrophysiological suppression to self-initiated, button press elicited sounds. These self-suppression deficits have been shown to improve following the imposition of a subsecond delay between the button press and the evoked sound. The current study aimed to investigate whether nonclinical individuals who scored highly on the personality dimension of schizotypy would exhibit similar patterns of self-suppression abnormalities to those exhibited in schizophrenia. Thirty-nine nonclinical individuals scoring above the median (High Schizotypy) and 41 individuals scoring below the median (Low Schizotypy) on the Schizotypal Personality Questionnaire (SPQ) underwent electroencephalographic recording. The amplitude of the N1-component was calculated while participants (1) listened to tones initiated by a willed button press and played back with varying delay periods between the button press and the tone (Active conditions) and (2) passively listened to a series of tones (Listen condition). N1-suppression was calculated by subtracting the amplitude of the N1-component of the auditory evoked potential in the Active condition from that of the Listen condition, while controlling for the activity evoked by the button press per se. The Low Schizotypy group exhibited significantly higher levels of N1-suppression to undelayed tones compared to the High Schizotypy group. Furthermore, while N1-suppression was found to decrease linearly with increasing delays between the button press and the tone in the Low Schizotypy group, this was not the case in the High Schizotypy group. The findings of this study suggest that nonclinical, highly schizotypal individuals exhibit subnormal levels of N1-suppression to undelayed self-initiated tones and an abnormal pattern of N1-suppression to delayed self-initiated tones. To the extent that these results are similar to those previously reported in patients with schizophrenia

  15. Visual tracking of auditory stimuli.

    Science.gov (United States)

    Stream, R W; Whitson, E T; Honrubia, V

    1980-07-01

    A white noise sound stimulus was emitted successively in an anechoic chamber across 24 loudspeakers equally spaced in the horizontal plane in a semicircle with a diameter of 11 ft. Eye movements produced by each of 20 normal-hearing young adults in the center of this arc who tracked the sound at 10 different velocities (15-180 degrees/sec) were recorded with standard ENG methods. During each rotating cycle of the stimulus the eyes were able to follow the sound with discrete saccades, but did not produce nystagmic-like movements. Increased stimulus velocity resulted in (1) diminution of the amplitude of the tracking cycles, (2) decrease in the number of saccades, and (3) increase in the average velocity of the eye. Ss performed better with lights on than off. The additional quantitative findings from the present study further indicate the limitation in the ability of human Ss to localize a moving acoustic source in space. PMID:7347744

  16. Auditory imagery: empirical findings.

    Science.gov (United States)

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear). PMID:20192565

  17. Visual–auditory spatial processing in auditory cortical neurons

    OpenAIRE

    Bizley, Jennifer K.; King, Andrew J

    2008-01-01

    Neurons responsive to visual stimulation have now been described in the auditory cortex of various species, but their functions are largely unknown. Here we investigate the auditory and visual spatial sensitivity of neurons recorded in 5 different primary and non-primary auditory cortical areas of the ferret. We quantified the spatial tuning of neurons by measuring the responses to stimuli presented across a range of azimuthal positions and calculating the mutual information (MI) between the ...

  18. Decreases in energy and increases in phase locking of event-related oscillations to auditory stimuli occur during adolescence in human and rodent brain.

    Science.gov (United States)

    Ehlers, Cindy L; Wills, Derek N; Desikan, Anita; Phillips, Evelyn; Havstad, James

    2014-01-01

    Synchrony of phase (phase locking) of event-related oscillations (EROs) within and between different brain areas has been suggested to reflect communication exchange between neural networks and as such may be a sensitive and translational measure of changes in brain remodeling that occur during adolescence. This study sought to investigate developmental changes in EROs using a similar auditory event-related potential (ERP) paradigm in both rats and humans. Energy and phase variability of EROs collected from 38 young adult men (aged 18-25 years), 33 periadolescent boys (aged 10-14 years), 15 male periadolescent rats [at postnatal day (PD) 36] and 19 male adult rats (at PD103) were investigated. Three channels of ERP data (frontal cortex, central cortex and parietal cortex) were collected from the humans using an 'oddball plus noise' paradigm that was presented under passive (no behavioral response required) conditions in the periadolescents and under active conditions (where each subject was instructed to depress a counter each time he detected an infrequent target tone) in adults and adolescents. ERPs were recorded in rats using only the passive paradigm. In order to compare the tasks used in rats to those used in humans, we first studied whether three ERO measures [energy, phase locking index (PLI) within an electrode site and phase difference locking index (PDLI) between different electrode sites] differentiated the 'active' from 'passive' ERP tasks. Secondly, we explored our main question of whether the three ERO measures differentiated adults from periadolescents in a similar manner in both humans and rats. No significant changes were found in measures of ERO energy between the active and passive tasks in the periadolescent human participants. There was a smaller but significant increase in PLI but not PDLI as a function of active task requirements. Developmental differences were found in energy, PLI and PDLI values between the periadolescents and adults in
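    The phase locking index (PLI) described above is conventionally the length of the mean resultant vector of single-trial phases at a given frequency: 1 when every trial has the same phase, near 0 when phases are random. A minimal numpy sketch under that assumption (the study's exact time-frequency decomposition is not specified here; a single FFT bin stands in for it):

    ```python
    import numpy as np

    def phase_locking_index(trials: np.ndarray, freq_bin: int) -> float:
        """PLI across trials: |mean of unit phase vectors| at one FFT bin.
        trials has shape (n_trials, n_samples); returns a value in [0, 1]."""
        spectra = np.fft.rfft(trials, axis=1)
        phases = np.angle(spectra[:, freq_bin])
        return float(np.abs(np.mean(np.exp(1j * phases))))

    # Toy demo: identical 10 Hz phase on every trial -> PLI of 1
    fs, n = 250, 250                      # 1 s epochs, so bin k = k Hz
    t = np.arange(n) / fs
    locked = np.stack([np.sin(2 * np.pi * 10 * t) for _ in range(20)])
    print(round(phase_locking_index(locked, freq_bin=10), 3))  # 1.0
    ```

    Because the measure discards amplitude, it can rise with development even when ERO energy falls, which is the dissociation the study exploits.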

  19. Auditory Display

    DEFF Research Database (Denmark)

    volume. The conference's topics include auditory exploration of data via sonification and audification; real time monitoring of multivariate date; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting...... auditory display creation; data handling for auditory display systems; applications of auditory display....

  20. Adaptation in the auditory system: an overview

    OpenAIRE

    David Pérez-González; Malmierca, Manuel S.

    2014-01-01

    The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the s...

  1. Auditory Processing Disorders

    Science.gov (United States)

    Auditory processing disorders (APDs) are referred to by many names: central auditory processing disorders, auditory perceptual disorders, and central auditory disorders. APDs ...

  2. Modeling auditory evoked potentials to complex stimuli

    DEFF Research Database (Denmark)

    Rønne, Filip Munch

    Sensorineural hearing impairment is commonly associated with a loss of outer hair-cell functionality, and a measurable consequence is a decreased amount of cochlear compression at frequencies corresponding to the damaged locations in the cochlea. In clinical diagnostics, a fast and objective measure of local

  3. Late auditory evoked potentials to speech stimuli presented with different transducers in hearing children

    Directory of Open Access Journals (Sweden)

    Raquel Sampaio Agostinho-Pesse

    2013-01-01

    rate of 1.9 stimuli per second. Whenever present, the P1, N1 and P2 components were analyzed for latency and amplitude. RESULTS: a strong level of agreement was found between the researcher and the judge. There was no statistically significant difference when comparing the latency and amplitude values of the P1, N1 and P2 components across gender and ear, nor the component latencies across transducer types. However, there was a statistically significant difference for the amplitude of the P1 and N1 components, with greater amplitude for the speaker transducer. CONCLUSION: the latency values of the P1, N1 and P2 components and the P2 amplitude obtained with insert earphones may be used as normative references independent of the transducer used for recording long-latency auditory evoked potentials.

  4. Auditory and visual spatial impression: Recent studies of three auditoria

    Science.gov (United States)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
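    Convolving an anechoic recording with a binaural impulse response, as described above, amounts to one convolution per ear. A minimal sketch; the peak normalization and array layout are illustrative assumptions, not the authors' calibration procedure:

    ```python
    import numpy as np

    def auralize(anechoic: np.ndarray, brir: np.ndarray) -> np.ndarray:
        """Convolve a mono anechoic signal with a 2-channel binaural IR.
        brir has shape (n_taps, 2); returns a (len + n_taps - 1, 2) signal."""
        out = np.stack(
            [np.convolve(anechoic, brir[:, ch]) for ch in (0, 1)], axis=1
        )
        return out / np.max(np.abs(out))  # peak-normalize to avoid clipping

    # Toy check: a unit impulse through the BRIR reproduces the BRIR itself
    brir = np.array([[1.0, 0.5], [0.25, 0.1]])
    sig = np.array([1.0, 0.0, 0.0])
    y = auralize(sig, brir)
    print(y.shape)  # (4, 2)
    ```

    Presenting such renderings over headphones preserves the interaural time and level differences captured by the dummy head, which is what carries the spatial impression being rated.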

  5. Are auditory percepts determined by experience?

    Science.gov (United States)

    Monson, Brian B; Han, Shui'Er; Purves, Dale

    2013-01-01

    Audition, what listeners hear, is generally studied in terms of the physical properties of sound stimuli and physiological properties of the auditory system. Based on recent work in vision, we here consider an alternative perspective that sensory percepts are based on past experience. In this framework, basic auditory qualities (e.g., loudness and pitch) are based on the frequency of occurrence of stimulus patterns in natural acoustic stimuli. To explore this concept of audition, we examined five well-documented psychophysical functions. The frequency of occurrence of acoustic patterns in a database of natural sound stimuli (speech) predicts some qualitative aspects of these functions, but with substantial quantitative discrepancies. This approach may offer a rationale for auditory phenomena that are difficult to explain in terms of the physical attributes of the stimuli as such.

  6. Are auditory percepts determined by experience?

    Directory of Open Access Journals (Sweden)

    Brian B Monson

    Full Text Available Audition, what listeners hear, is generally studied in terms of the physical properties of sound stimuli and physiological properties of the auditory system. Based on recent work in vision, we here consider an alternative perspective that sensory percepts are based on past experience. In this framework, basic auditory qualities (e.g., loudness and pitch) are based on the frequency of occurrence of stimulus patterns in natural acoustic stimuli. To explore this concept of audition, we examined five well-documented psychophysical functions. The frequency of occurrence of acoustic patterns in a database of natural sound stimuli (speech) predicts some qualitative aspects of these functions, but with substantial quantitative discrepancies. This approach may offer a rationale for auditory phenomena that are difficult to explain in terms of the physical attributes of the stimuli as such.

  7. Auditory Neuropathy

    Science.gov (United States)

    ... field differ in their opinions about the potential benefits of hearing aids, cochlear implants, and other technologies for people with auditory neuropathy. Some professionals report that hearing aids and personal listening devices such as frequency modulation (FM) systems are ...

  8. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. 
Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids

  9. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26541581

  11. Speech perception as complex auditory categorization

    Science.gov (United States)

    Holt, Lori L.

    2002-05-01

    Despite a long and rich history of categorization research in cognitive psychology, very little work has addressed the issue of complex auditory category formation. This is especially unfortunate because the general underlying cognitive and perceptual mechanisms that guide auditory category formation are of great importance to understanding speech perception. I will discuss a new methodological approach to examining complex auditory category formation that specifically addresses issues relevant to speech perception. This approach utilizes novel nonspeech sound stimuli to gain full experimental control over listeners' history of experience. As such, the course of learning is readily measurable. Results from this methodology indicate that the structure and formation of auditory categories are a function of the statistical input distributions of sound that listeners hear, aspects of the operating characteristics of the auditory system, and characteristics of the perceptual categorization system. These results have important implications for phonetic acquisition and speech perception.

  12. Quantification of the auditory startle reflex in children

    NARCIS (Netherlands)

    Bakker, Mirte J.; Boer, Frits; van der Meer, Johan N.; Koelman, Johannes H. T. M.; Boeree, Thijs; Bour, Lo; Tijssen, Marina A. J.

    2009-01-01

    Objective: To find an adequate tool to assess the auditory startle reflex (ASR) in children. Methods: We investigated the effect of stimulus repetition, gender and age on several quantifications of the ASR. ASRs were elicited by eight consecutive auditory stimuli in 27 healthy children. Electromyog

  13. Continuity of visual and auditory rhythms influences sensorimotor coordination.

    Directory of Open Access Journals (Sweden)

    Manuel Varlet

    People often coordinate their movement with visual and auditory environmental rhythms. Previous research showed better performances when coordinating with auditory compared to visual stimuli, and with bimodal compared to unimodal stimuli. However, these results have been demonstrated with discrete rhythms and it is possible that such effects depend on the continuity of the stimulus rhythms (i.e., whether they are discrete or continuous). The aim of the current study was to investigate the influence of the continuity of visual and auditory rhythms on sensorimotor coordination. We examined the dynamics of synchronized oscillations of a wrist pendulum with auditory and visual rhythms at different frequencies, which were either unimodal or bimodal and discrete or continuous. Specifically, the stimuli used were a light flash, a fading light, a short tone and a frequency-modulated tone. The results demonstrate that the continuity of the stimulus rhythms strongly influences visual and auditory motor coordination. Participants' movement led continuous stimuli and followed discrete stimuli. Asymmetries between the half-cycles of the movement in terms of duration and nonlinearity of the trajectory occurred with slower discrete rhythms. Furthermore, the results show that the differences of performance between visual and auditory modalities depend on the continuity of the stimulus rhythms, as indicated by movements closer to the instructed coordination for the auditory modality when coordinating with discrete stimuli. The results also indicate that visual and auditory rhythms are integrated together in order to better coordinate irrespective of their continuity, as indicated by less variable coordination closer to the instructed pattern. Generally, the findings have important implications for understanding how we coordinate our movements with visual and auditory environmental rhythms in everyday life.

  14. Efficacy of Individual Computer-Based Auditory Training for People with Hearing Loss: A Systematic Review of the Evidence

    OpenAIRE

    Helen Henshaw; Ferguson, Melanie A.

    2013-01-01

    BACKGROUND: Auditory training involves active listening to auditory stimuli and aims to improve performance in auditory tasks. As such, auditory training is a potential intervention for the management of people with hearing loss. OBJECTIVE: This systematic review (PROSPERO 2011: CRD42011001406) evaluated the published evidence-base for the efficacy of individual computer-based auditory training to improve speech intelligibility, cognition and communication abilities in adults with hearing los...

  15. Comparison of click and CE-chirp® stimuli on Brainstem Auditory Evoked Potential recording

    Directory of Open Access Journals (Sweden)

    Gabriela Ribeiro Ivo Rodrigues

    2012-12-01

    PURPOSE: To compare the latencies and amplitudes of wave V on the Brainstem Auditory Evoked Potential (BAEP) recording obtained with click and CE-chirp® stimuli, and the presence or absence of waves I, III and V at high intensities. METHODS: Cross-sectional study of 12 adults with audiometric thresholds <15 dBHL (24 ears) and a mean age of 27 years. The parameters used for the recording with both stimuli at intensities of 80, 60, 40 and 20 dBnHL were alternate polarity and a repetition rate of 27.1 Hz. RESULTS: The CE-chirp® latencies for wave V were longer than click latencies at low intensity levels (20 and 40 dBnHL). At high intensity levels (60 and 80 dBnHL), the opposite occurred. Larger wave V amplitudes were observed with CE-chirp® at all intensity levels, except at 80 dBnHL. CONCLUSION: The CE-chirp® showed shorter latencies than those observed with clicks at high intensity levels and larger amplitudes at all intensity levels, except at 80 dBnHL. Waves I and III tended to disappear with CE-chirp® stimulation.

  16. Exploring Auditory Saltation Using the "Reduced-Rabbit" Paradigm

    Science.gov (United States)

    Getzmann, Stephan

    2009-01-01

    Sensory saltation is a spatiotemporal illusion in which the judged positions of stimuli are shifted toward subsequent stimuli that follow closely in time. So far, studies on saltation in the auditory domain have usually employed subjective rating techniques, making it difficult to exactly quantify the extent of saltation. In this study, temporal…

  17. AUDITORY REACTION TIME IN BASKETBALL PLAYERS AND HEALTHY CONTROLS

    Directory of Open Access Journals (Sweden)

    Ghuntla Tejas P.

    2013-08-01

    Reaction is a purposeful voluntary response to different stimuli, such as visual or auditory stimuli. Auditory reaction time is the time required to respond to an auditory stimulus. Quickness of response is very important in games like basketball. This study was conducted to compare the auditory reaction times of basketball players and healthy controls. Auditory reaction time was measured with a reaction time instrument in healthy controls and basketball players; both simple reaction time and choice reaction time were measured. During reaction time testing, the auditory stimulus was presented three times, and the minimum reaction time was taken as the final reaction time for that sensory modality for that subject. The results were statistically analyzed, recorded as mean ± standard deviation, and Student's unpaired t-test was applied to check the level of significance. The study shows that basketball players have shorter reaction times than healthy controls. As reaction time indicates how fast a person responds to a sensory stimulus, it is a good indicator of performance in reactive sports like basketball. Sportsmen should be trained to improve their reaction time in order to improve their performance.
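
    The analysis described above (group results reported as mean ± standard deviation and compared with Student's unpaired t-test) can be sketched in a few lines of plain Python. The reaction-time values below are hypothetical illustrations, not data from the study:

```python
import math

def students_t_unpaired(a, b):
    """Student's unpaired t (pooled variance), as used to compare group means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)   # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2   # t statistic and degrees of freedom

# Hypothetical auditory reaction times in ms (players assumed faster).
players = [165, 172, 158, 170, 160]
controls = [190, 185, 198, 178, 192]
t, df = students_t_unpaired(players, controls)
```

A large negative t here indicates shorter (faster) reaction times in the players group; in practice one would look up the two-tailed p-value for t at the returned degrees of freedom.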

  18. Exposure to virtual social stimuli modulates subjective pain reports

    OpenAIRE

    Jacob M Vigil; Daniel Torres; Alexander Wolff; Katy Hughes

    2014-01-01

    BACKGROUND: Contextual factors, including the gender of researchers, influence experimental and patient pain reports. It is currently not known how social stimuli influence pain percepts, nor which types of sensory modalities of communication, such as auditory, visual or olfactory cues associated with person perception and gender processing, produce these effects. OBJECTIVES: To determine whether exposure to two forms of social stimuli (audio and visual) from a virtual male or female stranger ...

  19. Contribution of psychoacoustics and neuroaudiology in revealing correlation of mental disorders with central auditory processing disorders

    OpenAIRE

    Iliadou, V; Iakovides, S

    2003-01-01

    Background Psychoacoustics is a fascinating, developing field concerned with the evaluation of the hearing sensation as an outcome of a sound or speech stimulus. Neuroaudiology, with electrophysiologic testing, records the electrical activity of the auditory pathways, extending from the 8th cranial nerve up to the cortical auditory centers, as a result of external auditory stimuli. Central Auditory Processing Disorders may co-exist with mental disorders and complicate diagnosis and outcome. Desi...

  20. Auditory priming of frequency and temporal information: Effects of lateralized presentation

    OpenAIRE

    List, Alexandra; Justus, Timothy

    2007-01-01

    Asymmetric distribution of function between the cerebral hemispheres has been widely investigated in the auditory modality. The current approach borrows heavily from visual local-global research in an attempt to determine whether, as in vision, local-global auditory processing is lateralized. In vision, lateralized local-global processing likely relies on spatial frequency information. Drawing analogies between visual spatial frequency and auditory dimensions, two sets of auditory stimuli wer...

  1. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Background Auditory sustained responses have been recently suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity but the relationship between the amplitudes of auditory evoked sustained responses and the repetitive rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5-, 10-, 50-, 200- and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low frequency temporal modulations in speech envelope.
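
    The "repeating frozen noise" stimuli described above can be generated by freezing one white-noise segment whose length equals the repetition period and tiling it. A minimal NumPy sketch under assumed parameters (44.1 kHz sampling rate, 1 s stimulus duration; the study's exact synthesis details are not restated here):

```python
import numpy as np

def frozen_noise(repetition_hz, duration_s=1.0, fs=44100, seed=0):
    """Tile one frozen white-noise segment so it repeats at repetition_hz."""
    rng = np.random.default_rng(seed)
    seg_len = int(round(fs / repetition_hz))   # samples per repeated segment
    segment = rng.standard_normal(seg_len)     # the 'frozen' noise snippet
    n_reps = int(np.ceil(duration_s * fs / seg_len))
    return np.tile(segment, n_reps)[: int(duration_s * fs)]

# One stimulus per repetition rate used in the study; plain white noise as control.
stimuli = {rate: frozen_noise(rate) for rate in (5, 10, 50, 200, 500)}
white = np.random.default_rng(1).standard_normal(44100)
```

Because every repetition reuses the same samples, the stimulus has a flat long-term spectrum like white noise but a strictly periodic fine structure at the chosen rate.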

  2. In search of an auditory engram

    Science.gov (United States)

    Fritz, Jonathan; Mishkin, Mortimer; Saunders, Richard C.

    2005-01-01

    Monkeys trained preoperatively on a task designed to assess auditory recognition memory were impaired after removal of either the rostral superior temporal gyrus or the medial temporal lobe but were unaffected by lesions of the rhinal cortex. Behavioral analysis indicated that this result occurred because the monkeys did not or could not use long-term auditory recognition, and so depended instead on short-term working memory, which is unaffected by rhinal lesions. The findings suggest that monkeys may be unable to place representations of auditory stimuli into a long-term store and thus question whether the monkey's cerebral memory mechanisms in audition are intrinsically different from those in other sensory modalities. Furthermore, it raises the possibility that language is unique to humans not only because it depends on speech but also because it requires long-term auditory memory. PMID:15967995

  3. Finding the missing stimulus mismatch negativity (MMN): emitted MMN to violations of an auditory gestalt.

    Science.gov (United States)

    Salisbury, Dean F

    2012-04-01

    Deviations from repetitive auditory stimuli evoke a mismatch negativity (MMN). Counterintuitively, omissions of repetitive stimuli do not. Violations of patterns reflecting complex rules also evoke MMN. To detect an MMN to missing stimuli, we developed an auditory gestalt task using one stimulus. Groups of six pips (50 ms duration, 330 ms stimulus onset asynchrony [SOA], 400 trials) were presented with an intertrial interval (ITI) of 750 ms while subjects (n=16) watched a silent video. Occasional deviant groups had missing 4th or 6th tones (50 trials each). Missing stimuli evoked an MMN, consistent with a violation of the gestalt grouping rule. Patterned stimuli appear more sensitive to omissions and ITI than homogeneous streams.
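
    The stimulus timing in this gestalt task is concrete enough to sketch. The helper below (a hypothetical name, not from the paper) builds pip-onset times for standard six-pip groups and for deviant groups with an omitted 4th or 6th pip, using the 330 ms SOA and 750 ms ITI reported above:

```python
def pip_onsets(n_groups, deviant=None, soa=0.330, iti=0.750, pips_per_group=6):
    """Onset times (s) for groups of pips; deviant groups omit one pip.

    deviant: dict mapping group index -> 1-based position of the omitted pip.
    Group length stays fixed, so an omission leaves a silent gap inside the group.
    """
    deviant = deviant or {}
    onsets = []
    group_dur = (pips_per_group - 1) * soa + 0.050   # last pip onset + 50 ms pip
    t = 0.0
    for g in range(n_groups):
        for p in range(pips_per_group):
            if deviant.get(g) == p + 1:
                continue                              # omitted pip: silent slot
            onsets.append(round(t + p * soa, 3))
        t += group_dur + iti
    return onsets

seq = pip_onsets(3, deviant={1: 4})   # 2nd group is missing its 4th pip
```

The omitted pip produces a gap exactly where the gestalt grouping predicts a tone, which is the violation the emitted MMN indexes.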

  4. McGurk illusion recalibrates subsequent auditory perception.

    Science.gov (United States)

    Lüttke, Claudia S; Ekman, Matthias; van Gerven, Marcel A J; de Lange, Floris P

    2016-01-01

    Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory/aba/ and a visual /aga/ are merged to the percept of 'ada'. It is less clear however whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as 'ada'. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as 'ada', activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960

  6. Hierarchical photocatalysts.

    Science.gov (United States)

    Li, Xin; Yu, Jiaguo; Jaroniec, Mietek

    2016-05-01

    As a green and sustainable technology, semiconductor-based heterogeneous photocatalysis has received much attention in the last few decades because it has potential to solve both energy and environmental problems. To achieve efficient photocatalysts, various hierarchical semiconductors have been designed and fabricated at the micro/nanometer scale in recent years. This review presents a critical appraisal of fabrication methods, growth mechanisms and applications of advanced hierarchical photocatalysts. Especially, the different synthesis strategies such as two-step templating, in situ template-sacrificial dissolution, self-templating method, in situ template-free assembly, chemically induced self-transformation and post-synthesis treatment are highlighted. Finally, some important applications including photocatalytic degradation of pollutants, photocatalytic H2 production and photocatalytic CO2 reduction are reviewed. A thorough assessment of the progress made in photocatalysis may open new opportunities in designing highly effective hierarchical photocatalysts for advanced applications ranging from thermal catalysis, separation and purification processes to solar cells. PMID:26963902

  8. Infants' Preferential Attention to Sung and Spoken Stimuli

    Science.gov (United States)

    Costa-Giomi, Eugenia; Ilari, Beatriz

    2014-01-01

    Caregivers and early childhood teachers all over the world use singing and speech to elicit and maintain infants' attention. Research comparing infants' preferential attention to music and speech is inconclusive regarding their responses to these two types of auditory stimuli, with one study showing a music bias and another one…

  9. Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.

    Directory of Open Access Journals (Sweden)

    James Bigelow

    Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.

  10. Hierarchical systems

    NARCIS (Netherlands)

    Hamers, A.S.

    2016-01-01

    The thesis addresses the long-term dynamical evolution of hierarchical multiple systems. First, we consider the evolution of orbits of stars orbiting a supermassive black hole (SBH). We study the long-term evolution and compute tidal disruption rates of stars by the SBH. Such disruption events revea

  11. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    Science.gov (United States)

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
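
    The essence of the GAM analysis above is modeling the neural response as a sum of smooth per-dimension functions plus interaction terms, then asking whether the interactions improve the fit. A self-contained NumPy sketch on synthetic data, with a small polynomial basis standing in for the penalized splines a real GAM would use:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two stimulus dimensions (e.g. frequency and level) with a built-in interaction.
x1, x2 = rng.uniform(-1, 1, (2, 500))
y = np.sin(2 * x1) + 0.5 * x2**2 + 0.8 * x1 * x2 + 0.1 * rng.standard_normal(500)

def design(x1, x2, with_interaction):
    """Additive polynomial basis; a real GAM would use penalized splines."""
    cols = [np.ones_like(x1)]
    for x in (x1, x2):
        cols += [x, x**2, x**3]          # smooth term per stimulus dimension
    if with_interaction:
        cols.append(x1 * x2)             # tensor-style interaction term
    return np.column_stack(cols)

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

rss_add = rss(design(x1, x2, False), y)  # purely additive model
rss_int = rss(design(x1, x2, True), y)   # additive model + interaction
```

The purely additive model cannot capture the x1*x2 term in the synthetic response, so its residual sum of squares stays higher; the same model-comparison logic underlies testing for interactions across stimulus dimensions.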

  13. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise it as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity with three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  14. Influence of a preceding auditory stimulus on evoked potential of the succeeding stimulus

    Institute of Scientific and Technical Information of China (English)

    WANG Mingshi; LIU Zhongguo; ZHU Qiang; LIU Jin; WANG Liqun; LIU Haiying

    2004-01-01

    In the present study, we investigated the influence of a preceding auditory stimulus on the auditory-evoked potential (AEP) of a succeeding stimulus when human subjects were presented with a pair of auditory stimuli. We found that the evoked potential of the succeeding stimulus was inhibited completely by the preceding stimulus when the inter-stimulus interval (ISI) was shorter than 150 ms. This influence depended on the ISI of the two stimuli: the shorter the ISI, the stronger the influence. The inhibitory influence of the preceding stimulus might be caused by the neural refractory effect.

  15. Emotion Recognition in Animated Compared to Human Stimuli in Adolescents with Autism Spectrum Disorder

    Science.gov (United States)

    Brosnan, Mark; Johnson, Hilary; Grawmeyer, Beate; Chapman, Emma; Benton, Laura

    2015-01-01

    There is equivocal evidence as to whether there is a deficit in recognising emotional expressions in Autism spectrum disorder (ASD). This study compared emotion recognition in ASD in three types of emotion expression media (still image, dynamic image, auditory) across human stimuli (e.g. photo of a human face) and animated stimuli (e.g. cartoon…

  16. Midbrain auditory selectivity to natural sounds.

    Science.gov (United States)

    Wohlgemuth, Melville J; Moss, Cynthia F

    2016-03-01

    This study investigated auditory stimulus selectivity in the midbrain superior colliculus (SC) of the echolocating bat, an animal that relies on hearing to guide its orienting behaviors. Multichannel, single-unit recordings were taken across laminae of the midbrain SC of the awake, passively listening big brown bat, Eptesicus fuscus. Species-specific frequency-modulated (FM) echolocation sound sequences with dynamic spectrotemporal features served as acoustic stimuli along with artificial sound sequences matched in bandwidth, amplitude, and duration but differing in spectrotemporal structure. Neurons in dorsal sensory regions of the bat SC responded selectively to elements within the FM sound sequences, whereas neurons in ventral sensorimotor regions showed broad response profiles to natural and artificial stimuli. Moreover, a generalized linear model (GLM) constructed on responses in the dorsal SC to artificial linear FM stimuli failed to predict responses to natural sounds and vice versa, but the GLM produced accurate response predictions in ventral SC neurons. This result suggests that auditory selectivity in the dorsal extent of the bat SC arises through nonlinear mechanisms, which extract species-specific sensory information. Importantly, auditory selectivity appeared only in responses to stimuli containing the natural statistics of the acoustic signals used by the bat for spatial orientation (sonar vocalizations), offering support for the hypothesis that sensory selectivity enables rapid species-specific orienting behaviors. The results of this study are the first, to our knowledge, to show auditory spectrotemporal selectivity to natural stimuli in SC neurons and serve to inform a more general understanding of mechanisms guiding sensory selectivity for natural, goal-directed orienting behaviors.
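
    A GLM of the kind described above maps stimulus features to firing rates through a link function. The sketch below fits a Poisson GLM with a log link by iteratively reweighted least squares on synthetic spike counts; it illustrates the model class only, not the study's actual features or fitting procedure:

```python
import numpy as np

def poisson_glm_irls(X, y, n_iter=25):
    """Fit a Poisson GLM (log link) by iteratively reweighted least squares."""
    # Warm start from a least-squares fit to log(y + 1).
    beta = np.linalg.lstsq(X, np.log(y + 1), rcond=None)[0]
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)               # predicted firing rate
        z = eta + (y - mu) / mu        # working response
        WX = X * mu[:, None]           # IRLS weights = mu for Poisson
        beta = np.linalg.solve(X.T @ WX, X.T @ (mu * z))
    return beta

rng = np.random.default_rng(0)
# One synthetic stimulus feature plus an intercept column.
X = np.column_stack([np.ones(400), rng.uniform(-1, 1, 400)])
true_beta = np.array([1.0, 2.0])
y = rng.poisson(np.exp(X @ true_beta))   # simulated spike counts
beta_hat = poisson_glm_irls(X, y)
```

With enough trials the recovered coefficients approach the generating ones; comparing such fits on natural versus artificial stimulus sets is the model-based test of nonlinearity used in the study.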

  17. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    Science.gov (United States)

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.
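The kind of onset-lag data reported above can be summarized very simply. A sketch with hypothetical numbers (not the published measurements): a consistent audio-visual lag can be compensated in software, while a highly variable one cannot.

```python
import statistics

# Hypothetical audio-visual onset lags in ms (positive = audio onset late),
# one list per browser/system combination, illustrative numbers only.
lags_ms = {
    "system_a": [38, 41, 40, 39, 42, 44, 40, 41],
    "system_b": [5, 65, 12, 58, 31, 47, 9, 55],
}

def summarise(samples):
    """Mean lag and its standard deviation: a consistent lag can be
    corrected for in code, a highly variable one cannot."""
    return round(statistics.mean(samples), 1), round(statistics.stdev(samples), 1)

summary = {name: summarise(lags) for name, lags in lags_ms.items()}
# system_a -> (40.6, 1.8): stable lag; system_b -> large trial-to-trial spread
```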

  19. Conceptual priming for realistic auditory scenes and for auditory words.

    Science.gov (United States)

    Frey, Aline; Aramaki, Mitsuko; Besson, Mireille

    2014-02-01

    Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times and/or error rates were longer/higher and the N400 component was larger to ambiguous targets than to conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building-up of virtual environments that need to convey meaning without words. PMID:24378910

  20. Processing of harmonics in the lateral belt of macaque auditory cortex.

    Science.gov (United States)

    Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer; Rauschecker, Josef P

    2014-01-01

    Many speech sounds and animal vocalizations contain components, referred to as complex tones, that consist of a fundamental frequency (F0) and higher harmonics. In this study we examined single-unit activity recorded in the core (A1) and lateral belt (LB) areas of auditory cortex in two rhesus monkeys as they listened to pure tones and pitch-shifted conspecific vocalizations ("coos"). The latter consisted of complex-tone segments in which F0 was matched to a corresponding pure-tone stimulus. In both animals, neuronal latencies to pure-tone stimuli at the best frequency (BF) were ~10 to 15 ms longer in LB than in A1. This might be expected, since LB is considered to be at a hierarchically higher level than A1. On the other hand, the latency of LB responses to coos was ~10 to 20 ms shorter than to the corresponding pure-tone BF, suggesting facilitation in LB by the harmonics. This latency reduction by coos was not observed in A1, resulting in similar coo latencies in A1 and LB. Multi-peaked neurons were present in both A1 and LB; however, harmonically-related peaks were observed in LB for both early and late response components, whereas in A1 they were observed only for late components. Our results suggest that harmonic features, such as relationships between specific frequency intervals of communication calls, are processed at relatively early stages of the auditory cortical pathway, but preferentially in LB. PMID:25100935

  1. Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation

    Directory of Open Access Journals (Sweden)

    Baumann Simon

    2007-02-01

    Full Text Available Abstract Background Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue, fifteen healthy participants partook in a "silent" sparse temporal event-related fMRI study. In the first (visual control) habituation phase they were presented with briefly red flashing visual stimuli. In the second (auditory control) habituation phase they heard brief telephone ringing. In the third (conditioning) phase we coincidentally presented the visual stimulus (CS) paired with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no other task than attending to the stimuli and indicating the end of each trial by pressing a button. Results During unpaired visual presentations (preceding and following the paired presentation) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus, whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli. Conclusion These results demonstrate involvement of multisensory and auditory association areas in perception of unimodal visual stimulation, which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event.
More importantly, we are able

  2. Effects of an Auditory Lateralization Training in Children Suspected to Central Auditory Processing Disorder

    Science.gov (United States)

    Lotfi, Yones; Moosavi, Abdollah; Bakhshi, Enayatollah; Sadjedi, Hamed

    2016-01-01

    Background and Objectives Central auditory processing disorder [(C)APD] refers to a deficit in the processing of auditory stimuli in the nervous system that is not due to higher-order language or cognitive factors. One of the problems in children with (C)APD is spatial difficulty, which has been overlooked despite its significance. Localization is the auditory ability to detect sound sources in space and can help to differentiate desired speech from other simultaneous sound sources. The aim of this research was to investigate the effects of auditory lateralization training on speech perception in the presence of noise/competing signals in children suspected of (C)APD. Subjects and Methods In this analytical interventional study, 60 children suspected of (C)APD were selected based on multiple auditory processing assessment subtests. They were randomly divided into a control group (mean age 9.07) and a training group (mean age 9.00). The training program consisted of detecting and pointing to sound sources delivered with interaural time differences under headphones for 12 formal sessions (6 weeks). Spatial word recognition score (WRS) and the monaural selective auditory attention test (mSAAT) were used to follow the effects of the auditory lateralization training. Results This study showed that in the training group, mSAAT score and spatial WRS in noise improved significantly after the auditory lateralization training (p ≤ 0.001). Conclusions We used auditory lateralization training for 6 weeks and showed that it can significantly improve speech understanding in noise. Generalization of these results requires further research.
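Lateralization stimuli of the kind used in the training (sound sources simulated under headphones via interaural time differences) can be sketched as follows. The sample rate, tone, and 500 µs ITD are illustrative values, not those of the study.

```python
import math

FS = 44100          # sample rate in Hz
ITD_US = 500        # interaural time difference in microseconds (illustrative)

def itd_stimulus(mono, itd_us, fs=FS):
    """Return (right, left) channels with the lagging ear delayed by a whole
    number of samples; positive itd_us makes the right ear lead, so the
    source is heard toward the right."""
    shift = round(abs(itd_us) * 1e-6 * fs)       # delay in samples (~22 here)
    delayed = [0.0] * shift + mono
    undelayed = mono + [0.0] * shift             # zero-pad to equal length
    if itd_us >= 0:
        return undelayed, delayed
    return delayed, undelayed

# 10 ms, 1 kHz tone burst as the mono source
mono = [math.sin(2 * math.pi * 1000 * n / FS) for n in range(441)]
right, left = itd_stimulus(mono, ITD_US)
```

A finer ITD resolution would require fractional-sample (interpolated) delays; whole-sample shifts keep the sketch simple.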

  3. Cardiorespiratory interactions to external stimuli.

    Science.gov (United States)

    Bernardi, L; Porta, C; Spicuzza, L; Sleight, P

    2005-09-01

    Respiration is a powerful modulator of heart rate variability, and of baro- or chemo-reflex sensitivity. This occurs via a mechanical effect of breathing that synchronizes all cardiovascular variables at the respiratory rhythm, particularly when breathing occurs at a particularly slow rate coincident with the Mayer waves in arterial pressure (approximately 6 cycles/min). Recitation of the rosary prayer (or of most mantras) induces a marked enhancement of these slow rhythms, whereas random verbalization or random breathing does not. This phenomenon in turn increases baroreflex sensitivity and reduces chemoreflex sensitivity, leading to increases in parasympathetic and reductions in sympathetic activity. The opposite can be seen during either verbalization or mental stress tests. Qualitatively similar effects can be obtained even by passive listening to more or less rhythmic auditory stimuli, such as music, and the speed of the rhythm (rather than the style) appears to be one of the main determinants of the cardiovascular and respiratory responses. These findings have clinical relevance. Appropriate modulation of breathing can improve/restore autonomic control of the cardiovascular and respiratory systems in relevant diseases such as hypertension and heart failure, and might therefore help improve exercise tolerance, quality of life, and, ultimately, survival.

  4. Measuring Auditory Selective Attention using Frequency Tagging

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Full Text Available Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory
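The core measurement in frequency tagging, power of the recorded signal at each stream's tag frequency, can be sketched with a single-bin DFT. The tag frequencies and the simulated "response" below are hypothetical.

```python
import math

FS = 1000                 # sample rate of the simulated recording (Hz)
DUR = 2.0                 # seconds
F_TAGS = (37.0, 43.0)     # hypothetical tag (modulation) frequencies

def dft_power(x, f, fs=FS):
    """Signal power in the single DFT bin at frequency f."""
    re = sum(v * math.cos(2 * math.pi * f * i / fs) for i, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * f * i / fs) for i, v in enumerate(x))
    return (re * re + im * im) / len(x)

# Simulated "neural" signal: strong phase locking at the attended stream's
# tag frequency, weak phase locking at the ignored stream's.
n = int(FS * DUR)
resp = [math.sin(2 * math.pi * F_TAGS[0] * i / FS)
        + 0.3 * math.sin(2 * math.pi * F_TAGS[1] * i / FS)
        for i in range(n)]

p_attended = dft_power(resp, F_TAGS[0])   # ~500
p_ignored = dft_power(resp, F_TAGS[1])    # ~45
```

Choosing a duration holding an integer number of cycles of both tags (74 and 86 cycles here) keeps the two bins orthogonal, so each power estimate reflects only its own stream.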

  5. Neural Processing of Emotional Musical and Nonmusical Stimuli in Depression

    Science.gov (United States)

    Atchley, Ruth Ann; Chrysikou, Evangelia; Martin, Laura E.; Clair, Alicia A.; Ingram, Rick E.; Simmons, W. Kyle; Savage, Cary R.

    2016-01-01

    Background Anterior cingulate cortex (ACC) and striatum are part of the emotional neural circuitry implicated in major depressive disorder (MDD). Music is often used for emotion regulation, and pleasurable music listening activates the dopaminergic system in the brain, including the ACC. The present study uses functional MRI (fMRI) and an emotional nonmusical and musical stimuli paradigm to examine how neural processing of emotionally provocative auditory stimuli is altered within the ACC and striatum in depression. Method Nineteen MDD and 20 never-depressed (ND) control participants listened to standardized positive and negative emotional musical and nonmusical stimuli during fMRI scanning and gave subjective ratings of valence and arousal following scanning. Results ND participants exhibited greater activation to positive versus negative stimuli in ventral ACC. When compared with ND participants, MDD participants showed a different pattern of activation in ACC. In the rostral part of the ACC, ND participants showed greater activation for positive information, while MDD participants showed greater activation to negative information. In dorsal ACC, the pattern of activation distinguished between the types of stimuli, with ND participants showing greater activation to music compared to nonmusical stimuli, while MDD participants showed greater activation to nonmusical stimuli, with the greatest response to negative nonmusical stimuli. No group differences were found in striatum. Conclusions These results suggest that people with depression may process emotional auditory stimuli differently based on both the type of stimulation and the emotional content of that stimulation. This raises the possibility that music may be useful in retraining ACC function, potentially leading to more effective and targeted treatments. PMID:27284693

  6. THE EFFECTS OF SALICYLATE ON AUDITORY EVOKED POTENTIAL AMPLITUDE FROM THE AUDITORY CORTEX AND AUDITORY BRAINSTEM

    Institute of Scientific and Technical Information of China (English)

    Brian Sawka; SUN Wei

    2014-01-01

    Tinnitus has often been studied using salicylate in animal models as it is capable of inducing temporary hearing loss and tinnitus. Studies have recently observed enhancement of auditory evoked responses of the auditory cortex (AC) post salicylate treatment, which is also shown to be related to tinnitus-like behavior in rats. The aim of this study was to observe if enhancements of the AC post salicylate treatment are also present at structures in the brainstem. Four male Sprague Dawley rats with AC implanted electrodes were tested for both AC and auditory brainstem response (ABR) recordings pre and post 250 mg/kg intraperitoneal injections of salicylate. The responses were recorded as the peak to trough amplitudes of P1-N1 (AC), ABR wave V, and ABR wave II. AC responses resulted in statistically significant enhancement of amplitude at 2 hours post salicylate with 90 dB stimuli tone bursts of 4, 8, 12, and 20 kHz. Wave V of ABR responses at 90 dB resulted in a statistically significant reduction of amplitude 2 hours post salicylate and a mean decrease of amplitude of 31% for 16 kHz. Wave II amplitudes at 2 hours post treatment were significantly reduced for 4, 12, and 20 kHz stimuli at 90 dB SPL. Our results suggest that the enhancement changes of the AC related to salicylate-induced tinnitus are generated superior to the level of the inferior colliculus and may originate in the AC.
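Peak-to-trough amplitudes like P1-N1 are obtained by taking the extrema within expected latency windows of the averaged waveform. The waveform and window values below are synthetic illustrations, not the recorded data.

```python
import math

FS = 10_000                                  # samples per second

# Synthetic averaged evoked response (microvolts): a positive peak near
# 15 ms (P1) and a negative trough near 30 ms (N1).  All values illustrative.
wave = [12.0 * math.exp(-((i / FS - 0.015) ** 2) / (2 * 0.003 ** 2))
        - 8.0 * math.exp(-((i / FS - 0.030) ** 2) / (2 * 0.004 ** 2))
        for i in range(int(0.08 * FS))]      # 80 ms epoch

def peak_to_trough(wave, p_win, n_win, fs=FS):
    """P-N amplitude: maximum within the P window minus minimum within the
    later N window (windows given in seconds)."""
    p_seg = wave[int(p_win[0] * fs):int(p_win[1] * fs)]
    n_seg = wave[int(n_win[0] * fs):int(n_win[1] * fs)]
    return max(p_seg) - min(n_seg)

amp = peak_to_trough(wave, (0.005, 0.025), (0.020, 0.045))   # ~20 microvolts
```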

  7. Deriving cochlear delays in humans using otoacoustic emissions and auditory evoked potentials

    DEFF Research Database (Denmark)

    Pigasse, Gilles

    … These methods include: otoacoustic emissions (OAEs), auditory brainstem responses (ABRs) and auditory steady-state responses (ASSRs). A comparison between the three methods was made across and within subjects, in order to highlight the impact of inter-subject variability on the cochlear delay estimates. … Results are also given for an experiment using stimuli designed to compensate for OAE delays. These were designed to try and reproduce the success of similar stimuli now used routinely to improve ABR signal-to-noise ratio.

  8. Functional sex differences in human primary auditory cortex

    Energy Technology Data Exchange (ETDEWEB)

    Ruytjens, Liesbet [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Georgiadis, Janniko R. [University of Groningen, University Medical Center Groningen, Department of Anatomy and Embryology, Groningen (Netherlands); Holstege, Gert [University of Groningen, University Medical Center Groningen, Center for Uroneurology, Groningen (Netherlands); Wit, Hero P. [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); Albers, Frans W.J. [University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Willemsen, Antoon T.M. [University Medical Center Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands)

    2007-12-15

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  10. Formation of associations in auditory cortex by slow changes of tonic firing.

    Science.gov (United States)

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2011-01-01

    We review event-related slow firing changes in the auditory cortex and related brain structures. Two types of changes can be distinguished, namely increases and decreases of firing, lasting in the order of seconds. Triggering events can be auditory stimuli, reinforcers, and behavioral responses. Slow firing changes terminate with reinforcers and possibly with auditory stimuli and behavioral responses. A necessary condition for the emergence of slow firing changes seems to be that subjects have learnt that consecutive sensory or behavioral events are contingent on reinforcement. They disappear when the contingencies are no longer present. Slow firing changes in auditory cortex bear similarities with slow changes of neuronal activity that have been observed in subcortical parts of the auditory system and in other non-sensory brain structures. We propose that slow firing changes in auditory cortex provide a neuronal mechanism for anticipating, memorizing, and associating events that are related to hearing and of behavioral relevance. This may complement the representation of the timing and types of auditory and auditory-related events which may be provided by phasic responses in auditory cortex. The presence of slow firing changes indicates that many more auditory-related aspects of a behavioral procedure are reflected in the neuronal activity of auditory cortex than previously assumed. PMID:20488230

  11. Comparison of Auditory Evoked Potentials in Heterosexual, Homosexual, and Bisexual Males and Females

    OpenAIRE

    McFadden, Dennis; Champlin, Craig A.

    2000-01-01

    The auditory evoked potentials (AEPs) elicited by click stimuli were measured in heterosexual, homosexual, and bisexual males and females having normal hearing sensitivity. Estimates of latency and/or amplitude were extracted for nine peaks having latencies of about 2–240 ms, which are presumed to correspond to populations of neurons located from the auditory nerve through auditory cortex. For five of the 19 measures obtained, the mean latency or amplitude for the 57 homosexual and bisexual f...

  12. Covert Auditory Spatial Orienting: An Evaluation of the Spatial Relevance Hypothesis

    Science.gov (United States)

    Roberts, Katherine L.; Summerfield, A. Quentin; Hall, Deborah A.

    2009-01-01

    The spatial relevance hypothesis (J. J. McDonald & L. M. Ward, 1999) proposes that covert auditory spatial orienting can only be beneficial to auditory processing when task stimuli are encoded spatially. We present a series of experiments that evaluate 2 key aspects of the hypothesis: (a) that "reflexive activation of location-sensitive neurons is…

  13. The Process of Auditory Distraction: Disrupted Attention and Impaired Recall in a Simulated Lecture Environment

    Science.gov (United States)

    Zeamer, Charlotte; Fox Tree, Jean E.

    2013-01-01

    Literature on auditory distraction has generally focused on the effects of particular kinds of sounds on attention to target stimuli. In support of extensive previous findings that have demonstrated the special role of language as an auditory distractor, we found that a concurrent speech stream impaired recall of a short lecture, especially for…

  14. The role of modality : Auditory and visual distractors in Stroop interference

    NARCIS (Netherlands)

    Elliott, Emily M.; Morey, Candice C.; Morey, Richard D.; Eaves, Sharon D.; Shelton, Jill Talley; Lutfi-Proctor, Danielle A.

    2014-01-01

    As a commonly used measure of selective attention, it is important to understand the factors contributing to interference in the Stroop task. The current research examined distracting stimuli in the auditory and visual modalities to determine whether the use of auditory distractors would create addi

  15. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in the perceptual problems of dyslexics. One debated issue in this area has been the nature of the perceptual deficit; another is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that both the visual and the auditory perception of children at risk for dyslexia is impaired. The comparison between groups shows that the achievement of children at risk was lower than that of children without risk in the temporal tasks, whereas there were no differences between groups in the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. Our conclusion is that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit.

  16. Neuromagnetic evidence for early auditory restoration of fundamental pitch.

    Directory of Open Access Journals (Sweden)

    Philip J Monahan

    Full Text Available BACKGROUND: Understanding the time course of how listeners reconstruct a missing fundamental component in an auditory stimulus remains elusive. We report MEG evidence that the missing fundamental component of a complex auditory stimulus is recovered in auditory cortex within 100 ms post stimulus onset. METHODOLOGY: Two outside tones of four-tone complex stimuli were held constant (1200 Hz and 2400 Hz), while two inside tones were systematically modulated (between 1300 Hz and 2300 Hz), such that the restored fundamental (also known as "virtual pitch") changed from 100 Hz to 600 Hz. Constructing the auditory stimuli in this manner controls for a number of spectral properties known to modulate the neuromagnetic signal. The tone complex stimuli diverged only on the value of the missing fundamental component. PRINCIPAL FINDINGS: We compared the M100 latencies of these tone complexes to the M100 latencies elicited by their respective pure tone (spectral pitch) counterparts. The M100 latencies for the tone complexes matched their pure sinusoid counterparts, while also replicating the M100 temporal latency response curve found in previous studies. CONCLUSIONS: Our findings suggest that listeners are reconstructing the inferred pitch by roughly 100 ms after stimulus onset and are consistent with previous electrophysiological research suggesting that the inferred pitch is perceived in early auditory cortex.
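The stimulus design, keeping the outer tones fixed while all four components remain harmonics of the intended virtual pitch, can be sketched as below. The rule for picking the two inner harmonics is an illustrative assumption, not the authors' exact procedure.

```python
from functools import reduce
from math import gcd

OUTER = (1200, 2400)      # outer tones held constant (Hz)

def complex_for_f0(f0):
    """Four-tone complex in which every component is a harmonic of f0, the
    outer tones stay at 1200 and 2400 Hz, and the two inner components are
    the harmonics nearest the middle of the range (illustrative rule)."""
    if OUTER[0] % f0 or OUTER[1] % f0:
        raise ValueError("f0 must divide both outer tones")
    inner = [k * f0 for k in range(OUTER[0] // f0 + 1, OUTER[1] // f0)]
    inner.sort(key=lambda f: abs(f - 1800))          # prefer mid-range tones
    return sorted(list(OUTER) + inner[:2])

tones = complex_for_f0(300)          # -> [1200, 1500, 1800, 2400]
virtual_pitch = reduce(gcd, tones)   # 300 Hz: the "missing" fundamental
```

The greatest common divisor of the component frequencies equals the virtual pitch, which is exactly the quantity the four-tone complexes were built to vary.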

  17. Enhanced representation of spectral contrasts in the primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Nicolas eCatz

    2013-06-01

    Full Text Available The role of early auditory processing may be to extract some elementary features from an acoustic mixture in order to organize the auditory scene. To accomplish this task, the central auditory system may rely on the fact that sensory objects are often composed of spectral edges, i.e. regions where the stimulus energy changes abruptly over frequency. The processing of acoustic stimuli may benefit from a mechanism enhancing the internal representation of spectral edges. While the visual system is thought to rely heavily on this mechanism (enhancing spatial edges), it is still unclear whether a related process plays a significant role in audition. We investigated the cortical representation of spectral edges, using acoustic stimuli composed of multi-tone pips whose time-averaged spectral envelope contained suppressed or enhanced regions. Importantly, the stimuli were designed such that neural response properties could be assessed as a function of stimulus frequency during stimulus presentation. Our results suggest that the representation of acoustic spectral edges is enhanced in the auditory cortex, and that this enhancement is sensitive to the characteristics of the spectral contrast profile, such as depth, sharpness and width. Spectral edges are maximally enhanced for sharp contrast and large depth. Cortical activity was also suppressed at frequencies within the suppressed region. Of note, the suppression of firing was larger at frequencies near the lower edge of the suppressed region than at the upper edge. Overall, the present study gives critical insights into the processing of spectral contrasts in the auditory system.
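A multi-tone pip stimulus with a suppressed region in its time-averaged spectral envelope can be sketched as follows. The channel grid, notch edges, and 20 dB depth are hypothetical values chosen only to illustrate the design.

```python
import random

# Multi-tone pip stimulus whose time-averaged spectral envelope contains a
# suppressed region bounded by two spectral edges.  Channel grid, notch
# edges, and depth are hypothetical.
CHANNELS = [250 * 2 ** (k / 8) for k in range(40)]   # ~250 Hz to ~7.3 kHz

def pip_energy(n_pips, notch_lo, notch_hi, depth_db, rng):
    """Accumulate per-channel pip energy; pips falling inside
    [notch_lo, notch_hi] are attenuated, carving a spectral contrast
    into the average envelope."""
    att = 10 ** (-depth_db / 10)
    energy = [0.0] * len(CHANNELS)
    for _ in range(n_pips):
        k = rng.randrange(len(CHANNELS))
        energy[k] += att if notch_lo <= CHANNELS[k] <= notch_hi else 1.0
    return energy

rng = random.Random(0)
energy = pip_energy(20_000, 1000, 2000, 20, rng)
in_notch = [e for f, e in zip(CHANNELS, energy) if 1000 <= f <= 2000]
outside = [e for f, e in zip(CHANNELS, energy) if not 1000 <= f <= 2000]
```

Sharpening or deepening the transition at 1000 and 2000 Hz corresponds to the contrast-profile manipulations (depth, sharpness, width) examined in the study.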

  18. Increased Auditory Startle Reflex in Children with Functional Abdominal Pain

    NARCIS (Netherlands)

    Bakker, Mirte J.; Boer, Frits; Benninga, Marc A.; Koelman, Johannes H. T. M.; Tijssen, Marina A. J.

    2010-01-01

    Objective To test the hypothesis that children with abdominal pain-related functional gastrointestinal disorders have a general hypersensitivity for sensory stimuli. Study design Auditory startle reflexes were assessed in 20 children classified according to Rome III classifications of abdominal pain

  19. Context, Contrast, and Tone of Voice in Auditory Sarcasm Perception

    Science.gov (United States)

    Voyer, Daniel; Thibodeau, Sophie-Hélène; Delong, Breanna J.

    2016-01-01

    Four experiments were conducted to investigate the interplay between context and tone of voice in the perception of sarcasm. These experiments emphasized the role of contrast effects in sarcasm perception exclusively by means of auditory stimuli whereas most past research has relied on written material. In all experiments, a positive or negative…

  20. Auditory Processing and Language Impairment in Children: Stimulus Considerations for Intervention.

    Science.gov (United States)

    Thal, Donna J.; Barone, Patricia

    1983-01-01

The performance of language-impaired children (four to eight years old) on auditory identification and sequencing tasks which employed different stimuli was studied in two experiments. Results indicated that some children performed significantly better when words rather than tones were used as stimuli. (Author/SEW)

  1. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan eLuo

    2012-05-01

Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to lie in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied durations. We show that non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding oscillatory frequencies (low gamma and theta bands), whereas stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization, while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetric sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
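"Reliable phase tracking" of the kind described above is commonly quantified as inter-trial phase coherence (ITPC): the magnitude of the mean unit phasor across trials at a given frequency. Below is a minimal sketch on synthetic data; the sampling rate, trial counts, and noise levels are assumptions for illustration, not the study's parameters.

```python
import numpy as np

def itpc(trials, fs, freq):
    # Inter-trial phase coherence at one frequency: 1 means perfectly
    # consistent phase across trials, values near 0 mean no phase locking.
    n = trials.shape[1]
    bin_idx = int(round(freq * n / fs))
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(0)
fs, n_trials = 200, 50
t = np.arange(0, 1.0, 1 / fs)

# Stimulus-locked 5 Hz oscillation (consistent phase across trials)
# versus the same oscillation with a random phase on every trial.
locked = np.array([np.sin(2 * np.pi * 5 * t) + rng.normal(0, 1, t.size)
                   for _ in range(n_trials)])
scrambled = np.array([np.sin(2 * np.pi * 5 * t + rng.uniform(0, 2 * np.pi))
                      + rng.normal(0, 1, t.size) for _ in range(n_trials)])

print(itpc(locked, fs, 5))     # high: phase tracks the stimulus
print(itpc(scrambled, fs, 5))  # low: no consistent phase
```

In the study's logic, stimuli whose temporal structure matches an intrinsic temporal window would behave like the `locked` case in the corresponding band, and non-matching stimuli like the `scrambled` case.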

  2. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate the coding of vocal stimuli in different subfields of macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher-order auditory cortex along the supratemporal plane in three animals, using chronically implanted high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to the potentials evoked by conspecific vocalizations. We found a gradual decrease in overall classification performance along the caudal-to-rostral axis. Furthermore, performance in the caudal sectors was similar across individual stimuli, whereas performance in the rostral sectors differed significantly for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain the differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in the neural coding of conspecific vocalizations along the ventral auditory pathway. PMID:24672012

  3. A theory of three-dimensional auditory perception

    Science.gov (United States)

    Saifuddin, Kazi

    2001-05-01

A theory of auditory dimensions relating to temporal perception is proposed on the basis of results from a series of experiments. In all experiments, relationships were investigated between subjective judgments and factors extracted from the autocorrelation function (ACF) of the auditory stimuli. The factors were varied by using different properties of pure-tone, complex-tone, white-noise and bandpass-noise stimuli. Experiments using the paired-comparison method were conducted in a soundproof chamber, except for one conducted in a concert hall. Human subjects were asked to compare the durations of the two successive stimuli in each pair, and subjective durations were obtained from psychometric functions. Auditory stimuli were selected using the measured ACF factors as parameters. The results showed significant correlations between the ACF factors and the subjective durations, which are well described by the theory. The theory identifies loudness and pitch as two fundamental dimensions, with the duration of the stimuli as a candidate third dimension.
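ACF factors of the kind used in this line of work typically include the delay and amplitude of the first peak of the normalized autocorrelation (often written τ1 and φ1, related to pitch and pitch strength). The sketch below computes them for a pure tone; it is an illustrative reconstruction, not the author's exact procedure.

```python
import numpy as np

def acf_factors(x, fs, max_lag_s=0.02):
    # Normalized autocorrelation up to max_lag_s, then two standard
    # ACF factors: tau_1 (delay of the first peak) and phi_1 (its amplitude).
    max_lag = int(max_lag_s * fs)
    acf = np.correlate(x, x, mode="full")[x.size - 1:x.size - 1 + max_lag]
    acf = acf / acf[0]  # phi(0) normalization
    # First local maximum after the initial decline from zero lag.
    peaks = [i for i in range(1, max_lag - 1)
             if acf[i] > acf[i - 1] and acf[i] >= acf[i + 1]]
    tau_1 = peaks[0] / fs
    phi_1 = acf[peaks[0]]
    return tau_1, phi_1

fs = 16000
t = np.arange(0, 0.2, 1 / fs)
tone = np.sin(2 * np.pi * 200 * t)  # 200 Hz pure tone
tau_1, phi_1 = acf_factors(tone, fs)
print(tau_1)  # ~0.005 s, i.e. the 200 Hz period
print(phi_1)  # close to 1, as expected for a pure tone
```

For a pure tone the first ACF peak falls at the tone's period with amplitude near 1; noisier or broadband stimuli yield longer or weaker first peaks, which is what makes these quantities usable as stimulus parameters.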

  4. Extra-classical tuning predicts stimulus-dependent receptive fields in auditory neurons

    OpenAIRE

    Schneider, David M.; Woolley, Sarah M. N.

    2011-01-01

    The receptive fields of many sensory neurons are sensitive to statistical differences among classes of complex stimuli. For example, excitatory spectral bandwidths of midbrain auditory neurons and the spatial extent of cortical visual neurons differ during the processing of natural stimuli compared to the processing of artificial stimuli. Experimentally characterizing neuronal non-linearities that contribute to stimulus-dependent receptive fields is important for understanding how neurons res...

  5. Selective attention in an insect auditory neuron.

    Science.gov (United States)

    Pollack, G S

    1988-07-01

    Previous work (Pollack, 1986) showed that an identified auditory neuron of crickets, the omega neuron, selectively encodes the temporal structure of an ipsilateral sound stimulus when a contralateral stimulus is presented simultaneously, even though the contralateral stimulus is clearly encoded when it is presented alone. The present paper investigates the physiological basis for this selective response. The selectivity for the ipsilateral stimulus is a result of the apparent intensity difference of ipsi- and contralateral stimuli, which is imposed by auditory directionality; when simultaneous presentation of stimuli from the 2 sides is mimicked by presenting low- and high-intensity stimuli simultaneously from the ipsilateral side, the neuron responds selectively to the high-intensity stimulus, even though the low-intensity stimulus is effective when it is presented alone. The selective encoding of the more intense (= ipsilateral) stimulus is due to intensity-dependent inhibition, which is superimposed on the cell's excitatory response to sound. Because of the inhibition, the stimulus with lower intensity (i.e., the contralateral stimulus) is rendered subthreshold, while the stimulus with higher intensity (the ipsilateral stimulus) remains above threshold. Consequently, the temporal structure of the low-intensity stimulus is filtered out of the neuron's spike train. The source of the inhibition is not known. It is not a consequence of activation of the omega neuron. Its characteristics are not consistent with those of known inhibitory inputs to the omega neuron.

  6. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    Science.gov (United States)

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

the auditory cortex, may explain the simultaneous increase of BOLD responses and decrease of MEG responses. These findings highlight the complementary role of electrophysiological and hemodynamic measures in addressing brain processing of complex stimuli. PMID:27375416

  7. A hardware model of the auditory periphery to transduce acoustic signals into neural activity

    Directory of Open Access Journals (Sweden)

    Takashi eTateno

    2013-11-01

To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell–auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone burst and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.

  8. The Analysis of Sensory Stimuli of Terror in A Rose for Emily

    Institute of Scientific and Technical Information of China (English)

    尚慧敏

    2015-01-01

William Faulkner drew impressive pictures of terror in A Rose for Emily through visual, auditory, tactile, and olfactory descriptions. Through biological analysis, one can identify which kinds of stimuli in the work produce terror, how the sensory organs respond to those stimuli, and why readers fear them. It is shown that Faulkner's depiction of terror is grounded in the human system for receiving sensory information and in the mechanism by which terror is produced.

  9. Effect of stimuli, transducers and gender on acoustic change complex

    Directory of Open Access Journals (Sweden)

    Hemanth N. Shetty

    2012-08-01

The objective of this study was to investigate the effect of stimuli, transducers and gender on the latency and amplitude of the acoustic change complex (ACC). The ACC is a multiple overlapping P1-N1-P2 complex reflecting acoustic changes across the entire stimulus. Fifteen males and 15 females in the age range of 18 to 25 years (mean = 21.67 years), all with normal hearing, participated in the study. The ACC was recorded using a vertical montage. The naturally produced stimuli /sa/ and /si/ were presented through an insert earphone or a loudspeaker to record the ACC. The ACCs obtained for the different stimuli and transducers from male and female participants were analyzed using mixed analysis of variance; dependent and independent t-tests were performed when indicated. There was a significant difference in the latency of 2N1 at the transition, with latency for /sa/ being earlier, but not at the onset portions of the ACC. There was no significant difference in the amplitude of the ACC between the stimuli. Between the transducers, there was no significant difference in the latency or amplitude of the ACC for either the /sa/ or /si/ stimulus. Female participants showed significantly earlier 2N1 latency and larger N1 and 2P2 amplitudes than male participants. The ACC provides important insight into detecting subtle spectral changes in a stimulus. Between the transducers, no difference in the ACC was noted, as the spectra of the delivered stimuli were within the frequency response of both transducers. The earlier 2N1 latency and larger N1 and 2P2 amplitudes in female participants could be due to smaller head circumference. The findings of this study will be useful in determining the capacity of the auditory pathway to detect subtle spectral changes in a stimulus at the level of the auditory cortex.

  10. Autosomal recessive hereditary auditory neuropathy

    Institute of Scientific and Technical Information of China (English)

    王秋菊; 顾瑞; 曹菊阳

    2003-01-01

Objectives: Auditory neuropathy (AN) is a sensorineural hearing disorder characterized by absent or abnormal auditory brainstem responses (ABRs) and normal cochlear outer hair cell function as measured by otoacoustic emissions (OAEs). Many risk factors are thought to be involved in its etiology and pathophysiology. Three Chinese pedigrees with familial AN are presented herein to demonstrate the involvement of genetic factors in AN etiology. Methods: Probands of the above-mentioned pedigrees, who had been diagnosed with AN, were evaluated and followed up in the Department of Otolaryngology Head and Neck Surgery, China PLA General Hospital. Their family members were studied and pedigree diagrams were established. History of illness, physical examination, pure-tone audiometry, acoustic reflex, ABRs, and transient evoked and distortion-product otoacoustic emissions (TEOAEs and DPOAEs) were obtained from members of these families. DPOAE changes under the influence of contralateral sound stimuli were observed by presenting continuous white noise to the non-recording ear to examine the function of the auditory efferent system. Some subjects received a vestibular caloric test, computed tomography (CT) scan of the temporal bone and electrocardiography (ECG) to exclude other possible neuropathy disorders. Results: In most affected subjects, hearing loss of various degrees and speech discrimination difficulties started at 10 to 16 years of age. Their audiological evaluation showed absence of acoustic reflexes and ABRs. As expected in AN, these subjects exhibited near-normal cochlear outer hair cell function as shown in TEOAE and DPOAE recordings. Pure-tone audiometry revealed hearing loss ranging from mild to severe in these patients. Autosomal recessive inheritance patterns were observed in the three families. In Pedigrees Ⅰ and Ⅱ, two affected brothers were found respectively, while in Pedigree Ⅲ, two sisters were affected. All the patients were otherwise normal without

  11. A corollary discharge maintains auditory sensitivity during sound production.

    Science.gov (United States)

    Poulet, James F A; Hedwig, Berthold

    2002-08-22

    Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.

  12. Neural correlates of auditory scale illusion.

    Science.gov (United States)

    Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku

    2016-09-01

The auditory illusory perception "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. The resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears, separated into the higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed has not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced the scale illusion (ILL) and tones that mimicked the percept of the scale illusion (PCP), and we compared the activation evoked by these stimuli using region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with illusion-related changes in activation. The differences in activation between the two stimuli, measured at varied tempi of tone presentation, in the superior temporal auditory cortex were not explained by adaptation. Instead, excess activation for the ILL stimulus relative to the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation. PMID:27292114

  13. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features. PMID:22271265

  14. Cross-Modal Functional Reorganization of Visual and Auditory Cortex in Adult Cochlear Implant Users Identified with fNIRS

    Directory of Open Access Journals (Sweden)

    Ling-Chia Chen

    2016-01-01

Cochlear implant (CI) users show higher auditory-evoked activation in visual cortex and higher visual-evoked activation in auditory cortex compared to normal hearing (NH) controls, reflecting functional reorganization of both visual and auditory modalities. Visual-evoked activation in auditory cortex is a maladaptive functional reorganization, whereas auditory-evoked activation in visual cortex is beneficial for speech recognition in CI users. We investigated their joint influence on CI users' speech recognition by testing 20 postlingually deafened CI users and 20 NH controls with functional near-infrared spectroscopy (fNIRS). Optodes were placed over occipital and temporal areas to measure visual and auditory responses when presenting visual checkerboard and auditory word stimuli. Higher cross-modal activations were confirmed in both auditory and visual cortex for CI users compared to NH controls, demonstrating that functional reorganization of both auditory and visual cortex can be identified with fNIRS. Additionally, the combined reorganization of auditory and visual cortex was found to be associated with speech recognition performance. Speech performance was good as long as the beneficial auditory-evoked activation in visual cortex was higher than the visual-evoked activation in the auditory cortex. These results indicate the importance of considering cross-modal activations in both visual and auditory cortex for potential clinical outcome estimation.

  15. Neurophysiological Mechanisms of Auditory Information Processing in Adolescence: A Study on Sex Differences.

    Science.gov (United States)

    Bakos, Sarolta; Töllner, Thomas; Trinkl, Monika; Landes, Iris; Bartling, Jürgen; Grossheinrich, Nicola; Schulte-Körne, Gerd; Greimel, Ellen

    2016-04-01

To date, little is known about sex differences in the neurophysiological correlates underlying auditory information processing. In the present study, auditory evoked potentials were recorded in typically developing male (n = 15) and female (n = 14) adolescents (13-18 years) during an auditory oddball task. Girls displayed lower N100 and P300 amplitudes to targets than boys. Larger N100 amplitudes in adolescent boys might indicate higher neural sensitivity to changes in incoming auditory information. The P300 findings point toward sex differences in auditory working memory and suggest that adolescent boys allocate more attentional resources than adolescent girls when processing relevant auditory stimuli. PMID:27379950

  16. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 in both spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
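The subadditivity criterion used in this kind of ERP work (comparing the residual AV − V against A) can be made concrete with a small numeric example. All amplitude values below are invented for illustration; they are not data from the study.

```python
# Hypothetical N1 peak amplitudes in microvolts (negative-going components);
# all values are invented for illustration.
A, V = -6.0, -2.0                      # auditory-only, visual-only
AV_congruent, AV_incongruent = -6.5, -7.2

def n1_suppression(av, v, a):
    # Subadditivity criterion: compare the residual (AV - V) against A.
    # A residual smaller in magnitude than A indicates N1 suppression.
    residual = av - v
    return abs(a) - abs(residual)      # > 0 means subadditive

s_congruent = n1_suppression(AV_congruent, V, A)
s_incongruent = n1_suppression(AV_incongruent, V, A)
print(s_congruent, s_incongruent)      # congruent shows larger suppression
```

With these invented numbers both conditions are subadditive, but the congruent condition shows a larger N1 suppression, mirroring the direction of the reported finding.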

  17. Neuronal activity in primate prefrontal cortex related to goal-directed behavior during auditory working memory tasks.

    Science.gov (United States)

    Huang, Ying; Brosch, Michael

    2016-06-01

    Prefrontal cortex (PFC) has been documented to play critical roles in goal-directed behaviors, like representing goal-relevant events and working memory (WM). However, neurophysiological evidence for such roles of PFC has been obtained mainly with visual tasks but rarely with auditory tasks. In the present study, we tested roles of PFC in auditory goal-directed behaviors by recording local field potentials in the auditory region of left ventrolateral PFC while a monkey performed auditory WM tasks. The tasks consisted of multiple events and required the monkey to change its mental states to achieve the reward. The events were auditory and visual stimuli, as well as specific actions. Mental states were engaging in the tasks and holding task-relevant information in auditory WM. We found that, although based on recordings from one hemisphere in one monkey only, PFC represented multiple events that were important for achieving reward, including auditory and visual stimuli like turning on and off an LED, as well as bar touch. The responses to auditory events depended on the tasks and on the context of the tasks. This provides support for the idea that neuronal representations in PFC are flexible and can be related to the behavioral meaning of stimuli. We also found that engaging in the tasks and holding information in auditory WM were associated with persistent changes of slow potentials, both of which are essential for auditory goal-directed behaviors. Our study, on a single hemisphere in a single monkey, reveals roles of PFC in auditory goal-directed behaviors similar to those in visual goal-directed behaviors, suggesting that functions of PFC in goal-directed behaviors are probably common across the auditory and visual modality. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26874071

  18. Biases in Visual, Auditory, and Audiovisual Perception of Space.

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2015-12-01

Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) whether the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine the presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. The data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only
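The Bayesian Causal Inference framework mentioned above can be sketched with Gaussian likelihoods and model averaging over "common cause" versus "independent causes", in the spirit of the standard formulation. All parameter values (sensory noise, prior width, prior probability of a common cause) are illustrative assumptions, not the study's fitted values.

```python
import numpy as np

def gauss(x, mu, var):
    # Gaussian probability density.
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def bci_visual_estimate(xv, xa, sv=2.0, sa=8.0, sp=20.0, p_common=0.5):
    # Visual location estimate under Bayesian causal inference, with
    # sensory noise s.d. sv (visual), sa (auditory) and a zero-mean
    # Gaussian spatial prior of s.d. sp.
    vv, va, vp = sv ** 2, sa ** 2, sp ** 2
    denom = vv * va + vv * vp + va * vp
    # Likelihood of the observation pair under a common cause (C=1),
    # with the true location integrated out against the prior...
    L1 = np.exp(-0.5 * ((xv - xa) ** 2 * vp + xv ** 2 * va + xa ** 2 * vv)
                / denom) / (2 * np.pi * np.sqrt(denom))
    # ...and under two independent causes (C=2).
    L2 = gauss(xv, 0.0, vv + vp) * gauss(xa, 0.0, va + vp)
    p_c1 = p_common * L1 / (p_common * L1 + (1 - p_common) * L2)
    # Optimal estimates under each causal structure, combined by averaging.
    fused = (xv / vv + xa / va) / (1 / vv + 1 / va + 1 / vp)
    alone = (xv / vv) / (1 / vv + 1 / vp)
    return p_c1 * fused + (1 - p_c1) * alone

# Nearby cues: the visual estimate is pulled toward the sound.
# Far-apart cues: the common-cause posterior drops and little pull remains.
print(bci_visual_estimate(10.0, 14.0))
print(bci_visual_estimate(10.0, 60.0))
```

This captures the behavior the abstract alludes to: when integration occurs (nearby cues), one modality's estimate shifts toward the other, and the shift shrinks or vanishes when the cues are too discrepant to share a cause.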

  19. Temporal Integration of Auditory Stimulation and Binocular Disparity Signals

    Directory of Open Access Journals (Sweden)

    Marina Zannoli

    2011-10-01

Several studies using visual objects defined by luminance have reported that the auditory event must be presented 30 to 40 ms after the visual stimulus for audiovisual synchrony to be perceived. In the present study, we used visual objects defined only by their binocular disparity. We measured the optimal latency between visual and auditory stimuli for the perception of synchrony using a method introduced by Moutoussis & Zeki (1997). Visual stimuli were defined either by luminance and disparity or by disparity only. They moved either back and forth between 6 and 12 arcmin or from left to right at a constant disparity of 9 arcmin. This visual modulation was presented together with an amplitude-modulated 500 Hz tone. Both modulations were sinusoidal (frequency: 0.7 Hz). We found no difference between 2D and 3D motion for luminance stimuli: a 40 ms auditory lag was necessary for perceived synchrony. Surprisingly, even though stereopsis is often thought to be slow, we found a similar optimal latency in the disparity 3D motion condition (55 ms). However, when participants had to judge simultaneity for disparity 2D motion stimuli, larger latencies resulted (170 ms), suggesting that stereo motion detectors are poorly suited to track 2D motion.

  20. Moving Objects in the Barn Owl's Auditory World.

    Science.gov (United States)

    Langemann, Ulrike; Krumm, Bianca; Liebner, Katharina; Beutelmann, Rainer; Klump, Georg M

    2016-01-01

    Barn owls are keen hunters of moving prey. They have evolved an auditory system with impressive anatomical and physiological specializations for localizing their prey. Here we present behavioural data on the owl's sensitivity for discriminating acoustic motion direction in azimuth that, for the first time, allow a direct comparison of neuronal and perceptual sensitivity for acoustic motion in the same model species. We trained two birds to report a change in motion direction within a series of repeating wideband noise stimuli. For any trial the starting point, motion direction, velocity (53-2400°/s), duration (30-225 ms) and angular range (12-72°) of the noise sweeps were randomized. Each test stimulus had a motion direction being opposite to that of the reference stimuli. Stimuli were presented in the frontal or the lateral auditory space. The angular extent of the motion had a large effect on the owl's discrimination sensitivity allowing a better discrimination for a larger angular range of the motion. In contrast, stimulus velocity or stimulus duration had a smaller, although significant effect. Overall there was no difference in the owls' behavioural performance between "inward" noise sweeps (moving from lateral to frontal) compared to "outward" noise sweeps (moving from frontal to lateral). The owls did, however, respond more often to stimuli with changing motion direction in the frontal compared to the lateral space. The results of the behavioural experiments are discussed in relation to the neuronal representation of motion cues in the barn owl auditory midbrain. PMID:27080662

  1. Designing auditory cues for Parkinson's disease gait rehabilitation.

    Science.gov (United States)

    Cancela, Jorge; Moreno, Eugenio M; Arredondo, Maria T; Bonato, Paolo

    2014-01-01

    Recent work has shown that Parkinson's disease (PD) patients can benefit greatly from rehabilitation exercises based on audio cueing and music therapy. In particular, gait can benefit from repetitive sessions of exercises using auditory cues. Nevertheless, most experiments to date have relied on a metronome as the auditory stimulus. In this work, Human-Computer Interaction methodologies were used to design new cues that could foster the long-term engagement of PD patients in these repetitive routines. The study was also extended to commercial music and musical pieces by analyzing features and characteristics that could strengthen the engagement of PD patients in rehabilitation tasks. PMID:25571327

  2. Sparse representation of sounds in the unanesthetized auditory cortex.

    Directory of Open Access Journals (Sweden)

    Tomás Hromádka

    2008-01-01

    Full Text Available How do neuronal populations in the auditory cortex represent acoustic stimuli? Although sound-evoked neural responses in the anesthetized auditory cortex are mainly transient, recent experiments in the unanesthetized preparation have emphasized subpopulations with other response properties. To quantify the relative contributions of these different subpopulations in the awake preparation, we have estimated the representation of sounds across the neuronal population using a representative ensemble of stimuli. We used cell-attached recording with a glass electrode, a method for which single-unit isolation does not depend on neuronal activity, to quantify the fraction of neurons engaged by acoustic stimuli (tones, frequency-modulated sweeps, white-noise bursts, and natural stimuli) in the primary auditory cortex of awake head-fixed rats. We find that the population response is sparse, with stimuli typically eliciting high firing rates (>20 spikes/second) in less than 5% of neurons at any instant. Some neurons had very low spontaneous firing rates (<0.01 spikes/second). At the other extreme, some neurons had driven rates in excess of 50 spikes/second. Interestingly, the overall population response was well described by a lognormal distribution, rather than the exponential distribution that is often reported. Our results represent, to our knowledge, the first quantitative evidence for sparse representations of sounds in the unanesthetized auditory cortex. Our results are compatible with a model in which most neurons are silent much of the time, and in which representations are composed of small dynamic subsets of highly active neurons.
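The lognormal-versus-exponential distinction can be made concrete with a quick simulation (the parameters below are assumptions for illustration, not values fitted from the paper): a lognormal rate distribution combines many nearly silent neurons with a small, highly active subset.

```python
import numpy as np

# Sketch: sample firing rates from a lognormal and an exponential
# distribution with the same mean, then compare the fraction of
# highly active (>20 spikes/s) and nearly silent (<0.01 spikes/s) units.
rng = np.random.default_rng(0)
n_neurons = 100_000
lognormal_rates = rng.lognormal(mean=0.0, sigma=1.5, size=n_neurons)
exponential_rates = rng.exponential(scale=lognormal_rates.mean(), size=n_neurons)

for name, rates in [("lognormal", lognormal_rates),
                    ("exponential", exponential_rates)]:
    active = (rates > 20.0).mean()   # fraction firing >20 spikes/s
    silent = (rates < 0.01).mean()   # fraction nearly silent
    print(f"{name:11s}: {active:.2%} highly active, {silent:.2%} nearly silent")
```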

  3. Visual anticipatory information modulates multisensory interactions of artificial audiovisual stimuli.

    Science.gov (United States)

    Vroomen, Jean; Stekelenburg, Jeroen J

    2010-07-01

    The neural activity of speech sound processing (the N1 component of the auditory ERP) can be suppressed if a speech sound is accompanied by concordant lip movements. Here we demonstrate that this audiovisual interaction is neither speech specific nor linked to humanlike actions but can be observed with artificial stimuli if their timing is made predictable. In Experiment 1, a pure tone synchronized with a deformation of a rectangle induced a smaller auditory N1 than auditory-only presentations if the temporal occurrence of this audiovisual event was made predictable by two moving disks that touched the rectangle. Local autoregressive average source estimation indicated that this audiovisual interaction may be related to integrative processing in auditory areas. When the moving disks did not precede the audiovisual stimulus--making the onset unpredictable--there was no N1 reduction. In Experiment 2, the predictability of the leading visual signal was manipulated by introducing a temporal asynchrony between the audiovisual event and the collision of moving disks. Audiovisual events occurred either at the moment, before (too "early"), or after (too "late") the disks collided on the rectangle. When asynchronies varied from trial to trial--rendering the moving disks unreliable temporal predictors of the audiovisual event--the N1 reduction was abolished. These results demonstrate that the N1 suppression is induced by visual information that both precedes and reliably predicts audiovisual onset, without a necessary link to human action-related neural mechanisms.

  4. The processing of visual and auditory information for reaching movements.

    Science.gov (United States)

    Glazebrook, Cheryl M; Welsh, Timothy N; Tremblay, Luc

    2016-09-01

    Presenting target and non-target information in different modalities influences target localization if the non-target is within the spatiotemporal limits of perceptual integration. When auditory and visual stimuli are used, the influence of a visual non-target on auditory target localization is greater than the reverse. It is not known, however, whether or how such perceptual effects extend to goal-directed behaviours. To gain insight into how audio-visual stimuli are integrated for motor tasks, we examined the kinematics of reaching movements towards visual or auditory targets with or without a non-target in the other modality. When present, the simultaneously presented non-target could be spatially coincident with, to the left of, or to the right of the target. Results revealed that auditory non-targets did not influence reaching trajectories towards a visual target, whereas visual non-targets influenced trajectories towards an auditory target. Interestingly, the biases induced by visual non-targets were present early in the trajectory and persisted until movement end. Subsequent experimentation indicated that the magnitude of the biases was equivalent whether participants performed a perceptual or a motor task, whereas variability was greater for the motor than for the perceptual tasks. We propose that the visually induced trajectory biases were driven by the perceived mislocation of the auditory target, which in turn affected both the movement plan and the subsequent control of the movement. These findings provide further evidence of the dominant role visual information processing plays in encoding spatial locations, as well as in planning and executing reaching actions, even when reaching towards auditory targets. PMID:26253323

  5. Spatial audition in a static virtual environment: the role of auditory-visual interaction

    Directory of Open Access Journals (Sweden)

    Isabelle Viaud-Delmon

    2009-04-01

    Full Text Available The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysics studies that auditory-visual interaction obeys complex rules and that multisensory conflicts may disrupt the participant's engagement with the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones in static conditions. The auditory localization performance observed in the present study is in line with that reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of spatially accurate auditory and visual rendering for VR setups.

  6. Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI

    Science.gov (United States)

    Zhou, Sijie; Allison, Brendan Z.; Kübler, Andrea; Cichocki, Andrzej; Wang, Xingyu; Jin, Jing

    2016-01-01

    Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged. PMID:27790111

  7. Auditory Responses of Infants

    Science.gov (United States)

    Watrous, Betty Springer; And Others

    1975-01-01

    Forty infants, 3- to 12-months-old, participated in a study designed to differentiate the auditory response characteristics of normally developing infants in the age ranges 3 - 5 months, 6 - 8 months, and 9 - 12 months. (Author)

  8. Left hemispheric dominance during auditory processing in a noisy environment

    Directory of Open Access Journals (Sweden)

    Ross Bernhard

    2007-11-01

    Full Text Available Abstract Background In daily life, we are exposed to different sound inputs simultaneously. During neural encoding in the auditory pathway, the neural activities elicited by these different sounds interact with each other. In the present study, we investigated neural interactions elicited by a masker and an amplitude-modulated test stimulus in primary and non-primary human auditory cortex during ipsilateral and contralateral masking by means of magnetoencephalography (MEG). Results We observed significant decrements of auditory evoked responses and a significant inter-hemispheric difference for the N1m response during both ipsilateral and contralateral masking. Conclusion The decrements of auditory evoked neural activities during simultaneous masking can be explained by neural interactions evoked by the masker and the test stimulus in the peripheral and central auditory systems. The inter-hemispheric differences in N1m decrements during ipsilateral and contralateral masking reflect a basic hemispheric specialization contributing to the processing of complex auditory stimuli such as speech signals in noisy environments.

  9. [Central auditory prosthesis].

    Science.gov (United States)

    Lenarz, T; Lim, H; Joseph, G; Reuter, G; Lenarz, M

    2009-06-01

    Deaf patients with severe sensory hearing loss can benefit from a cochlear implant (CI), which stimulates the auditory nerve fibers. However, patients who do not have an intact auditory nerve cannot benefit from a CI. The majority of these patients are neurofibromatosis type 2 (NF2) patients who developed neural deafness due to growth or surgical removal of a bilateral acoustic neuroma. The only current solution is the auditory brainstem implant (ABI), which stimulates the surface of the cochlear nucleus in the brainstem. Although the ABI provides improvement in environmental awareness and lip-reading capabilities, only a few NF2 patients have achieved some limited open set speech perception. In the search for alternative procedures our research group in collaboration with Cochlear Ltd. (Australia) developed a human prototype auditory midbrain implant (AMI), which is designed to electrically stimulate the inferior colliculus (IC). The IC has the potential as a new target for an auditory prosthesis as it provides access to neural projections necessary for speech perception as well as a systematic map of spectral information. In this paper the present status of research and development in the field of central auditory prostheses is presented with respect to technology, surgical technique and hearing results as well as the background concepts of ABI and AMI. PMID:19517084

  10. Training in rapid auditory processing ameliorates auditory comprehension in aphasic patients: a randomized controlled pilot study.

    Science.gov (United States)

    Szelag, Elzbieta; Lewandowska, Monika; Wolak, Tomasz; Seniow, Joanna; Poniatowska, Renata; Pöppel, Ernst; Szymaszek, Aneta

    2014-03-15

    Experimental studies have often reported close associations between rapid auditory processing and language competency. The present study aimed at improving auditory comprehension in aphasic patients through specific training in the perception of the temporal order (TO) of events. We tested 18 aphasic patients showing both comprehension and TO perception deficits. Auditory comprehension was assessed with the Token Test, phonemic awareness tasks and the Voice-Onset-Time Test. TO perception was assessed using the auditory temporal-order threshold, defined as the shortest interval between two consecutive stimuli necessary to correctly report their before-after relation. Aphasic patients participated in eight 45-minute sessions of either specific temporal training (TT, n=11), aimed at improving sequencing abilities, or control non-temporal training (NT, n=7), focussed on volume discrimination. The TT yielded improved TO perception; moreover, a transfer of improvement was observed from the time domain to the language domain, which remained untrained. The NT improved neither TO perception nor comprehension in any language test. These results agree with previous studies that reported improved language competency following TT in language-learning-impaired or dyslexic children. Our results indicate such benefits for the first time in aphasic patients as well. PMID:24388435
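Temporal-order thresholds of this kind are commonly estimated with an adaptive staircase. The sketch below is a hypothetical 1-up/2-down procedure with a simulated deterministic listener (the 60 ms "true" threshold and all parameter names are assumptions for illustration, not the study's protocol); the staircase shortens the gap after two correct order judgments and lengthens it after an error, converging near the listener's threshold.

```python
# Hypothetical 1-up/2-down staircase for a temporal-order threshold (TOT):
# the shortest gap between two stimuli at which before-after order is
# still reported correctly.
def simulate_listener(gap_ms, true_tot_ms=60.0):
    """Idealized listener: correct whenever the gap exceeds the threshold."""
    return gap_ms >= true_tot_ms

def staircase(start_ms=200.0, step_ms=10.0, n_trials=200):
    gap, streak, reversal_gaps, going_down = start_ms, 0, [], True
    for _ in range(n_trials):
        if simulate_listener(gap):
            streak += 1
            if streak == 2:                 # 2 correct -> harder (shorter gap)
                streak = 0
                if not going_down:          # direction flip = reversal
                    reversal_gaps.append(gap)
                going_down = True
                gap = max(gap - step_ms, 1.0)
        else:                               # 1 error -> easier (longer gap)
            streak = 0
            if going_down:
                reversal_gaps.append(gap)
            going_down = False
            gap += step_ms
    # Threshold estimate: mean gap over the last six reversals.
    return sum(reversal_gaps[-6:]) / len(reversal_gaps[-6:])

print(f"estimated TOT ~= {staircase():.0f} ms")  # oscillates around 55 ms
```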

  11. Different patterns of auditory cortex activation revealed by functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    In the last few years, functional Magnetic Resonance Imaging (fMRI) has been widely accepted as an effective tool for mapping brain activities in both the sensorimotor and the cognitive field. The present work aims to assess the possibility of using fMRI methods to study the cortical response to different acoustic stimuli. Furthermore, we refer to recent data collected at Frankfurt University on the cortical pattern of auditory hallucinations. Healthy subjects showed broad bilateral activation, mostly located in the transverse gyrus of Heschl. The analysis of the cortical activation induced by different stimuli has pointed out a remarkable difference in the spatial and temporal features of the auditory cortex response to pulsed tones and pure tones. The activated areas during episodes of auditory hallucinations match the location of primary auditory cortex as defined in control measurements with the same patients and in the experiments on healthy subjects. (authors)

  12. Stimuli-Adaptable Materials

    DEFF Research Database (Denmark)

    Frankær, Sarah Maria Grundahl

    The work presented in this Thesis deals with the development of a stimuli-adaptable polymer material based on the UV-induced dimerisation of cinnamic acid and its derivatives. It is in the nature of an adhesive to adhere very well to its substrate and therefore problems can arise upon removal...... of the adhesive. This is also known from skin adhesives where it is very undesirable to cause damage to the skin. The overall idea of this project was to resolve this problem by developing a material which could switch between an adhesive and a non-adhesive state. Switchable adhesion is known in the literature...... but the presented work has a new approach to the field by basing itself on the idea of developing a network into which a photo-active polymer is mixed and which function as an adhesive. Upon irradiation with UV-light for a short time a non-adhering inter-penetrating network material would be formed. Two simple...

  13. Magnitude judgments of loudness change for discrete, dynamic, and hybrid stimuli.

    Science.gov (United States)

    Pastore, Richard E; Flint, Jesse

    2011-04-01

    Recent investigations of loudness change within stimuli have identified differences as a function of direction of change and power range (e.g., Canévet, Acustica, 62, 2136-2142, 1986; Neuhoff, Nature, 395, 123-124, 1998), with claims of differences between dynamic and static stimuli. Experiment 1 provides the needed direct empirical evaluation of loudness change across static, dynamic, and hybrid stimuli. Consistent with recent findings for dynamic stimuli, quantitative and qualitative differences in pattern of loudness change were found as a function of power change direction. With identical patterns of loudness change, only quantitative differences were found across stimulus type. In Experiment 2, Points of Subjective loudness Equality (PSE) provided additional information about loudness judgments for the static and dynamic stimuli. Because the quantitative differences across stimulus type exceed the magnitude that could be expected based upon temporal integration by the auditory system, other factors need to be, and are, considered. PMID:21264709

  14. Sensory Responses during Sleep in Primate Primary and Secondary Auditory Cortex

    OpenAIRE

    Issa, Elias B.; Wang, Xiaoqin

    2008-01-01

    Most sensory stimuli do not reach conscious perception during sleep. It has been thought that the thalamus prevents the relay of sensory information to cortex during sleep, but the consequences for cortical responses to sensory signals in this physiological state remain unclear. We recorded from two auditory cortical areas downstream of the thalamus in naturally sleeping marmoset monkeys. Single neurons in primary auditory cortex either increased or decreased their responses during sleep comp...

  15. Developing representations of compound stimuli

    NARCIS (Netherlands)

    I. Visser; M.E.J. Raijmakers

    2012-01-01

    Classification based on multiple dimensions of stimuli is usually associated with similarity-based representations, whereas uni-dimensional classifications are associated with rule-based representations. This paper studies classification of stimuli and category representations in school-aged children

  16. Visual change detection recruits auditory cortices in early deafness.

    Science.gov (United States)

    Bottari, Davide; Heimler, Benedetta; Caclin, Anne; Dalmolin, Anna; Giard, Marie-Hélène; Pavani, Francesco

    2014-07-01

    Although cross-modal recruitment of early sensory areas in deafness and blindness is well established, the constraints and limits of these plastic changes remain to be understood. In the case of human deafness, for instance, it is known that visual, tactile or visuo-tactile stimuli can elicit a response within the auditory cortices. Nonetheless, both the timing of these evoked responses and the functional contribution of cross-modally recruited areas remain to be ascertained. In the present study, we examined to what extent the auditory cortices of deaf humans participate in high-order visual processes, such as visual change detection. By measuring visual ERPs, in particular the visual MisMatch Negativity (vMMN), and performing source localization, we show that individuals with early deafness (N=12) recruit the auditory cortices when a change in motion direction during shape deformation occurs in a continuous visual motion stream. Remarkably, this "auditory" response to visual events emerged with the same timing as the visual MMN in hearing controls (N=12), between 150 and 300 ms after the visual change. Furthermore, the recruitment of auditory cortices for visual change detection in the early deaf was paired with a reduced response within the visual system, indicating a shift of part of the computational process from visual to auditory cortices. The present study suggests that the deafened auditory cortices participate in extracting and storing the visual information and in comparing upcoming visual events on-line, thus indicating that cross-modally recruited auditory cortices can reach this level of computation.

  17. Developing Representations of Compound Stimuli

    Directory of Open Access Journals (Sweden)

    Ingmar eVisser

    2012-03-01

    Full Text Available Classification based on multiple dimensions of stimuli is usually associated with similarity-based representations, whereas uni-dimensional classifications are associated with rule-based representations. This paper studies classification of stimuli and category representations in school-aged children and adults when learning to categorize compound, multidimensional stimuli. Stimuli were such that both similarity-based and rule-based representations would lead to correct classification. This allows testing whether children have a bias for formation of similarity-based representations. The results are at odds with this expectation. Children use both uni-dimensional and multidimensional classification, and the use of both strategies increases with age. Multidimensional classification is best characterized as resulting from an analytic strategy rather than from procedural processing of overall-similarity. The conclusion is that children are capable of using complex rule-based categorization strategies that involve the use of multiple features of the stimuli.

  18. Quadri-stability of a spatially ambiguous auditory illusion

    Directory of Open Access Journals (Sweden)

    Constance May Bainbridge

    2015-01-01

    Full Text Available In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound's motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual

  19. Stimulator with arbitrary waveform for auditory evoked potentials

    Energy Technology Data Exchange (ETDEWEB)

    Martins, H R; Romao, M; Placido, D; Provenzano, F; Tierra-Criollo, C J [Universidade Federal de Minas Gerais (UFMG), Departamento de Engenharia Eletrica (DEE), Nucleo de Estudos e Pesquisa em Engenharia Biomedica NEPEB, Av. Ant. Carlos, 6627, sala 2206, Pampulha, Belo Horizonte, MG, 31.270-901 (Brazil)

    2007-11-15

    Technological improvements benefit many medical areas. Audiometric exams involving auditory evoked potentials allow better diagnosis of auditory disorders. This paper proposes the development of a stimulator based on a digital signal processor. The stimulator is the first step of an auditory evoked potential system based on the ADSP-BF533 EZ KIT LITE (Analog Devices Company - USA). The stimulator can generate arbitrary waveforms such as sine waves, amplitude-modulated tones, pulses, bursts and pips. The waveforms are generated through a graphical interface programmed in C++, in which the user can define the parameters of the waveform. Furthermore, the user can set exam parameters such as the number of stimuli, the time with stimulation (Time ON) and the time without stimulation (Time OFF). In future work, the remaining parts of the system will be implemented, including electroencephalogram acquisition and the signal processing needed to estimate and analyze the evoked potentials.
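The stimulus types the interface offers can be sketched in a few lines (this is an illustrative model, not the DSP firmware; the sampling rate and parameter names are assumptions): an amplitude-modulated carrier sine, plus an ON/OFF presentation schedule built from repetitions separated by silence.

```python
import numpy as np

FS = 44_100  # assumed sampling rate in Hz

def am_tone(carrier_hz=1000.0, mod_hz=40.0, depth=1.0, dur_s=1.0):
    """Amplitude-modulated sine: carrier scaled by a raised modulator."""
    t = np.arange(int(FS * dur_s)) / FS
    modulator = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return modulator / 2.0 * np.sin(2 * np.pi * carrier_hz * t)

def schedule(stimulus, n_stimuli=3, time_off_s=0.5):
    """Repeat a stimulus n times, separating repetitions by Time OFF silence."""
    silence = np.zeros(int(FS * time_off_s))
    return np.concatenate([np.concatenate([stimulus, silence])] * n_stimuli)

train = schedule(am_tone())
print(train.shape)  # 3 repetitions of (1.0 s tone + 0.5 s silence) at 44.1 kHz
```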

  20. Altered intrinsic connectivity of the auditory cortex in congenital amusia.

    Science.gov (United States)

    Leveque, Yohana; Fauvel, Baptiste; Groussard, Mathilde; Caclin, Anne; Albouy, Philippe; Platel, Hervé; Tillmann, Barbara

    2016-07-01

    Congenital amusia, a neurodevelopmental disorder of music perception and production, has been associated with abnormal anatomical and functional connectivity in a right frontotemporal pathway. To investigate whether spontaneous connectivity in brain networks involving the auditory cortex is altered in the amusic brain, we ran a seed-based connectivity analysis, contrasting at-rest functional MRI data of amusic and matched control participants. Our results reveal reduced frontotemporal connectivity in amusia during resting state, as well as an overconnectivity between the auditory cortex and the default mode network (DMN). The findings suggest that the auditory cortex is intrinsically more engaged toward internal processes and less available to external stimuli in amusics compared with controls. Beyond amusia, our findings provide new evidence for the link between cognitive deficits in pathology and abnormalities in the connectivity between sensory areas and the DMN at rest. PMID:27009161

  1. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing

    Science.gov (United States)

    Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. PMID:27310812

  2. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    Directory of Open Access Journals (Sweden)

    Meytal Wilf

    Full Text Available Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.

  3. Variability and information content in auditory cortex spike trains during an interval-discrimination task.

    Science.gov (United States)

    Abolafia, Juan M; Martinez-Garcia, M; Deco, G; Sanchez-Vives, M V

    2013-11-01

    Processing of temporal information is key in auditory processing. In this study, we recorded single-unit activity from the auditory cortex of rats while they performed an interval-discrimination task. The animals had to decide whether two auditory stimuli were separated by either 150 or 300 ms and nose-poke to the left or to the right accordingly. The spike firing of single neurons in the auditory cortex was then compared in engaged vs. idle brain states. We found that spike-firing variability, measured with the Fano factor, was markedly reduced in engaged trials, not only during stimulation but also in between stimuli. We next explored whether this decrease in variability was associated with increased information encoding. Our information-theory analysis revealed increased information content in auditory responses during engagement compared with idle states, in particular in the responses to task-relevant stimuli. Altogether, we demonstrate that task engagement significantly modulates the coding properties of auditory cortical neurons during an interval-discrimination task. PMID:23945780
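The variability measure used here is straightforward to compute: the Fano factor is the variance of the spike count across trials divided by its mean. A Poisson process gives a value of 1, and the study reports reduced values during engagement. A minimal sketch with made-up illustration data:

```python
import numpy as np

def fano_factor(spike_counts):
    """Variance-to-mean ratio of per-trial spike counts."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var() / counts.mean()

# Poisson-like counts give a Fano factor near 1; more regular (less
# variable) counts give a value below 1, as in engaged trials.
poisson_like = np.random.default_rng(1).poisson(lam=10.0, size=5000)
print(f"Poisson-like counts: FF ~= {fano_factor(poisson_like):.2f}")        # ~1
print(f"Regular counts:      FF  = {fano_factor([10, 10, 11, 10, 9]):.2f}")  # 0.04
```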

  4. Multimodal Hierarchical Dirichlet Process-based Active Perception

    OpenAIRE

    Taniguchi, Tadahiro; Takano, Toshiaki; Yoshino, Ryo

    2015-01-01

    In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine t...

  5. Frequency-specific modulation of population-level frequency tuning in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Roberts Larry E

    2009-01-01

    Full Text Available Abstract Background Under natural circumstances, attention plays an important role in extracting relevant auditory signals from simultaneously present, irrelevant noises. Excitatory and inhibitory neural activity, enhanced by attentional processes, seems to sharpen frequency tuning, contributing to improved auditory performance, especially in noisy environments. In the present study, we investigated auditory magnetic fields in humans that were evoked by pure tones embedded in band-eliminated noises during two different stimulus sequencing conditions (constant vs. random) under focused auditory attention, by means of magnetoencephalography (MEG). Results We used identical auditory stimuli between conditions, but presented them in a different order, thereby manipulating the neural processing and the auditory performance of the listeners. Constant stimulus sequencing blocks were characterized by the simultaneous presentation of pure tones of identical frequency with band-eliminated noises, whereas random sequencing blocks were characterized by the simultaneous presentation of pure tones of random frequencies and band-eliminated noises. We demonstrated that auditory evoked neural responses were larger in the constant sequencing than in the random sequencing condition, particularly when the simultaneously presented noises contained narrow stop-bands. Conclusion The present study confirmed that population-level frequency tuning in human auditory cortex can be sharpened in a frequency-specific manner. This frequency-specific sharpening may contribute to improved auditory performance during detection and processing of relevant sound inputs characterized by specific frequency distributions in noisy environments.

  6. Response recovery in the locust auditory pathway.

    Science.gov (United States)

    Wirtssohn, Sarah; Ronacher, Bernhard

    2016-01-01

    Temporal resolution and the time courses of recovery from acute adaptation of neurons in the auditory pathway of the grasshopper Locusta migratoria were investigated with a response recovery paradigm. We stimulated with a series of single-click and click-pair stimuli while performing intracellular recordings from neurons at three processing stages: receptors and first- and second-order interneurons. The response to the second click was expressed relative to the single-click response. This allowed uncovering the basic temporal resolution of these neurons. The effect of adaptation increased with processing layer. While neurons in the auditory periphery displayed a steady response recovery after a short initial adaptation, many interneurons showed nonlinear effects: most prominently, a long-lasting suppression of the response to the second click in a pair, as well as a gain in response when a click was preceded by another click a few milliseconds earlier. Our results reveal a distributed temporal filtering of input at an early auditory processing stage. This set of specified filters is very likely homologous across grasshopper species and thus forms the neurophysiological basis for extracting relevant information from a variety of different temporal signals. Interestingly, in terms of spike timing precision, neurons at all three processing layers recovered very fast, within 20 ms. Spike waveform analysis of several neuron types did not sufficiently explain the response recovery profiles implemented in these neurons, indicating that temporal resolution in neurons located at several processing layers of the auditory pathway is not necessarily limited by spike duration and refractory period.

  7. Speech motor learning changes the neural response to both auditory and somatosensory signals

    Science.gov (United States)

    Ito, Takayuki; Coppola, Joshua H.; Ostry, David J.

    2016-01-01

    In the present paper, we present evidence for the idea that speech motor learning is accompanied by changes to the neural coding of both auditory and somatosensory stimuli. Participants in our experiments undergo adaptation to altered auditory feedback, an experimental model of speech motor learning which, like visuo-motor adaptation in limb movement, requires that participants change their speech movements and associated somatosensory inputs to correct for systematic real-time changes to auditory feedback. We measure the sensory effects of adaptation by examining changes to auditory and somatosensory event-related responses. We find that adaptation results in progressive changes to speech acoustical outputs that serve to correct for the perturbation. We also observe changes in both auditory and somatosensory event-related responses that are correlated with the magnitude of adaptation. These results indicate that sensory change occurs in conjunction with the processes involved in speech motor adaptation. PMID:27181603

  8. A Psychophysical Imaging Method Evidencing Auditory Cue Extraction during Speech Perception: A Group Analysis of Auditory Classification Images

    OpenAIRE

    Varnet, Léo; Knoblauch, Kenneth; Serniclaes, Willy; Meunier, Fanny; Hoen, Michel

    2015-01-01

    Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remains undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique t...

  9. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
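Ando's internal autocorrelation representation of temporal sensations is an ordinary signal-processing operation. As an illustration only (not code from the book), a pitch estimate can be sketched by taking the lag of the largest autocorrelation peak within a plausible pitch range:

```python
import numpy as np

def acf_pitch(signal, fs, fmin=50.0, fmax=1000.0):
    """Estimate fundamental frequency as the lag (in samples) of the
    largest autocorrelation peak inside a plausible pitch range."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()
    # Non-negative lags of the autocorrelation function
    acf = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(acf[lo:hi])
    return fs / lag

fs = 8000
t = np.arange(int(0.25 * fs)) / fs        # 250 ms test signal
tone = np.sin(2 * np.pi * 200 * t)        # 200-Hz pure tone
print(round(acf_pitch(tone, fs)))         # 200
```

The interaural cross-correlation function used for spatial sensations is the analogous operation computed between the two ear signals rather than within one.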

  10. Developmental evaluation of atypical auditory sampling in dyslexia: Functional and structural evidence.

    Science.gov (United States)

    Lizarazu, Mikel; Lallier, Marie; Molinaro, Nicola; Bourguignon, Mathieu; Paz-Alonso, Pedro M; Lerma-Usabiaga, Garikoitz; Carreiras, Manuel

    2015-12-01

    Whether phonological deficits in developmental dyslexia are associated with impaired neural sampling of auditory information at either syllabic or phonemic rates is still under debate. In addition, whereas neuroanatomical alterations in auditory regions have been documented in dyslexic readers, whether and how these structural anomalies are linked to auditory sampling and reading deficits remains poorly understood. In this study, we measured auditory neural synchronization at different frequencies corresponding to relevant phonological spectral components of speech in children and adults with and without dyslexia, using magnetoencephalography. Furthermore, structural MRI was used to estimate cortical thickness of the auditory cortex of participants. Dyslexics showed atypical brain synchronization at both syllabic (slow) and phonemic (fast) rates. Interestingly, while a left-hemispheric asymmetry in cortical thickness was functionally related to a stronger left-hemispheric lateralization of neural synchronization to stimuli presented at the phonemic rate in skilled readers, the same anatomical index in dyslexics was related to a stronger right-hemispheric dominance for neural synchronization to syllabic-rate auditory stimuli. These data suggest that the acoustic sampling deficit in developmental dyslexia might be linked to an atypical specialization of the auditory cortex to both low- and high-frequency amplitude modulations. PMID:26356682

  12. Neural responses in songbird forebrain reflect learning rates, acquired salience, and stimulus novelty after auditory discrimination training.

    Science.gov (United States)

    Bell, Brittany A; Phan, Mimi L; Vicario, David S

    2015-03-01

    How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response to one song stimulus (GO) and to withhold responding for another (NoGO). After performance reached criterion, single and multiunit neural responses to both trained and novel stimuli were obtained from multiple electrodes inserted bilaterally into two songbird auditory processing areas [caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM)] of awake, restrained birds. Neurons in these areas undergo stimulus-specific adaptation to repeated song stimuli, and responses to familiar stimuli adapt more slowly than to novel stimuli. The results show that auditory responses differed in NCM and CMM for trained (GO and NoGO) stimuli vs. novel song stimuli. When subjects were grouped by the number of training days required to reach criterion, fast learners showed larger neural responses and faster stimulus-specific adaptation to all stimuli than slow learners in both areas. Furthermore, responses in NCM of fast learners were more strongly left-lateralized than in slow learners. Thus auditory responses in these sensory areas not only encode stimulus familiarity, but also reflect behavioral reinforcement in our paradigm, and can potentially be modulated by social interactions.

  13. Role of the auditory system in speech production.

    Science.gov (United States)

    Guenther, Frank H; Hickok, Gregory

    2015-01-01

    This chapter reviews evidence regarding the role of auditory perception in shaping speech output. Evidence indicates that speech movements are planned to follow auditory trajectories. This in turn is followed by a description of the Directions Into Velocities of Articulators (DIVA) model, which provides a detailed account of the role of auditory feedback in speech motor development and control. A brief description of the higher-order brain areas involved in speech sequencing (including the pre-supplementary motor area and inferior frontal sulcus) is then provided, followed by a description of the Hierarchical State Feedback Control (HSFC) model, which posits internal error detection and correction processes that can detect and correct speech production errors prior to articulation. The chapter closes with a treatment of promising future directions of research into auditory-motor interactions in speech, including the use of intracranial recording techniques such as electrocorticography in humans, the investigation of the potential roles of various large-scale brain rhythms in speech perception and production, and the development of brain-computer interfaces that use auditory feedback to allow profoundly paralyzed users to learn to produce speech using a speech synthesizer.

  14. Coding of melodic gestalt in human auditory cortex.

    Science.gov (United States)

    Schindler, Andreas; Herdener, Marcus; Bartels, Andreas

    2013-12-01

    The perception of a melody is invariant to the absolute properties of its constituting notes, but depends on the relation between them, the melody's relative pitch profile. In fact, a melody's "Gestalt" is recognized regardless of the instrument or key used to play it. Pitch processing in general is assumed to occur at the level of the auditory cortex. However, it is unknown whether early auditory regions are able to encode pitch sequences integrated over time (i.e., melodies) and whether the resulting representations are invariant to specific keys. Here, we presented participants with different melodies composed of the same 4 harmonic pitches during functional magnetic resonance imaging recordings. Additionally, we played the same melodies transposed in different keys and on different instruments. We found that melodies were invariantly represented by their blood oxygen level-dependent activation patterns in primary and secondary auditory cortices across instruments, and also across keys. Our findings extend common hierarchical models of auditory processing by showing that melodies are encoded independent of absolute pitch and based on their relative pitch profile as early as the primary auditory cortex.
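The relative pitch profile invoked above can be made concrete with a toy example (hypothetical note numbers, not the study's stimuli): the sequence of successive intervals is unchanged when every note is transposed by the same amount:

```python
def interval_profile(midi_notes):
    """Relative pitch profile: successive intervals in semitones.
    Adding a constant to every note (transposition) leaves it unchanged."""
    return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

melody_c = [60, 64, 67, 64]            # a figure starting on middle C
melody_d = [n + 2 for n in melody_c]   # same melody, transposed up a tone
print(interval_profile(melody_c))              # [4, 3, -3]
print(interval_profile(melody_c) == interval_profile(melody_d))  # True
```

A key-invariant cortical representation, in these terms, is one that responds to the interval list rather than to the absolute note values.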

  15. Hierarchical Models of Attitude.

    Science.gov (United States)

    Reddy, Srinivas K.; LaBarbera, Priscilla A.

    1985-01-01

    The application and use of hierarchical models is illustrated, using the example of the structure of attitudes toward a new product and a print advertisement. Subjects were college students who responded to seven-point bipolar scales. Hierarchical models were better than nonhierarchical models in conceptualizing attitude but not intention. (GDC)

  16. Integration of Auditory and Visual Communication Information in the Primate Ventrolateral Prefrontal Cortex

    OpenAIRE

    Sugihara, T.; Diltz, M. D.; Averbeck, B. B.; Romanski, L. M.

    2006-01-01

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The mul...

  17. Early auditory change detection implicitly facilitated by ignored concurrent visual change during a Braille reading task.

    Science.gov (United States)

    Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya

    2013-09-01

    Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. 
Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can

  18. Association between language development and auditory processing disorders

    Directory of Open Access Journals (Sweden)

    Caroline Nunes Rocha-Muniz

    2014-06-01

    Full Text Available INTRODUCTION: It is crucial to understand the complex processing of acoustic stimuli along the auditory pathway; comprehension of this complex processing can facilitate our understanding of the processes that underlie normal and altered human communication. AIM: To investigate performance and lateralization effects on auditory processing assessment in children with specific language impairment (SLI), relating these findings to those obtained in children with auditory processing disorder (APD) and typical development (TD). MATERIAL AND METHODS: Prospective study. Seventy-five children, aged 6-12 years, were separated into three groups: 25 children with SLI, 25 children with APD, and 25 children with TD. All went through the following tests: speech-in-noise test, Dichotic Digit test, and Pitch Pattern Sequencing test. RESULTS: Effects of lateralization were observed only in the SLI group, with the left ear presenting much lower scores than the right ear. The inter-group analysis showed that in all tests children from the APD and SLI groups had significantly poorer performance than the TD group. Moreover, the SLI group presented worse results than the APD group. CONCLUSION: This study has shown, in children with SLI, inefficient processing of essential sound components and an effect of lateralization. These findings may indicate that neural processes (required for auditory processing) are different between auditory processing and speech disorders.

  19. Speech identification and cortical potentials in individuals with auditory neuropathy

    Directory of Open Access Journals (Sweden)

    Vanaja CS

    2008-03-01

    Full Text Available Abstract Background The present study investigated the relationship between speech identification scores in quiet and parameters of cortical potentials (latency of P1, N1, and P2; and amplitude of N1/P2) in individuals with auditory neuropathy. Methods Ten individuals with auditory neuropathy (five males and five females) and ten individuals with normal hearing in the age range of 12 to 39 yr participated in the study. Speech identification ability was assessed for bi-syllabic words and cortical potentials were recorded for click stimuli. Results Results revealed that in individuals with auditory neuropathy, speech identification scores were significantly poorer than those of individuals with normal hearing. Individuals with auditory neuropathy were further classified into two groups, Good Performers and Poor Performers, based on their speech identification scores. The mean amplitude of N1/P2 of Poor Performers was significantly lower than that of Good Performers and those with normal hearing. There was no significant effect of group on the latency of the peaks. Speech identification scores showed a good correlation with the amplitude of cortical potentials (N1/P2 complex) but did not show a significant correlation with the latency of cortical potentials. Conclusion Results of the present study suggest that measuring cortical potentials may offer a means of predicting perceptual skills in individuals with auditory neuropathy.

  20. Implicit learning of predictable sound sequences modulates human brain responses at different levels of the auditory hierarchy

    Directory of Open Access Journals (Sweden)

    Françoise Lecaignard

    2015-09-01

    Full Text Available Deviant stimuli, violating regularities in a sensory environment, elicit the Mismatch Negativity (MMN), largely described in the Event-Related Potential literature. While it is widely accepted that the MMN reflects more than basic change detection, a comprehensive description of the mental processes modulating this response is still lacking. Within the framework of predictive coding, deviance processing is part of an inference process in which prediction errors (the mismatch between incoming sensations and predictions established through experience) are minimized. In this view, the MMN is a measure of prediction error, which yields specific expectations regarding its modulation by various experimental factors. In particular, it predicts that the MMN should decrease as the occurrence of a deviance becomes more predictable. We conducted a passive oddball EEG study and manipulated the predictability of sound sequences by means of different temporal structures. Importantly, our design allows comparing mismatch responses elicited by predictable and unpredictable violations of a simple repetition rule, and therefore departs from previous studies that investigated violations of regularities at different time scales. We observed a decrease of the MMN with predictability and, interestingly, a similar effect at earlier latencies, within 70 ms after deviance onset. Following these pre-attentive responses, a reduced P3a was measured in the case of predictable deviants. We conclude that early and late deviance responses reflect prediction errors, triggering belief updating within the auditory hierarchy. Besides, in this passive study, such perceptual inference appears to be modulated by higher-level implicit learning of sequence statistical structures. Our findings argue for a hierarchical model of auditory processing in which predictive coding enables implicit extraction of environmental regularities.
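In predictive-coding accounts such as the one above, the size of the prediction error for a deviant is often proxied by its surprisal, -log2 p. A toy calculation (the probabilities are illustrative, not values from the study) shows the error shrinking as a deviant becomes more predictable:

```python
import math

def surprisal(p):
    """Shannon surprisal (in bits) of an event with probability p,
    a common proxy for prediction-error magnitude."""
    return -math.log2(p)

# Hypothetical conditional probabilities of a deviant at a given position
unpredictable = surprisal(0.1)   # deviant occurs at random (p = 0.1)
predictable   = surprisal(0.9)   # deviant occurs where a learned rule expects it
print(unpredictable > predictable)  # True
```

A smaller surprisal for predictable deviants is the formal counterpart of the reduced MMN amplitude reported in the abstract.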

  1. The neglected neglect: auditory neglect.

    Science.gov (United States)

    Gokhale, Sankalp; Lahoti, Sourabh; Caplan, Louis R

    2013-08-01

    Whereas visual and somatosensory forms of neglect are commonly recognized by clinicians, auditory neglect is often not assessed and therefore neglected. The auditory cortical processing system can be functionally classified into 2 distinct pathways. These 2 distinct functional pathways deal with recognition of sound ("what" pathway) and the directional attributes of the sound ("where" pathway). Lesions of higher auditory pathways produce distinct clinical features. Clinical bedside evaluation of auditory neglect is often difficult because of coexisting neurological deficits and the binaural nature of auditory inputs. In addition, auditory neglect and auditory extinction may show varying degrees of overlap, which makes the assessment even harder. Shielding one ear from the other as well as separating the ear from space is therefore critical for accurate assessment of auditory neglect. This can be achieved by use of specialized auditory tests (dichotic tasks and sound localization tests) for accurate interpretation of deficits. Herein, we have reviewed auditory neglect with an emphasis on the functional anatomy, clinical evaluation, and basic principles of specialized auditory tests.

  2. Activation of auditory white matter tracts as revealed by functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Tae, Woo Suk [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Yakunina, Natalia; Nam, Eui-Cheol [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Otolaryngology, School of Medicine, Chuncheon, Kangwon-do (Korea, Republic of); Kim, Tae Su [Kangwon National University Hospital, Department of Otolaryngology, Chuncheon (Korea, Republic of); Kim, Sam Soo [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Radiology, School of Medicine, Chuncheon (Korea, Republic of)

    2014-07-15

    The ability of functional magnetic resonance imaging (fMRI) to detect activation in brain white matter (WM) is controversial. In particular, studies on the functional activation of WM tracts in the central auditory system are scarce. We utilized fMRI to assess and characterize the entire auditory WM pathway under robust experimental conditions involving the acquisition of a large number of functional volumes, the application of broadband auditory stimuli of high intensity, and the use of sparse temporal sampling to avoid scanner noise effects and increase signal-to-noise ratio. Nineteen healthy volunteers were subjected to broadband white noise in a block paradigm; each run had four sound-on/off alternations and was repeated nine times for each subject. Sparse sampling (TR = 8 s) was used. In addition to traditional gray matter (GM) auditory center activation, WM activation was detected in the isthmus and midbody of the corpus callosum (CC), tapetum, auditory radiation, lateral lemniscus, and decussation of the superior cerebellar peduncles. At the individual level, 13 of 19 subjects (68 %) had CC activation. Callosal WM exhibited a temporal delay of approximately 8 s in response to the stimulation compared with GM. These findings suggest that direct evaluation of the entire functional network of the central auditory system may be possible using fMRI, which may aid in understanding the neurophysiological basis of the central auditory system and in developing treatment strategies for various central auditory disorders. (orig.)

  3. Grasping the sound: Auditory pitch influences size processing in motor planning.

    Science.gov (United States)

    Rinaldi, Luca; Lega, Carlotta; Cattaneo, Zaira; Girelli, Luisa; Bernardi, Nicolò Francesco

    2016-01-01

    Growing evidence shows that individuals consistently match auditory pitch with visual size. For instance, high-pitched sounds are perceptually associated with smaller visual stimuli, whereas low-pitched sounds are associated with larger ones. The present study explores whether this crossmodal correspondence, reported so far for perceptual processing, also modulates motor planning. To address this issue, we carried out a series of kinematic experiments to verify whether actions implying size processing are affected by auditory pitch. Experiment 1 showed that grasping movements toward small/large objects were initiated faster in response to high/low pitches, respectively, thus extending previous findings in the literature to more complex motor behavior. Importantly, auditory pitch influenced the relative scaling of the hand preshaping, with high pitches associated with smaller grip aperture compared with low pitches. Notably, no effect of auditory pitch was found for pointing movements (no grasp implied; Experiment 2), or when auditory pitch was irrelevant to the programming of the grip aperture, that is, when grasping an object of uniform size (Experiment 3). Finally, auditory pitch also influenced symbolic manual gestures expressing "small" and "large" concepts (Experiment 4). In sum, our results are novel in revealing the impact of auditory pitch on motor planning when size processing is required, and shed light on the role of auditory information in driving actions. (PsycINFO Database Record) PMID:26280267

  4. A songbird forebrain area potentially involved in auditory discrimination and memory formation

    Indian Academy of Sciences (India)

    Raphael Pinaud; Thomas A Terleph

    2008-03-01

    Songbirds rely on auditory processing of natural communication signals for a number of social behaviors, including mate selection, individual recognition and the rare behavior of vocal learning – the ability to learn vocalizations through imitation of an adult model, rather than by instinct. Like mammals, songbirds possess a set of interconnected ascending and descending auditory brain pathways that process acoustic information and that are presumably involved in the perceptual processing of vocal communication signals. Most auditory areas studied to date are located in the caudomedial forebrain of the songbird and include the thalamo-recipient field L (subfields L1, L2 and L3), the caudomedial and caudolateral mesopallium (CMM and CLM, respectively) and the caudomedial nidopallium (NCM). This review focuses on NCM, an auditory area previously proposed to be analogous to parts of the primary auditory cortex in mammals. Stimulation of songbirds with auditory stimuli drives vigorous electrophysiological responses and the expression of several activity-regulated genes in NCM. Interestingly, NCM neurons are tuned to species-specific songs and undergo some forms of experience-dependent plasticity in-vivo. These activity-dependent changes may underlie long-term modifications in the functional performance of NCM and constitute a potential neural substrate for auditory discrimination. We end this review by discussing evidence that suggests that NCM may be a site of auditory memory formation and/or storage.

  5. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703

  6. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Directory of Open Access Journals (Sweden)

    Yuko Hattori

    Full Text Available Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  8. Odors Bias Time Perception in Visual and Auditory Modalities.

    Science.gov (United States)

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of

  9. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu eYue

    2016-04-01

    Full Text Available Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a
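The over/under-reproduction pattern reported above (short intervals reproduced as longer than actual, long intervals as shorter) reduces to a simple signed-bias measure. The following is an illustrative computation, not the authors' stated analysis, and the millisecond values are hypothetical:

```python
def reproduction_bias(actual_ms, reproduced_ms):
    """Signed bias of a reproduced interval relative to the actual one:
    positive = over-reproduction, negative = under-reproduction."""
    return (reproduced_ms - actual_ms) / actual_ms

# Hypothetical values echoing the reported pattern: the 1000 ms interval
# is over-reproduced, the 4000 ms interval under-reproduced.
short_bias = reproduction_bias(1000, 1150)
long_bias = reproduction_bias(4000, 3600)
print(short_bias > 0, long_bias < 0)  # True True
```

Comparing such bias values across odor conditions (and against the no-odor control) is one straightforward way to express the interaction the abstract describes.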

  10. Hierarchical Network Design

    DEFF Research Database (Denmark)

    Thomadsen, Tommy

    2005-01-01

    Communication networks are immensely important today, since both companies and individuals use numerous services that rely on them. This thesis considers the design of hierarchical (communication) networks. Hierarchical networks consist of layers of networks and are well-suited for coping...... with changing and increasing demands. Two-layer networks consist of one backbone network, which interconnects cluster networks. The clusters consist of nodes and links, which connect the nodes. One node in each cluster is a hub node, and the backbone interconnects the hub nodes of each cluster and thus...... the clusters. The design of hierarchical networks involves clustering of nodes, hub selection, and network design, i.e. selection of links and routing of flows. Hierarchical networks have been in use for decades, but integrated design of these networks has only been considered for very special types of networks...
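The design steps the abstract names (clustering, hub selection, link selection) can be illustrated with a deliberately simplified two-layer sketch: the clustering is taken as given, each hub is the intra-cluster medoid, clusters are stars around their hub, and the backbone is a full mesh of hubs. This is only an illustration of the structure, not the thesis's integrated design method:

```python
import itertools
import math

def dist(a, b):
    # Euclidean link cost between two node coordinates
    return math.hypot(a[0] - b[0], a[1] - b[1])

def design_two_layer(nodes, clusters):
    """Given node coordinates and a fixed clustering (list of index lists),
    pick one hub per cluster (the node minimizing summed intra-cluster
    distance), connect cluster nodes to their hub as a star, and fully
    mesh the hubs to form the backbone."""
    hubs, links = [], []
    for cluster in clusters:
        hub = min(cluster,
                  key=lambda h: sum(dist(nodes[h], nodes[i]) for i in cluster))
        hubs.append(hub)
        links += [(hub, i) for i in cluster if i != hub]
    links += list(itertools.combinations(hubs, 2))  # backbone links
    cost = sum(dist(nodes[a], nodes[b]) for a, b in links)
    return hubs, links, cost

# Two well-separated clusters of three nodes each
nodes = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
hubs, links, cost = design_two_layer(nodes, [[0, 1, 2], [3, 4, 5]])
print(hubs)  # one hub index per cluster
```

An integrated design would optimize the clustering, hub choice and routing jointly; the sketch fixes each decision in sequence only to keep the structure visible.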

  11. Programming with Hierarchical Maps

    DEFF Research Database (Denmark)

    Ørbæk, Peter

    This report describes the hierarchical maps used as a central data structure in the Corundum framework. We describe its most prominent features, argue for its usefulness and briefly describe some of the software prototypes implemented using the technology....
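The report itself is not reproduced here, so the following is only a generic illustration of what a hierarchical map can look like: values addressed by slash-separated paths over nested dictionaries. The class name and API are invented for the example and are not Corundum's:

```python
class HierarchicalMap:
    """Minimal hierarchical map: leaf values stored at slash-separated
    paths, inner nodes represented as nested dicts (illustrative only)."""

    def __init__(self):
        self.root = {}

    def set(self, path, value):
        # Walk/create intermediate nodes, then store the leaf value.
        parts = path.split("/")
        node = self.root
        for p in parts[:-1]:
            node = node.setdefault(p, {})
        node[parts[-1]] = value

    def get(self, path, default=None):
        # Walk the path; return default if any component is missing.
        node = self.root
        for p in path.split("/"):
            if not isinstance(node, dict) or p not in node:
                return default
            node = node[p]
        return node

m = HierarchicalMap()
m.set("app/db/host", "localhost")
m.set("app/db/port", 5432)
print(m.get("app/db/port"))  # 5432
```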

  12. Micromechanics of hierarchical materials

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon, Jr.

    2012-01-01

    A short overview of micromechanical models of hierarchical materials (hybrid composites, biomaterials, fractal materials, etc.) is given. Several examples of the modeling of strength and damage in hierarchical materials are summarized, among them, 3D FE model of hybrid composites...... with nanoengineered matrix, fiber bundle model of UD composites with hierarchically clustered fibers and 3D multilevel model of wood considered as a gradient, cellular material with layered composite cell walls. The main areas of research in micromechanics of hierarchical materials are identified, among them......, the investigations of the effects of load redistribution between reinforcing elements at different scale levels, of the possibilities to control different material properties and to ensure synergy of strengthening effects at different scale levels and using the nanoreinforcement effects. The main future directions...

  13. Auditory pathways: anatomy and physiology.

    Science.gov (United States)

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external ear, middle ear, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream and a dorsal, mainly pattern-recognition, stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described.

  14. Hierarchical Dirichlet Scaling Process

    OpenAIRE

    Kim, Dongwoo; Oh, Alice

    2014-01-01

    We present the \\textit{hierarchical Dirichlet scaling process} (HDSP), a Bayesian nonparametric mixed membership model. The HDSP generalizes the hierarchical Dirichlet process (HDP) to model the correlation structure between metadata in the corpus and mixture components. We construct the HDSP based on the normalized gamma representation of the Dirichlet process, and this construction allows incorporating a scaling function that controls the membership probabilities of the mixture components. ...

  15. Hierarchical Communication Diagrams

    OpenAIRE

    Marcin Szpyrka; Piotr Matyasik; Jerzy Biernacki; Agnieszka Biernacka; Michał Wypych; Leszek Kotulski

    2016-01-01

    Formal modelling languages range from strictly textual ones like process algebra scripts to visual modelling languages based on hierarchical graphs like coloured Petri nets. Approaches equipped with visual modelling capabilities make developing process easier and help users to cope with more complex systems. Alvis is a modelling language that combines possibilities of formal models verification with flexibility and simplicity of practical programming languages. The paper deals with hierarchic...

  16. Perceptual grouping over time within and across auditory and tactile modalities.

    Directory of Open Access Journals (Sweden)

    I-Fan Lin

    Full Text Available In auditory scene analysis, population separation and temporal coherence have been proposed to explain how auditory features are grouped together and streamed over time. The present study investigated whether these two theories can be applied to tactile streaming and whether temporal coherence theory can be applied to crossmodal streaming. The results show that synchrony detection between two tones/taps at different frequencies/locations became difficult when one of the tones/taps was embedded in a perceptual stream. While the taps applied to the same location were streamed over time, the taps applied to different locations were not. This observation suggests that tactile stream formation can be explained by population-separation theory. On the other hand, temporally coherent auditory stimuli at different frequencies were streamed over time, but temporally coherent tactile stimuli applied to different locations were not. When there was within-modality streaming, temporally coherent auditory stimuli and tactile stimuli were not streamed over time, either. This observation suggests the limitation of temporal coherence theory when it is applied to perceptual grouping over time.

  17. Electrophysiological Auditory Responses and Language Development in Infants with Periventricular Leukomalacia

    Science.gov (United States)

    Avecilla-Ramirez, G. N.; Ruiz-Correa, S.; Marroquin, J. L.; Harmony, T.; Alba, A.; Mendoza-Montoya, O.

    2011-01-01

    This study presents evidence suggesting that electrophysiological responses to language-related auditory stimuli recorded at 46 weeks postconceptional age (PCA) are associated with language development, particularly in infants with periventricular leukomalacia (PVL). In order to investigate this hypothesis, electrophysiological responses to a set…

  18. Processing of acoustic motion in the auditory cortex of the rufous horseshoe bat, Rhinolophus rouxi

    OpenAIRE

    Firzlaff, Uwe

    2001-01-01

    This study investigated the representation of acoustic motion in different fields of auditory cortex of the rufous horseshoe bat, Rhinolophus rouxi. Motion in horizontal direction (azimuth) was simulated using successive stimuli with dynamically changing interaural intensity differences presented via earphones. The mechanisms underlying a specific sensitivity of neurons to the direction of motion were investigated using microiontophoretic application of γ-aminobutyric acid (GAB...

  19. The Analysis and Treatment of Problem Behavior Evoked by Auditory Stimulation

    Science.gov (United States)

    Devlin, Sarah; Healy, Olive; Leader, Geraldine; Reed, Phil

    2008-01-01

    The current study aimed to identify specific stimuli associated with music that served as an establishing operation (EO) for the problem behavior of a 6-year-old child with a diagnosis of autism. Specific EOs for problem behavior evoked by auditory stimulation could be identified. A differential negative reinforcement procedure was implemented for…

  20. A Persian version of the sustained auditory attention capacity test and its results in normal children

    Directory of Open Access Journals (Sweden)

    Sanaz Soltanparast

    2013-03-01

    Full Text Available Background and Aim: Sustained attention refers to the ability to maintain attention to target stimuli over a sustained period of time. This study was conducted to develop a Persian version of the sustained auditory attention capacity test and to study its results in normal children. Methods: Like the original version, the Persian version of the sustained auditory attention capacity test used speech stimuli: one hundred monosyllabic words, formed by random repetition (20 times) of the words of a 21-word list. The test was carried out at a comfortable hearing level using binaural, diotic presentation on 46 normal children, 7 to 11 years of age, of both genders. Results: There was a significant association between age and both the average impulsiveness error score (p=0.004) and the total score of the sustained auditory attention capacity test (p=0.005). No significant association was found between age and either the average inattention error score or the attention reduction span index. Gender did not have a significant impact on the various indicators of the test. Conclusion: The results of this test in a group of normal-hearing children confirmed its ability to measure sustained auditory attention capacity through speech stimuli.

  1. Salient stimuli in advertising: the effect of contrast interval length and type on recall.

    Science.gov (United States)

    Olsen, G Douglas

    2002-09-01

    Salient auditory stimuli (e.g., music or sound effects) are commonly used in advertising to elicit attention. However, issues related to the effectiveness of such stimuli are not well understood. This research examines the ability of a salient auditory stimulus, in the form of a contrast interval (CI), to enhance recall of message-related information. Researchers have argued that the effectiveness of the CI is a function of the temporal duration between the onset and offset of the change in the background stimulus and the nature of this stimulus. Three experiments investigate these propositions and indicate that recall is enhanced provided the CI is 3 s or less. Information highlighted with silence is recalled better than information highlighted with music.

  2. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer's work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory...... of sound as an active component in shaping urban environments. As urban conditions spread globally, new scales, shapes and forms of communities appear and call for new distinctions and models in the study and representation of sonic environments. Particularly so, since urban environments are increasingly...... presents some terminologies for mapping urban environments through their sonic configuration. Such probing into the practices of acoustic territorialisation may direct attention to some of the conflicting and disharmonious interests defining public inclusive domains. The paper investigates the concept...

  3. Auditory and visual interhemispheric communication in musicians and non-musicians.

    Science.gov (United States)

    Woelfle, Rebecca; Grahn, Jessica A

    2013-01-01

    The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer. PMID:24386382
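The CUD described above is simply the difference between mean crossed and mean uncrossed reaction times, taken as an estimate of interhemispheric transfer time. A minimal sketch (the reaction times below are hypothetical, not the study's data):

```python
from statistics import mean

def crossed_uncrossed_difference(crossed_rts, uncrossed_rts):
    """CUD: mean crossed reaction time minus mean uncrossed reaction
    time (ms); larger values suggest slower interhemispheric transfer."""
    return mean(crossed_rts) - mean(uncrossed_rts)

# Hypothetical simple reaction times in milliseconds
cud = crossed_uncrossed_difference([252, 249, 255], [248, 246, 251])
print(round(cud, 2))  # 3.67
```

Comparing CUDs between groups (musicians vs. non-musicians) and modalities (auditory vs. visual) is exactly the contrast the study reports.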

  4. Across-ear stimulus-specific adaptation in the auditory cortex

    Directory of Open Access Journals (Sweden)

    Xinxiu eXu

    2014-07-01

    Full Text Available The ability to detect unexpected or deviant events in natural scenes is critical for survival. In the auditory system, neurons from the midbrain to cortex adapt quickly to repeated stimuli but this adaptation does not fully generalize to other, rare stimuli, a phenomenon called stimulus-specific adaptation (SSA. Most studies of SSA were conducted with pure tones of different frequencies, and it is by now well-established that SSA to tone frequency is strong and robust in auditory cortex. Here we tested SSA in the auditory cortex to the ear of stimulation using broadband noise. We show that cortical neurons adapt specifically to the ear of stimulation, and that the contrast between the responses to stimulation of the same ear when rare and when common depends on the binaural interaction class of the neurons.
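A common way to quantify SSA in the literature is a contrast index between responses to the same stimulus when it is rare versus when it is common. The index and spike counts below are illustrative and not necessarily the exact metric used in this study:

```python
def ssa_index(resp_rare, resp_common):
    """Contrast index of stimulus-specific adaptation:
    (rare - common) / (rare + common), ranging from -1 to 1;
    positive values mean stronger responses when the stimulus is rare."""
    return (resp_rare - resp_common) / (resp_rare + resp_common)

# Hypothetical mean spike counts for stimulation of one ear
# when that ear is the rare (deviant) vs. common (standard) ear
print(ssa_index(12.0, 8.0))  # 0.2
```

For the across-ear paradigm, the two response values would come from the same ear of stimulation under the rare and common conditions, so a positive index indicates ear-specific adaptation.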

  5. Spontaneous high-gamma band activity reflects functional organization of auditory cortex in the awake macaque.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2012-06-01

    In the absence of sensory stimuli, spontaneous activity in the brain has been shown to exhibit organization at multiple spatiotemporal scales. In the macaque auditory cortex, responses to acoustic stimuli are tonotopically organized within multiple, adjacent frequency maps aligned in a caudorostral direction on the supratemporal plane (STP) of the lateral sulcus. Here, we used chronic microelectrocorticography to investigate the correspondence between sensory maps and spontaneous neural fluctuations in the auditory cortex. We first mapped tonotopic organization across 96 electrodes spanning approximately two centimeters along the primary and higher auditory cortex. In separate sessions, we then observed that spontaneous activity at the same sites exhibited spatial covariation that reflected the tonotopic map of the STP. This observation demonstrates a close relationship between functional organization and spontaneous neural activity in the sensory cortex of the awake monkey. PMID:22681693

  6. Auditory steady-state responses to NB CE-Chirp stimuli in normal-hearing adults

    Institute of Scientific and Technical Information of China (English)

    张强; 李玉茹

    2013-01-01

    Objective: To explore auditory steady-state response (ASSR) thresholds in normal-hearing young adults and the relationship between ASSR thresholds and pure-tone audiometry (PTA) thresholds. Methods: Thirty normal-hearing adults (60 ears), aged 24-30 years, underwent ASSR testing. Carrier tones (0.5, 1, 2 and 4 kHz) were presented binaurally and simultaneously at a modulation frequency of 90 Hz, and response thresholds were determined automatically. Testing was performed in two states, awake (7 subjects, 14 ears) and sleeping (22 subjects, 44 ears). Results: ASSR thresholds were above PTA thresholds. In both groups, ASSR thresholds did not differ significantly between ears or genders at any frequency. In sleeping subjects, ASSR thresholds correlated better with PTA thresholds, with moderate to high correlations at 1000 Hz and 2000 Hz. Taking 50 dB SPL as the analysis level in the sleeping group, response rates differed significantly across frequencies, with 500 Hz showing the lowest rate (84.1%; χ2=10.37, P=0.016). Conclusion: ASSR at a 90-Hz modulation frequency estimates hearing thresholds more accurately in sleeping normal-hearing young adults, and the thresholds are associated with PTA thresholds, especially at 1000 Hz and 2000 Hz.

  7. Subcortical correlates of auditory perceptual organization in humans.

    Science.gov (United States)

    Yamagishi, Shimpei; Otsuka, Sho; Furukawa, Shigeto; Kashino, Makio

    2016-09-01

    To make sense of complex auditory scenes, the auditory system sequentially organizes auditory components into perceptual objects or streams. In the conventional view of this process, the cortex plays a major role in perceptual organization, and subcortical mechanisms merely provide the cortex with acoustical features. Here, we show that the neural activities of the brainstem are linked to perceptual organization, which alternates spontaneously for human listeners without any stimulus change. The stimulus used in the experiment was an unchanging sequence of repeated triplet tones, which can be interpreted as either one or two streams. Listeners were instructed to report the perceptual states whenever they experienced perceptual switching between one and two streams throughout the stimulus presentation. Simultaneously, we recorded event related potentials with scalp electrodes. We measured the frequency-following response (FFR), which is considered to originate from the brainstem. We also assessed thalamo-cortical activity through the middle-latency response (MLR). The results demonstrate that the FFR and MLR varied with the state of auditory stream perception. In addition, we found that the MLR change precedes the FFR change with perceptual switching from a one-stream to a two-stream percept. This suggests that there are top-down influences on brainstem activity from the thalamo-cortical pathway. These findings are consistent with the idea of a distributed, hierarchical neural network for perceptual organization and suggest that the network extends to the brainstem level. PMID:27371867

  8. Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.

    Science.gov (United States)

    Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd

    2014-11-01

    In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences comprised six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information and

  10. Thoughts of death modulate psychophysical and cortical responses to threatening stimuli.

    Directory of Open Access Journals (Sweden)

    Elia Valentini

    Full Text Available Existential social psychology studies show that awareness of one's eventual death profoundly influences human cognition and behaviour by inducing defensive reactions against end-of-life related anxiety. Much less is known about the impact of reminders of mortality on brain activity. Therefore we explored whether reminders of mortality influence subjective ratings of intensity and threat of auditory and painful thermal stimuli and the associated electroencephalographic activity. Moreover, we explored whether personality and demographics modulate psychophysical and neural changes related to mortality salience (MS). Following MS induction, a specific increase in ratings of intensity and threat was found for both nociceptive and auditory stimuli. While MS did not have any specific effect on nociceptive and auditory evoked potentials, larger amplitude of theta oscillatory activity related to thermal nociceptive activity was found after thoughts of death were induced. MS thus exerted a top-down modulation on theta electroencephalographic oscillatory amplitude, specifically for brain activity triggered by painful thermal stimuli. This effect was higher in participants reporting higher threat perception, suggesting that inducing a death-related mind-set may have an influence on body-defence related somatosensory representations.

  11. Thoughts of death modulate psychophysical and cortical responses to threatening stimuli.

    Science.gov (United States)

    Valentini, Elia; Koch, Katharina; Aglioti, Salvatore Maria

    2014-01-01

    Existential social psychology studies show that awareness of one's eventual death profoundly influences human cognition and behaviour by inducing defensive reactions against end-of-life related anxiety. Much less is known about the impact of reminders of mortality on brain activity. Therefore we explored whether reminders of mortality influence subjective ratings of intensity and threat of auditory and painful thermal stimuli and the associated electroencephalographic activity. Moreover, we explored whether personality and demographics modulate psychophysical and neural changes related to mortality salience (MS). Following MS induction, a specific increase in ratings of intensity and threat was found for both nociceptive and auditory stimuli. While MS did not have any specific effect on nociceptive and auditory evoked potentials, larger amplitude of theta oscillatory activity related to thermal nociceptive activity was found after thoughts of death were induced. MS thus exerted a top-down modulation on theta electroencephalographic oscillatory amplitude, specifically for brain activity triggered by painful thermal stimuli. This effect was higher in participants reporting higher threat perception, suggesting that inducing a death-related mind-set may have an influence on body-defence related somatosensory representations. PMID:25386905
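
    The theta-band amplitude measure reported above can be illustrated with a minimal sketch: estimating 4-8 Hz power in a short epoch with a direct DFT. This is an illustration on synthetic data (stdlib Python only), not the authors' analysis pipeline.

```python
import math
import random

def band_power(signal, fs, f_lo, f_hi):
    """Total power in [f_lo, f_hi] Hz via a direct DFT (fine for short epochs)."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / (n * n)
    return total

fs = 250  # sampling rate, Hz
rng = random.Random(0)
# Synthetic 1-s epoch dominated by a 6 Hz (theta) oscillation plus noise.
epoch = [math.sin(2 * math.pi * 6 * t / fs) + 0.2 * rng.gauss(0, 1)
         for t in range(fs)]

theta = band_power(epoch, fs, 4, 8)    # large: the oscillation lives here
beta = band_power(epoch, fs, 13, 30)   # small: only noise
```

    Comparing such band-power estimates between conditions is the basic ingredient behind statements like "larger theta amplitude after MS induction."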

  12. Event-related desynchronization of frontal-midline theta rhythm during preconscious auditory oddball processing.

    Science.gov (United States)

    Kawamata, Masaru; Kirino, Eiji; Inoue, Reiichi; Arai, Heii

    2007-10-01

    The goal of this study was to explore the frontal-midline theta rhythm (Fm theta) generation mechanism employing event-related desynchronization/synchronization (ERD/ERS) analysis in relation to task-irrelevant external stimuli. A dual paradigm was employed: a videogame and the simultaneous presentation of passive auditory oddball stimuli. We analyzed the ERD/ERS data using both the fast Fourier transform (FFT) and the wavelet transform (WT). In the FFT data, during the periods with appearance of Fm theta, apparent ERD of the theta band was observed at Fz and Cz. ERD when Fm theta was present was much more prominent than when Fm theta was absent. In the WT data, as in the FFT data, ERD was seen again, but in this case the ERD was preceded by ERS during both the periods with and without Fm theta. Furthermore, the WT analysis indicated that ERD was followed by ERS during the periods without Fm theta. However, during Fm theta, no apparent ERS following ERD was seen. In our study, Fm theta was desynchronized by the auditory stimuli that were independent of the video game task used to evoke the Fm theta. The ERD of Fm theta might reflect a "positive suppression" mechanism that processes external auditory stimuli automatically and prevents attentional resources from being unnecessarily allocated to those stimuli. Another possibility is that Fm theta induced by our dual paradigm may reflect information processing modeled by multi-item working memory requirements for playing the videogame and the simultaneous auditory processing using a memory trace. ERS in the WT data without Fm theta might indicate further processing of the auditory information free from the "positive suppression" control reflected by Fm theta. PMID:17993201
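
    ERD/ERS as used above is conventionally quantified as the percentage change of band power in an event window relative to a pre-stimulus baseline (Pfurtscheller's convention). A minimal sketch with made-up power values:

```python
def erd_percent(event_power, baseline_power):
    """Pfurtscheller's band-power convention: negative values indicate ERD
    (desynchronization), positive values indicate ERS (synchronization)."""
    return 100.0 * (event_power - baseline_power) / baseline_power

# Hypothetical theta-band power values (arbitrary units):
print(erd_percent(2.0, 4.0))  # → -50.0 : theta power halved, i.e. ERD
print(erd_percent(6.0, 4.0))  # → 50.0  : theta power increased, i.e. ERS
```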

  13. Emergence of tuning to natural stimulus statistics along the central auditory pathway.

    Directory of Open Access Journals (Sweden)

    Jose A Garcia-Lazaro

    Full Text Available We have previously shown that neurons in primary auditory cortex (A1) of anaesthetized (ketamine/medetomidine) ferrets respond more strongly and reliably to dynamic stimuli whose statistics follow "natural" 1/f dynamics than to stimuli exhibiting pitch and amplitude modulations that are faster (1/f^0.5) or slower (1/f^2) than 1/f. To investigate where along the central auditory pathway this 1/f-modulation tuning arises, we have now characterized responses of neurons in the central nucleus of the inferior colliculus (ICC) and the ventral division of the medial geniculate nucleus of the thalamus (MGV) to 1/f^γ-distributed stimuli with γ varying between 0.5 and 2.8. We found that, while the great majority of neurons recorded from the ICC showed a strong preference for the most rapidly varying (1/f^0.5-distributed) stimuli, responses from MGV neurons did not exhibit marked or systematic preferences for any particular γ exponent. Only in A1 did a majority of neurons respond with higher firing rates to stimuli in which γ takes values near 1. These results indicate that 1/f tuning emerges at forebrain levels of the ascending auditory pathway.
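
    One common recipe for generating such 1/f^γ-distributed sequences, offered here as an illustrative assumption rather than the authors' actual stimulus generator, is to sum exact harmonics with random phases and amplitudes proportional to f^(-γ/2), so that power falls as 1/f^γ:

```python
import math
import random

def one_over_f_sequence(n, gamma, seed=0):
    """Sum of harmonics with amplitude k**(-gamma/2) and random phase,
    giving a power spectral density proportional to 1/f**gamma."""
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n // 2)]
    return [sum((k ** (-gamma / 2.0)) *
                math.sin(2.0 * math.pi * k * t / n + phases[k])
                for k in range(1, n // 2))
            for t in range(n)]

smooth = one_over_f_sequence(256, 2.0)   # 1/f^2: slow, smooth wandering
rough = one_over_f_sequence(256, 0.5)    # 1/f^0.5: fast, rough fluctuation
```

    Lower γ weights the high-frequency harmonics more heavily, so γ = 0.5 sequences fluctuate rapidly while γ = 2 sequences wander slowly, matching the "faster/slower than 1/f" description above.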

  14. Inducing attention not to blink: auditory entrainment improves conscious visual processing.

    Science.gov (United States)

    Ronconi, Luca; Pincham, Hannah L; Szűcs, Dénes; Facoetti, Andrea

    2016-09-01

    Our ability to allocate attention at different moments in time can sometimes fail to select stimuli occurring in close succession, preventing visual information from reaching awareness. This so-called attentional blink (AB) occurs when the second of two targets (T2) is presented closely after the first (T1) in a rapid serial visual presentation (RSVP). We hypothesized that entrainment to a rhythmic stream of stimuli, presented before the visual targets appear, would reduce the AB. Experiment 1 tested the effect of auditory entrainment by presenting sounds with a regular or irregular interstimulus interval prior to an RSVP where T1 and T2 were separated by three possible lags (1, 3 and 8). Experiment 2 examined visual entrainment by presenting visual stimuli in place of auditory stimuli. Results revealed that irrespective of sensory modality, arrhythmic stimuli preceding the RSVP triggered an alerting effect that improved T2 identification at lag 1, but impaired recovery from the AB at lag 8. Importantly, only auditory rhythmic entrainment was effective in reducing the AB at lag 3. Our findings demonstrate that manipulating the pre-stimulus condition can reduce deficits in the temporal attention that characterizes the human cognitive architecture, suggesting innovative training approaches for acquired and neurodevelopmental disorders. PMID:26215434

  15. Emotional stimuli and motor conversion disorder

    OpenAIRE

    Voon, V; Brezing, C.; Gallea, C; Ameli, R.; Roelofs, K.; LaFrance Jr, W.C.; Hallett, M

    2010-01-01

    Conversion disorder is characterized by neurological signs and symptoms related to an underlying psychological issue. Amygdala activity to affective stimuli is well characterized in healthy volunteers with greater amygdala activity to both negative and positive stimuli relative to neutral stimuli, and greater activity to negative relative to positive stimuli. We investigated the relationship between conversion disorder and affect by assessing amygdala activity to affective stimuli. We conduct...

  16. Auditory responses and stimulus-specific adaptation in rat auditory cortex are preserved across NREM and REM sleep.

    Science.gov (United States)

    Nir, Yuval; Vyazovskiy, Vladyslav V; Cirelli, Chiara; Banks, Matthew I; Tononi, Giulio

    2015-05-01

    Sleep entails a disconnection from the external environment. By and large, sensory stimuli do not trigger behavioral responses and are not consciously perceived as they usually are in wakefulness. Traditionally, sleep disconnection was ascribed to a thalamic "gate," which would prevent signal propagation along ascending sensory pathways to primary cortical areas. Here, we compared single-unit and LFP responses in core auditory cortex as freely moving rats spontaneously switched between wakefulness and sleep states. Despite robust differences in baseline neuronal activity, both the selectivity and the magnitude of auditory-evoked responses were comparable across wakefulness, nonrapid eye movement (NREM) and rapid eye movement (REM) sleep. We also compared responses during sleep and wakefulness using an oddball paradigm. Robust stimulus-specific adaptation (SSA) was observed following the onset of repetitive tones, and the strength of SSA effects (13-20%) was comparable across vigilance states. Thus, responses in core auditory cortex are preserved across sleep states, suggesting that evoked activity in primary sensory cortices is driven by external physical stimuli with little modulation by vigilance state. We suggest that sensory disconnection during sleep occurs at a stage later than primary sensory areas.
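
    SSA strength is often summarized with a contrast index comparing a neuron's response to the same tone when it is rare (deviant) versus common (standard). The sketch below uses one widely cited form of the index as an assumption; the paper's exact measure may differ:

```python
def ssa_index(deviant_rate, standard_rate):
    """(d - s) / (d + s): 0 means no adaptation; values above 0 mean the
    same tone drives a stronger response when rare than when common."""
    return (deviant_rate - standard_rate) / (deviant_rate + standard_rate)

# Hypothetical firing rates (spikes/s) for one tone in its two roles:
print(ssa_index(12.0, 8.0))  # → 0.2
```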

  17. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    DEFF Research Database (Denmark)

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders;

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS...... relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS=49; Controls=49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often...... with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain’s motor network with no difference between groups...

  18. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
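
    For context, the symmetrization the abstract alludes to can be sketched from the textbook radiosity system (this is the standard derivation, not necessarily the dissertation's exact form). Starting from

```latex
B_i = E_i + \rho_i \sum_j F_{ij} B_j ,
```

    multiplying row $i$ by $A_i/\rho_i$ and invoking the reciprocity relation $A_i F_{ij} = A_j F_{ji}$ gives

```latex
\frac{A_i}{\rho_i} B_i - \sum_j A_i F_{ij} B_j = \frac{A_i}{\rho_i} E_i ,
\qquad M_{ij} = \frac{A_i}{\rho_i}\,\delta_{ij} - A_i F_{ij} ,
```

    where $M$ is symmetric because its off-diagonal entries $-A_i F_{ij}$ and $-A_j F_{ji}$ coincide by reciprocity, opening the door to symmetric solvers such as conjugate gradients.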

  19. Multivoxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    DEFF Research Database (Denmark)

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.;

    2013-01-01

    human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during...... vocalization, compared with during passive listening. One network of regions appears to encode an “error signal” regardless of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across...... presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple...

  20. Differential Effects of Music and Video Gaming During Breaks on Auditory and Visual Learning.

    Science.gov (United States)

    Liu, Shuyan; Kuschpel, Maxim S; Schad, Daniel J; Heinz, Andreas; Rapp, Michael A

    2015-11-01

    The interruption of learning processes by breaks filled with diverse activities is common in everyday life. This study investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on auditory versus visual memory performance. Young adults were exposed to breaks involving (a) open eyes resting, (b) listening to music, and (c) playing a video game, immediately after memorizing auditory versus visual stimuli. To assess learning performance, words were recalled directly after the break (an 8:30 minute delay) and were recalled and recognized again after 7 days. Based on linear mixed-effects modeling, it was found that playing the Angry Birds video game during a short learning break impaired long-term retrieval in auditory learning but enhanced long-term retrieval in visual learning compared with the music and rest conditions. These differential effects of video games on visual versus auditory learning suggest specific interference of common break activities on learning. PMID:26448497

  2. Valid cues for auditory or somatosensory targets affect their perception: a signal detection approach.

    Science.gov (United States)

    Van Hulle, Lore; Van Damme, Stefaan; Crombez, Geert

    2013-01-01

    We investigated the effects of focusing attention towards auditory or somatosensory stimuli on perceptual sensitivity and response bias using a signal detection task. Participants (N = 44) performed an unspeeded detection task in which weak (individually calibrated) somatosensory or auditory stimuli were delivered. The focus of attention was manipulated by the presentation of a visual cue at the start of each trial. The visual cue consisted of the word "warmth" or the word "tone". This word cue was predictive of the corresponding target on two-thirds of the trials. As hypothesised, the results showed that cueing attention to a specific sensory modality resulted in a higher perceptual sensitivity for validly cued targets than for invalidly cued targets, as well as in a more liberal response criterion for reporting stimuli in the valid modality than in the invalid modality. The value of this experimental paradigm for investigating excessive attentional focus or hypervigilance in various non-clinical and clinical populations is discussed.
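
    Perceptual sensitivity and response bias in such a detection task are the classic signal detection quantities d′ and criterion c. A minimal sketch under the equal-variance Gaussian model; the log-linear correction for extreme proportions is one common choice, not necessarily the authors':

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """d' = z(H) - z(F) and c = -(z(H) + z(F)) / 2, with a log-linear
    correction so hit/false-alarm proportions of 0 or 1 stay finite."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(h) - z(f), -0.5 * (z(h) + z(f))

# Hypothetical counts for validly cued trials:
d, c = dprime_and_criterion(18, 2, 2, 18)
```

    A more liberal criterion for reporting stimuli in the validly cued modality, as found above, would show up as a more negative c for valid than for invalid trials.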

  3. Functional studies of the human auditory cortex, auditory memory and musical hallucinations

    International Nuclear Information System (INIS)

    of Brodmann, more intense in the contralateral (right) side. There is activation of both frontal executive areas without lateralization. Simultaneously, while area 39 of Brodmann was being activated, the temporal lobe was being inhibited. This apparently previously unreported functional observation suggests that inhibitory, and not only excitatory, relays play a role in the auditory pathways. The central activity in our patient (without external auditory stimuli), who was tested while having musical hallucinations, was a mirror image of that of our normal stimulated volunteers. It is suggested that the trigger role of the inner ear, if any, could conceivably be inhibitory or disinhibitory and not necessarily purely excitatory. Based on our observations, the trigger effect in our patient could occur via the left ear. Finally, our functional studies suggest that auditory memory for musical perceptions may be located in the right area 39 of Brodmann

  4. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    Science.gov (United States)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  5. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    Directory of Open Access Journals (Sweden)

    Yuanqing Li

    Full Text Available One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility of brain patterns and the between-class discriminability of brain patterns, and facilitated neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
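
    One simple instantiation of a within-class reproducibility index, offered here as an assumption (the paper's exact definition may differ), is the mean pairwise Pearson correlation between repeated activity patterns of one category:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length patterns."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def reproducibility_index(patterns):
    """Mean pairwise correlation across repeated patterns of one category."""
    pairs = [(i, j) for i in range(len(patterns))
             for j in range(i + 1, len(patterns))]
    return mean(pearson(patterns[i], patterns[j]) for i, j in pairs)

# Three hypothetical 4-voxel patterns evoked by the same category:
patterns = [[1.0, 2.0, 3.0, 4.0], [1.1, 2.2, 2.9, 4.2], [0.9, 1.8, 3.1, 3.9]]
```

    Congruent audiovisual stimuli eliciting more similar repetitions would push this index toward 1.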

  6. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mec

  7. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Directory of Open Access Journals (Sweden)

    Yi-Huang Su

    2016-01-01

    Full Text Available Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  8. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception.

    Science.gov (United States)

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  9. Background sounds contribute to spectrotemporal plasticity in primary auditory cortex.

    Science.gov (United States)

    Moucha, Raluca; Pandya, Pritesh K; Engineer, Navzer D; Rathbun, Daniel L; Kilgard, Michael P

    2005-05-01

    The mammalian auditory system evolved to extract meaningful information from complex acoustic environments. Spectrotemporal selectivity of auditory neurons provides a potential mechanism to represent natural sounds. Experience-dependent plasticity mechanisms can remodel the spectrotemporal selectivity of neurons in primary auditory cortex (A1). Electrical stimulation of the cholinergic nucleus basalis (NB) enables plasticity in A1 that parallels natural learning and is specific to acoustic features associated with NB activity. In this study, we used NB stimulation to explore how cortical networks reorganize after experience with frequency-modulated (FM) sweeps, and how background stimuli contribute to spectrotemporal plasticity in rat auditory cortex. Pairing an 8-4 kHz FM sweep with NB stimulation 300 times per day for 20 days decreased tone thresholds, frequency selectivity, and response latency of A1 neurons in the region of the tonotopic map activated by the sound. In an attempt to modify neuronal response properties across all of A1, the same NB activation was paired in a second group of rats with five downward FM sweeps, each spanning a different octave. No changes in FM selectivity or receptive field (RF) structure were observed when the neural activation was distributed across the cortical surface. However, the addition of unpaired background sweeps of different rates or direction was sufficient to alter RF characteristics across the tonotopic map in a third group of rats. These results extend earlier observations that cortical neurons can develop stimulus specific plasticity and indicate that background conditions can strongly influence cortical plasticity.

  10. Auditory steady-state responses in the rabbit.

    Science.gov (United States)

    Ottaviani, F; Paludetti, G; Grassi, S; Draicchio, F; Santarelli, R M; Serafini, G; Pettorossi, V E

    1990-01-01

    The authors have studied auditory brainstem (ABRs), middle latency (MLRs) and steady-state potentials (SSRs) in 15 adult male rabbits weighing between 2.5 and 3 kg in order to verify whether SSRs are due to a mere superimposition of ABRs and MLRs or to a resonance phenomenon. Ten of them were awake while 5 were studied under urethane anesthesia. Acoustic stimuli consisted of 0.1-ms square-wave pulses delivered at presentation rates ranging between 1 and 80/s at a stimulus intensity of 80 dB p.e. SPL. Our data show that reliable auditory SSRs can be obtained in the rabbit at a presentation rate of 30 stimuli/s, probably due to the superimposition of ABRs and MLR Pb waves, which show an interwave interval of about 35 ms. The nonlinear aspects which can be detected are probably due to the effect of decreasing interstimulus intervals on the duration and amplitude of the Pb wave. It can then be concluded that SSRs in the rabbit are due more to a superimposition of ABR and MLR waves than to a resonance phenomenon.
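
    The superimposition account is essentially a linearity claim: the predicted steady-state waveform is the sum of one transient (ABR + MLR) response per stimulus. A minimal sketch with a toy waveform (not rabbit data):

```python
def superpose(unit_response, stim_onsets, total_len):
    """Linear superposition: add one copy of the transient response at each
    stimulus onset; overlapping copies sum sample by sample."""
    out = [0.0] * total_len
    for t0 in stim_onsets:
        for i, v in enumerate(unit_response):
            if t0 + i < total_len:
                out[t0 + i] += v
    return out

# Toy transient and two closely spaced stimuli: responses overlap and add.
print(superpose([1.0, 0.5], [0, 1], 3))  # → [1.0, 1.5, 0.5]
```

    Comparing such a superposition prediction against the recorded SSR is one way to distinguish summation from a genuine resonance, which would deviate from the linear prediction.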

  11. Auditory Cortical Plasticity Drives Training-Induced Cognitive Changes in Schizophrenia.

    Science.gov (United States)

    Dale, Corby L; Brown, Ethan G; Fisher, Melissa; Herman, Alexander B; Dowling, Anne F; Hinkley, Leighton B; Subramaniam, Karuna; Nagarajan, Srikantan S; Vinogradov, Sophia

    2016-01-01

    Schizophrenia is characterized by dysfunction in basic auditory processing, as well as higher-order operations of verbal learning and executive functions. We investigated whether targeted cognitive training of auditory processing improves neural responses to speech stimuli, and how these changes relate to higher-order cognitive functions. Patients with schizophrenia performed an auditory syllable identification task during magnetoencephalography before and after 50 hours of either targeted cognitive training or a computer games control. Healthy comparison subjects were assessed at baseline and after a 10-week no-contact interval. Prior to training, patients (N = 34) showed reduced M100 response in primary auditory cortex relative to healthy participants (N = 13). At reassessment, only the targeted cognitive training patient group (N = 18) exhibited increased M100 responses. Additionally, this group showed increased induced high gamma band activity within left dorsolateral prefrontal cortex immediately after stimulus presentation, and later in bilateral temporal cortices. Training-related changes in neural activity correlated with changes in executive function scores but not verbal learning and memory. These data suggest that computerized cognitive training that targets auditory and verbal learning operations enhances both sensory responses in auditory cortex as well as engagement of prefrontal regions, as indexed during an auditory processing task with low demands on working memory. This neural circuit enhancement is in turn associated with better executive function but not verbal memory. PMID:26152668

  12. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  13. The Perception of Auditory Motion.

    Science.gov (United States)

    Carlile, Simon; Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  14. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  15. Hierarchically Acting Sterile Neutrinos

    OpenAIRE

    Chen, Chian-Shu (Physics Division, National Center for Theoretical Sciences, Hsinchu, 300, Taiwan); Takahashi, Ryo

    2011-01-01

    We propose that a hierarchical spectrum of sterile neutrinos (eV, keV, $10^{13-15}$ GeV) can account for the MiniBooNE and LSND oscillation anomalies, dark matter, and the baryon asymmetry of the universe (BAU), respectively. The scenario can also realize the smallness of active neutrino masses through the seesaw mechanism.

  16. Tight bifunctional hierarchical catalyst.

    Science.gov (United States)

    Højholt, Karen T; Vennestrøm, Peter N R; Tiruvalam, Ramchandra; Beato, Pablo

    2011-12-28

    A new concept to prepare tight bifunctional catalysts has been developed by anchoring CoMo(6) clusters on hierarchical ZSM-5 zeolites for simultaneous use in HDS and hydrocracking catalysis. The prepared material displays significantly improved activity in HDS catalysis compared to the impregnated counterpart. PMID:22048337

  17. Catalysis with hierarchical zeolites

    DEFF Research Database (Denmark)

    Holm, Martin Spangsberg; Taarning, Esben; Egeblad, Kresten;

    2011-01-01

    Hierarchical (or mesoporous) zeolites have attracted significant attention during the first decade of the 21st century, and so far this interest continues to increase. There have already been several reviews giving detailed accounts of the developments emphasizing different aspects of this resear...

  18. Hierarchical Porous Structures

    Energy Technology Data Exchange (ETDEWEB)

    Grote, Christopher John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-07

    Materials design is often at the forefront of technological innovation. While there has always been a push to generate increasingly low-density materials, such as aerogels or hydrogels, more recently the idea of bicontinuous structures has come into play. This review will cover some of the methods and applications for generating both porous and hierarchically porous structures.

  19. On the Relevance of Natural Stimuli for the Study of Brainstem Correlates: The Example of Consonance Perception.

    Directory of Open Access Journals (Sweden)

    Marion Cousineau

    Full Text Available Some combinations of musical tones sound pleasing to Western listeners, and are termed consonant, while others sound discordant, and are termed dissonant. The perceptual phenomenon of consonance has been traced to the acoustic property of harmonicity. It has been repeatedly shown that neural correlates of consonance can be found as early as the auditory brainstem as reflected in the harmonicity of the scalp-recorded frequency-following response (FFR). "Neural Pitch Salience" (NPS), measured from FFRs (essentially a time-domain equivalent of the classic pattern recognition models of pitch), has been found to correlate with behavioral judgments of consonance for synthetic stimuli. Following the idea that the auditory system has evolved to process behaviorally relevant natural sounds, and in order to test the generalizability of this finding made with synthetic tones, we recorded FFRs for consonant and dissonant intervals composed of synthetic and natural stimuli. We found that NPS correlated with behavioral judgments of consonance and dissonance for synthetic but not for naturalistic sounds. These results suggest that while some form of harmonicity can be computed from the auditory brainstem response, the general percept of consonance and dissonance is not captured by this measure. It might either be represented in the brainstem in a different code (such as a place code) or arise at higher levels of the auditory pathway. Our findings further illustrate the importance of using natural sounds, as a complementary tool to fully-controlled synthetic sounds, when probing auditory perception.
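
    The "time-domain pattern recognition" idea behind NPS can be illustrated with a toy autocorrelation-based salience measure. This is a drastic simplification, not the published NPS metric; the function name, the normalized-autocorrelation-peak definition, and the test frequencies are assumptions for illustration only.

```python
import math

def neural_pitch_salience(signal, sr, f0_range=(80, 400)):
    """Toy time-domain salience: the peak of the energy-normalized
    autocorrelation over candidate F0 periods. A drastic simplification
    of the published NPS measure, for illustration only."""
    energy = sum(x * x for x in signal)
    lags = range(int(sr / f0_range[1]), int(sr / f0_range[0]) + 1)
    return max(sum(signal[i] * signal[i + lag] for i in range(len(signal) - lag))
               / energy for lag in lags)

sr = 8000
t = [i / sr for i in range(2048)]
# Harmonic triad on a 200 Hz fundamental vs. an inharmonic tone cluster.
harmonic = [sum(math.sin(2 * math.pi * 200 * k * x) for k in (1, 2, 3)) for x in t]
inharmonic = [sum(math.sin(2 * math.pi * f * x) for f in (200, 313, 547)) for x in t]
assert neural_pitch_salience(harmonic, sr) > neural_pitch_salience(inharmonic, sr)
```

    A harmonic complex produces a strong autocorrelation peak at its common period, while an inharmonic combination does not, mirroring the harmonicity contrast the study manipulates.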

  20. Music perception and cognition following bilateral lesions of auditory cortex.

    Science.gov (United States)

    Tramo, M J; Bharucha, J J; Musiek, F E

    1990-01-01

    We present experimental and anatomical data from a case study of impaired auditory perception following bilateral hemispheric strokes. To consider the cortical representation of sensory, perceptual, and cognitive functions mediating tonal information processing in music, pure tone sensation thresholds, spectral intonation judgments, and the associative priming of spectral intonation judgments by harmonic context were examined, and lesion localization was analyzed quantitatively using straight-line two-dimensional maps of the cortical surface reconstructed from magnetic resonance images. Despite normal pure tone sensation thresholds at 250-8000 Hz, the perception of tonal spectra was severely impaired, such that harmonic structures (major triads) were almost uniformly judged to sound dissonant; yet, the associative priming of spectral intonation judgments by harmonic context was preserved, indicating that cognitive representations of tonal hierarchies in music remained intact and accessible. Brainprints demonstrated complete bilateral lesions of the transverse gyri of Heschl and partial lesions of the right and left superior temporal gyri involving 98 and 20% of their surface areas, respectively. In the right hemisphere, there was partial sparing of the planum temporale, temporoparietal junction, and inferior parietal cortex. In the left hemisphere, all of the superior temporal region anterior to the transverse gyrus and parts of the planum temporale, temporoparietal junction, inferior parietal cortex, and insula were spared. These observations suggest that (1) sensory, perceptual, and cognitive functions mediating tonal information processing in music are neurologically dissociable; (2) complete bilateral lesions of primary auditory cortex combined with partial bilateral lesions of auditory association cortex chronically impair tonal consonance perception; (3) cognitive functions that hierarchically structure pitch information and generate harmonic expectancies

  1. Binocular Combination of Second-Order Stimuli

    OpenAIRE

    Zhou, Jiawei; Liu, Rong; Zhou, Yifeng; Hess, Robert F.

    2014-01-01

    Phase information is a fundamental aspect of visual stimuli. However, the nature of the binocular combination of stimuli defined by modulations in contrast, so-called second-order stimuli, is presently not clear. To address this issue, we measured binocular combination for first- (luminance modulated) and second-order (contrast modulated) stimuli using a binocular phase combination paradigm in seven normal adults. We found that the binocular perceived phase of second-order gratings depends on...

  2. Electrostimulation mapping of comprehension of auditory and visual words.

    Science.gov (United States)

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. PMID:26332785


  4. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters pyramidal cell spike timing and firing rates in response to sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), with greater suppression at 20 ms pairing intervals for single-unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience

  5. A Detection-Theoretic Analysis of Auditory Streaming and Its Relation to Auditory Masking.

    Science.gov (United States)

    Chang, An-Chieh; Lutfi, Robert; Lee, Jungmee; Heo, Inseok

    2016-09-18

    Research on hearing has long been challenged with understanding our exceptional ability to hear out individual sounds in a mixture (the so-called cocktail party problem). Two general approaches to the problem have been taken using sequences of tones as stimuli. The first has focused on our tendency to hear sequences, sufficiently separated in frequency, split into separate cohesive streams (auditory streaming). The second has focused on our ability to detect a change in one sequence, ignoring all others (auditory masking). The two phenomena are clearly related, but that relation has never been evaluated analytically. This article offers a detection-theoretic analysis of the relation between multitone streaming and masking that underscores the expected similarities and differences between these phenomena and the predicted outcome of experiments in each case. The key to establishing this relation is the function linking performance to the information divergence of the tone sequences, DKL (a measure of the statistical separation of their parameters). A strong prediction is that streaming and masking of tones will be a common function of DKL provided that the statistical properties of sequences are symmetric. Results of experiments are reported supporting this prediction.
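
    The linking function depends on the information divergence DKL of the tone sequences. For tone parameters modeled as univariate Gaussians, DKL has a closed form; the sketch below is illustrative (the Gaussian parameterization and parameter values are assumptions, not taken from the article):

```python
import math

def gaussian_kl(mu_p, sigma_p, mu_q, sigma_q):
    """Closed-form D_KL(P || Q) for univariate Gaussians P and Q.

    Illustrative: this just shows how a statistical separation of
    tone-sequence parameters (e.g. mean and spread of frequency)
    maps onto a single divergence number."""
    return (math.log(sigma_q / sigma_p)
            + (sigma_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sigma_q ** 2)
            - 0.5)

# Identical sequences: zero divergence, hence no basis for streaming
# or for detecting a change.
assert gaussian_kl(500.0, 20.0, 500.0, 20.0) == 0.0
# Widening the frequency separation increases D_KL, predicting both
# stronger streaming and easier change detection.
assert gaussian_kl(500.0, 20.0, 600.0, 20.0) > gaussian_kl(500.0, 20.0, 520.0, 20.0)
```

    The prediction in the article is that streaming and masking performance collapse onto a common function of this divergence when the sequences' statistics are symmetric.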

  6. Classification across the senses: Auditory-visual cognitive performance in a California sea lion (Zalophus californianus)

    Science.gov (United States)

    Lindemann, Kristy L.; Reichmuth-Kastak, Colleen; Schusterman, Ronald J.

    2005-09-01

    The model of stimulus equivalence describes how perceptually dissimilar stimuli can become interrelated to form useful categories both within and between the sensory modalities. A recent experiment expanded upon prior work with a California sea lion by examining stimulus classification across the auditory and visual modalities. Acoustic stimuli were associated with an exemplar from one of two pre-existing visual classes in a matching-to-sample paradigm. After direct training of these associations, the sea lion showed spontaneous transfer of the new auditory stimuli to the remaining members of the visual classes. The sea lion's performance on this cross-modal equivalence task was similar to that shown by human subjects in studies of emergent word learning and reading comprehension. Current research with the same animal further examines how stimulus classes can be expanded across modalities. Fast-mapping techniques are used to rapidly establish new auditory-visual relationships between acoustic cues and multiple arbitrary visual stimuli. Collectively, this research illustrates complex cross-modal performances in a highly experienced subject and provides insight into how animals organize information from multiple sensory modalities into meaningful representations.

  7. Auditory-Verbal Comprehension Development of 2-5 Year Old Normal Persian Speaking Children in Tehran, Iran

    Directory of Open Access Journals (Sweden)

    Fariba Yadegari

    2011-06-01

    Full Text Available Background and Aim: Understanding and defining developmental norms of auditory comprehension is a necessity for detecting auditory-verbal comprehension impairments in children. We investigated the lexical auditory development of Persian (Farsi) speaking children. Methods: In this cross-sectional study, researchers first observed four 2-5 year old normal children's auditory comprehension of adults' child-directed utterances at available nurseries, in order to gather a large set of words comprehensible to children of the same age. The words were classified into nouns, verbs and adjectives. Auditory-verbal comprehension task items were also arranged in 2 sections assessing subordinate and superordinate auditory comprehension. Colored pictures were provided for each item. Thirty 2-5 year old normal children were randomly selected from nurseries all over Tehran. Children were tested with this task and, subsequently, the means of their correct responses were analyzed. Results: The findings revealed a high positive correlation between auditory-verbal comprehension and age (r=0.804, p=0.001). Comparing children in 3 age groups of 2-3, 3-4 and 4-5 years showed that the subordinate and superordinate auditory comprehension of the youngest group was significantly lower (p<0.05), while the difference between subordinate and superordinate auditory comprehension was significant in all age groups (p<0.05). Conclusion: Auditory-verbal comprehension develops much faster at younger than at older ages, and there is no prominent difference between word classes, including nouns, verbs and adjectives. The slower development of superordinate auditory comprehension implies a hierarchical semantic evolution of words.

  8. How does the extraction of local and global auditory regularities vary with context?

    Directory of Open Access Journals (Sweden)

    Sébastien Marti

    Full Text Available How does the human brain extract regularities from its environment? There is evidence that short-range or 'local' regularities (within seconds) are automatically detected by the brain, while long-range or 'global' regularities (over tens of seconds or more) require conscious awareness. In the present experiment, we asked whether participants' attention was needed to acquire such auditory regularities, to detect their violation, or both. We designed a paradigm in which participants listened to predictable sounds. Subjects could be distracted by a visual task at two moments: when they were first exposed to a regularity or when they detected violations of this regularity. MEG recordings revealed that early brain responses (100-130 ms) to violations of short-range regularities were unaffected by visual distraction and driven essentially by local transitional probabilities. Based on global workspace theory and prior results, we expected that visual distraction would eliminate the long-range global effect, but unexpectedly, we found the contrary, i.e. late brain responses (300-600 ms) to violations of long-range regularities on audio-visual trials but not on auditory-only trials. Further analyses showed that, in fact, visual distraction was incomplete and that auditory and visual stimuli interfered in both directions. Our results show that conscious, attentive subjects can learn the long-range dependencies present in auditory stimuli even while performing a visual task on synchronous visual stimuli. Furthermore, they acquire a complex regularity and end up making different predictions for the very same stimulus depending on the context (i.e. absence or presence of visual stimuli). These results suggest that while short-range regularity detection is driven by local transitional probabilities between stimuli, the human brain detects and stores long-range regularities in a highly flexible, context-dependent manner.
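
    The local transitional probabilities said to drive the early responses can be estimated directly from a symbol sequence. A minimal sketch (the function names and the one-minus-probability "surprise" measure are illustrative assumptions, not the authors' model):

```python
from collections import Counter, defaultdict

def transition_probs(sequence):
    """Estimate first-order transitional probabilities P(next | current)
    from a sequence of discrete sound labels (illustrative sketch)."""
    pair_counts = defaultdict(Counter)
    for cur, nxt in zip(sequence, sequence[1:]):
        pair_counts[cur][nxt] += 1
    return {cur: {nxt: n / sum(ctr.values()) for nxt, n in ctr.items()}
            for cur, ctr in pair_counts.items()}

def surprise(sequence, probs, floor=1e-6):
    """Improbability of each observed transition; a locally deviant
    sound yields a high value (cf. the early 100-130 ms responses)."""
    return [1.0 - probs.get(cur, {}).get(nxt, floor)
            for cur, nxt in zip(sequence, sequence[1:])]

seq = list("AAAAAAAAB")                 # a rare local deviant at the end
probs = transition_probs(seq)
vals = surprise(seq, probs)
assert max(vals) == vals[-1]            # the A->B transition stands out
```

    Global regularities, by contrast, span many such transitions and cannot be captured by a first-order table, which is the distinction the experiment exploits.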

  9. Tactile feedback improves auditory spatial localization

    OpenAIRE

    Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds b...


  11. Rapid context-based identification of target sounds in an auditory scene

    Science.gov (United States)

    Gamble, Marissa L.; Woldorff, Marty G.

    2015-01-01

    To make sense of our dynamic and complex auditory environment, we must be able to parse the sensory input into usable parts and pick out relevant sounds from all the potentially distracting auditory information. While it is unclear exactly how we accomplish this difficult task, Gamble and Woldorff (2014) recently reported an ERP study of an auditory target-search task in a temporally and spatially distributed, rapidly presented, auditory scene. They reported an early, differential, bilateral activation (beginning ~60 ms) between feature-deviating Target stimuli and physically equivalent feature-deviating Nontargets, reflecting a rapid Target-detection process. This was followed shortly later (~130 ms) by the lateralized N2ac ERP activation, reflecting the focusing of auditory spatial attention toward the Target sound and paralleling attentional-shifting processes widely studied in vision. Here we directly examined the early, bilateral, Target-selective effect to better understand its nature and functional role. Participants listened to midline-presented sounds that included Target and Nontarget stimuli that were randomly either embedded in a brief rapid stream or presented alone. The results indicate that this early bilateral effect results from a template for the Target that utilizes its feature deviancy within a stream to enable rapid identification. Moreover, individual-differences analysis showed that the size of this effect was larger for subjects with faster response times. The findings support the hypothesis that our auditory attentional systems can implement and utilize a context-based relational template for a Target sound, making use of additional auditory information in the environment when needing to rapidly detect a relevant sound. PMID:25848684

  12. A corollary discharge mechanism modulates central auditory processing in singing crickets.

    Science.gov (United States)

    Poulet, J F A; Hedwig, B

    2003-03-01

    Crickets communicate using loud (100 dB SPL) sound signals that could adversely affect their own auditory system. To examine how they cope with this self-generated acoustic stimulation, intracellular recordings were made from auditory afferent neurons and an identified auditory interneuron, the Omega 1 neuron (ON1), during pharmacologically elicited singing (stridulation). During sonorous stridulation, the auditory afferents and ON1 responded with bursts of spikes to the crickets' own song. When the crickets were stridulating silently, after one wing had been removed, only a few spikes were recorded in the afferents and ON1. Primary afferent depolarizations (PADs) occurred in the terminals of the auditory afferents, and inhibitory postsynaptic potentials (IPSPs) were apparent in ON1. The PADs and IPSPs were composed of many summed, small-amplitude potentials that occurred at a rate of about 230 Hz. The PADs and the IPSPs started during the closing wing movement and peaked in amplitude during the subsequent opening wing movement. As a consequence, during silent stridulation, ON1's response to acoustic stimuli was maximally inhibited during wing opening. Inhibition coincides with the time when ON1 would otherwise be most strongly excited by self-generated sounds in a sonorously stridulating cricket. The PADs and the IPSPs persisted in fictively stridulating crickets whose ventral nerve cord had been isolated from muscles and sense organs. This strongly suggests that the inhibition of the auditory pathway is the result of a corollary discharge from the stridulation motor network. The central inhibition was mimicked by hyperpolarizing current injection into ON1 while it was responding to a 100 dB SPL sound pulse. This suppressed its spiking response to the acoustic stimulus and maintained its response to subsequent, quieter stimuli. The corollary discharge therefore prevents auditory desensitization in stridulating crickets and allows the animals to respond to external

  13. Nested Hierarchical Dirichlet Processes.

    Science.gov (United States)

    Paisley, John; Wang, Chong; Blei, David M; Jordan, Michael I

    2015-02-01

    We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP generalizes the nested Chinese restaurant process (nCRP) to allow each word to follow its own path to a topic node according to a per-document distribution over the paths on a shared tree. This alleviates the rigid, single-path formulation assumed by the nCRP, allowing documents to easily express complex thematic borrowings. We derive a stochastic variational inference algorithm for the model, which enables efficient inference for massive collections of text documents. We demonstrate our algorithm on 1.8 million documents from The New York Times and 2.7 million documents from Wikipedia. PMID:26353240
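
    The nCRP that the nHDP generalizes assigns each document a root-to-leaf path by, at every level, either following an existing child (with probability proportional to its usage count) or opening a new one (with probability proportional to a concentration parameter). A minimal generative sketch; the data layout and parameter names are illustrative assumptions, and the nHDP itself goes further by giving each document its own distribution over paths:

```python
import random

def new_node():
    """A tree node: how many items have passed through it, plus children."""
    return {"count": 0, "children": {}}

def ncrp_path(root, depth, gamma, rng):
    """Sample one root-to-leaf path from a nested Chinese restaurant process."""
    path, node = [], root
    for _ in range(depth):
        total = sum(c["count"] for c in node["children"].values())
        if rng.random() < gamma / (gamma + total):
            label = len(node["children"])      # open a new table
            node["children"][label] = new_node()
        else:
            r = rng.uniform(0, total)          # pick existing, prob ∝ count
            for label, child in node["children"].items():
                r -= child["count"]
                if r <= 0:
                    break
        node = node["children"][label]
        node["count"] += 1
        path.append(label)
    return path

rng = random.Random(0)
root = new_node()
paths = [ncrp_path(root, depth=3, gamma=1.0, rng=rng) for _ in range(100)]
assert all(len(p) == 3 for p in paths)         # every document gets a full path
assert len(root["children"]) >= 1              # the shared tree has grown
```

    Popular paths accumulate counts and attract more documents (the "rich get richer" property), while gamma controls how readily new subtopics are created.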

  14. Hierarchical surface fragments

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A new compact level-of-detail representation, called hierarchical surface fragments, for geometric objects with highly complex shape is presented. The representation comprises a set of irregular unstructured sampled surface fragments, whose boundary is a circle viewed along its normal. An efficient algorithm to construct the representation is described. Within the visualization framework, a screen-tile technique is proposed to accelerate rendering. Since an approximate z-buffer algorithm is adopted to quickly determine the visibility of each rendering primitive, a new buffer, the z-delta-buffer, is designed to facilitate solving the problems raised by the approximation and to improve image fidelity. Finally, a solution is provided to integrate our rendering approach for hierarchical surface fragments with traditional polygon-based methods.

  15. Auditory Neuropathy - A Case of Auditory Neuropathy after Hyperbilirubinemia

    Directory of Open Access Journals (Sweden)

    Maliheh Mazaher Yazdi

    2007-12-01

    Full Text Available Background and Aim: Auditory neuropathy is a hearing disorder in which peripheral hearing is normal, but the eighth nerve and brainstem are abnormal. By clinical definition, patients with this disorder have normal OAEs but exhibit an absent or severely abnormal ABR. Auditory neuropathy was first reported in the late 1970s, as different methods could identify the discrepancy between an absent ABR and a present hearing threshold. Speech understanding difficulties are worse than can be predicted from other tests of hearing function. Auditory neuropathy may also affect vestibular function. Case Report: This article presents electrophysiological and behavioral data from a case of auditory neuropathy in a child with normal hearing after hyperbilirubinemia, over a 5-year follow-up. Audiological findings demonstrate remarkable changes after multidisciplinary rehabilitation. Conclusion: Auditory neuropathy may involve damage to the inner hair cells, the specialized sensory cells in the inner ear that transmit information about sound through the nervous system to the brain. Other causes may include faulty connections between the inner hair cells and the nerve leading from the inner ear to the brain, or damage to that nerve itself. People with auditory neuropathy have OAE responses but an absent ABR, and a hearing loss threshold that can be permanent, worsen, or improve.

  16. Hierarchically Structured Electrospun Fibers

    Directory of Open Access Journals (Sweden)

    Nicole E. Zander

    2013-01-01

    Full Text Available Traditional electrospun nanofibers have a myriad of applications ranging from scaffolds for tissue engineering to components of biosensors and energy harvesting devices. The generally smooth one-dimensional structure of the fibers has stood as a limitation to several interesting novel applications. Control of fiber diameter, porosity and collector geometry will be briefly discussed, as will more traditional methods for controlling fiber morphology and fiber mat architecture. The remainder of the review will focus on new techniques to prepare hierarchically structured fibers. Fibers with hierarchical primary structures—including helical, buckled, and beads-on-a-string fibers, as well as fibers with secondary structures, such as nanopores, nanopillars, nanorods, and internally structured fibers and their applications—will be discussed. These new materials with helical/buckled morphology are expected to possess unique optical and mechanical properties with possible applications for negative refractive index materials, highly stretchable/high-tensile-strength materials, and components in microelectromechanical devices. Core-shell type fibers enable a much wider variety of materials to be electrospun and are expected to be widely applied in the sensing, drug delivery/controlled release fields, and in the encapsulation of live cells for biological applications. Materials with a hierarchical secondary structure are expected to provide new superhydrophobic and self-cleaning materials.

  17. HDS: Hierarchical Data System

    Science.gov (United States)

    Pearce, Dave; Walter, Anton; Lupton, W. F.; Warren-Smith, Rodney F.; Lawden, Mike; McIlwrath, Brian; Peden, J. C. M.; Jenness, Tim; Draper, Peter W.

    2015-02-01

    The Hierarchical Data System (HDS) is a file-based hierarchical data system designed for the storage of a wide variety of information. It is particularly suited to the storage of large multi-dimensional arrays (with their ancillary data) where efficient access is needed. It is a key component of the Starlink software collection (ascl:1110.012) and is used by the Starlink N-Dimensional Data Format (NDF) library (ascl:1411.023). HDS organizes data into hierarchies, broadly similar to the directory structure of a hierarchical filing system, but contained within a single HDS container file. The structures stored in these files are self-describing and flexible; HDS supports modification and extension of structures previously created, as well as functions such as deletion, copying, and renaming. All information stored in HDS files is portable between the machines on which HDS is implemented. Thus, there are no format conversion problems when moving between machines. HDS can write files in a private binary format (version 4), or be layered on top of HDF5 (version 5).
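
    The record above describes HDS only at a high level; its actual API is not shown here. As a purely illustrative sketch of the hierarchical-container idea (all class and method names below are invented for illustration, not HDS calls), a single container holding a tree of named components with path-style access might look like:

```python
# Illustrative sketch only: a minimal hierarchical container in the spirit of
# HDS (one container holding a tree of named components). Class and method
# names here are invented; they are NOT part of the HDS API.

class Node:
    def __init__(self, name, value=None):
        self.name = name          # component name
        self.value = value        # leaf payload (e.g. an array), or None
        self.children = {}        # named sub-components

    def create(self, path, value=None):
        """Create a component at a '/'-separated path, like a directory tree."""
        node = self
        parts = path.strip("/").split("/")
        for part in parts[:-1]:
            node = node.children.setdefault(part, Node(part))
        leaf = node.children.setdefault(parts[-1], Node(parts[-1]))
        leaf.value = value
        return leaf

    def find(self, path):
        """Look up a component by its '/'-separated path."""
        node = self
        for part in path.strip("/").split("/"):
            node = node.children[part]
        return node

root = Node("container")
root.create("OBS/DATA_ARRAY", [1.0, 2.0, 3.0])
root.create("OBS/TITLE", "test exposure")
print(root.find("OBS/TITLE").value)   # test exposure
```

    In real HDS (or an HDF5-backed layer), deletion, copying, and renaming of previously created structures would also be supported, as the record notes.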

  18. Hierarchical video summarization

    Science.gov (United States)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

    We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest-level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. Finally, we propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
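
    The abstract's pairwise K-means with a temporal consecutiveness constraint is not fully specified here; as a simplified stand-in, the sketch below builds a coarse-to-fine hierarchy by greedily merging the most similar temporally adjacent key-frames (toy 3-bin "color" features; all names are invented for illustration):

```python
# Simplified sketch of coarse-to-fine key-frame summarization: repeatedly
# merge the closest *temporally adjacent* key-frames. This is a stand-in for
# the paper's pairwise K-means with temporal-consecutiveness constraint,
# not the authors' exact algorithm.

def merge_level(frames):
    """One coarsening step: merge the closest adjacent pair of key-frames."""
    if len(frames) < 2:
        return frames
    def dist(a, b):  # squared distance between adjacent feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    i = min(range(len(frames) - 1), key=lambda k: dist(frames[k], frames[k + 1]))
    merged = [(x + y) / 2 for x, y in zip(frames[i], frames[i + 1])]
    return frames[:i] + [merged] + frames[i + 2:]

def hierarchy(frames, levels):
    """Build a coarse-to-fine summary: levels[0] is finest, last is coarsest."""
    out = [frames]
    while len(out[-1]) > 1 and len(out) < levels:
        out.append(merge_level(out[-1]))
    return out

# Toy "color histogram" features for four key-frames, in temporal order:
key_frames = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0], [0.0, 0.9, 0.1]]
summary = hierarchy(key_frames, levels=3)
print([len(level) for level in summary])  # [4, 3, 2]
```

    A browser would show the coarsest level first and let the user drill down to finer levels, mirroring the multi-level browsing the abstract describes.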

  19. Predictive uncertainty in auditory sequence processing.

    Science.gov (United States)

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
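
    Shannon entropy as a measure of predictive uncertainty can be made concrete with a short sketch; the next-note distributions below are toy values, not the study's variable-order Markov estimates:

```python
# Sketch: Shannon entropy of a next-note distribution as predictive
# uncertainty. The distributions below are toy data, not output of the
# study's unsupervised, variable-order Markov model.
import math

def shannon_entropy(probs):
    """H = -sum(p * log2 p), in bits; higher means more uncertain."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy next-note distributions conditioned on a melodic context:
low_entropy_context  = [0.85, 0.05, 0.05, 0.05]   # one continuation dominates
high_entropy_context = [0.25, 0.25, 0.25, 0.25]   # all continuations equally likely

print(round(shannon_entropy(low_entropy_context), 3))   # 0.848
print(round(shannon_entropy(high_entropy_context), 3))  # 2.0
```

    On this account, listeners should report greater prospective uncertainty in the second context, which is the pattern the study observed for high- versus low-entropy melodic contexts.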

  20. Predictive uncertainty in auditory sequence processing

    Directory of Open Access Journals (Sweden)

    Niels Chr. Hansen

    2014-09-01

    Full Text Available Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty - a property of listeners’ prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners’ perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.

  1. Predictive uncertainty in auditory sequence processing.

    Science.gov (United States)

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music. PMID:25295018

  2. Effect of auditory deafferentation on the synaptic connectivity of a pair of identified interneurons in adult field crickets.

    Science.gov (United States)

    Brodfuehrer, P D; Hoy, R R

    1988-01-01

    In adult crickets, Teleogryllus oceanicus, unilateral auditory deafferentation causes the medial dendrites of an afferent-deprived, identified auditory interneuron (Int-1) in the prothoracic ganglion to sprout and form new functional connections in the contralateral auditory neuropil. The establishment of these new functional connections by the deafferented Int-1, however, does not appear to affect the physiological responses of Int-1's homolog on the intact side of the prothoracic ganglion which also innervates this auditory neuropil. Thus it appears that the sprouting dendrites of the deafferented Int-1 are not functionally competing with those of the intact Int-1 for synaptic connections in the remaining auditory neuropil following unilateral deafferentation in adult crickets. Moreover, we demonstrate that auditory function is restored to the afferent-deprived Int-1 within 4-6 days following deafferentation, when few branches of Int-1's medial dendrites can be seen to have sprouted. The strength of the physiological responses and extent of dendritic sprouting in the deafferented Int-1 progressively increase with time following deafferentation. By 28 days following deafferentation, most of the normal physiological responses of Int-1 to auditory stimuli have been restored in the deafferented Int-1, and the medial dendrites of the deafferented Int-1 have clearly sprouted and grown across into the contralateral auditory afferent field. The strength of the physiological responses of the deafferented Int-1 to auditory stimuli and extent of dendritic sprouting in the deafferented Int-1 are greater in crickets deafferented as juveniles than as adults. Thus, neuronal plasticity persists in Int-1 following sensory deprivation from the earliest juvenile stages through adulthood.

  3. Auditory Processing Disorder in Children

    Science.gov (United States)


  4. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... and school. A positive, realistic attitude and healthy self-esteem in a child with APD can work wonders. And kids with APD can go on to ...

  5. Interaction of streaming and attention in human auditory cortex.

    Science.gov (United States)

    Gutschalk, Alexander; Rupp, André; Dykstra, Andrew R

    2015-01-01

    Serially presented tones are sometimes segregated into two perceptually distinct streams. An ongoing debate is whether this basic streaming phenomenon reflects automatic processes or requires attention focused to the stimuli. Here, we examined the influence of focused attention on streaming-related activity in human auditory cortex using magnetoencephalography (MEG). Listeners were presented with a dichotic paradigm in which left-ear stimuli consisted of canonical streaming stimuli (ABA_ or ABAA) and right-ear stimuli consisted of a classical oddball paradigm. In phase one, listeners were instructed to attend the right-ear oddball sequence and detect rare deviants. In phase two, they were instructed to attend the left ear streaming stimulus and report whether they heard one or two streams. The frequency difference (ΔF) of the sequences was set such that the smallest and largest ΔF conditions generally induced one- and two-stream percepts, respectively. Two intermediate ΔF conditions were chosen to elicit bistable percepts (i.e., either one or two streams). Attention enhanced the peak-to-peak amplitude of the P1-N1 complex, but only for ambiguous ΔF conditions, consistent with the notion that automatic mechanisms for streaming tightly interact with attention and that the latter is of particular importance for ambiguous sound sequences.

  6. Regional brain responses in nulliparous women to emotional infant stimuli.

    Directory of Open Access Journals (Sweden)

    Jessica L Montoya

    Full Text Available Infant cries and facial expressions influence social interactions and elicit caretaking behaviors from adults. Recent neuroimaging studies suggest that neural responses to infant stimuli involve brain regions that process rewards. However, these studies have yet to investigate individual differences in tendencies to engage or withdraw from motivationally relevant stimuli. To investigate this, we used event-related fMRI to scan 17 nulliparous women. Participants were presented with novel infant cries of two distress levels (low and high) and unknown infant faces of varying affect (happy, sad, and neutral) in a randomized, counter-balanced order. Brain activation was subsequently correlated with scores on the Behavioral Inhibition System/Behavioral Activation System scale. Infant cries activated bilateral superior and middle temporal gyri (STG and MTG) and precentral and postcentral gyri. Activation was greater in bilateral temporal cortices for low- relative to high-distress cries. Happy relative to neutral faces activated the ventral striatum, caudate, ventromedial prefrontal, and orbitofrontal cortices. Sad versus neutral faces activated the precuneus, cuneus, and posterior cingulate cortex, and behavioral activation drive correlated with occipital cortical activations in this contrast. Behavioral inhibition correlated with activation in the right STG for high- and low-distress cries relative to pink noise. Behavioral drive correlated inversely with putamen, caudate, and thalamic activations for the comparison of high-distress cries to pink noise. Reward-responsiveness correlated with activation in the left precentral gyrus during the perception of low-distress cries relative to pink noise. Our findings indicate that infant cry stimuli elicit activations in areas implicated in auditory processing and social cognition. Happy infant faces may be encoded as rewarding, whereas sad faces activate regions associated with empathic processing. 
Differences

  7. Regional brain responses in nulliparous women to emotional infant stimuli.

    Science.gov (United States)

    Montoya, Jessica L; Landi, Nicole; Kober, Hedy; Worhunsky, Patrick D; Rutherford, Helena J V; Mencl, W Einar; Mayes, Linda C; Potenza, Marc N

    2012-01-01

    Infant cries and facial expressions influence social interactions and elicit caretaking behaviors from adults. Recent neuroimaging studies suggest that neural responses to infant stimuli involve brain regions that process rewards. However, these studies have yet to investigate individual differences in tendencies to engage or withdraw from motivationally relevant stimuli. To investigate this, we used event-related fMRI to scan 17 nulliparous women. Participants were presented with novel infant cries of two distress levels (low and high) and unknown infant faces of varying affect (happy, sad, and neutral) in a randomized, counter-balanced order. Brain activation was subsequently correlated with scores on the Behavioral Inhibition System/Behavioral Activation System scale. Infant cries activated bilateral superior and middle temporal gyri (STG and MTG) and precentral and postcentral gyri. Activation was greater in bilateral temporal cortices for low- relative to high-distress cries. Happy relative to neutral faces activated the ventral striatum, caudate, ventromedial prefrontal, and orbitofrontal cortices. Sad versus neutral faces activated the precuneus, cuneus, and posterior cingulate cortex, and behavioral activation drive correlated with occipital cortical activations in this contrast. Behavioral inhibition correlated with activation in the right STG for high- and low-distress cries relative to pink noise. Behavioral drive correlated inversely with putamen, caudate, and thalamic activations for the comparison of high-distress cries to pink noise. Reward-responsiveness correlated with activation in the left precentral gyrus during the perception of low-distress cries relative to pink noise. Our findings indicate that infant cry stimuli elicit activations in areas implicated in auditory processing and social cognition. Happy infant faces may be encoded as rewarding, whereas sad faces activate regions associated with empathic processing. 
Differences in motivational

  8. The power of auditory-motor synchronization in sports: enhancing running performance by coupling cadence with the right beats.

    Directory of Open Access Journals (Sweden)

    Robert Jan Bood

    Full Text Available Acoustic stimuli, like music and metronomes, are often used in sports. Adjusting movement tempo to acoustic stimuli (i.e., auditory-motor synchronization) may be beneficial for sports performance. However, music also possesses motivational qualities that may further enhance performance. Our objective was to examine the relative effects of auditory-motor synchronization and the motivational impact of acoustic stimuli on running performance. To this end, 19 participants ran to exhaustion on a treadmill in (1) a control condition without acoustic stimuli, (2) a metronome condition with a sequence of beeps matching participants' cadence (synchronization), and (3) a music condition with synchronous motivational music matched to participants' cadence (synchronization + motivation). Conditions were counterbalanced and measurements were taken on separate days. As expected, time to exhaustion was significantly longer with acoustic stimuli than without. Unexpectedly, however, time to exhaustion did not differ between metronome and motivational music conditions, despite differences in motivational quality. Motivational music slightly reduced perceived exertion of sub-maximal running intensity and heart rates of (near-)maximal running intensity. The beat of the stimuli - which was most salient during the metronome condition - helped runners to maintain a consistent pace by coupling cadence to the prescribed tempo. Thus, acoustic stimuli may have enhanced running performance because runners worked harder as a result of motivational aspects (most pronounced with motivational music) and more efficiently as a result of auditory-motor synchronization (most notable with metronome beeps). 
These findings imply that running to motivational music with a very prominent and consistent beat matched to the runner's cadence will likely yield optimal effects because it helps to elevate physiological effort at a high perceived exertion, whereas the consistent and correct cadence induced by

  9. Influence of the power-spectrum of the pre-stimulus EEG on the consecutive Auditory Evoked Potential in rats.

    NARCIS (Netherlands)

    Jongsma, M.L.A.; Quian Quiroga, R.; Rijn, C.M. van; Schaijk, W.J. van; Dirksen, R.; Coenen, A.M.L.

    2000-01-01

    Evoked Potentials (EPs) are responses that appear in the EEG due to external stimulation. Findings indicate that changes in EPs can be related to changes in frequencies of the pre-stimulus EEG. Auditory EPs of rats (n=8) were measured in reaction to tone-pip stimuli (90 dB, 10.2 kHz, ISI 2s, n=1500)

  10. Behavioral Distraction by Auditory Novelty Is Not Only about Novelty: The Role of the Distracter's Informational Value

    Science.gov (United States)

    Parmentier, Fabrice B. R.; Elsley, Jane V.; Ljungberg, Jessica K.

    2010-01-01

    Unexpected events often distract us. In the laboratory, novel auditory stimuli have been shown to capture attention away from a focal visual task and yield specific electrophysiological responses as well as a behavioral cost to performance. Distraction is thought to follow ineluctably from the sound's low probability of occurrence or, put more…

  11. Mutual influences of intermodal visual/tactile apparent motion and auditory motion with uncrossed and crossed arms.

    Science.gov (United States)

    Jiang, Yushi; Chen, Lihan

    2013-01-01

    Intra-modal apparent motion has been shown to be affected or 'captured' by information from another, task-irrelevant modality, as shown in the cross-modal dynamic capture effect. Here we created inter-modal apparent motion between visual and tactile stimuli and investigated whether there are mutual influences between auditory apparent motion and inter-modal visual/tactile apparent motion. Moreover, we examined whether and how the spatial remapping between somatotopic and external reference frames of tactile events affects the cross-modal capture between auditory apparent motion and inter-modal visual/tactile apparent motion, by introducing two arm postures: arms uncrossed and arms crossed. In Experiment 1, we used auditory stimuli (auditory apparent motion) as distractors and inter-modal visual/tactile stimuli (inter-modal apparent motion) as targets, while in Experiment 2 we reversed the distractors and targets. In Experiment 1, we found a general detrimental influence of the arms-crossed posture on the task of discriminating the direction of the visual/tactile stream, but in Experiment 2, the arms-uncrossed posture played a significant role in modulating how the inter-modal visual/tactile stimuli captured auditory apparent motion. In both experiments, synchronously presented motion streams led to a noticeable directional congruency effect in judging the target motion. Among the different modality combinations, tactile-to-tactile apparent motion (TT) and visual-to-visual apparent motion (VV) were the two signatures revealing asymmetric congruency effects. When the auditory stimuli were targets, the congruency effect was largest with VV distractors and lowest with TT distractors; the pattern was reversed when the auditory stimuli were distractors. In addition, across both experiments the congruency effect in visual-to-tactile (VT) and tactile-to-visual (TV) apparent motion was intermediate between the effect sizes in VV and TT. 
We replicated the above findings with a

  12. Norepinephrine is necessary for experience-dependent plasticity in the developing mouse auditory cortex.

    Science.gov (United States)

    Shepard, Kathryn N; Liles, L Cameron; Weinshenker, David; Liu, Robert C

    2015-02-11

    Critical periods are developmental windows during which the stimuli an animal encounters can reshape response properties in the affected system to a profound degree. Despite this window's importance, the neural mechanisms that regulate it are not completely understood. Pioneering studies in visual cortex initially indicated that norepinephrine (NE) permits ocular dominance column plasticity during the critical period, but later research has suggested otherwise. More recent work implicating NE in experience-dependent plasticity in the adult auditory cortex led us to re-examine the role of NE in critical period plasticity. Here, we exposed dopamine β-hydroxylase knock-out (Dbh(-/-)) mice, which lack NE completely from birth, to a biased acoustic environment during the auditory cortical critical period. This manipulation led to a redistribution of best frequencies (BFs) across auditory cortex in our control mice, consistent with prior work. By contrast, Dbh(-/-) mice failed to exhibit the expected redistribution of BFs, even though NE-deficient and NE-competent mice showed comparable auditory cortical organization when reared in a quiet colony environment. These data suggest that while intrinsic tonotopic patterning of auditory cortical circuitry occurs independently from NE, NE is required for critical period plasticity in auditory cortex. PMID:25673838

  13. Eye Movements during Auditory Attention Predict Individual Differences in Dorsal Attention Network Activity

    Science.gov (United States)

    Braga, Rodrigo M.; Fu, Richard Z.; Seemungal, Barry M.; Wise, Richard J. S.; Leech, Robert

    2016-01-01

    The neural mechanisms supporting auditory attention are not fully understood. A dorsal frontoparietal network of brain regions is thought to mediate the spatial orienting of attention across all sensory modalities. Key parts of this network, the frontal eye fields (FEF) and the superior parietal lobes (SPL), contain retinotopic maps and elicit saccades when stimulated. This suggests that their recruitment during auditory attention might reflect crossmodal oculomotor processes; however this has not been confirmed experimentally. Here we investigate whether task-evoked eye movements during an auditory task can predict the magnitude of activity within the dorsal frontoparietal network. A spatial and non-spatial listening task was used with on-line eye-tracking and functional magnetic resonance imaging (fMRI). No visual stimuli or cues were used. The auditory task elicited systematic eye movements, with saccade rate and gaze position predicting attentional engagement and the cued sound location, respectively. Activity associated with these separate aspects of evoked eye-movements dissociated between the SPL and FEF. However these observed eye movements could not account for all the activation in the frontoparietal network. Our results suggest that the recruitment of the SPL and FEF during attentive listening reflects, at least partly, overt crossmodal oculomotor processes during non-visual attention. Further work is needed to establish whether the network’s remaining contribution to auditory attention is through covert crossmodal processes, or is directly involved in the manipulation of auditory information. PMID:27242465

  14. Long-term memory of hierarchical relationships in free-living greylag geese

    NARCIS (Netherlands)

    Weiss, Brigitte M.; Scheiber, Isabella B. R.

    2013-01-01

    Animals may memorise spatial and social information for many months and even years. Here, we investigated long-term memory of hierarchically ordered relationships, where the position of a reward depended on the relationship of a stimulus relative to other stimuli in the hierarchy. Seventeen greylag

  15. Modeling auditory-nerve responses to electrical stimulation

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Dau, Torsten; Epp, Bastian

    2014-01-01

    Cochlear implants (CI) directly stimulate the auditory nerve (AN), bypassing the mechano-electrical transduction in the inner ear. Trains of biphasic, charge-balanced pulses (anodic and cathodic) are used as stimuli to avoid damage of the tissue. The pulses of either polarity are capable of produ… …μs, which is large enough to affect the temporal coding of sounds and hence, potentially, the communication abilities of the CI listener. In the present study, two recently proposed models of electric stimulation of the AN [1,2] were considered in terms of their efficacy to predict the spike timing for anodic and cathodic stimulation of the AN of cat [3]. The models’ responses to electrical pulses of various shapes [4,5,6] were also analyzed. It was found that, while the models can account for the firing rates in response to various biphasic pulse shapes, they fail to correctly describe the timing…

  16. Temporal coding by populations of auditory receptor neurons.

    Science.gov (United States)

    Sabourin, Patrick; Pollack, Gerald S

    2010-03-01

    Auditory receptor neurons of crickets are most sensitive to either low or high sound frequencies. Earlier work showed that the temporal coding properties of first-order auditory interneurons are matched to the temporal characteristics of natural low- and high-frequency stimuli (cricket songs and bat echolocation calls, respectively). We studied the temporal coding properties of receptor neurons and used modeling to investigate how activity within populations of low- and high-frequency receptors might contribute to the coding properties of interneurons. We confirm earlier findings that individual low-frequency-tuned receptors code the stimulus temporal pattern poorly, but show that the coding performance of a receptor population increases markedly with population size, due in part to low redundancy among the spike trains of different receptors. By contrast, individual high-frequency-tuned receptors code the stimulus temporal pattern fairly well and, because their spike trains are redundant, there is only a slight increase in coding performance with population size. The coding properties of low- and high-frequency receptor populations resemble those of interneurons in response to low- and high-frequency stimuli, suggesting that coding at the interneuron level is partly determined by the nature and organization of afferent input. Consistent with this, the sound-frequency-specific coding properties of an interneuron, previously demonstrated by analyzing its spike train, are also apparent in the subthreshold fluctuations in membrane potential that are generated by synaptic input from receptor neurons.
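
    The finding that pooling across weakly redundant, noisy receptors improves temporal coding can be illustrated with a toy simulation; the parameters and the majority-vote decoding rule below are invented for illustration, not the study's analysis:

```python
# Toy simulation (not the study's method): summing spike trains across a
# population of independent, noisy receptors improves how well the stimulus
# temporal pattern can be read out from the population response.
import random

random.seed(0)

def simulate(pop_size, n_bins=2000, p_hit=0.6, p_false=0.4):
    """Each receptor independently 'spikes' with probability p_hit when the
    stimulus is on and p_false when off; each bin is decoded by majority vote."""
    stimulus = [random.random() < 0.5 for _ in range(n_bins)]
    correct = 0
    for s in stimulus:
        p = p_hit if s else p_false
        spikes = sum(random.random() < p for _ in range(pop_size))
        decoded = spikes * 2 > pop_size   # majority vote across the population
        if decoded == s:
            correct += 1
    return correct / n_bins

acc_single = simulate(1)       # one noisy receptor: near chance-plus
acc_population = simulate(25)  # population vote: much more reliable
print(acc_single < acc_population)
```

    With independent (non-redundant) noise, accuracy climbs steeply with population size, as in the low-frequency receptor population; fully redundant spike trains would gain almost nothing from pooling, as in the high-frequency case.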

  17. Asymmetric transfer of auditory perceptual learning

    Directory of Open Access Journals (Sweden)

    Sygal Amitay

    2012-11-01

    Full Text Available Perceptual skills can improve dramatically even with minimal practice. A major and practical benefit of learning, however, is in transferring the improvement on the trained task to untrained tasks or stimuli, yet the mechanisms underlying this process are still poorly understood. Reduction of internal noise has been proposed as a mechanism of perceptual learning, and while we have evidence that frequency discrimination (FD) learning is due to a reduction of internal noise, the source of that noise was not determined. In this study, we examined whether reducing the noise associated with neural phase locking to tones can explain the observed improvement in behavioural thresholds. We compared FD training between two tone durations (15 and 100 ms) that straddled the temporal integration window of auditory nerve fibers upon which computational modeling of phase locking noise was based. Training on short tones resulted in improved FD on probe tests of both the long and short tones. Training on long tones resulted in improvement only on the long tones. Simulations of FD learning, based on the computational model and on signal detection theory, were compared with the behavioral FD data. We found that improved fidelity of phase locking accurately predicted transfer of learning from short to long tones, but also predicted transfer from long to short tones. The observed lack of transfer from long to short tones suggests the involvement of a second mechanism. Training may have increased the temporal integration window, which could not transfer because integration time for the short tone is limited by its duration. Current learning models assume complex relationships between neural populations that represent the trained stimuli. In contrast, we propose that training-induced enhancement of the signal-to-noise ratio offers a parsimonious explanation of learning and transfer that easily accounts for asymmetric transfer of learning.

  18. Genetic pleiotropy explains associations between musical auditory discrimination and intelligence.

    Science.gov (United States)

    Mosing, Miriam A; Pedersen, Nancy L; Madison, Guy; Ullén, Fredrik

    2014-01-01

    Musical aptitude is commonly measured using tasks that involve discrimination of different types of musical auditory stimuli. Performance on such different discrimination tasks correlates positively with each other and with intelligence. However, no study to date has explored these associations using a genetically informative sample to estimate underlying genetic and environmental influences. In the present study, a large sample of Swedish twins (N = 10,500) was used to investigate the genetic architecture of the associations between intelligence and performance on three musical auditory discrimination tasks (rhythm, melody and pitch). Phenotypic correlations between the tasks ranged between 0.23 and 0.42 (Pearson r values). Genetic modelling showed that the covariation between the variables could be explained by shared genetic influences. Neither shared, nor non-shared environment had a significant effect on the associations. Good fit was obtained with a two-factor model where one underlying shared genetic factor explained all the covariation between the musical discrimination tasks and IQ, and a second genetic factor explained variance exclusively shared among the discrimination tasks. The results suggest that positive correlations among musical aptitudes result from both genes with broad effects on cognition, and genes with potentially more specific influences on auditory functions.

  19. Spectral and temporal processing in rat posterior auditory cortex.

    Science.gov (United States)

    Pandya, Pritesh K; Rathbun, Daniel L; Moucha, Raluca; Engineer, Navzer D; Kilgard, Michael P

    2008-02-01

    The rat auditory cortex is divided anatomically into several areas, but little is known about the functional differences in information processing between these areas. To determine the filter properties of rat posterior auditory field (PAF) neurons, we compared neurophysiological responses to simple tones, frequency modulated (FM) sweeps, and amplitude modulated noise and tones with responses of primary auditory cortex (A1) neurons. PAF neurons have excitatory receptive fields that are on average 65% broader than A1 neurons. The broader receptive fields of PAF neurons result in responses to narrow and broadband inputs that are stronger than A1. In contrast to A1, we found little evidence for an orderly topographic gradient in PAF based on frequency. These neurons exhibit latencies that are twice as long as A1. In response to modulated tones and noise, PAF neurons adapt to repeated stimuli at significantly slower rates. Unlike A1, neurons in PAF rarely exhibit facilitation to rapidly repeated sounds. Neurons in PAF do not exhibit strong selectivity for rate or direction of narrowband one octave FM sweeps. These results indicate that PAF, like nonprimary visual fields, processes sensory information on larger spectral and longer temporal scales than primary cortex.

  20. The role of the auditory brainstem in processing musically relevant pitch.

    Science.gov (United States)

    Bidelman, Gavin M

    2013-01-01

    Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners' perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain. PMID:23717294

  2. Do infants find snakes aversive? Infants' physiological responses to "fear-relevant" stimuli.

    Science.gov (United States)

    Thrasher, Cat; LoBue, Vanessa

    2016-02-01

    In the current research, we sought to measure infants' physiological responses to snakes, one of the world's most widely feared stimuli, to examine whether they find snakes aversive or merely attention grabbing. Using a similar method to DeLoache and LoBue (Developmental Science, 2009, Vol. 12, pp. 201-207), 6- to 9-month-olds watched a series of multimodal (both auditory and visual) stimuli: a video of a snake (fear-relevant) or an elephant (non-fear-relevant) paired with either a fearful or happy auditory track. We measured physiological responses to the pairs of stimuli, including startle magnitude, latency to startle, and heart rate. Results suggest that snakes capture infants' attention; infants showed the fastest startle responses and lowest average heart rate to the snakes, especially when paired with a fearful voice. Unexpectedly, they also showed significantly reduced startle magnitude during this same snake video plus fearful voice combination. The results are discussed with respect to theoretical perspectives on fear acquisition.

  3. Muscle group dependent responses to stimuli in a grasshopper model for tonic immobility

    Directory of Open Access Journals (Sweden)

    Ashwin Miriyala

    2013-09-01

    Tonic Immobility (TI) is a prolonged immobile condition exhibited by a variety of animals when exposed to certain stimuli, and is thought to be associated with a specific state of arousal. In our study, we characterize this state by using the reliably inducible TI state of the grasshopper (Hieroglyphus banian) and by monitoring abdominal pulsations and body movements in response to visual and auditory stimuli. These pulsations are present during the TI and ‘awake’, standing states, but not in the CO2 anesthetized state. In response to the stimuli, animals exhibited a suppression in pulsation and a startle response. The suppression of pulsation lasted longer than the duration of stimulus application. During TI, the suppression of pulsation does not habituate over time, whereas the startle response does. In response to the translating visual stimulus, the pulsations are suppressed at a certain phase independent of the time of stimulus application. Thus, we describe TI in Hieroglyphus banian as a state more similar to an ‘awake’ state than to an anesthetized state. During TI, the circuitry to the muscle outputs controlling the abdomen pulsation and the startle response are, at least in some part, different. The central pattern generators that maintain the abdomen pulsation receive inputs from visual and auditory pathways.

  4. Psychology of auditory perception.

    Science.gov (United States)

    Lotto, Andrew; Holt, Lori

    2011-09-01

    Audition is often treated as a 'secondary' sensory system behind vision in the study of cognitive science. In this review, we focus on three seemingly simple perceptual tasks to demonstrate the complexity of perceptual-cognitive processing involved in everyday audition. After providing a short overview of the characteristics of sound and their neural encoding, we present a description of the perceptual task of segregating multiple sound events that are mixed together in the signal reaching the ears. Then, we discuss the ability to localize the sound source in the environment. Finally, we provide some data and theory on how listeners categorize complex sounds, such as speech. In particular, we present research on how listeners weigh multiple acoustic cues in making a categorization decision. One conclusion of this review is that it is time for auditory cognitive science to be developed to match what has been done in vision in order for us to better understand how humans communicate with speech and music. WIREs Cogn Sci 2011 2 479-489 DOI: 10.1002/wcs.123 For further resources related to this article, please visit the WIREs website. PMID:26302301

  5. Can place-specific cochlear dispersion be represented by auditory steady-state responses?

    DEFF Research Database (Denmark)

    Paredes Gallardo, Andreu; Epp, Bastian; Dau, Torsten

    2016-01-01

    The present study investigated to what extent properties of local cochlear dispersion can be objectively assessed through auditory steady-state responses (ASSR). The hypothesis was that stimuli compensating for the phase response at a particular cochlear location generate a maximally modulated......, no significant differences were found between the responses to the IR and its temporally reversed counterpart. Thus, whereas ASSRs to narrowband stimuli have been used as an objective indicator of frequency-specific hearing sensitivity, the method does not seem to be sensitive enough to reflect local cochlear...

  6. Multisensory stimuli elicit altered oscillatory brain responses at gamma frequencies in patients with schizophrenia

    Directory of Open Access Journals (Sweden)

    David B. Stone

    2014-11-01

    Deficits in auditory and visual unisensory responses are well documented in patients with schizophrenia; however, potential abnormalities elicited from multisensory audio-visual stimuli are less understood. Further, schizophrenia patients have shown abnormal patterns in task-related and task-independent oscillatory brain activity, particularly in the gamma frequency band. We examined oscillatory responses to basic unisensory and multisensory stimuli in schizophrenia patients (N = 46) and healthy controls (N = 57) using magnetoencephalography (MEG). Time-frequency decomposition was performed to determine regions of significant changes in gamma band power by group in response to unisensory and multisensory stimuli relative to baseline levels. Results showed significant behavioral differences between groups in response to unisensory and multisensory stimuli. In addition, time-frequency analysis revealed significant decreases and increases in gamma-band power in schizophrenia patients relative to healthy controls, which emerged both early and late over both sensory and frontal regions in response to unisensory and multisensory stimuli. Unisensory gamma-band power predicted multisensory gamma-band power differently by group. Furthermore, gamma-band power in these regions predicted performance in select measures of the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) test battery differently by group. These results reveal a unique pattern of task-related gamma-band power in schizophrenia patients relative to controls that may indicate reduced inhibition in combination with impaired oscillatory mechanisms in patients with schizophrenia.
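
    A minimal sketch of the kind of time-frequency analysis described above: gamma-band (30-80 Hz) power per time bin, expressed relative to a pre-stimulus baseline. It uses synthetic data and a plain short-time Fourier spectrogram, not the authors' MEG pipeline or statistical thresholding.

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic sensor trace: background noise plus a 40 Hz (gamma-band)
# "response" starting at t = 1.0 s.  Illustrative only.
fs = 600.0                                   # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
sig = 0.1 * rng.standard_normal(t.size)
sig[t >= 1.0] += np.sin(2 * np.pi * 40.0 * t[t >= 1.0])

# Time-frequency decomposition via a short-time Fourier spectrogram.
f, seg_t, Sxx = spectrogram(sig, fs=fs, nperseg=128, noverlap=96)

# Mean gamma-band (30-80 Hz) power per time bin, in dB relative to the
# pre-stimulus baseline (time bins centred before t = 1.0 s).
gamma = Sxx[(f >= 30) & (f <= 80)].mean(axis=0)
baseline = gamma[seg_t < 1.0].mean()
rel_power_db = 10 * np.log10(gamma / baseline)
gamma_increase = rel_power_db[seg_t > 1.2].mean() - rel_power_db[seg_t < 0.8].mean()
```

    Group comparisons of the kind reported in the abstract would then be run on such baseline-normalized power values per region and time window.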

  7. Context updates are hierarchical

    Directory of Open Access Journals (Sweden)

    Anton Karl Ingason

    2016-10-01

    This squib studies the order in which elements are added to the shared context of interlocutors in a conversation. It focuses on context updates within one hierarchical structure and argues that structurally higher elements are entered into the context before lower elements, even if the structurally higher elements are pronounced after the lower elements. The crucial data are drawn from a comparison of relative clauses in two head-initial languages, English and Icelandic, and two head-final languages, Korean and Japanese. The findings have consequences for any theory of a dynamic semantics.

  8. Hierarchical image enhancement

    Science.gov (United States)

    Qi, Wei; Han, Jing; Zhang, Yi; Bai, Lian-fa

    2016-05-01

    Image enhancement is an important technique in computer vision. In this paper, we propose a hierarchical image enhancement approach based on a structure layer and a texture layer. In the structure layer, we propose a GMM-based structure extraction method, which better exploits structural detail with less noise. In the texture layer, we present a structure-filtering method that removes unwanted texture while preserving the completeness of the detected salient structure. Next, we introduce a structure constraint prior to integrate them, leading to an improved enhancement result. Extensive experiments demonstrate that the proposed approach achieves higher-quality results than previous approaches.
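
    A toy sketch of the structure/texture split described above. Plain Gaussian smoothing stands in for the paper's GMM-based structure extraction (which the abstract does not specify), and the per-layer gains are arbitrary illustrative values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(image, sigma=2.0, structure_gain=1.2, texture_gain=0.8):
    """Toy hierarchical enhancement: the structure layer is a smoothed
    copy of the image, the texture layer is the residual; each layer is
    scaled separately and the two are recombined."""
    structure = gaussian_filter(image, sigma)
    texture = image - structure
    return structure_gain * structure + texture_gain * texture

rng = np.random.default_rng(0)
img = rng.random((64, 64))
out = enhance(img)
```

    With both gains set to 1.0 the decomposition is exact and the image is reconstructed unchanged, which is the sanity check for any layer-split scheme.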

  9. Effect of Infant Prematurity on Auditory Brainstem Response at Preschool Age

    Directory of Open Access Journals (Sweden)

    Sara Hasani

    2013-03-01

    Introduction: Preterm birth is a risk factor for a number of conditions that require comprehensive examination. Our study was designed to investigate the impact of preterm birth on the processing of auditory stimuli and brain structures at the brainstem level at a preschool age.   Materials and Methods: An auditory brainstem response (ABR) test was performed with low rates of stimuli in 60 children aged 4 to 6 years. Thirty subjects had been born following a very preterm labor or late-preterm labor and 30 control subjects had been born following a full-term labor.   Results: Significant differences in the ABR test result were observed in terms of the inter-peak intervals of the I–III and III–V waves, and the absolute latency of the III wave (P

  10. Evolutionary adaptations for the temporal processing of natural sounds by the anuran peripheral auditory system.

    Science.gov (United States)

    Schrode, Katrina M; Bee, Mark A

    2015-03-01

    Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male-male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. PMID:25617467

  12. Changes in auditory perceptions and cortex resulting from hearing recovery after extended congenital unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Jill B Firszt

    2013-12-01

    Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g. cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately-severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after surgery. Measurements were of sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominant monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing.
Results support future research of unilateral auditory deprivation effects and plasticity, with consideration for length of deprivation, age at hearing correction, degree and type

  13. Detecting Hierarchical Structure in Networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard;

    2012-01-01

    Many real-world networks exhibit hierarchical organization. Previous models of hierarchies within relational data have focused on binary trees; however, for many networks it is unknown whether there is hierarchical structure, and if there is, a binary tree might not account well for it. We propose....... On synthetic and real data we demonstrate that our model can detect hierarchical structure leading to better link-prediction than competing models. Our model can be used to detect if a network exhibits hierarchical structure, thereby leading to a better comprehension and statistical account of the network....
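
    The abstract truncates before describing the model itself. As a generic illustration of recovering hierarchical (community) structure from a network, here is an agglomerative-clustering sketch; it is not the authors' Bayesian model, and the toy graph is invented for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Toy network: two dense 4-node groups joined by a single bridge edge.
A = np.zeros((8, 8))
A[:4, :4] = 1
A[4:, 4:] = 1
A[3, 4] = A[4, 3] = 1
np.fill_diagonal(A, 0)

def jaccard_dist(a, b):
    """Distance between two nodes based on neighbourhood overlap."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 1.0 - inter / union if union else 1.0

n = A.shape[0]
D = np.array([[jaccard_dist(A[i], A[j]) for j in range(n)] for i in range(n)])

# Average-linkage clustering yields a dendrogram, i.e. a hierarchy over
# the nodes; cutting it at two clusters recovers the planted groups.
Z = linkage(squareform(D, checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
```

    The dendrogram heights at which groups merge give one simple notion of how strongly hierarchical the network is; the truncated model above instead infers such structure probabilistically and scores it via link prediction.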

  14. Detection, discrimination, and sensation of visceral stimuli

    OpenAIRE

    Hölzl, Rupert; Erasmus, Lutz-Peter; Möltner, Andreas; Samay, Sebastian; Waldmann, Hans-Christian; Neidig, Claus W.

    1994-01-01

    Examines the interoception of gastrointestinal stimuli. A total of 48 subjects participated in the study that used an adaptive up-down tracking method of threshold determination of distensions to the colon wall. Subjects were presented with two temporal intervals, and the stimulus was applied to one of the intervals. Then they were required to give behavioral and subjective responses to perceived distension stimuli in the lower bowel segments. It is concluded that detection of stimuli is poss...

  15. Multimodal information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays

    Science.gov (United States)

    Begault, Durand R.; Bittner, Rachel M.; Anderson, Mark R.

    2012-01-01

    Auditory communication displays within the NextGen data link system may use multiple synthetic speech messages replacing traditional ATC and company communications. The design of an interface for selecting amongst multiple incoming messages can impact both performance (time to select, audit and release a message) and preference. Two design factors were evaluated: physical pressure-sensitive switches versus flat panel "virtual switches", and the presence or absence of auditory feedback from switch contact. Performance with stimuli using physical switches was 1.2 s faster than virtual switches (2.0 s vs. 3.2 s); auditory feedback provided a 0.54 s performance advantage (2.33 s vs. 2.87 s). There was no interaction between these variables. Preference data were highly correlated with performance.

  16. Response to own name in children: ERP study of auditory social information processing.

    Science.gov (United States)

    Key, Alexandra P; Jones, Dorita; Peters, Sarika U

    2016-09-01

    Auditory processing is an important component of cognitive development, and names are among the most frequently occurring receptive language stimuli. Although own name processing has been examined in infants and adults, surprisingly little data exist on responses to own name in children. The present ERP study examined spoken name processing in 32 children (M = 7.85 years) using a passive listening paradigm. Our results demonstrated that children differentiate own and close other's names from unknown names, as reflected by the enhanced parietal P300 response. The responses to own and close other names did not differ from each other. Repeated presentations of an unknown name did not result in the same familiarity as the known names. These results suggest that auditory ERPs to known/unknown names are a feasible means to evaluate complex auditory processing without the need for overt behavioral responses.

  17. Hierarchical partial order ranking

    International Nuclear Information System (INIS)

    Assessing the potential impact on environmental and human health from the production and use of chemicals or from polluted sites involves a multi-criteria evaluation scheme. A priori, several parameters must be addressed, e.g., production tonnage, specific release scenarios, and geographical and site-specific factors, in addition to various substance-dependent parameters. Further socio-economic factors may be taken into consideration. The number of parameters to be included may well appear to be prohibitive for developing a sensible model. The study introduces hierarchical partial order ranking (HPOR), which remedies this problem. By HPOR the original parameters are initially grouped based on their mutual connection, and a set of meta-descriptors is derived representing the ranking corresponding to the single groups of descriptors, respectively. A second partial order ranking is carried out based on the meta-descriptors, the final ranking being disclosed through average ranks. An illustrative example on the prioritisation of polluted sites is given. - Hierarchical partial order ranking of polluted sites has been developed for prioritisation based on a large number of parameters
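
    A sketch of the two-level scheme described above: rank each descriptor group separately, treat the resulting group ranks as meta-descriptors, then rank again. Average ranks are approximated here with the local partial-order model formula rank = (S + 1)(N + 1)/(N + 1 - U); that formula and all the site data are illustrative stand-ins, since the abstract does not specify the averaging method.

```python
def dominates(a, b):
    """True if a >= b in every descriptor and > b in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def average_ranks(objects):
    """Approximate average ranks in a partial order: S counts the objects
    an object dominates, U counts the objects incomparable with it."""
    n = len(objects)
    ranks = {}
    for name, a in objects.items():
        s = sum(dominates(a, b) for b in objects.values())
        u = sum(1 for other, b in objects.items()
                if other != name and not dominates(a, b) and not dominates(b, a))
        ranks[name] = (s + 1) * (n + 1) / (n + 1 - u)
    return ranks

# Hypothetical polluted sites described by (tonnage, release, toxicity).
sites = {"A": (3, 2, 5), "B": (1, 1, 1), "C": (2, 3, 2)}

# Step 1: rank each descriptor group separately (the meta-descriptors).
emission = average_ranks({k: v[:2] for k, v in sites.items()})
hazard = average_ranks({k: v[2:] for k, v in sites.items()})

# Step 2: a second partial-order ranking on the meta-descriptors.
final = average_ranks({k: (emission[k], hazard[k]) for k in sites})
```

    Grouping first keeps each partial order low-dimensional, so fewer object pairs are incomparable than if all descriptors were ranked jointly.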

  18. The unity assumption facilitates cross-modal binding of musical, non-speech stimuli: The role of spectral and amplitude envelope cues.

    Science.gov (United States)

    Chuen, Lorraine; Schutz, Michael

    2016-07-01

    An observer's inference that multimodal signals originate from a common underlying source facilitates cross-modal binding. This 'unity assumption' causes asynchronous auditory and visual speech streams to seem simultaneous (Vatakis & Spence, Perception & Psychophysics, 69(5), 744-756, 2007). Subsequent tests of non-speech stimuli such as musical and impact events found no evidence for the unity assumption, suggesting the effect is speech-specific (Vatakis & Spence, Acta Psychologica, 127(1), 12-23, 2008). However, the role of amplitude envelope (the changes in energy of a sound over time) was not previously appreciated within this paradigm. Here, we explore whether previous findings suggesting speech-specificity of the unity assumption were confounded by similarities in the amplitude envelopes of the contrasted auditory stimuli. Experiment 1 used natural events with clearly differentiated envelopes: single notes played on either a cello (bowing motion) or marimba (striking motion). Participants performed an unspeeded temporal order judgment task, viewing audio-visually matched (e.g., marimba auditory with marimba video) and mismatched (e.g., cello auditory with marimba video) versions of stimuli at various stimulus onset asynchronies, and were required to indicate which modality was presented first. As predicted, participants were less sensitive to temporal order in matched conditions, demonstrating that the unity assumption can facilitate the perception of synchrony outside of speech stimuli. Results from Experiments 2 and 3 revealed that when spectral information was removed from the original auditory stimuli, amplitude envelope alone could not facilitate the influence of audiovisual unity. We propose that both amplitude envelope and spectral acoustic cues affect the percept of audiovisual unity, working in concert to help an observer determine when to integrate across modalities.

  20. Altered Neural Responses to Sounds in Primate Primary Auditory Cortex during Slow-Wave Sleep

    OpenAIRE

    Issa, Elias B.; Wang, Xiaoqin

    2011-01-01

    How sounds are processed by the brain during sleep is an important question for understanding how we perceive the sensory environment in this unique behavioral state. While human behavioral data have indicated selective impairments of sound processing during sleep, brain imaging and neurophysiology studies have reported that overall neural activity in auditory cortex during sleep is surprisingly similar to that during wakefulness. This responsiveness to external stimuli leaves open the questi...

  1. Electrophysiological evidence for incremental lexical-semantic integration in auditory compound comprehension

    OpenAIRE

    Koester, Dirk; Holle, Henning; Gunter, Thomas C.

    2009-01-01

    The present study investigated the time-course of semantic integration in auditory compound word processing. Compounding is a productive mechanism of word formation that is used frequently in many languages. Specifically, we examined whether semantic integration is incremental or is delayed until the head, the last constituent in German, is available. Stimuli were compounds consisting of three nouns, and the semantic plausibility of the second and the third constituent was manipulated indepen...

  2. Increased Signal Complexity Improves the Breadth of Generalization in Auditory Perceptual Learning

    OpenAIRE

    Brown, David J.; Proulx, Michael J.

    2013-01-01

    Perceptual learning can be specific to a trained stimulus or optimally generalized to novel stimuli with the breadth of generalization being imperative for how we structure perceptual training programs. Adapting an established auditory interval discrimination paradigm to utilise complex signals, we trained human adults on a standard interval for either 2, 4, or 10 days. We then tested the standard, alternate frequency, interval, and stereo input conditions to evaluate the rapidity of specifi...

  3. Neural coding and perception of pitch in the normal and impaired human auditory system

    OpenAIRE

    Santurette, Sébastien; Dau, Torsten; Buchholz, Jörg; Wouters, Jan; Andrew J Oxenham

    2011-01-01

    Pitch is an important attribute of hearing that allows us to perceive the musical quality of sounds. Besides music perception, pitch contributes to speech communication, auditory grouping, and perceptual segregation of sound sources. In this work, several aspects of pitch perception in humans were investigated using psychophysical methods. First, hearing loss was found to affect the perception of binaural pitch, a pitch sensation created by the binaural interaction of noise stimuli. Specifica...

  4. Improved Electrically Evoked Auditory Steady-State Response Thresholds in Humans

    OpenAIRE

    Hofmann, Michael; Wouters, Jan

    2012-01-01

    Electrically evoked auditory steady-state responses (EASSRs) are EEG potentials in response to periodic electrical stimuli presented through a cochlear implant. For low-rate pulse trains in the 40-Hz range, electrophysiological thresholds derived from response amplitude growth functions correlate well with behavioral T levels at these rates. The aims of this study were: (1) to improve the correlation between electrophysiological thresholds and behavioral T levels at 900 pps by using amplitude...

  5. Influence of cortical descending pathways on neuronal adaptation in the auditory midbrain

    OpenAIRE

    Robinson, B. L.

    2014-01-01

    Adaptation of the spike rate of sensory neurones is associated with alteration in neuronal representation of a wide range of stimuli, including sound level, visual contrast, and whisker vibrissa motion. In the inferior colliculus (IC) of the auditory midbrain, adaptation may allow neurones to adjust their limited representational range to match the current range of sound levels in the environment. Two outstanding questions concern the rapidity of this adaptation in IC, and the mechanisms unde...

  6. High-Field Functional Imaging of Pitch Processing in Auditory Cortex of the Cat.

    Directory of Open Access Journals (Sweden)

    Blake E Butler

    Full Text Available The perception of pitch is a widely studied and hotly debated topic in human hearing. Many of these studies combine functional imaging techniques with stimuli designed to disambiguate the percept of pitch from frequency information present in the stimulus. While useful in identifying potential "pitch centres" in cortex, the existence of truly pitch-responsive neurons requires single neuron-level measures that can only be undertaken in animal models. While a number of animals have been shown to be sensitive to pitch, few studies have addressed the location of cortical generators of pitch percepts in non-human models. The current study uses high-field functional magnetic resonance imaging (fMRI of the feline brain in an attempt to identify regions of cortex that show increased activity in response to pitch-evoking stimuli. Cats were presented with iterated rippled noise (IRN stimuli, narrowband noise stimuli with the same spectral profile but no perceivable pitch, and a processed IRN stimulus in which phase components were randomized to preserve slowly changing modulations in the absence of pitch (IRNo. Pitch-related activity was not observed to occur in either primary auditory cortex (A1 or the anterior auditory field (AAF which comprise the core auditory cortex in cats. Rather, cortical areas surrounding the posterior ectosylvian sulcus responded preferentially to the IRN stimulus when compared to narrowband noise, with group analyses revealing bilateral activity centred in the posterior auditory field (PAF. This study demonstrates that fMRI is useful for identifying pitch-related processing in cat cortex, and identifies cortical areas that warrant further investigation. Moreover, we have taken the first steps in identifying a useful animal model for the study of pitch perception.
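Iterated rippled noise of the kind described above is conventionally generated with a delay-and-add network: noise is repeatedly summed with a delayed copy of itself, reinforcing a pitch near 1/delay. A minimal sketch of that construction, assuming a standard "add-same" IRN network; the function and parameter names are illustrative, not taken from the study:

```python
import random

def iterated_rippled_noise(n_samples, delay_samples, n_iterations, gain=1.0, seed=0):
    """Delay-and-add IRN: each iteration adds a copy of the signal delayed by
    `delay_samples`, building up a pitch percept near fs / delay_samples Hz."""
    rng = random.Random(seed)
    y = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    for _ in range(n_iterations):
        y = [y[i] + (gain * y[i - delay_samples] if i >= delay_samples else 0.0)
             for i in range(n_samples)]
    return y

# e.g. at fs = 44100 Hz, a 100-sample delay yields a pitch near 441 Hz
irn = iterated_rippled_noise(n_samples=44100, delay_samples=100, n_iterations=8)
```

After a few iterations the autocorrelation shows a strong peak at the delay lag, which is the temporal regularity that pitch-evoking IRN stimuli exploit; randomizing phase components (as in the IRNo control) destroys this fine structure while preserving slow modulations.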

  7. Polarity sensitivity of the electrically stimulated auditory nerve at different cochlear sites

    OpenAIRE

    Undurraga Lucero, Jaime; van Wieringen, Astrid; Carlyon, Robert P.; Macherey, Olivier; Wouters, Jan

    2009-01-01

    Commercially available cochlear implants (CIs) stimulate the auditory nerve (AN) using symmetric biphasic current (BP) pulses. However, recent data have shown that the use of asymmetric pulse shapes could be beneficial in terms of reducing power consumption, increasing dynamic range and limiting channel interactions. In these charge-balanced stimuli, the effectiveness of one phase (one polarity) is reduced by making it longer and lower in amplitude than the other. For the design of novel CI s...

  8. Near-infrared spectroscopic imaging of stimulus-related hemodynamic responses on the neonatal auditory cortices

    Science.gov (United States)

    Kotilahti, Kalle; Nissila, Ilkka; Makela, Riikka; Noponen, Tommi; Lipiainen, Lauri; Gavrielides, Nasia; Kajava, Timo; Huotilainen, Minna; Fellman, Vineta; Merilainen, Pekka; Katila, Toivo

    2005-04-01

We have used near-infrared spectroscopy (NIRS) to study hemodynamic auditory evoked responses in 7 full-term neonates. Measurements were made simultaneously above both auditory cortices to study the distribution of speech and music processing between hemispheres using a 16-channel frequency-domain instrument. The stimulation consisted of 5-second samples of music and speech with a 25-second silent interval. In response to stimulation, a significant increase in the concentration of oxygenated hemoglobin ([HbO2]) was detected in 6 out of 7 subjects. The strongest responses in [HbO2] were seen near the measurement location above the ear on both hemispheres. The mean latency of the maximum responses was 9.42 ± 1.51 s. On the left hemisphere (LH), the maximum amplitude of the average [HbO2] response was 0.76 ± 0.38 μM (mean ± SD) to the music stimuli and 1.00 ± 0.45 μM to the speech stimuli. On the right hemisphere (RH), the maximum amplitude of the average [HbO2] response was 1.29 ± 0.85 μM to the music stimuli and 1.23 ± 0.93 μM to the speech stimuli. The results indicate that auditory information is processed on both auditory cortices, but that the LH is more specialized for processing speech than music. No significant differences in the locations or latencies of the maximum responses relative to stimulus type were found.

  9. Neural Correlates of an Auditory Afterimage in Primary Auditory Cortex

    OpenAIRE

    Noreña, A. J.; Eggermont, J. J.

    2003-01-01

    The Zwicker tone (ZT) is defined as an auditory negative afterimage, perceived after the presentation of an appropriate inducer. Typically, a notched noise (NN) with a notch width of 1/2 octave induces a ZT with a pitch falling in the frequency range of the notch. The aim of the present study was to find potential neural correlates of the ZT in the primary auditory cortex of ketamine-anesthetized cats. Responses of multiunits were recorded simultaneously with two 8-electrode arrays during 1 s...

  10. Impact of olfactory and auditory priming on the attraction to foods with high energy density.

    Science.gov (United States)

    Chambaron, S; Chisin, Q; Chabanet, C; Issanchou, S; Brand, G

    2015-12-01

Recent research suggests that non-attentively perceived stimuli may significantly influence consumers' food choices. The main objective of the present study was to determine whether an olfactory prime (a sweet-fatty odour) and a semantic auditory prime (a nutritional prevention message), both presented incidentally, either alone or in combination, can influence subsequent food choices. The experiment included 147 participants who were assigned to four conditions: a control condition, a scented condition, an auditory condition or an auditory-scented condition. All participants remained in a waiting room for 15 min while they performed a 'lure' task. In the scented condition, the participants were unobtrusively exposed to a 'pain au chocolat' odour. Those in the auditory condition were exposed to an audiotape including radio podcasts and a nutritional message. A third group of participants was exposed to both olfactory and auditory stimuli simultaneously. In the control condition, no stimulation was given. Following this waiting period, all participants moved into a non-odorised test room where they were asked to choose, from dishes served buffet-style, the starter, main course and dessert that they would actually eat for lunch. The results showed that the participants primed with the odour of 'pain au chocolat' tended to choose more desserts with high energy density (i.e., a waffle) than the participants in the control condition (p = 0.06). Unexpectedly, the participants primed with the nutritional auditory message chose to consume more desserts with high energy density than the participants in the control condition (p = 0.03). In the last condition (odour and nutritional message), participants chose to consume more desserts with high energy density than those in the control condition (p = 0.01), and the data reveal an additive effect of the two primes. PMID:26119807

  11. A vision-free brain-computer interface (BCI) paradigm based on auditory selective attention.

    Science.gov (United States)

    Kim, Do-Won; Cho, Jae-Hyun; Hwang, Han-Jeong; Lim, Jeong-Hwan; Im, Chang-Hwan

    2011-01-01

The majority of recently developed brain-computer interface (BCI) systems use visual stimuli or visual feedback. However, BCI paradigms based on visual perception might not be applicable to severely locked-in patients who have lost the ability to control their eye movements or even their vision. In the present study, we investigated the feasibility of a vision-free BCI paradigm based on auditory selective attention. We used the power difference of auditory steady-state responses (ASSRs) when the participant modulates his/her attention to the target auditory stimulus. The auditory stimuli were constructed as two pure-tone burst trains with different beat frequencies (37 and 43 Hz) generated simultaneously from two speakers located at different positions (left and right). Our experimental results showed classification accuracies high enough to be used for a binary decision (64.67%, 30 commands/min, information transfer rate (ITR) = 1.89 bits/min; 74.00%, 12 commands/min, ITR = 2.08 bits/min; 82.00%, 6 commands/min, ITR = 1.92 bits/min; 84.33%, 3 commands/min, ITR = 1.12 bits/min; without any artifact rejection, inter-trial interval = 6 s). Based on the suggested paradigm, we implemented a first online ASSR-based BCI system, demonstrating the possibility of a totally vision-free BCI system.
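The ITR figures quoted above are consistent with the standard Wolpaw formula, which converts accuracy, number of classes and selection rate into bits per minute. A minimal sketch that reproduces the abstract's numbers (the function name is ours, not from the paper):

```python
import math

def wolpaw_itr(accuracy: float, n_classes: int, selections_per_min: float) -> float:
    """Information transfer rate (bits/min) via the Wolpaw formula:
    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# Accuracy/speed pairs reported in the abstract (binary task, N = 2):
for acc, rate in [(0.6467, 30), (0.7400, 12), (0.8200, 6), (0.8433, 3)]:
    print(f"{acc:.2%} at {rate:>2} commands/min -> {wolpaw_itr(acc, 2, rate):.2f} bits/min")
# matches the reported 1.89, 2.08, 1.92 and 1.12 bits/min
```

Note the speed-accuracy trade-off visible in the data: ITR peaks at the intermediate 12 commands/min setting, since faster selections are offset by lower accuracy.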

  12. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    Directory of Open Access Journals (Sweden)

    Alexandre Lehmann

    Full Text Available Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.

  13. Vocal responses to perturbations in voice auditory feedback in individuals with Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Hanjun Liu

Full Text Available BACKGROUND: One of the most common symptoms of speech deficits in individuals with Parkinson's disease (PD) is significantly reduced vocal loudness and pitch range. The present study investigated whether abnormal vocalizations in individuals with PD are related to sensory processing of voice auditory feedback. Perturbations in loudness or pitch of voice auditory feedback are known to elicit short-latency, compensatory responses in voice amplitude or fundamental frequency. METHODOLOGY/PRINCIPAL FINDINGS: Twelve individuals with Parkinson's disease and 13 age- and sex-matched healthy control subjects sustained a vowel sound (/α/) and received unexpected, brief (200 ms) perturbations in voice loudness (±3 or 6 dB) or pitch (±100 cents) auditory feedback. Results showed that, while all subjects produced compensatory responses in their voice amplitude or fundamental frequency, individuals with PD exhibited larger response magnitudes than the control subjects. Furthermore, for loudness-shifted feedback, upward stimuli resulted in shorter response latencies than downward stimuli in the control subjects but not in individuals with PD. CONCLUSIONS/SIGNIFICANCE: The larger response magnitudes in individuals with PD compared with the control subjects suggest that processing of voice auditory feedback is abnormal in PD. Although the precise mechanisms of voice feedback processing are unknown, the results of this study suggest that abnormal voice control in individuals with PD may be related to dysfunctional mechanisms of error detection or correction in sensory feedback processing.

  14. Stable encoding of sounds over a broad range of statistical parameters in the auditory cortex.

    Science.gov (United States)

    Blackwell, Jennifer M; Taillefumier, Thibaud O; Natan, Ryan G; Carruthers, Isaac M; Magnasco, Marcelo O; Geffen, Maria N

    2016-03-01

    Natural auditory scenes possess highly structured statistical regularities, which are dictated by the physics of sound production in nature, such as scale-invariance. We recently identified that natural water sounds exhibit a particular type of scale invariance, in which the temporal modulation within spectral bands scales with the centre frequency of the band. Here, we tested how neurons in the mammalian primary auditory cortex encode sounds that exhibit this property, but differ in their statistical parameters. The stimuli varied in spectro-temporal density and cyclo-temporal statistics over several orders of magnitude, corresponding to a range of water-like percepts, from pattering of rain to a slow stream. We recorded neuronal activity in the primary auditory cortex of awake rats presented with these stimuli. The responses of the majority of individual neurons were selective for a subset of stimuli with specific statistics. However, as a neuronal population, the responses were remarkably stable over large changes in stimulus statistics, exhibiting a similar range in firing rate, response strength, variability and information rate, and only minor variation in receptive field parameters. This pattern of neuronal responses suggests a potentially general principle for cortical encoding of complex acoustic scenes: while individual cortical neurons exhibit selectivity for specific statistical features, a neuronal population preserves a constant response structure across a broad range of statistical parameters. PMID:26663571

  15. Auditory event-related responses to diphthongs in different attention conditions.

    Science.gov (United States)

    Morris, David J; Steinmetzger, Kurt; Tøndering, John

    2016-07-28

The modulation of auditory event-related potentials (ERP) by attention generally results in larger amplitudes when stimuli are attended. We measured the P1-N1-P2 acoustic change complex elicited with synthetic overt (second formant, F2Δ = 1000 Hz) and subtle (F2Δ = 100 Hz) diphthongs, while subjects (i) attended to the auditory stimuli, (ii) ignored the auditory stimuli and watched a film, and (iii) diverted their attention to a visual discrimination task. Responses elicited by diphthongs where F2 values rose and fell were found to be different and this precluded their combined analysis. Multivariate analysis of ERP components from the rising F2 changes showed main effects of attention on P2 amplitude and latency, and N1-P2 amplitude. P2 amplitude decreased by 40% between the attend and ignore conditions, and by 60% between the attend and divert conditions. The effect of diphthong magnitude was significant for components from a broader temporal window which included P1 latency and N1 amplitude. N1 latency did not vary between attention conditions, a finding that may be related to stimulation with a continuous vowel. These data show that a discernible P1-N1-P2 response can be observed to subtle vowel quality transitions, even when the attention of a subject is diverted to an unrelated visual task. PMID:27158036

  16. Test-retest reliability of the 40 Hz EEG auditory steady-state response.

    Directory of Open Access Journals (Sweden)

    Kristina L McFadden

Full Text Available Auditory evoked steady-state responses are increasingly being used as a marker of brain function and dysfunction in various neuropsychiatric disorders, but research investigating the test-retest reliability of this response is lacking. The purpose of this study was to assess the consistency of the auditory steady-state response (ASSR) across sessions. Furthermore, the current study aimed to investigate how the reliability of the ASSR is impacted by the stimulus parameters and analysis method employed. The consistency of this response across two sessions spaced approximately 1 week apart was measured in nineteen healthy adults using electroencephalography (EEG). The ASSR was entrained by both 40 Hz amplitude-modulated white noise and click train stimuli. Correlations between sessions were assessed with two separate analytical techniques: (a) a channel-level analysis across the whole-head array and (b) signal-space projection from auditory dipoles. Overall, the ASSR was significantly correlated between sessions 1 and 2 (p<0.05, multiple comparison corrected), suggesting adequate test-retest reliability of this response. The current study also suggests that measures of inter-trial phase coherence may be more reliable between sessions than measures of evoked power. Results were similar between the two analysis methods, but reliability varied depending on the presented stimulus, with click train stimuli producing more consistent responses than white noise stimuli.
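Inter-trial phase coherence, the measure the study found more session-stable than evoked power, is simply the magnitude of the mean unit phasor of per-trial phases at the stimulation frequency: 0 for random phases, 1 for perfect phase locking. A minimal sketch of the computation (illustrative only, not the study's analysis pipeline):

```python
import cmath
import math

def inter_trial_coherence(phases):
    """ITC = |mean of exp(i*phase)| across trials; 0 = random, 1 = fully locked."""
    phasors = [cmath.exp(1j * p) for p in phases]
    mean_phasor = sum(phasors) / len(phasors)
    return abs(mean_phasor)

# Identical phase on every trial -> perfectly locked
locked = [0.5] * 10
# Phases spread uniformly around the circle -> no locking
spread = [2 * math.pi * k / 10 for k in range(10)]
print(inter_trial_coherence(locked))  # ≈ 1.0
print(inter_trial_coherence(spread))  # ≈ 0.0
```

Because ITC discards per-trial amplitude, it is insensitive to trial-to-trial gain fluctuations, which is one plausible reason it can be more reliable across sessions than evoked power.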

  17. Aberrant interference of auditory negative words on attention in patients with schizophrenia.

    Directory of Open Access Journals (Sweden)

    Norichika Iwashiro

Full Text Available Previous research suggests that deficits in attention-emotion interaction are implicated in schizophrenia symptoms. Although disruption of auditory processing is crucial in the pathophysiology of schizophrenia, deficits in the interaction between emotional processing of auditorily presented language stimuli and auditory attention have not yet been clarified. To address this issue, the current study used a dichotic listening task to examine 22 patients with schizophrenia and 24 age-, sex-, parental socioeconomic background-, handedness-, dexterous ear-, and intelligence quotient-matched healthy controls. The participants completed a word recognition task on the attended side in which a word with emotionally valenced content (negative/positive/neutral) was presented to one ear and a different neutral word was presented to the other ear. Participants selectively attended to either ear. In the control subjects, presentation of negative but not positive word stimuli provoked a significantly prolonged reaction time compared with presentation of neutral word stimuli. This interference effect for negative words existed whether or not subjects directed attention to the negative words. The interference effect was significantly smaller in the patients with schizophrenia than in the healthy controls. Furthermore, the smaller interference effect was significantly correlated with severe positive symptoms and delusional behavior in the patients with schizophrenia. The present findings suggest that aberrant interaction between semantic processing of negative emotional content and auditory attention plays a role in the production of positive symptoms in schizophrenia.

  18. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study was to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after cortical stroke. All occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  19. The Influence of Emotion on Keyboard Typing: An Experimental Study Using Auditory Stimuli.

    Directory of Open Access Journals (Sweden)

    Po-Ming Lee

Full Text Available In recent years, a novel approach for emotion recognition has been reported: keystroke dynamics. The advantages of this approach are that the data used are non-intrusive and easy to obtain. However, previous studies included only limited investigation of the phenomenon itself. Hence, this study aimed to examine the source of variance in keyboard typing patterns caused by emotions. A controlled experiment was conducted to collect subjects' keystroke data in different emotional states induced by the International Affective Digitized Sounds (IADS). Two-way Valence (3) × Arousal (3) ANOVAs were used to examine the collected dataset. The results of the experiment indicate that the effect of arousal is significant on keystroke duration (p < .05) and keystroke latency (p < .01), but not on the accuracy rate of keyboard typing. The size of the emotional effect is small compared to the individual variability. Our findings support the conclusion that keystroke duration and latency are influenced by arousal. The finding about the size of the effect suggests that the accuracy of emotion recognition technology could be further improved if personalized models are utilized. Notably, the experiment was conducted using standard instruments and hence is expected to be highly reproducible.

  20. Psychological and psychophysiological effects of auditory and visual stimuli during various modes of exercise

    OpenAIRE

    Jones, Leighton

    2014-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. This research programme had three principal objectives. First, to assess the stability of the exercise heart rate-music tempo preference relationship and its relevance to a range of psychological outcomes. Second, to explore the influence of two personal factors (motivational orientation and dominant attentional style) in a naturalistic exercise-to-music setting. Third, to examine me...

  1. Hierarchical models and functional traits

    NARCIS (Netherlands)

    E.E. van Loon; J. Shamoun-Baranes; H. Sierdsema; W. Bouten

    2006-01-01

Hierarchical models for animal abundance prediction are conceptually elegant. They are generally more parsimonious than non-hierarchical models derived from the same data, give relatively robust predictions and automatically provide consistent output at multiple (spatio-temporal) scales. Another attr

  2. Roughly Weighted Hierarchical Simple Games

    OpenAIRE

    Hameed, Ali; Slinko, Arkadii

    2012-01-01

    Hierarchical simple games - both disjunctive and conjunctive - are natural generalizations of simple majority games. They take their origin in the theory of secret sharing. Another important generalization of simple majority games with origin in economics and politics are weighted and roughly weighted majority games. In this paper we characterize roughly weighted hierarchical games identifying where the two approaches coincide.

  3. Different mechanisms are responsible for dishabituation of electrophysiological auditory responses to a change in acoustic identity than to a change in stimulus location.

    Science.gov (United States)

    Smulders, Tom V; Jarvis, Erich D

    2013-11-01

    Repeated exposure to an auditory stimulus leads to habituation of the electrophysiological and immediate-early-gene (IEG) expression response in the auditory system. A novel auditory stimulus reinstates this response in a form of dishabituation. This has been interpreted as the start of new memory formation for this novel stimulus. Changes in the location of an otherwise identical auditory stimulus can also dishabituate the IEG expression response. This has been interpreted as an integration of stimulus identity and stimulus location into a single auditory object, encoded in the firing patterns of the auditory system. In this study, we further tested this hypothesis. Using chronic multi-electrode arrays to record multi-unit activity from the auditory system of awake and behaving zebra finches, we found that habituation occurs to repeated exposure to the same song and dishabituation with a novel song, similar to that described in head-fixed, restrained animals. A large proportion of recording sites also showed dishabituation when the same auditory stimulus was moved to a novel location. However, when the song was randomly moved among 8 interleaved locations, habituation occurred independently of the continuous changes in location. In contrast, when 8 different auditory stimuli were interleaved all from the same location, a separate habituation occurred to each stimulus. This result suggests that neuronal memories of the acoustic identity and spatial location are different, and that allocentric location of a stimulus is not encoded as part of the memory for an auditory object, while its acoustic properties are. We speculate that, instead, the dishabituation that occurs with a change from a stable location of a sound is due to the unexpectedness of the location change, and might be due to different underlying mechanisms than the dishabituation and separate habituations to different acoustic stimuli.

  4. Hierarchical Affinity Propagation

    CERN Document Server

    Givoni, Inmar; Frey, Brendan J

    2012-01-01

    Affinity propagation is an exemplar-based clustering algorithm that finds a set of data-points that best exemplify the data, and associates each datapoint with one exemplar. We extend affinity propagation in a principled way to solve the hierarchical clustering problem, which arises in a variety of domains including biology, sensor networks and decision making in operational research. We derive an inference algorithm that operates by propagating information up and down the hierarchy, and is efficient despite the high-order potentials required for the graphical model formulation. We demonstrate that our method outperforms greedy techniques that cluster one layer at a time. We show that on an artificial dataset designed to mimic the HIV-strain mutation dynamics, our method outperforms related methods. For real HIV sequences, where the ground truth is not available, we show our method achieves better results, in terms of the underlying objective function, and show the results correspond meaningfully to geographi...

  5. Optimisation by hierarchical search

    Science.gov (United States)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.

  6. Trees and Hierarchical Structures

    CERN Document Server

    Haeseler, Arndt

    1990-01-01

    The "raison d'etre" of hierarchical clustering theory stems from one basic phenomenon: This is the notorious non-transitivity of similarity relations. In spite of the fact that very often two objects may be quite similar to a third without being that similar to each other, one still wants to classify objects according to their similarity. This should be achieved by grouping them into a hierarchy of non-overlapping clusters such that any two objects in one cluster appear to be more related to each other than they are to objects outside this cluster. In everyday life, as well as in essentially every field of scientific investigation, there is an urge to reduce complexity by recognizing and establishing reasonable classification schemes. Unfortunately, this is counterbalanced by the experience of seemingly unavoidable deadlocks caused by the existence of sequences of objects, each comparatively similar to the next, but the last rather different from the first.

  7. Corollary discharge inhibition of ascending auditory neurons in the stridulating cricket.

    Science.gov (United States)

    Poulet, James F A; Hedwig, Berthold

    2003-06-01

    Acoustically communicating animals are able to process external acoustic stimuli despite generating intense sounds during vocalization. We have examined how the crickets' ascending auditory pathway copes with self-generated, intense auditory signals (chirps) during singing (stridulation). We made intracellular recordings from two identified ascending auditory interneurons, ascending neuron 1 (AN1) and ascending neuron 2 (AN2), during pharmacologically elicited sonorous (two-winged), silent (one-winged), and fictive (isolated CNS) stridulation. During sonorous chirps, AN1 responded with bursts of spikes, whereas AN2 was inhibited and rarely spiked. Low-amplitude hyperpolarizing potentials were recorded in AN1 and AN2 during silent chirps. The potentials were also present during fictive chirps. Therefore, they were the result of a centrally generated corollary discharge from the stridulatory motor network. The spiking response of AN1 and AN2 to acoustic stimuli was inhibited during silent and fictive chirps. The maximum period of inhibition occurred in phase with the maximum spiking response to self-generated sound in a sonorously stridulating cricket. In some experiments (30%) depolarizing potentials were recorded during silent chirps. Reafferent feedback elicited by wing movement was probably responsible for the depolarizing potentials. In addition, two other sources of inhibition were present in AN1: (1) IPSPs were elicited by stimulation with 12.5 kHz stimuli and (2) a long-lasting hyperpolarization followed spiking responses to 4.5 kHz stimuli. The hyperpolarization desensitized the response of AN1 to subsequent quieter stimuli. Therefore, the corollary discharge will reduce desensitization by suppressing the response of AN1 to self-generated sounds.

  8. Effects of Presentation Rate and Attention on Auditory Discrimination: A Comparison of Long-Latency Auditory Evoked Potentials in School-Aged Children and Adults.

    Science.gov (United States)

    Choudhury, Naseem A; Parascando, Jessica A; Benasich, April A

    2015-01-01

    Decoding human speech requires both perception and integration of brief, successive auditory stimuli that enter the central nervous system as well as the allocation of attention to language-relevant signals. This study assesses the role of attention on processing rapid transient stimuli in adults and children. Cortical responses (EEG/ERPs), specifically mismatch negativity (MMN) responses, to paired tones (standard 100-100 Hz; deviant 100-300 Hz) separated by a 300, 70 or 10 ms silent gap (ISI) were recorded under Ignore and Attend conditions in 21 adults and 23 children (6-11 years old). In adults, an attention-related enhancement was found for all rate conditions and laterality effects (L>R) were observed. In children, 2 auditory discrimination-related peaks were identified from the difference wave (deviant-standard): an early peak (eMMN) at about 100-300 ms indexing sensory processing, and a later peak (LDN), at about 400-600 ms, thought to reflect reorientation to the deviant stimuli or "second-look" processing. Results revealed differing patterns of activation and attention modulation for the eMMN in children as compared to the MMN in adults: The eMMN had a more frontal topography as compared to adults and attention played a significantly greater role in children's rate processing. The pattern of findings for the LDN was consistent with hypothesized mechanisms related to further processing of complex stimuli. The differences between eMMN and LDN observed here support the premise that separate cognitive processes and mechanisms underlie these ERP peaks. These findings are the first to show that the eMMN and LDN differ under different temporal and attentional conditions, and that a more complete understanding of children's responses to rapid successive auditory stimulation requires an examination of both peaks. PMID:26368126

  10. Binocular coordination in response to stereoscopic stimuli

    Science.gov (United States)

    Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.

    2009-02-01

    Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.

  12. Auditory Brainstem Response Wave Amplitude Characteristics as a Diagnostic Tool in Children with Speech Delay with Unknown Causes.

    Science.gov (United States)

    Abadi, Susan; Khanbabaee, Ghamartaj; Sheibani, Kourosh

    2016-09-01

    Speech delay with an unknown cause is a problem among children. This diagnosis is the last differential diagnosis after observing normal findings in routine hearing tests. The present study was undertaken to determine whether auditory brainstem responses to click stimuli are different between normally developing children and children suffering from delayed speech with unknown causes. In this cross-sectional study, we compared click auditory brainstem responses between 261 children who were clinically diagnosed with delayed speech with unknown causes based on normal routine auditory test findings and neurological examinations and had >12 months of speech delay (case group) and 261 age- and sex-matched normally developing children (control group). Our results indicated that the case group exhibited significantly higher wave amplitude responses to click stimuli (waves I, III, and V) than did the control group (P=0.001). These amplitudes were significantly reduced after 1 year (P=0.001); however, they were still significantly higher than those of the control group (P=0.001). The significant differences were seen regardless of the age and the sex of the participants. There were no statistically significant differences between the 2 groups considering the latency of waves I, III, and V. In conclusion, the higher amplitudes of waves I, III, and V, which were observed in the auditory brainstem responses to click stimuli among the patients with speech delay with unknown causes, might be used as a diagnostic tool to track patients' improvement after treatment. PMID:27582591

  13. An Evaluation of Training with an Auditory P300 Brain-Computer Interface for the Japanese Hiragana Syllabary

    Science.gov (United States)

    Halder, Sebastian; Takano, Kouji; Ora, Hiroki; Onishi, Akinari; Utsumi, Kota; Kansaku, Kenji

    2016-01-01

    Gaze-independent brain-computer interfaces (BCIs) are a possible communication channel for persons with paralysis. We investigated if it is possible to use auditory stimuli to create a BCI for the Japanese Hiragana syllabary, which has 46 Hiragana characters. Additionally, we investigated if training has an effect on accuracy despite the high amount of different stimuli involved. Able-bodied participants (N = 6) were asked to select 25 syllables (out of fifty possible choices) using a two step procedure: First the consonant (ten choices) and then the vowel (five choices). This was repeated on 3 separate days. Additionally, a person with spinal cord injury (SCI) participated in the experiment. Four out of six healthy participants reached Hiragana syllable accuracies above 70% and the information transfer rate increased from 1.7 bits/min in the first session to 3.2 bits/min in the third session. The accuracy of the participant with SCI increased from 12% (0.2 bits/min) to 56% (2 bits/min) in session three. Reliable selections from a 10 × 5 matrix using auditory stimuli were possible and performance is increased by training. We were able to show that auditory P300 BCIs can be used for communication with up to fifty symbols. This enables the use of the technology of auditory P300 BCIs with a variety of applications.
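
    The bits/min figures quoted above are conventionally computed with the Wolpaw information-transfer-rate formula. The abstract does not state which formula the authors used, so the sketch below shows the standard convention rather than a confirmed detail of the study:

```python
import math

def bits_per_selection(accuracy, n_choices):
    """Wolpaw ITR: bits conveyed by one selection among n_choices,
    assuming errors are spread uniformly over the wrong choices."""
    n, p = n_choices, accuracy
    if p >= 1.0:
        return math.log2(n)
    if p <= 1.0 / n:
        return 0.0  # at or below chance, no information is conveyed
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_min(accuracy, n_choices, seconds_per_selection):
    return bits_per_selection(accuracy, n_choices) * 60.0 / seconds_per_selection

# A perfect selection among the 50 Hiragana choices carries log2(50) ~ 5.6 bits.
full_info = bits_per_selection(1.0, 50)
```

    The formula makes the reported trend concrete: holding selection time fixed, raising accuracy from 12% to 56% on a 50-symbol alphabet multiplies the bit rate several times over.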

  14. Modeling hierarchical structures - Hierarchical Linear Modeling using MPlus

    CERN Document Server

    Jelonek, M

    2006-01-01

    The aim of this paper is to present the technique (and its linkage with physics) of overcoming problems connected to modeling social structures, which are typically hierarchical. Hierarchical Linear Models provide a conceptual and statistical mechanism for drawing conclusions regarding the influence of phenomena at different levels of analysis. In the social sciences they are used to analyze many problems, such as educational, organizational or market dilemmas. This paper introduces the logic of modeling hierarchical linear equations and estimation based on MPlus software. I present my own model to illustrate the impact of different factors on school acceptance level.
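
    The core idea of a two-level hierarchical (random-intercept) linear model is that each group's estimate is pulled toward the grand mean in proportion to its reliability. A minimal sketch of that shrinkage computation, with known variance components and invented school data rather than the MPlus estimation the paper uses:

```python
from statistics import mean

def shrunken_intercepts(groups, tau2, sigma2):
    """Empirical-Bayes group intercepts for the two-level model
    y_ij = mu + u_j + e_ij, with u_j ~ N(0, tau2), e_ij ~ N(0, sigma2).
    Each group mean is pulled toward the grand mean with a weight that
    shrinks more as the group gets smaller or noisier."""
    grand = mean(y for g in groups for y in g)
    out = []
    for g in groups:
        w = tau2 / (tau2 + sigma2 / len(g))  # reliability of the group mean
        out.append(grand + w * (mean(g) - grand))
    return out

# Hypothetical data: three schools with different sample sizes.
schools = [[72.0, 75.0, 71.0], [58.0], [66.0, 64.0, 65.0, 63.0, 67.0]]
est = shrunken_intercepts(schools, tau2=25.0, sigma2=100.0)
```

    The single-observation school is shrunk hardest toward the grand mean, which is exactly the partial-pooling behaviour that distinguishes hierarchical models from fitting each group separately.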

  15. Estimating the intended sound direction of the user: toward an auditory brain-computer interface using out-of-head sound localization.

    Directory of Open Access Journals (Sweden)

    Isao Nambu

    Full Text Available The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many of the characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify whether the subject attended the direction of a presented stimulus from EEG signals. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system.
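
    The gain from trial averaging reported above (70.0% single-trial to 89.5% with 10-trial averaging) reflects the usual noise reduction from averaging event-related potentials. A self-contained simulation of the effect, with invented signal and noise levels and a nearest-template classifier standing in for the authors' support vector machine:

```python
import random

random.seed(1)
DIM = 20                   # samples per simulated EEG epoch
TEMPLATE_A = [0.5] * DIM   # "attended" ERP template (hypothetical)
TEMPLATE_B = [0.0] * DIM   # "unattended" template (hypothetical)
NOISE_SD = 2.0             # noise much larger than the ERP itself

def epoch(template):
    return [t + random.gauss(0, NOISE_SD) for t in template]

def averaged_epoch(template, k):
    trials = [epoch(template) for _ in range(k)]
    return [sum(v) / k for v in zip(*trials)]  # average k trials pointwise

def classify(x):
    # Nearest template in squared Euclidean distance.
    da = sum((xi - a) ** 2 for xi, a in zip(x, TEMPLATE_A))
    db = sum((xi - b) ** 2 for xi, b in zip(x, TEMPLATE_B))
    return "A" if da < db else "B"

def accuracy(k, n_test=200):
    correct = 0
    for i in range(n_test):
        truth = "A" if i % 2 == 0 else "B"
        x = averaged_epoch(TEMPLATE_A if truth == "A" else TEMPLATE_B, k)
        correct += classify(x) == truth
    return correct / n_test

acc_single = accuracy(1)
acc_avg10 = accuracy(10)
```

    Averaging k trials divides the noise variance by k, so the 10-trial accuracy is reliably higher than the single-trial accuracy, mirroring the pattern in the study.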

  16. The modality effect of ego depletion: Auditory task modality reduces ego depletion.

    Science.gov (United States)

    Li, Qiong; Wang, Zhenhong

    2016-08-01

    An initial act of self-control that impairs subsequent acts of self-control is called ego depletion. The ego depletion phenomenon has been observed consistently. The modality effect refers to the effect of the presentation modality on the processing of stimuli. The modality effect was also robustly found in a large body of research. However, no study to date has examined the modality effects of ego depletion. This issue was addressed in the current study. In Experiment 1, after all participants completed a handgrip task, one group's participants completed a visual attention regulation task and the other group's participants completed an auditory attention regulation task, and then all participants again completed a handgrip task. The ego depletion phenomenon was observed in both the visual and the auditory attention regulation task. Moreover, participants who completed the visual task performed worse on the handgrip task than participants who completed the auditory task, which indicated that there was high ego depletion in the visual task condition. In Experiment 2, participants completed an initial task that either did or did not deplete self-control resources, and then they completed a second visual or auditory attention control task. The results indicated that depleted participants performed better on the auditory attention control task than the visual attention control task. These findings suggest that altering task modality may reduce ego depletion. PMID:27241617

  18. Mismatch responses in the awake rat: evidence from epidural recordings of auditory cortical fields.

    Directory of Open Access Journals (Sweden)

    Fabienne Jung

    Full Text Available Detecting sudden environmental changes is crucial for the survival of humans and animals. In the human auditory system the mismatch negativity (MMN), a component of auditory evoked potentials (AEPs), reflects the violation of predictable stimulus regularities, established by the previous auditory sequence. Given the considerable potentiality of the MMN for clinical applications, establishing valid animal models that allow for detailed investigation of its neurophysiological mechanisms is important. Rodent studies, so far almost exclusively under anesthesia, have not provided decisive evidence whether an MMN analogue exists in rats. This may be due to several factors, including the effect of anesthesia. We therefore used epidural recordings in awake black hooded rats, from two auditory cortical areas in both hemispheres, and with bandpass filtered noise stimuli that were optimized in frequency and duration for eliciting MMN in rats. Using a classical oddball paradigm with frequency deviants, we detected mismatch responses at all four electrodes in primary and secondary auditory cortex, with morphological and functional properties similar to those known in humans, i.e., large amplitude biphasic differences that increased in amplitude with decreasing deviant probability. These mismatch responses significantly diminished in a control condition that removed the predictive context while controlling for presentation rate of the deviants. While our present study does not allow for disambiguating precisely the relative contribution of adaptation and prediction error processing to the observed mismatch responses, it demonstrates that MMN-like potentials can be obtained in awake and unrestrained rats.

  19. Reducing auditory hypersensitivities in autistic spectrum disorders: Preliminary findings evaluating the Listening Project Protocol

    Directory of Open Access Journals (Sweden)

    Stephen W Porges

    2014-08-01

    Full Text Available Auditory hypersensitivities are a common feature of autism spectrum disorder (ASD). In the present study the effectiveness of a novel intervention, the Listening Project Protocol (LPP), was evaluated in two trials conducted with children diagnosed with ASD. LPP was developed to reduce auditory hypersensitivities. LPP is based on a theoretical neural exercise model that uses computer altered acoustic stimulation to recruit the neural regulation of middle ear muscles. Features of the intervention stimuli were informed by basic research in speech and hearing sciences that has identified the specific acoustic frequencies necessary to understand speech, which must pass through middle ear structures before being processed by other components of the auditory system. LPP was hypothesized to reduce auditory hypersensitivities by increasing the neural tone to the middle ear muscles to functionally dampen competing sounds in frequencies lower than human speech. The trials demonstrated that LPP, when contrasted to control conditions, selectively reduced auditory hypersensitivities. These findings are consistent with the Polyvagal Theory, which emphasizes the role of the middle ear muscles in social communication.

  20. Assembly of the auditory circuitry by a Hox genetic network in the mouse brainstem.

    Directory of Open Access Journals (Sweden)

    Maria Di Bonito

    Full Text Available Rhombomeres (r) contribute to brainstem auditory nuclei during development. Hox genes are determinants of rhombomere-derived fate and neuronal connectivity. Little is known about the contribution of individual rhombomeres and their associated Hox codes to auditory sensorimotor circuitry. Here, we show that r4 contributes to functionally linked sensory and motor components, including the ventral nucleus of lateral lemniscus, posterior ventral cochlear nuclei (VCN), and motor olivocochlear neurons. Assembly of the r4-derived auditory components is involved in sound perception and depends on regulatory interactions between Hoxb1 and Hoxb2. Indeed, in Hoxb1 and Hoxb2 mutant mice the transmission of low-level auditory stimuli is lost, resulting in hearing impairments. On the other hand, Hoxa2 regulates the Rig1 axon guidance receptor and controls contralateral projections from the anterior VCN to the medial nucleus of the trapezoid body, a circuit involved in sound localization. Thus, individual rhombomeres and their associated Hox codes control the assembly of distinct functionally segregated sub-circuits in the developing auditory brainstem.

  1. Visual-auditory differences in duration discrimination of intervals in the subsecond and second range

    Directory of Open Access Journals (Sweden)

    Thomas Rammsayer

    2015-10-01

    Full Text Available A common finding in time psychophysics is that temporal acuity is much better for auditory than for visual stimuli. The present study aimed to examine modality-specific differences in duration discrimination within the conceptual framework of the Distinct Timing Hypothesis. This theoretical account proposes that durations in the lower milliseconds range are processed automatically while longer durations are processed by a cognitive mechanism. A sample of 46 participants performed two auditory and visual duration discrimination tasks with extremely brief (50-ms standard duration) and longer (1000-ms standard duration) intervals. Better discrimination performance for auditory compared to visual intervals could be established for extremely brief and longer intervals. However, when performance on duration discrimination of longer intervals in the one-second range was controlled for modality-specific input from the sensory-automatic timing mechanism, the visual-auditory difference disappeared completely as indicated by virtually identical Weber fractions for both sensory modalities. These findings support the idea of a sensory-automatic mechanism underlying the observed visual-auditory differences in duration discrimination of extremely brief intervals in the millisecond range and longer intervals in the one-second range. Our data are consistent with the notion of a gradual transition from a purely modality-specific, sensory-automatic to a more cognitive, amodal timing mechanism. Within this transition zone, both mechanisms appear to operate simultaneously but the influence of the sensory-automatic timing mechanism is expected to continuously decrease with increasing interval duration.

  2. Auditory cortical activity during cochlear implant-mediated perception of spoken language, melody, and rhythm.

    Science.gov (United States)

    Limb, Charles J; Molloy, Anne T; Jiradejvong, Patpong; Braun, Allen R

    2010-03-01

    Despite the significant advances in language perception for cochlear implant (CI) recipients, music perception continues to be a major challenge for implant-mediated listening. Our understanding of the neural mechanisms that underlie successful implant listening remains limited. To our knowledge, this study represents the first neuroimaging investigation of music perception in CI users, with the hypothesis that CI subjects would demonstrate greater auditory cortical activation than normal hearing controls. H2(15)O positron emission tomography (PET) was used here to assess auditory cortical activation patterns in ten postlingually deafened CI patients and ten normal hearing control subjects. Subjects were presented with language, melody, and rhythm tasks during scanning. Our results show significant auditory cortical activation in implant subjects in comparison to control subjects for language, melody, and rhythm. The greatest activity in CI users compared to controls was seen for language tasks, which is thought to reflect both implant and neural specializations for language processing. For musical stimuli, PET scanning revealed significantly greater activation during rhythm perception in CI subjects (compared to control subjects), and the least activation during melody perception, which was the most difficult task for CI users. These results may suggest a possible relationship between auditory performance and degree of auditory cortical activation in implant recipients that deserves further study.

  3. Visual, Auditory, and Cross Modal Sensory Processing in Adults with Autism: An EEG Power and BOLD fMRI Investigation

    Science.gov (United States)

    Hames, Elizabeth C.; Murphy, Brandi; Rajmohan, Ravi; Anderson, Ronald C.; Baker, Mary; Zupancic, Stephen; O’Boyle, Michael; Richman, David

    2016-01-01

    Electroencephalography (EEG) and blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) assessed the neurocorrelates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20–28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate with observable decreased EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions; reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT when they were presented with auditory stimuli as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all auditory block vs. the second presentation of a visual stimulus in an all visual block (AA2-VV2). We conclude that in ASD, combined audiovisual processing is more similar than unimodal processing to NTs. PMID:27148020

  5. Amplified somatosensory and visual cortical projections to a core auditory area, the anterior auditory field, following early- and late-onset deafness.

    Science.gov (United States)

    Wong, Carmen; Chabot, Nicole; Kok, Melanie A; Lomber, Stephen G

    2015-09-01

    Cross-modal reorganization following the loss of input from a sensory modality can recruit sensory-deprived cortical areas to process information from the remaining senses. Specifically, in early-deaf cats, the anterior auditory field (AAF) is unresponsive to auditory stimuli but can be activated by somatosensory and visual stimuli. Similarly, AAF neurons respond to tactile input in adult-deafened animals. To examine anatomical changes that may underlie this functional adaptation following early or late deafness, afferent projections to AAF were examined in hearing cats, and cats with early- or adult-onset deafness. Unilateral deposits of biotinylated dextran amine were made in AAF to retrogradely label cortical and thalamic afferents to AAF. In early-deaf cats, ipsilateral neuronal labeling in visual and somatosensory cortices increased by 329% and 101%, respectively. The largest increases arose from the anterior ectosylvian visual area and the anterolateral lateral suprasylvian visual area, as well as somatosensory areas S2 and S4. Consequently, labeling in auditory areas was reduced by 36%. The age of deafness onset appeared to influence afferent connectivity, with less marked differences observed in late-deaf cats. Profound changes to visual and somatosensory afferent connectivity following deafness may reflect corticocortical rewiring affording acoustically deprived AAF with cross-modal functionality.

  6. To modulate and be modulated: estrogenic influences on auditory processing of communication signals within a socio-neuro-endocrine framework.

    Science.gov (United States)

    Yoder, Kathleen M; Vicario, David S

    2012-02-01

    Gonadal hormones modulate behavioral responses to sexual stimuli, and communication signals can also modulate circulating hormone levels. In several species, these combined effects appear to underlie a two-way interaction between circulating gonadal hormones and behavioral responses to socially salient stimuli. Recent work in songbirds has shown that manipulating local estradiol levels in the auditory forebrain produces physiological changes that affect discrimination of conspecific vocalizations and can affect behavior. These studies provide new evidence that estrogens can directly alter auditory processing and indirectly alter the behavioral response to a stimulus. These studies show that: 1) Local estradiol action within an auditory area is necessary for socially relevant sounds to induce normal physiological responses in the brains of both sexes; 2) These physiological effects occur much more quickly than predicted by the classical time-frame for genomic effects; 3) Estradiol action within the auditory forebrain enables behavioral discrimination among socially relevant sounds in males; and 4) Estradiol is produced locally in the male brain during exposure to particular social interactions. The accumulating evidence suggests a socio-neuro-endocrinology framework in which estradiol is essential to auditory processing, is increased by a socially relevant stimulus, acts rapidly to shape perception of subsequent stimuli experienced during social interactions, and modulates behavioral responses to these stimuli. Brain estrogens are likely to function similarly in both songbird sexes because aromatase and estrogen receptors are present in both male and female forebrain. Estrogenic modulation of perception in songbirds and perhaps other animals could fine-tune male advertising signals and female ability to discriminate them, facilitating mate selection by modulating behaviors. PMID:22201281

  7. The neurochemical basis of human cortical auditory processing: combining proton magnetic resonance spectroscopy and magnetoencephalography

    Directory of Open Access Journals (Sweden)

    Tollkötter Melanie

    2006-08-01

    Full Text Available Abstract Background A combination of magnetoencephalography and proton magnetic resonance spectroscopy was used to correlate the electrophysiology of rapid auditory processing and the neurochemistry of the auditory cortex in 15 healthy adults. To assess rapid auditory processing in the left auditory cortex, the amplitude and decrement of the N1m peak, the major component of the late auditory evoked response, were measured during rapidly successive presentation of acoustic stimuli. We tested the hypothesis that: (i) the amplitude of the N1m response and (ii) its decrement during rapid stimulation are associated with the cortical neurochemistry as determined by proton magnetic resonance spectroscopy. Results Our results demonstrated a significant association between the concentrations of N-acetylaspartate, a marker of neuronal integrity, and the amplitudes of individual N1m responses. In addition, the concentrations of choline-containing compounds, representing the functional integrity of membranes, were significantly associated with N1m amplitudes. No significant association was found between the concentrations of the glutamate/glutamine pool and the amplitudes of the first N1m. No significant associations were seen between the decrement of the N1m (the relative amplitude of the second N1m peak) and the concentrations of N-acetylaspartate, choline-containing compounds, or the glutamate/glutamine pool. However, there was a trend for higher glutamate/glutamine concentrations in individuals with higher relative N1m amplitude. Conclusion These results suggest that neuronal and membrane functions are important for rapid auditory processing. This investigation provides a first link between the electrophysiology, as recorded by magnetoencephalography, and the neurochemistry, as assessed by proton magnetic resonance spectroscopy, of the auditory cortex.

  8. Modulation of auditory cortex response to pitch variation following training with microtonal melodies.

    Science.gov (United States)

    Zatorre, Robert J; Delhommeau, Karine; Zarate, Jean Mary

    2012-01-01

    We tested changes in cortical functional response to auditory patterns in a configural learning paradigm. We trained 10 human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music) and measured covariation in blood oxygenation signal to increasing pitch interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature that was trained. A psychophysical staircase procedure with feedback was used for training over a 2-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices also occurred after training. Individual variation in learning rate was related to the hemodynamic slope to pitch interval size, such that those who had a higher sensitivity to pitch interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation in the response of auditory cortex to the trained stimulus feature. Reduction in blood oxygenation response to increasing pitch interval size suggests that fewer computational resources, and hence lower neural recruitment, is associated with learning, in accord with models of auditory cortex function, and with data from other modalities. PMID:23227019
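The adaptive training method in this record is a standard psychophysical staircase with feedback. As a hedged illustration (the abstract does not give the study's exact rule, so the 2-down/1-up rule, the halving/doubling step, and the starting interval below are all assumptions), the core update logic can be sketched as:

```python
def staircase_trial(interval, correct, state):
    """Update a 2-down/1-up staircase: shrink the pitch interval after
    two consecutive correct responses, enlarge it after any error."""
    if correct:
        state["n_correct"] += 1
        if state["n_correct"] == 2:          # two in a row -> make task harder
            state["n_correct"] = 0
            interval = max(interval / state["step"], state["floor"])
    else:
        state["n_correct"] = 0               # any error -> make task easier
        interval = interval * state["step"]
    return interval

# Hypothetical run: start at a 100-cent interval, halve/double by a factor of 2.
state = {"n_correct": 0, "step": 2.0, "floor": 1.0}
interval = 100.0
for correct in [True, True, True, True, False]:
    interval = staircase_trial(interval, correct, state)
```

A 2-down/1-up rule of this kind converges on the interval size yielding roughly 71% correct performance, which is why staircases of this family are common in discrimination training.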

  9. Modulation of auditory cortex response to pitch variation following training with microtonal melodies

    Directory of Open Access Journals (Sweden)

    Robert J Zatorre

    2012-12-01

    Full Text Available We tested changes in cortical functional response to auditory configural learning by training ten human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music). We measured covariation in blood oxygenation signal to increasing pitch-interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature of interest. A psychophysical staircase procedure with feedback was used for training over a two-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch-interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices also occurred after training. Individual variation in learning rate was related to the hemodynamic slope to pitch-interval size, such that those who had a higher sensitivity to pitch-interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation in the response of auditory cortex specifically to the trained stimulus feature. Reduction in blood oxygenation response to increasing pitch-interval size suggests that fewer computational resources, and hence lower neural recruitment, is associated with learning, in accord with models of auditory cortex function, and with data from other modalities.

  10. Effective stimuli for constructing reliable neuron models.

    Directory of Open Access Journals (Sweden)

    Shaul Druckmann

    2011-08-01

    Full Text Available The rich dynamical nature of neurons poses major conceptual and technical challenges for unraveling their nonlinear membrane properties. Traditionally, various current waveforms have been injected at the soma to probe neuron dynamics, but the rationale for selecting specific stimuli has never been rigorously justified. The present experimental and theoretical study proposes a novel framework, inspired by learning theory, for objectively selecting the stimuli that best unravel the neuron's dynamics. The efficacy of stimuli is assessed in terms of their ability to constrain the parameter space of biophysically detailed conductance-based models that faithfully replicate the neuron's dynamics as attested by their ability to generalize well to the neuron's response to novel experimental stimuli. We used this framework to evaluate a variety of stimuli in different types of cortical neurons, ages and animals. Despite their simplicity, a set of stimuli consisting of step and ramp current pulses outperforms synaptic-like noisy stimuli in revealing the dynamics of these neurons. The general framework that we propose paves a new way for defining, evaluating and standardizing effective electrical probing of neurons and will thus lay the foundation for a much deeper understanding of the electrical nature of these highly sophisticated and non-linear devices and of the neuronal networks that they compose.
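The step and ramp current pulses that this record found most effective are simple waveforms to generate. The sketch below (amplitudes, timings, and sampling interval are illustrative assumptions, not the study's protocol) builds both as sampled current traces:

```python
def step_pulse(amplitude, onset, offset, duration, dt=0.001):
    """Constant current injected between onset and offset (times in s)."""
    n = int(duration / dt)
    return [amplitude if onset <= i * dt < offset else 0.0 for i in range(n)]

def ramp_pulse(peak, onset, offset, duration, dt=0.001):
    """Current rising linearly from 0 to `peak` between onset and offset."""
    n = int(duration / dt)
    out = []
    for i in range(n):
        t = i * dt
        out.append(peak * (t - onset) / (offset - onset)
                   if onset <= t < offset else 0.0)
    return out

# Hypothetical probes: a 0.5 nA step and a 0-to-0.5 nA ramp, 100-400 ms
# within a 500 ms sweep, sampled at 1 kHz.
step = step_pulse(0.5, 0.1, 0.4, 0.5)
ramp = ramp_pulse(0.5, 0.1, 0.4, 0.5)
```

In practice such traces would be fed to a current-clamp amplifier or a simulator's current-injection mechanism; the point of the record is that even these simple waveforms can constrain model parameters better than synaptic-like noise.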

  11. Hierarchical multifunctional nanocomposites

    Science.gov (United States)

    Ghasemi-Nejhad, Mehrdad N.

    2014-03-01

    properties of the fibers can also be improved by the growth of nanotubes on the fibers. The combination of the two will produce super-performing materials not currently available. Since the improvement of the fiber starts with carbon nanotubes grown on micron-size fibers (and a matrix with a nanomaterial) to give the macro-composite, this process is a bottom-up "hierarchical" advanced manufacturing process, and since the resulting nanocomposites will have "multifunctionality" with improved properties in various functional areas such as chemical and fire resistance, damping, stiffness, strength, fracture toughness, EMI shielding, and electrical and thermal conductivity, the resulting nanocomposites are in fact "multifunctional hierarchical nanocomposites." In this paper, the current state of knowledge in the processing, performance, and characterization of these materials is addressed.

  12. Efficacy of individual computer-based auditory training for people with hearing loss: a systematic review of the evidence.

    Directory of Open Access Journals (Sweden)

    Helen Henshaw

    Full Text Available BACKGROUND: Auditory training involves active listening to auditory stimuli and aims to improve performance in auditory tasks. As such, auditory training is a potential intervention for the management of people with hearing loss. OBJECTIVE: This systematic review (PROSPERO 2011: CRD42011001406) evaluated the published evidence base for the efficacy of individual computer-based auditory training to improve speech intelligibility, cognition and communication abilities in adults with hearing loss, with or without hearing aids or cochlear implants. METHODS: A systematic search of eight databases and key journals identified 229 articles published since 1996, 13 of which met the inclusion criteria. Data were independently extracted and reviewed by the two authors. Study quality was assessed using ten pre-defined scientific and intervention-specific measures. RESULTS: Auditory training resulted in improved performance for trained tasks in 9/10 articles that reported on-task outcomes. Although significant generalisation of learning was shown to untrained measures of speech intelligibility (11/13 articles), cognition (1/1 article) and self-reported hearing abilities (1/2 articles), improvements were small and not robust. Where reported, compliance with computer-based auditory training was high, and retention of learning was shown at post-training follow-ups. Published evidence was of very-low to moderate study quality. CONCLUSIONS: Our findings demonstrate that published evidence for the efficacy of individual computer-based auditory training for adults with hearing loss is not robust and therefore cannot be reliably used to guide intervention at this time. We identify a need for high-quality evidence to further examine the efficacy of computer-based auditory training for people with hearing loss.

  13. Catecholaminergic innervation of central and peripheral auditory circuitry varies with reproductive state in female midshipman fish, Porichthys notatus.

    Directory of Open Access Journals (Sweden)

    Paul M Forlano

    Full Text Available In seasonal breeding vertebrates, hormone regulation of catecholamines, which include dopamine and noradrenaline, may function, in part, to modulate behavioral responses to conspecific vocalizations. However, natural seasonal changes in catecholamine innervation of auditory nuclei are largely unexplored, especially in the peripheral auditory system, where encoding of social acoustic stimuli is initiated. The plainfin midshipman fish, Porichthys notatus, has proven to be an excellent model to explore mechanisms underlying seasonal peripheral auditory plasticity related to reproductive social behavior. Recently, we demonstrated robust catecholaminergic (CA) innervation throughout the auditory system in midshipman. Most notably, dopaminergic neurons in the diencephalon have widespread projections to auditory circuitry including direct innervation of the saccule, the main end organ of hearing, and the cholinergic octavolateralis efferent nucleus (OE), which also projects to the inner ear. Here, we tested the hypothesis that gravid, reproductive summer females show differential CA innervation of the auditory system compared to non-reproductive winter females. We utilized quantitative immunofluorescence to measure tyrosine hydroxylase immunoreactive (TH-ir) fiber density throughout central auditory nuclei and the sensory epithelium of the saccule. Reproductive females exhibited greater density of TH-ir innervation in two forebrain areas including the auditory thalamus and greater density of TH-ir on somata and dendrites of the OE. In contrast, non-reproductive females had greater numbers of TH-ir terminals in the saccule and greater TH-ir fiber density in a region of the auditory hindbrain, as well as greater numbers of TH-ir neurons in the preoptic area. These data provide evidence that catecholamines may function, in part, to seasonally modulate the sensitivity of the inner ear and, in turn, the appropriate behavioral response to reproductive acoustic

  14. Attention deficits revealed by passive auditory change detection for pure tones and lexical tones in ADHD children

    Directory of Open Access Journals (Sweden)

    Ming-Tao eYang

    2015-08-01

    Full Text Available Inattention has been a major problem in children with attention deficit/hyperactivity disorder (ADHD), accounting for their behavioral and cognitive dysfunctions. However, there are at least three processing steps underlying attentional control for auditory change detection, namely pre-attentive change detection, involuntary attention orienting, and attention reorienting for further evaluation. This study aimed to examine whether children with ADHD would show deficits in any of these subcomponents by using mismatch negativity (MMN), P3a, and late discriminative negativity (LDN) as event-related potential (ERP) markers under the passive auditory oddball paradigm. Two types of stimuli - pure tones and Mandarin lexical tones - were used to examine whether the deficits were general across linguistic and non-linguistic domains. Participants included 15 native Mandarin-speaking children with ADHD and 16 age-matched controls (across groups, age ranged between 6 and 15 years). Two passive auditory oddball paradigms (lexical tones and pure tones) were applied. The pure-tone paradigm included standard stimuli (1000 Hz, 80%) and two deviant stimuli (1015 Hz and 1090 Hz, 10% each). The Mandarin lexical tone paradigm's standard stimulus was /yi3/ (80%) and its two deviant stimuli were /yi1/ and /yi2/ (10% each). The results showed no MMN difference, but did show attenuated P3a and enhanced LDN to the large deviants for both pure and lexical tone changes in the ADHD group. Correlation analysis showed that children with higher ADHD tendency, as indexed by parents' and teachers' ratings of ADHD symptoms, showed less positive P3a amplitudes when responding to large lexical tone deviants. Thus, children with ADHD showed impaired auditory change detection for both pure tones and lexical tones in both involuntary attention switching and attention reorienting for further evaluation. These ERP markers may therefore be used for evaluation of anti-ADHD drugs that aim to alleviate these

  15. Hierarchical fringe tracking

    CERN Document Server

    Petrov, Romain G; Boskri, Abdelkarim; Folcher, Jean-Pierre; Lagarde, Stephane; Bresson, Yves; Benkhaldoum, Zouhair; Lazrek, Mohamed; Rakshit, Suvendu

    2014-01-01

    The limiting magnitude is a key issue for optical interferometry. Pairwise fringe trackers based on the integrated optics concepts used for example in GRAVITY seem limited to about K=10.5 with the 8m Unit Telescopes of the VLTI, and there is a general "common sense" statement that the efficiency of fringe tracking, and hence the sensitivity of optical interferometry, must decrease as the number of apertures increases, at least in the near infrared where we are still limited by detector readout noise. Here we present a Hierarchical Fringe Tracking (HFT) concept with sensitivity at least equal to that of a two-aperture fringe tracker. HFT is based on the combination of the apertures in pairs, then in pairs of pairs, then in pairs of groups. The key HFT module is a device that behaves like a spatial filter for two telescopes (2TSF) and transmits all or most of the flux of a cophased pair in a single-mode beam. We give an example of such an achromatic 2TSF, based on very broadband dispersed fringes analyzed by g...

  16. Cortical Auditory Event Related Potentials (P300) for Frequency Changing Dynamic Tones

    Science.gov (United States)

    Kalaiah, Mohan Kumar

    2016-01-01

    Background and Objectives The P300 has been studied with a variety of stimuli. However, the nature of the P300 has not been investigated for deviant stimuli whose characteristics change from those of the standard stimuli after a period of time from onset. Subjects and Methods Nine young adults with normal hearing participated in the study. The P300 was elicited using an oddball paradigm; the probabilities of standard and deviant stimuli were 80% and 20%, respectively. Six stimuli were used to elicit the P300, including two pure tones (1,000 Hz and 2,000 Hz) and four tone-complexes (tones with frequency changes). Among these stimuli, the 1,000 Hz tone served as the standard while the others served as deviant stimuli. The P300 was recorded in five separate blocks, with one of the deviant stimuli as the target in each block. Electroencephalography was recorded from electrode sites Fz, Cz, C3, C4, and Pz. Latency and amplitude of the components of the cortical auditory evoked potentials were measured at Cz. Results Waveforms obtained in the present study show that all the deviant stimuli elicited obligatory P1-N1-P2 responses to stimulus onset. The 2,000 Hz deviant tone elicited a P300 at a latency of 300 ms, while the tone-complexes elicited an acoustic change complex (ACC) for frequency changes and finally elicited a P300 at a latency of 600 ms. In addition, the results showed shorter-latency and larger-amplitude ACC and P300 for rising tone-complexes compared to falling tone-complexes. Conclusions Tone-complexes elicited distinct waveforms compared to the 2,000 Hz deviant tone. Rising tone-complexes, which had an increase in frequency, elicited shorter-latency and larger-amplitude responses, which could be attributed to a perceptual bias for frequency changes. PMID:27144230
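The oddball paradigm used in this record - frequent standards interleaved with rare deviants - can be sketched as a stimulus-sequence generator. The 80/20 split comes from the record; the no-two-consecutive-deviants constraint and the label names below are assumptions added for illustration:

```python
import random

def oddball_sequence(n_trials, deviants, p_deviant=0.20, seed=1):
    """Generate a pseudo-random oddball stream: 'standard' with probability
    1 - p_deviant, otherwise one of the deviant labels.  A standard is
    forced after every deviant, a common constraint in oddball designs."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] != "standard":
            seq.append("standard")           # no two deviants in a row
        elif rng.random() < p_deviant:
            seq.append(rng.choice(deviants))
        else:
            seq.append("standard")
    return seq

# Hypothetical 500-trial block with three deviant types.
seq = oddball_sequence(500, ["2000Hz", "rise", "fall"])
```

Note that forcing a standard after each deviant lowers the realized deviant rate slightly below the nominal 20%; real experiments typically compensate for this or report the realized proportions.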

  17. Hierarchical clustering for graph visualization

    CERN Document Server

    Clémençon, Stéphan; Rossi, Fabrice; Tran, Viet Chi

    2012-01-01

    This paper describes a graph visualization methodology based on hierarchical maximal modularity clustering, with interactive and significant coarsening and refining possibilities. An application of this method to HIV epidemic analysis in Cuba is outlined.
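The record's maximal-modularity method is specific to graphs, but the underlying idea - building a merge hierarchy that can then be coarsened or refined interactively - can be illustrated with a minimal agglomerative sketch. This is a generic single-linkage illustration on 1-D points with a naive O(n^3) search, not the paper's algorithm:

```python
def single_linkage(points):
    """Naive agglomerative (single-linkage) clustering: repeatedly merge
    the two closest clusters and record the merge order, yielding the
    kind of hierarchy a dendrogram or coarsenable view is built from."""
    clusters = {i: [i] for i in range(len(points))}
    merges = []

    def dist(a, b):
        # Single linkage: distance between the closest pair of members.
        return min(abs(points[i] - points[j])
                   for i in clusters[a] for j in clusters[b])

    while len(clusters) > 1:
        keys = list(clusters)
        a, b = min(((a, b) for ai, a in enumerate(keys) for b in keys[ai + 1:]),
                   key=lambda ab: dist(*ab))
        clusters[a] += clusters.pop(b)       # merge b into a
        merges.append((a, b))
    return merges

# Two well-separated groups on a line: {0.0, 0.1, 0.2} and {10.0, 10.1}.
merges = single_linkage([0.0, 0.1, 0.2, 10.0, 10.1])
```

Reading the merge list from the end gives successively coarser views of the data: the final merge joins the two well-separated groups, which is exactly the level at which an interactive visualization would first split the display.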

  18. Direct hierarchical assembly of nanoparticles

    Science.gov (United States)

    Xu, Ting; Zhao, Yue; Thorkelsson, Kari

    2014-07-22

    The present invention provides hierarchical assemblies of a block copolymer, a bifunctional linking compound and a nanoparticle. The block copolymers form one micro-domain and the nanoparticles another micro-domain.

  19. Material differences of auditory source retrieval:Evidence from event-related potential studies

    Institute of Scientific and Technical Information of China (English)

    NIE AiQing; GUO ChunYan; SHEN MoWei

    2008-01-01

    Two event-related potential experiments were conducted to investigate the temporal and the spatial distributions of the old/new effects for the item recognition task and the auditory source retrieval task using picture and Chinese character as stimuli respectively. Stimuli were presented on the center of the screen with their names read out either by female or by male voice simultaneously during the study phase and then two testa were performed separately. One test task was to differentiate the old items from the new ones, and the other task was to judge the items read out by a certain voice during the study phase as targets and other ones as non-targets. The results showed that the old/new effect of the auditory source retrieval task was more sustained over time than that of the item recognition task in both experiments, and the spatial distribution of the former effect was wider than that of the latter one. Both experiments recorded reliable old/new effect over the prefrontal cortex during the source retrieval task. However, there existed some differences of the old/new effect for the auditory source retrieval task between picture and Chinese character, and LORETA source analysis indicated that the differ-ences might be rooted in the temporal lobe. These findings demonstrate that the relevancy of the old/new effects between the item recognition task and the auditory source retrieval task supports the dual-process model; the spatial and the temporal distributions of the old/new effect elicited by the auditory source retrieval task are regulated by both the feature of the experimental material and the perceptual attribute of the voice.

  20. Effective stimuli for constructing reliable neuron models.

    OpenAIRE

    Shaul Druckmann; Berger, Thomas K.; Felix Schürmann; Sean Hill; Henry Markram; Idan Segev

    2011-01-01

    Author Summary Neurons perform complicated non-linear transformations on their input before producing their output - a train of action potentials. This input-output transformation is shaped by the specific composition of ion channels, out of the many possible types, that are embedded in the neuron's membrane. Experimentally, characterizing this transformation relies on injecting different stimuli into the neuron while recording its output; but which of the many possible stimuli should one apply...

  1. Hierarchical architecture of active knits

    International Nuclear Information System (INIS)

    Nature eloquently utilizes hierarchical structures to form the world around us. Applying the hierarchical architecture paradigm to smart materials can provide a basis for a new genre of actuators which produce complex actuation motions. One promising example of cellular architecture—active knits—provides complex three-dimensional distributed actuation motions with expanded operational performance through a hierarchically organized structure. The hierarchical structure arranges a single fiber of active material, such as shape memory alloys (SMAs), into a cellular network of interlacing adjacent loops according to a knitting grid. This paper defines a four-level hierarchical classification of knit structures: the basic knit loop, knit patterns, grid patterns, and restructured grids. Each level of the hierarchy provides increased architectural complexity, resulting in expanded kinematic actuation motions of active knits. The range of kinematic actuation motions are displayed through experimental examples of different SMA active knits. The results from this paper illustrate and classify the ways in which each level of the hierarchical knit architecture leverages the performance of the base smart material to generate unique actuation motions, providing necessary insight to best exploit this new actuation paradigm. (paper)

  2. Binocular combination of second-order stimuli.

    Science.gov (United States)

    Zhou, Jiawei; Liu, Rong; Zhou, Yifeng; Hess, Robert F

    2014-01-01

    Phase information is a fundamental aspect of visual stimuli. However, the nature of the binocular combination of stimuli defined by modulations in contrast, so-called second-order stimuli, is presently not clear. To address this issue, we measured binocular combination for first-order (luminance modulated) and second-order (contrast modulated) stimuli using a binocular phase combination paradigm in seven normal adults. We found that the binocular perceived phase of second-order gratings depends on the interocular signal ratio, as has been previously shown for their first-order counterparts; the interocular signal ratios when the two eyes were balanced were close to 1 in both first- and second-order phase combinations. However, second-order combination is more linear than previously found for first-order combination. Furthermore, binocular combination of second-order stimuli was similar regardless of whether the carriers in the two eyes were correlated, anti-correlated, or uncorrelated. This suggests that, in normal adults, the binocular phase combination of second-order stimuli occurs after the monocular extraction of the second-order modulations. The sensory balance associated with this second-order combination can be obtained from binocular phase combination measurements. PMID:24404180

  3. Binocular combination of second-order stimuli.

    Directory of Open Access Journals (Sweden)

    Jiawei Zhou

    Full Text Available Phase information is a fundamental aspect of visual stimuli. However, the nature of the binocular combination of stimuli defined by modulations in contrast, so-called second-order stimuli, is presently not clear. To address this issue, we measured binocular combination for first-order (luminance modulated) and second-order (contrast modulated) stimuli using a binocular phase combination paradigm in seven normal adults. We found that the binocular perceived phase of second-order gratings depends on the interocular signal ratio, as has been previously shown for their first-order counterparts; the interocular signal ratios when the two eyes were balanced were close to 1 in both first- and second-order phase combinations. However, second-order combination is more linear than previously found for first-order combination. Furthermore, binocular combination of second-order stimuli was similar regardless of whether the carriers in the two eyes were correlated, anti-correlated, or uncorrelated. This suggests that, in normal adults, the binocular phase combination of second-order stimuli occurs after the monocular extraction of the second-order modulations. The sensory balance associated with this second-order combination can be obtained from binocular phase combination measurements.
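Under a simple linear-summation assumption (an illustration of the phase-combination idea only, not the contrast gain-control model such studies typically fit), the binocularly perceived phase of two equal-frequency gratings follows directly from summing sinusoids:

```python
import math

def perceived_phase(c1, phi1, c2, phi2):
    """Phase (degrees) of the sum of two equal-frequency sinusoids with
    contrasts c1, c2 and phases phi1, phi2 (degrees): pure linear summation.
    c1*sin(wt+p1) + c2*sin(wt+p2) is a sinusoid whose phase is atan2 of
    the summed quadrature components."""
    p1, p2 = math.radians(phi1), math.radians(phi2)
    y = c1 * math.sin(p1) + c2 * math.sin(p2)
    x = c1 * math.cos(p1) + c2 * math.cos(p2)
    return math.degrees(math.atan2(y, x))

# Balanced eyes (interocular ratio 1): phases +22.5 and -22.5 average to 0.
balanced = perceived_phase(1.0, 22.5, 1.0, -22.5)
# A contrast-dominant eye pulls the perceived phase toward its own phase.
dominant = perceived_phase(2.0, 22.5, 1.0, -22.5)
```

The phase values and the 2:1 contrast ratio above are illustrative; the paradigm in the record infers the effective interocular ratio from where the measured perceived phase falls between the two monocular phases.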

  4. Advanced hierarchical distance sampling

    Science.gov (United States)

    Royle, Andy

    2016-01-01

    In this chapter, we cover a number of important extensions of the basic hierarchical distance-sampling (HDS) framework from Chapter 8. First, we discuss the inclusion of “individual covariates,” such as group size, in the HDS model. This is important in many surveys where animals form natural groups that are the primary observation unit, with the size of the group expected to have some influence on detectability. We also discuss HDS integrated with time-removal and double-observer or capture-recapture sampling. These “combined protocols” can be formulated as HDS models with individual covariates, and thus they have a commonality with HDS models involving group structure (group size being just another individual covariate). We cover several varieties of open-population HDS models that accommodate population dynamics. On one end of the spectrum, we cover models that allow replicate distance sampling surveys within a year, which estimate abundance relative to availability and temporary emigration through time. We consider a robust design version of that model. We then consider models with explicit dynamics based on the Dail and Madsen (2011) model and the work of Sollmann et al. (2015). The final major theme of this chapter is relatively newly developed spatial distance sampling models that accommodate explicit models describing the spatial distribution of individuals known as Point Process models. We provide novel formulations of spatial DS and HDS models in this chapter, including implementations of those models in the unmarked package using a hack of the pcount function for N-mixture models.
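The half-normal detection function at the heart of distance sampling can be sketched outside the unmarked package. The following Python illustration (all parameter values are assumptions) simulates detections and recovers the scale parameter by a crude grid-search maximum likelihood:

```python
import math
import random

def simulate_distances(n, w, sigma, rng):
    """Simulate distance sampling: animals uniform on [0, w]; each is
    detected with half-normal probability g(d) = exp(-d^2 / (2 sigma^2))."""
    detections = []
    for _ in range(n):
        d = rng.uniform(0.0, w)
        if rng.random() < math.exp(-d * d / (2 * sigma * sigma)):
            detections.append(d)
    return detections

def fit_sigma(distances, w, grid=None):
    """Crude maximum-likelihood fit of sigma by grid search: the density
    of an observed distance is g(d) divided by the integral of g on [0, w]
    (approximated here by a rectangle rule)."""
    grid = grid or [0.1 * i for i in range(5, 60)]

    def loglik(s):
        step = w / 1000.0
        norm = sum(math.exp(-(i * step) ** 2 / (2 * s * s)) * step
                   for i in range(1000))
        return sum(-d * d / (2 * s * s) - math.log(norm) for d in distances)

    return max(grid, key=loglik)

# Hypothetical survey: 3000 animals, 10-unit strip, true sigma = 2.
rng = random.Random(7)
dists = simulate_distances(3000, 10.0, 2.0, rng)
sigma_hat = fit_sigma(dists, 10.0)
```

Hierarchical DS models wrap this detection component in a model for abundance across sites; the sketch isolates only the detection-function step that all such models share.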

  5. Anterior insula coordinates hierarchical processing of tactile mismatch responses.

    Science.gov (United States)

    Allen, Micah; Fardo, Francesca; Dietz, Martin J; Hillebrandt, Hauke; Friston, Karl J; Rees, Geraint; Roepstorff, Andreas

    2016-02-15

    The body underlies our sense of self, emotion, and agency. Signals arising from the skin convey warmth, social touch, and the physical characteristics of external stimuli. Surprising or unexpected tactile sensations can herald events of motivational salience, including imminent threats (e.g., an insect bite) and hedonic rewards (e.g., a caressing touch). Awareness of such events is thought to depend upon the hierarchical integration of body-related mismatch responses by the anterior insula (AIC). To investigate this possibility, we measured brain activity using functional magnetic resonance imaging while healthy participants performed a roving tactile oddball task. Mass-univariate analysis demonstrated robust activations in limbic, somatosensory, and prefrontal cortical areas previously implicated in tactile deviancy, body awareness, and cognitive control. Dynamic Causal Modelling revealed that unexpected stimuli increased the strength of forward connections along a caudal-to-rostral hierarchy, projecting from thalamic and somatosensory regions towards insula, cingulate and prefrontal cortices. Within this ascending flow of sensory information, the AIC was the only region to show increased backwards connectivity to the somatosensory cortex, augmenting a reciprocal exchange of neuronal signals. Further, participants who rated stimulus changes as easier to detect showed stronger modulation of descending PFC-to-AIC connections by deviance. These results suggest that the AIC coordinates hierarchical processing of tactile prediction error. They are interpreted in support of an embodied predictive coding model in which AIC-mediated body awareness is involved in anchoring a global neuronal workspace. PMID:26584870

  7. Anterior insula coordinates hierarchical processing of tactile mismatch responses

    Science.gov (United States)

    Allen, Micah; Fardo, Francesca; Dietz, Martin J.; Hillebrandt, Hauke; Friston, Karl J.; Rees, Geraint; Roepstorff, Andreas

    2016-01-01

    The body underlies our sense of self, emotion, and agency. Signals arising from the skin convey warmth, social touch, and the physical characteristics of external stimuli. Surprising or unexpected tactile sensations can herald events of motivational salience, including imminent threats (e.g., an insect bite) and hedonic rewards (e.g., a caressing touch). Awareness of such events is thought to depend upon the hierarchical integration of body-related mismatch responses by the anterior insula (AIC). To investigate this possibility, we measured brain activity using functional magnetic resonance imaging while healthy participants performed a roving tactile oddball task. Mass-univariate analysis demonstrated robust activations in limbic, somatosensory, and prefrontal cortical areas previously implicated in tactile deviancy, body awareness, and cognitive control. Dynamic Causal Modelling revealed that unexpected stimuli increased the strength of forward connections along a caudal-to-rostral hierarchy, projecting from thalamic and somatosensory regions towards insula, cingulate and prefrontal cortices. Within this ascending flow of sensory information, the AIC was the only region to show increased backwards connectivity to the somatosensory cortex, augmenting a reciprocal exchange of neuronal signals. Further, participants who rated stimulus changes as easier to detect showed stronger modulation of descending prefrontal (PFC) to AIC connections by deviance. These results suggest that the AIC coordinates hierarchical processing of tactile prediction error. They are interpreted in support of an embodied predictive coding model in which AIC-mediated body awareness is involved in anchoring a global neuronal workspace. PMID:26584870

  8. Task-specific modulation of human auditory evoked responses in a delayed-match-to-sample task

    Directory of Open Access Journals (Sweden)

    Feng eRong

    2011-05-01

    In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in the human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial-filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect located in the vicinity of the left auditory cortex. For the right auditory cortex, a non-invariant suppression effect was observed in both the DMS and control tasks. Furthermore, coherence analysis revealed a beta-band (12–20 Hz) DMS-specific enhancement of the functional interaction between the sources in left auditory cortex and those in left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.

  9. The Role of Visual Stimuli in Cross-Modal Stroop Interference.

    Science.gov (United States)

    Lutfi-Proctor, Danielle A; Elliott, Emily M; Cowan, Nelson

    2014-03-01

    It has long been known that naming the color of a color word leads to what is known as the Stroop effect (Stroop, 1935). In the traditional Stroop task, when compared to naming the color of a color-neutral stimulus (e.g. an X or color patch), the presence of an incongruent color word decreases performance (Stroop interference), and a congruent color word increases performance (Stroop facilitation). Research has also shown that auditory color words can impact the color naming performance of colored items in a similar way in a variation known as cross-modal Stroop (Cowan & Barron, 1987). However, whether the item that is colored interacts with the auditory distractor to affect cross-modal Stroop interference is unclear. Research with the traditional, visual Stroop task has suggested that the amount of color the visual item displays and the semantic and phonetic components of the colored word can affect the magnitude of the resulting Stroop interference; as such, it is possible the same components could play a role in cross-modal Stroop interference. We conducted two experiments to examine the impact of the composition of the colored visual item on cross-modal Stroop interference. However, across two different experiments, three test versions, and numerous sets of trials, we were only able to find a small effect of the visual stimulus. This finding suggests that while the impact of the auditory stimuli is consistent and robust, the influence of non-word visual stimuli is quite small and unreliable and, while occasionally being statistically significant, it is not practically so. PMID:25068037

  10. Psychophysiological responses to auditory change.

    Science.gov (United States)

    Chuen, Lorraine; Sears, David; McAdams, Stephen

    2016-06-01

    A comprehensive characterization of autonomic and somatic responding within the auditory domain is currently lacking. We studied whether simple types of auditory change that occur frequently during music listening could elicit measurable changes in heart rate, skin conductance, respiration rate, and facial motor activity. Participants heard a rhythmically isochronous sequence consisting of a repeated standard tone, followed by a repeated target tone that changed in pitch, timbre, duration, intensity, or tempo, or that deviated momentarily from rhythmic isochrony. Changes in all parameters produced increases in heart rate. Skin conductance response magnitude was affected by changes in timbre, intensity, and tempo. Respiratory rate was sensitive to deviations from isochrony. Our findings suggest that music researchers interpreting physiological responses as emotional indices should consider acoustic factors that may influence physiology in the absence of induced emotions. PMID:26927928

  11. Reality of auditory verbal hallucinations

    Science.gov (United States)

    Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-01-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real the hallucinations felt to the subjects depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency. PMID:19620178

  12. Cortical and thalamic connectivity of the auditory anterior ectosylvian cortex of early-deaf cats: Implications for neural mechanisms of crossmodal plasticity.

    Science.gov (United States)

    Meredith, M Alex; Clemo, H Ruth; Corley, Sarah B; Chabot, Nicole; Lomber, Stephen G

    2016-03-01

    Early hearing loss leads to crossmodal plasticity in regions of the cerebrum that are dominated by acoustical processing in hearing subjects. Until recently, little has been known of the connectional basis of this phenomenon. One region whose crossmodal properties are well-established is the auditory field of the anterior ectosylvian sulcus (FAES) in the cat, where neurons are normally responsive to acoustic stimulation and its deactivation leads to the behavioral loss of accurate orienting toward auditory stimuli. However, in early-deaf cats, visual responsiveness predominates in the FAES and its deactivation blocks accurate orienting behavior toward visual stimuli. For such crossmodal reorganization to occur, it has been presumed that novel inputs or increased projections from non-auditory cortical areas must be generated, or that existing non-auditory connections were 'unmasked.' These possibilities were tested using tracer injections into the FAES of adult cats deafened early in life (and hearing controls), followed by light microscopy to localize retrogradely labeled neurons. Surprisingly, the distribution of cortical and thalamic afferents to the FAES was very similar among early-deaf and hearing animals. No new visual projection sources were identified and visual cortical connections to the FAES were comparable in projection proportions. These results support an alternate theory for the connectional basis for cross-modal plasticity that involves enhanced local branching of existing projection terminals that originate in non-auditory as well as auditory cortices. PMID:26724756

  13. Auditory distraction and serial memory

    OpenAIRE

    Jones, D M; Hughes, Rob; Macken, W.J.

    2010-01-01

    One mental activity that is very vulnerable to auditory distraction is serial recall. This review of the contemporary findings relating to serial recall charts the key determinants of distraction. It is evident that there is one form of distraction that is a joint product of the cognitive characteristics of the task and of the obligatory cognitive processing of the sound. For sequences of sound, distraction appears to be an ineluctable product of similarity-of-process, specifically, the serial…

  14. Reality of auditory verbal hallucinations

    OpenAIRE

    Raij TT; Valkonen-Korhonen M; Holi M; Therman S; Lehtonen J; Hari R

    2009-01-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength…

  15. Dissecting the functional anatomy of auditory word repetition

    Directory of Open Access Journals (Sweden)

    Thomas Matthew Hadley Hope

    2014-05-01

    Auditory word repetition involves many different brain regions, whose functions are still far from fully understood. Here, we use a single, multi-factorial, within-subjects fMRI design to identify those regions, and to functionally distinguish the multiple linguistic and non-linguistic processing areas that are all involved in repeating back heard words. The study compared: (1) auditory to visual inputs; (2) phonological to non-phonological inputs; (3) semantic to non-semantic inputs; and (4) speech production to finger-press responses. The stimuli included words (semantic and phonological inputs), pseudowords (phonological input), pictures and sounds of animals or objects (semantic input), and coloured patterns and hums (non-semantic and non-phonological). The speech production tasks involved auditory repetition, reading and naming, while the finger-press tasks involved one-back matching. The results from the main effects and interactions were compared to predictions from a previously reported functional anatomical model of language based on a meta-analysis of many different neuroimaging experiments. Although many findings from the current experiment replicated those predicted, our within-subject design also revealed novel results by providing sufficient anatomical precision to distinguish several different regions within: (1) the anterior insula (a dorsal region involved in both covert and overt speech production, and a more ventral region involved in overt speech only); (2) the pars orbitalis (with distinct sub-regions responding to phonological and semantic processing); (3) the anterior cingulate and SMA (whose subregions show differential sensitivity to speech and finger-press responses); and (4) the cerebellum (with distinct regions for semantic processing, speech production and domain-general processing). We also dissociated four different types of phonological effects in, respectively, the left superior temporal sulcus, left putamen, left ventral premotor…

  16. The Representation of Prediction Error in Auditory Cortex

    Science.gov (United States)

    Rubin, Jonathan; Ulanovsky, Nachum; Tishby, Naftali

    2016-01-01

    To survive, organisms must extract information from the past that is relevant for their future. How this process is expressed at the neural level remains unclear. We address this problem by developing a novel approach from first principles. We show here how to generate low-complexity representations of the past that produce optimal predictions of future events. We then illustrate this framework by studying the coding of ‘oddball’ sequences in auditory cortex. We find that for many neurons in primary auditory cortex, trial-by-trial fluctuations of neuronal responses correlate with the theoretical prediction error calculated from the short-term past of the stimulation sequence, under constraints on the complexity of the representation of this past sequence. In some neurons, the effect of prediction error accounted for more than 50% of response variability. Reliable predictions often depended on a representation of the sequence of the last ten or more stimuli, although the representation kept only few details of that sequence. PMID:27490251
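
    The trial-by-trial prediction error described here can be illustrated with a toy calculation (this is not the paper's model, which constrains the complexity of the representation of the past; the exponential-decay estimator below is an assumption chosen for brevity):

```python
import math

def prediction_errors(sequence, decay=0.9):
    """Surprise (-log2 of the predicted probability) for each stimulus in
    a binary oddball sequence, where probabilities are estimated from an
    exponentially weighted history of recent stimuli."""
    counts = {0: 1.0, 1: 1.0}  # Laplace-smoothed, decaying counts
    errors = []
    for s in sequence:
        p = counts[s] / (counts[0] + counts[1])
        errors.append(-math.log2(p))  # large when the stimulus is rare
        counts[0] *= decay            # forget the distant past
        counts[1] *= decay
        counts[s] += 1.0
    return errors

# Deviants embedded in a stream of standards carry the highest surprise
pe = prediction_errors([0] * 9 + [1] + [0] * 9 + [1])
```

    Neurons whose firing correlates with such a trace would, in this framework, be coding prediction error rather than the stimulus itself.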

  17. Perceptual hysteresis in the judgment of auditory pitch shift.

    Science.gov (United States)

    Chambers, Claire; Pressnitzer, Daniel

    2014-07-01

    Perceptual hysteresis can be defined as the enduring influence of the recent past on current perception. Here, hysteresis was investigated in a basic auditory task: pitch comparisons between successive tones. On each trial, listeners were presented with pairs of tones and asked to report the direction of subjective pitch shift, as either "up" or "down." All tones were complexes known as Shepard tones (Shepard, 1964), which comprise several frequency components at octave multiples of a base frequency. The results showed that perceptual judgments were determined both by stimulus-related factors (the interval ratio between the base frequencies within a pair) and by recent context (the intervals in the two previous trials). When tones were presented in ordered sequences, for which the frequency interval between tones was varied in a progressive manner, strong hysteresis was found. In particular, ambiguous stimuli that led to equal probabilities of "up" and "down" responses within a randomized context were almost fully determined within an ordered context. Moreover, hysteresis did not act on the direction of the reported pitch shift, but rather on the perceptual representation of each tone. Thus, hysteresis could be observed within sequences in which listeners varied between "up" and "down" responses, enabling us to largely rule out confounds related to response bias. The strength of the perceptual hysteresis observed suggests that the ongoing context may have a substantial influence on fundamental aspects of auditory perception, such as how we perceive the changes in pitch between successive sounds.
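
    Shepard tones of the kind used here are straightforward to synthesize (a generic sketch, not the stimuli of Chambers and Pressnitzer; the envelope width and centre frequency below are arbitrary choices):

```python
import math

def shepard_tone(base_freq, sr=16000, dur=0.5, n_octaves=6, center=960.0):
    """One Shepard tone: partials at octave multiples of base_freq,
    weighted by a Gaussian envelope on log-frequency so that tones an
    octave apart sound nearly identical, and the direction of the pitch
    shift between two such tones can be ambiguous."""
    freqs = [base_freq * 2 ** k for k in range(n_octaves)]
    # Gaussian spectral envelope centred on `center` Hz in log2-frequency
    amps = [math.exp(-0.5 * ((math.log2(f) - math.log2(center)) / 1.5) ** 2)
            for f in freqs]
    n = int(sr * dur)
    return [sum(a * math.sin(2 * math.pi * f * t / sr)
                for f, a in zip(freqs, amps))
            for t in range(n)]
```

    Presenting two such tones half an octave apart yields the maximally ambiguous "up"/"down" judgments that the hysteresis paradigm exploits.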

  18. Estradiol selectively enhances auditory function in avian forebrain neurons.

    Science.gov (United States)

    Caras, Melissa L; O'Brien, Matthew; Brenowitz, Eliot A; Rubel, Edwin W

    2012-12-01

    Sex steroids modulate vertebrate sensory processing, but the impact of circulating hormone levels on forebrain function remains unclear. We tested the hypothesis that circulating sex steroids modulate single-unit responses in the avian telencephalic auditory nucleus, field L. We mimicked breeding or nonbreeding conditions by manipulating plasma 17β-estradiol levels in wild-caught female Gambel's white-crowned sparrows (Zonotrichia leucophrys gambelii). Extracellular responses of single neurons to tones and conspecific songs presented over a range of intensities revealed that estradiol selectively enhanced auditory function in cells that exhibited monotonic rate level functions to pure tones. In these cells, estradiol treatment increased spontaneous and maximum evoked firing rates, increased pure tone response strengths and sensitivity, and expanded the range of intensities over which conspecific song stimuli elicited significant responses. Estradiol did not significantly alter the sensitivity or dynamic ranges of cells that exhibited non-monotonic rate level functions. Notably, there was a robust correlation between plasma estradiol concentrations in individual birds and physiological response properties in monotonic, but not non-monotonic neurons. These findings demonstrate that functionally distinct classes of anatomically overlapping forebrain neurons are differentially regulated by sex steroid hormones in a dose-dependent manner.

  19. Auditory sequence analysis and phonological skill.

    Science.gov (United States)

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E; Turton, Stuart; Griffiths, Timothy D

    2012-11-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  20. Short term memory for tactile stimuli.

    Science.gov (United States)

    Gallace, Alberto; Tan, Hong Z; Haggard, Patrick; Spence, Charles

    2008-01-23

    Research has shown that unreported information stored in rapidly decaying visual representations may be accessed more accurately using partial report than using full report procedures (e.g., [Sperling, G., 1960. The information available in brief visual presentations. Psychological Monographs, 74, 1-29.]). In the 3 experiments reported here, we investigated whether unreported information regarding the actual number of tactile stimuli presented in parallel across the body surface can be accessed using a partial report procedure. In Experiment 1, participants had to report the total number of stimuli in a tactile display composed of up to 6 stimuli presented across their body (numerosity task), or else to detect whether or not a tactile stimulus had previously been presented in a position indicated by a visual probe given at a variable delay after offset of a tactile display (i.e., partial report). The results showed that participants correctly reported up to 3 stimuli in the numerosity judgment task, but their performance was significantly better than chance when up to 5 stimuli were presented in the partial report task. This result shows that short-lasting tactile representations can be accessed using partial report procedures similar to those used previously in visual studies. Experiment 2 showed that the duration of these representations (or the time available to consciously access them) depends on the number of stimuli presented in the display (the greater the number of stimuli that are presented, the faster their representation decays). Finally, the results of a third experiment showed that the differences in performance between the numerosity judgment and partial report tasks could not be explained solely in terms of any difference in task difficulty. PMID:18083147

  1. Action Effects and Task Knowledge: The Influence of Anticipatory Priming on the Identification of Task-Related Stimuli in Experts

    Science.gov (United States)

    Land, William M.

    2016-01-01

    The purpose of the present study was to examine the extent to which anticipation of an action’s perceptual effect primes identification of task-related stimuli. Specifically, skilled (n = 16) and novice (n = 24) tennis players performed a choice-reaction time (CRT) test in which they identified whether the presented stimulus was a picture of a baseball bat or tennis racket. Following their response, auditory feedback associated with either baseball or tennis was presented. The CRT test was performed in blocks in which participants predictably received the baseball sound or tennis sound irrespective of which stimulus picture was displayed. Results indicated that skilled tennis players responded quicker to tennis stimuli when the response was predictably followed by the tennis auditory effect compared to the baseball auditory effect. These findings imply that, within an individual’s area of expertise, domain-relevant knowledge is primed by anticipation of an action’s perceptual effect, thus allowing the cognitive system to more quickly identify environmental information. This finding provides a more complete picture of the influence that anticipation can have on the cognitive-motor system. No differences existed for novices. PMID:27272987

  2. Action Effects and Task Knowledge: The Influence of Anticipatory Priming on the Identification of Task-Related Stimuli in Experts.

    Directory of Open Access Journals (Sweden)

    William M Land

    The purpose of the present study was to examine the extent to which anticipation of an action's perceptual effect primes identification of task-related stimuli. Specifically, skilled (n = 16) and novice (n = 24) tennis players performed a choice-reaction time (CRT) test in which they identified whether the presented stimulus was a picture of a baseball bat or tennis racket. Following their response, auditory feedback associated with either baseball or tennis was presented. The CRT test was performed in blocks in which participants predictably received the baseball sound or tennis sound irrespective of which stimulus picture was displayed. Results indicated that skilled tennis players responded quicker to tennis stimuli when the response was predictably followed by the tennis auditory effect compared to the baseball auditory effect. These findings imply that, within an individual's area of expertise, domain-relevant knowledge is primed by anticipation of an action's perceptual effect, thus allowing the cognitive system to more quickly identify environmental information. This finding provides a more complete picture of the influence that anticipation can have on the cognitive-motor system. No differences existed for novices.

  3. Auditory evoked potentials and impairments to psychomotor activity evoked by falling asleep.

    Science.gov (United States)

    Dorokhov, V B; Verbitskaya, Yu S; Lavrova, T P

    2010-05-01

    Sounds provide the most suitable stimuli for studies of information processes occurring in the brain during falling asleep and at different stages of sleep. The widely used analysis of evoked potentials averaged for groups of subjects has a number of disadvantages associated with their individual variability. Thus, in the present study, measures of the individual components of auditory evoked potentials were determined and selectively summed for individual subjects, with subsequent analysis by group. The aim of the present work was to identify measures of auditory evoked potentials providing quantitative assessment of the dynamics of the brain's functional state during the appearance of errors in activity associated with decreases in the level of waking and falling asleep. A monotonous psychomotor test was performed in the lying position with the eyes closed; this consisted of two alternating parts: the first was counting auditory stimuli from 1 to 10 with simultaneous pressing of a button, and the second was counting stimuli from 1 to 5 silently without pressing the button, and so on. Computer-generated sound stimuli (duration 50 msec, envelope filling frequency 1000 Hz, intensity 60 dB) were presented binaurally with interstimulus intervals of 2.4-2.7 sec. A total of 41 subjects took part (both genders, mean age 25 years), of which only 23 fell asleep; data for 14 subjects with sufficient episodes of falling asleep were analyzed. Comparison of measures of auditory evoked potentials (the latencies and amplitudes of the N1, P2, N2, and P3 components) during correct and erroneous psychomotor test trials showed that decreases in the level of consciousness elicited significant increases in the amplitudes of the components of the vertex N1-P2-N2 complex in series without button pressing. 
    The greatest changes in auditory evoked potentials in both series were seen in the N2 component, with latency 330-360 msec, which has a common origin with the EEG theta rhythm and is…

  4. Speech distortion measure based on auditory properties

    Institute of Scientific and Technical Information of China (English)

    CHEN Guo; HU Xiulin; ZHANG Yunyu; ZHU Yaoting

    2000-01-01

    The Perceptual Spectrum Distortion (PSD) measure, based on human auditory properties, is presented for measuring speech distortion. It calculates a speech distortion distance by simulating human auditory properties and converting the short-time speech power spectrum into an auditory perceptual spectrum. Preliminary simulation experiments comparing the PSD with the Itakura measure have been performed. The results show that the PSD is a preferable speech distortion measure and more consistent with subjective assessment of speech quality.
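
    The general recipe the abstract describes (warp the short-time power spectrum onto an auditory frequency scale, then take a distance) can be sketched as follows. This is an illustration only, not the authors' PSD: the mel-style triangular filter bank and the Euclidean log-spectral distance are assumptions standing in for their perceptual model:

```python
import math

def mel(f):
    # Hz -> mel, a standard approximation of auditory pitch spacing
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_filterbank(n_filters, n_bins, sr):
    """Triangular filters spaced evenly on the mel scale, mapped back to
    linear-frequency FFT bins."""
    f_max = sr / 2.0
    m_pts = [mel(f_max) * i / (n_filters + 1) for i in range(n_filters + 2)]
    hz = [700.0 * (10 ** (m / 2595.0) - 1.0) for m in m_pts]
    bins = [int(round(h / f_max * (n_bins - 1))) for h in hz]
    fb = []
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        w = [0.0] * n_bins
        for k in range(lo, c + 1):
            if c > lo:
                w[k] = (k - lo) / (c - lo)   # rising slope
        for k in range(c, hi + 1):
            if hi > c:
                w[k] = (hi - k) / (hi - c)   # falling slope
        fb.append(w)
    return fb

def perceptual_distortion(power_a, power_b, sr=8000, n_filters=12):
    """Distance between two short-time power spectra after mapping them
    through the auditory filter bank and taking logs."""
    fb = mel_filterbank(n_filters, len(power_a), sr)
    def warp(p):
        return [math.log(1e-12 + sum(wk * x for wk, x in zip(f, p))) for f in fb]
    a, b = warp(power_a), warp(power_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / n_filters)
```

    Identical spectra give zero distortion, and spectral differences are weighted according to the auditory (mel) frequency warping rather than uniformly in Hz.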

  5. Auditory stimulation and cardiac autonomic regulation

    OpenAIRE

    Vitor E Valenti; Guida, Heraldo L.; Frizzo, Ana C F; Cardoso, Ana C. V.; Vanderlei, Luiz Carlos M; Luiz Carlos de Abreu

    2012-01-01

    Previous studies have already demonstrated that auditory stimulation with music influences the cardiovascular system. In this study, we described the relationship between musical auditory stimulation and heart rate variability. Searches were performed with the Medline, SciELO, Lilacs and Cochrane databases using the following keywords: "auditory stimulation", "autonomic nervous system", "music" and "heart rate variability". The selected studies indicated that there is a strong correlation between…

  6. Mechanisms of Auditory Verbal Hallucination in Schizophrenia

    OpenAIRE

    Raymond eCho; Wayne eWu

    2013-01-01

    Recent work on the mechanisms underlying auditory verbal hallucination (AVH) has been heavily informed by self-monitoring accounts that postulate defects in an internal monitoring mechanism as the basis of AVH. A more neglected alternative is an account focusing on defects in auditory processing, namely a spontaneous activation account of auditory activity underlying AVH. Science is often aided by putting theories in competition. Accordingly, a discussion that systematically contrasts the two…

  7. Multimodal Diffusion-MRI and MEG Assessment of Auditory and Language System Development in Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Jeffrey I Berman

    2016-03-01

    Background: Auditory processing and language impairments are prominent in children with autism spectrum disorder (ASD). The present study integrated diffusion MR measures of white-matter microstructure and magnetoencephalography (MEG) measures of cortical dynamics to investigate associations between brain structure and function within auditory and language systems in ASD. Based on previous findings, abnormal structure-function relationships in auditory and language systems in ASD were hypothesized. Methods: Evaluable neuroimaging data were obtained from 44 typically developing (TD) children (mean age 10.4 ± 2.4 years) and 95 children with ASD (mean age 10.2 ± 2.6 years). Diffusion MR tractography was used to delineate and quantitatively assess the auditory radiation and arcuate fasciculus segments of the auditory and language systems. MEG was used to measure (1) superior temporal gyrus auditory evoked M100 latency in response to pure-tone stimuli as an indicator of auditory system conduction velocity, and (2) auditory vowel-contrast mismatch field (MMF) latency as a passive probe of early linguistic processes. Results: Atypical development of white matter and cortical function, along with atypical lateralization, were present in ASD. In both auditory and language systems, white-matter integrity and cortical electrophysiology were found to be coupled in typically developing children, with white-matter microstructural features contributing significantly to electrophysiological response latencies. However, in ASD, we observed uncoupled structure-function relationships in both auditory and language systems. Regression analyses in ASD indicated that factors other than white-matter microstructure additionally contribute to the latency of neural evoked responses and ultimately behavior. Results also indicated that whereas delayed M100 is a marker for ASD severity, MMF delay is more associated with language impairment. Conclusion: Present findings suggest atypical…

  8. An exploration of spatial auditory BCI paradigms with different sounds: music notes versus beeps.

    Science.gov (United States)

    Huang, Minqiang; Daly, Ian; Jin, Jing; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2016-06-01

    Visual brain-computer interfaces (BCIs) are not suitable for people who cannot reliably maintain their eye gaze. Considering that this group usually maintains audition, an auditory-based BCI may be a good choice for them. In this paper, we explore two auditory patterns: (1) a pattern utilizing symmetrical spatial cues with multiple frequency beeps [called the high low medium (HLM) pattern], and (2) a pattern utilizing non-symmetrical spatial cues with six tones derived from the diatonic scale [called the diatonic scale (DS) pattern]. These two patterns are compared in terms of accuracy to determine which auditory pattern is better. The HLM pattern uses three different frequency beeps and has a symmetrical spatial distribution. The DS pattern uses six spoken stimuli, which are six notes solmizated as "do", "re", "mi", "fa", "sol" and "la", and derived from the diatonic scale. These six sounds are distributed to six, spatially distributed, speakers. Thus, we compare a BCI paradigm using beeps with another BCI paradigm using tones on the diatonic scale, when the stimuli are spatially distributed. Although no significant differences are found between the ERPs, the HLM pattern performs better than the DS pattern: the online accuracy achieved with the HLM pattern is significantly higher than that achieved with the DS pattern (p = 0.0028). PMID:27275376

  9. Abnormal synchrony and effective connectivity in patients with schizophrenia and auditory hallucinations

    Science.gov (United States)

    de la Iglesia-Vaya, Maria; Escartí, Maria José; Molina-Mateo, Jose; Martí-Bonmatí, Luis; Gadea, Marien; Castellanos, Francisco Xavier; Aguilar García-Iturrospe, Eduardo J.; Robles, Montserrat; Biswal, Bharat B.; Sanjuan, Julio

    2014-01-01

    Auditory hallucinations (AH) are the most frequent positive symptoms in patients with schizophrenia. Hallucinations have been related to emotional processing disturbances, altered functional connectivity and effective connectivity deficits. Previously, we observed that, compared to healthy controls, the limbic network responses of patients with auditory hallucinations differed when the subjects were listening to emotionally charged words. We aimed to compare the synchrony patterns and effective connectivity of task-related networks between schizophrenia patients with and without AH and healthy controls. Schizophrenia patients with AH (n = 27) and without AH (n = 14) were compared with healthy participants (n = 31). We examined functional connectivity by analyzing correlations and cross-correlations among previously detected independent component analysis time courses. Granger causality was used to infer the information flow direction in the brain regions. The results demonstrate that the patterns of cortico-cortical functional synchrony differentiated the patients with AH from the patients without AH and from the healthy participants. Additionally, Granger-causal relationships between the networks clearly differentiated the groups. In the patients with AH, the principal causal source was an occipital–cerebellar component, versus a temporal component in the patients without AH and the healthy controls. These data indicate that an anomalous process of neural connectivity exists when patients with AH process emotional auditory stimuli. Additionally, a central role is suggested for the cerebellum in processing emotional stimuli in patients with persistent AH. PMID:25379429

  10. Music for the birds: effects of auditory enrichment on captive bird species.

    Science.gov (United States)

    Robbins, Lindsey; Margulis, Susan W

    2016-01-01

    With the increase of mixed species exhibits in zoos, targeting enrichment for individual species may be problematic. Often, mammals may be the primary targets of enrichment, yet other species that share their environment (such as birds) will unavoidably be exposed to the enrichment as well. The purpose of this study was to determine if (1) auditory stimuli designed for enrichment of primates influenced the behavior of captive birds in the zoo setting, and (2) if the specific type of auditory enrichment impacted bird behavior. Three different African bird species were observed at the Buffalo Zoo during exposure to natural sounds, classical music and rock music. The results revealed that the average frequency of flying in all three bird species increased with naturalistic sounds and decreased with rock music (F = 7.63, df = 3,6, P = 0.018); vocalizations for two of the three species (Superb Starlings and Mousebirds) increased (F = 18.61, df = 2,6, P = 0.0027) in response to all auditory stimuli; however, one species (Lady Ross's Turacos) increased frequency of duetting only in response to rock music (χ2 = 18.5, df = 2, P < .001). These results suggest that auditory enrichment can influence behavior in non-target species as well, in this case leading to increased activity by birds. PMID:26749511

  11. Shaping prestimulus neural activity with auditory rhythmic stimulation improves the temporal allocation of attention

    Science.gov (United States)

    Pincham, Hannah L.; Cristoforetti, Giulia; Facoetti, Andrea; Szűcs, Dénes

    2016-01-01

    Human attention fluctuates across time, and even when stimuli have identical physical characteristics and the task demands are the same, relevant information is sometimes consciously perceived and at other times not. A typical example of this phenomenon is the attentional blink, where participants show a robust deficit in reporting the second of two targets (T2) in a rapid serial visual presentation (RSVP) stream. Previous electroencephalographical (EEG) studies showed that neural correlates of correct T2 report are not limited to the RSVP period, but extend before visual stimulation begins. In particular, reduced oscillatory neural activity in the alpha band (8-12 Hz) before the onset of the RSVP has been linked to lower T2 accuracy. We therefore examined whether auditory rhythmic stimuli presented at a rate of 10 Hz (within the alpha band) could increase oscillatory alpha-band activity and improve T2 performance in the attentional blink time window. Behaviourally, the auditory rhythmic stimulation worked to enhance T2 accuracy. This enhanced perception was associated with increases in the posterior T2-evoked N2 component of the event-related potentials and this effect was observed selectively at lag 3. Frontal and posterior oscillatory alpha-band activity was also enhanced during auditory stimulation in the pre-RSVP period and positively correlated with T2 accuracy. These findings suggest that ongoing fluctuations can be shaped by sensorial events to improve the allocation of attention in time. PMID:26986506

  13. Reference valence effects of affective S-R compatibility: are visual and auditory results consistent?

    Directory of Open Access Journals (Sweden)

    Zhao Xiaojun

    Full Text Available Humans may be faster to avoid negative words than to approach negative words, and faster to approach positive words than to avoid positive words. That is an example of affective stimulus-response (S-R) compatibility. The present study identified the reference valence effects of affective S-R compatibility when auditory stimulus materials are used. The researchers explored the reference valence effects of affective S-R compatibility using a mixed-design experiment based on visual words, visual pictures and audition. The study computed the average compatibility effect size. A t-test based on visual pictures showed that the compatibility effect size was significantly different from zero, t(22) = 2.43, p < .05 (M = 485 ms). Smaller compatibility effects existed when switching the presentation mode from visual stimuli to auditory stimuli. This study serves as an important reference for the auditory reference valence effects of affective S-R compatibility.

  14. Effects of Auditory Attention Training with the Dichotic Listening Task: Behavioural and Neurophysiological Evidence.

    Directory of Open Access Journals (Sweden)

    Jussi Tallus

    Full Text Available Facilitation of general cognitive capacities such as executive functions through training has stirred considerable research interest during the last decade. Recently we demonstrated that training of auditory attention with forced-attention dichotic listening not only facilitated that performance but also generalized to an untrained attentional task. In the present study, 13 participants underwent a 4-week dichotic listening training programme with instructions to report syllables presented to the left ear (FL training group). Another group (n = 13) was trained using the non-forced instruction, asked to report whichever syllable they heard the best (NF training group). The study aimed to replicate our previous behavioural results, and to explore the neurophysiological correlates of training through event-related brain potentials (ERPs). We partially replicated our previous behavioural training effects, as the FL training group tended to show more allocation of auditory spatial attention to the left ear in a standard dichotic listening task. ERP measures showed diminished N1 and enhanced P2 responses to dichotic stimuli after training in both groups, interpreted as improvement in early perceptual processing of the stimuli. Additionally, enhanced anterior N2 amplitudes were found after training, with relatively larger changes in the FL training group in the forced-left condition, suggesting improved top-down control on the trained task. These results show that top-down cognitive training can modulate the left-right allocation of auditory spatial attention, accompanied by a change in an evoked brain potential related to cognitive control.

  15. An Auditory-Tactile Visual Saccade-Independent P300 Brain-Computer Interface.

    Science.gov (United States)

    Yin, Erwei; Zeyl, Timothy; Saab, Rami; Hu, Dewen; Zhou, Zongtan; Chau, Tom

    2016-02-01

    Most P300 event-related potential (ERP)-based brain-computer interface (BCI) studies focus on gaze shift-dependent BCIs, which cannot be used by people who have lost voluntary eye movement. However, the performance of visual saccade-independent P300 BCIs is generally poor. To improve saccade-independent BCI performance, we propose a bimodal P300 BCI approach that simultaneously employs auditory and tactile stimuli. The proposed P300 BCI is a vision-independent system because no visual interaction is required of the user. Specifically, we designed a direction-congruent bimodal paradigm by randomly and simultaneously presenting auditory and tactile stimuli from the same direction. Furthermore, the channels and number of trials were tailored to each user to improve online performance. With 12 participants, the average online information transfer rate (ITR) of the bimodal approach improved by 45.43% and 51.05% over that attained, respectively, with the auditory and tactile approaches individually. Importantly, the average online ITR of the bimodal approach, including the break time between selections, reached 10.77 bits/min. These findings suggest that the proposed bimodal system holds promise as a practical visual saccade-independent P300 BCI. PMID:26678249
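    The online information transfer rates quoted above are conventionally computed with Wolpaw's ITR formula (bits per selection times selections per minute). The sketch below implements that standard formula; the example class count, accuracy, and selection rate are hypothetical, not values from this study.

    ```python
    import math

    def wolpaw_itr(n_classes, accuracy, selections_per_min):
        """Wolpaw information transfer rate in bits/min."""
        p, n = accuracy, n_classes
        if p >= 1.0:
            bits = math.log2(n)  # perfect accuracy: full log2(N) bits per selection
        else:
            bits = (math.log2(n) + p * math.log2(p)
                    + (1 - p) * math.log2((1 - p) / (n - 1)))
        return bits * selections_per_min

    # e.g. 6 spatial targets, 80% accuracy, 4 selections per minute
    print(round(wolpaw_itr(6, 0.80, 4), 2))  # → 5.59
    ```

    Note that, as in the abstract, including break time between selections lowers the effective selection rate and therefore the reported bits/min.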

  16. Context-dependent coding and gain control in the auditory system of crickets.

    Science.gov (United States)

    Clemens, Jan; Rau, Florian; Hennig, R Matthias; Hildebrandt, K Jannis

    2015-10-01

    Sensory systems process stimuli that greatly vary in intensity and complexity. To maintain efficient information transmission, neural systems need to adjust their properties to these different sensory contexts, yielding adaptive or stimulus-dependent codes. Here, we demonstrated adaptive spectrotemporal tuning in a small neural network, i.e. the peripheral auditory system of the cricket. We found that tuning of cricket auditory neurons was sharper for complex multi-band than for simple single-band stimuli. Information theoretical considerations revealed that this sharpening improved information transmission by separating the neural representations of individual stimulus components. A network model inspired by the structure of the cricket auditory system suggested two putative mechanisms underlying this adaptive tuning: a saturating peripheral nonlinearity could change the spectral tuning, whereas broad feed-forward inhibition was able to reproduce the observed adaptive sharpening of temporal tuning. Our study revealed a surprisingly dynamic code usually found in more complex nervous systems and suggested that stimulus-dependent codes could be implemented using common neural computations.

  17. Hierarchical topic modeling with nested hierarchical Dirichlet process

    Institute of Scientific and Technical Information of China (English)

    Yi-qun DING; Shan-ping LI; Zhen ZHANG; Bin SHEN

    2009-01-01

    This paper deals with the statistical modeling of latent topic hierarchies in text corpora. The height of the topic tree is assumed to be fixed, while the number of topics on each level is unknown a priori and is to be inferred from data. Taking a nonparametric Bayesian approach to this problem, we propose a new probabilistic generative model based on the nested hierarchical Dirichlet process (nHDP) and present a Markov chain Monte Carlo sampling algorithm for the inference of the topic tree structure as well as the word distribution of each topic and the topic distribution of each document. Our theoretical analysis and experiment results show that this model can produce a more compact hierarchical topic structure and capture more fine-grained topic relationships compared to the hierarchical latent Dirichlet allocation model.
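    For readers unfamiliar with nested tree priors, the path-sampling step of the closely related nested Chinese restaurant process (the construction behind hierarchical LDA, the baseline mentioned above) can be sketched as follows. This is an illustrative sketch, not the authors' nHDP sampler; `gamma` is an assumed branching concentration parameter.

    ```python
    import random
    from collections import defaultdict

    def sample_ncrp_path(tree, height, gamma, rng):
        """Draw one root-to-leaf path of fixed height through a nested
        Chinese restaurant process; `tree` maps a node (a tuple of child
        indices) to per-child customer counts and is updated in place."""
        path = ()
        for _ in range(height):
            children = tree[path]
            total = sum(children.values())
            r = rng.uniform(0, total + gamma)
            chosen, acc = None, 0.0
            for child, count in children.items():
                acc += count            # existing child, prob. count / (total + gamma)
                if r <= acc:
                    chosen = child
                    break
            if chosen is None:          # new branch, prob. gamma / (total + gamma)
                chosen = len(children)
            children[chosen] += 1       # seat this document at the chosen child
            path += (chosen,)
        return path

    tree = defaultdict(lambda: defaultdict(int))
    rng = random.Random(0)
    paths = [sample_ncrp_path(tree, height=3, gamma=1.0, rng=rng) for _ in range(20)]
    ```

    Documents sharing a path prefix share the coarser topics along that prefix; the nHDP generalizes this by letting each document draw its own distribution over a shared tree.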

  18. Auditory Training and Its Effects upon the Auditory Discrimination and Reading Readiness of Kindergarten Children.

    Science.gov (United States)

    Cullen, Minga Mustard

    The purpose of this investigation was to evaluate the effects of a systematic auditory training program on the auditory discrimination ability and reading readiness of 55 white, middle/upper middle class kindergarten students. Following pretesting with the "Wepman Auditory Discrimination Test,""The Clymer-Barrett Prereading Battery," and the…

  19. Effects of Methylphenidate (Ritalin) on Auditory Performance in Children with Attention and Auditory Processing Disorders.

    Science.gov (United States)

    Tillery, Kim L.; Katz, Jack; Keller, Warren D.

    2000-01-01

    A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…

  20. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Full Text Available Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.
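    The ASSR is a spectral measure: because the stimulation rate is known, the response can be read out as spectral amplitude at that exact frequency. A minimal single-channel sketch under assumed parameters (pure NumPy; not the authors' MEG/EEG pipeline):

    ```python
    import numpy as np

    def ssr_amplitude(signal, fs, stim_freq):
        """Amplitude of the steady-state response at the stimulation
        frequency, read from the FFT bin nearest that frequency."""
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        k = np.argmin(np.abs(freqs - stim_freq))
        return 2.0 * np.abs(spectrum[k]) / len(signal)

    # sanity check on a synthetic unit-amplitude 40 Hz response, 1 s at 1 kHz
    t = np.arange(1000) / 1000.0
    amp = ssr_amplitude(np.sin(2 * np.pi * 40 * t), fs=1000, stim_freq=40)
    ```

    A real analysis would additionally window the data, estimate a noise floor from neighbouring bins, and (as in the abstract) compare the measure across hemispheres.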

  1. Central auditory function of deafness genes.

    Science.gov (United States)

    Willaredt, Marc A; Ebbers, Lena; Nothwang, Hans Gerd

    2014-06-01

    The highly variable benefit of hearing devices is a serious challenge in auditory rehabilitation. Various factors contribute to this phenomenon such as the diversity in ear defects, the different extent of auditory nerve hypoplasia, the age of intervention, and cognitive abilities. Recent analyses indicate that, in addition, central auditory functions of deafness genes have to be considered in this context. Since reduced neuronal activity acts as the common denominator in deafness, it is widely assumed that peripheral deafness influences development and function of the central auditory system in a stereotypical manner. However, functional characterization of transgenic mice with mutated deafness genes demonstrated gene-specific abnormalities in the central auditory system as well. A frequent function of deafness genes in the central auditory system is supported by a genome-wide expression study that revealed significant enrichment of these genes in the transcriptome of the auditory brainstem compared to the entire brain. Here, we will summarize current knowledge of the diverse central auditory functions of deafness genes. We furthermore propose the intimately interwoven gene regulatory networks governing development of the otic placode and the hindbrain as a mechanistic explanation for the widespread expression of these genes beyond the cochlea. We conclude that better knowledge of central auditory dysfunction caused by genetic alterations in deafness genes is required. In combination with improved genetic diagnostics becoming currently available through novel sequencing technologies, this information will likely contribute to better outcome prediction of hearing devices.

  2. The effect of auditory memory load on intensity resolution in individuals with Parkinson's disease

    Science.gov (United States)

    Richardson, Kelly C.

    Purpose: The purpose of the current study was to investigate the effect of auditory memory load on intensity resolution in individuals with Parkinson's disease (PD) as compared to two groups of listeners without PD. Methods: Nineteen individuals with Parkinson's disease, ten healthy age- and hearing-matched adults, and ten healthy young adults were studied. All listeners participated in two intensity discrimination tasks differing in auditory memory load: a lower-memory-load 4IAX task and a higher-memory-load ABX task. Intensity discrimination performance was assessed using a bias-free measurement of signal detectability known as d' (d-prime). Listeners further participated in a continuous loudness scaling task where they were instructed to rate the loudness level of each signal intensity using a computerized 150 mm visual analogue scale. Results: Group discrimination functions indicated significantly lower intensity discrimination sensitivity (d') across tasks for the individuals with PD, as compared to the older and younger controls. No significant effect of aging on intensity discrimination was observed for either task. All three listener groups demonstrated significantly lower intensity discrimination sensitivity for the higher-memory-load ABX task than for the lower-memory-load 4IAX task. Furthermore, a significant effect of aging was identified for the loudness scaling condition. The younger controls were found to rate most stimuli along the continuum as significantly louder than the older controls and the individuals with PD. Conclusions: The persons with PD showed evidence of impaired auditory perception for intensity information, as compared to the older and younger controls. The significant effect of aging on loudness perception may indicate peripheral and/or central auditory involvement.
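    The d' statistic used above is, by its standard definition, the difference between the z-transformed hit and false-alarm rates. A minimal sketch (the example rates are hypothetical, not data from this study):

    ```python
    from statistics import NormalDist

    def d_prime(hit_rate, false_alarm_rate):
        """Bias-free sensitivity index: d' = z(H) - z(F)."""
        z = NormalDist().inv_cdf  # inverse standard-normal CDF
        return z(hit_rate) - z(false_alarm_rate)

    # e.g. 85% hits and 15% false alarms give d' of about 2.07
    sensitivity = d_prime(0.85, 0.15)
    ```

    In a discrimination task such as ABX, H and F come from the response counts per interval; rates of exactly 0 or 1 need a correction (e.g. log-linear) before the z-transform, since the inverse CDF diverges there.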

  3. Coding of communication calls in the subcortical and cortical structures of the auditory system.

    Science.gov (United States)

    Suta, D; Popelár, J; Syka, J

    2008-01-01

    The processing of species-specific communication signals in the auditory system represents an important aspect of animal behavior and is crucial for its social interactions, reproduction, and survival. In this article the neuronal mechanisms underlying the processing of communication signals in the higher centers of the auditory system--inferior colliculus (IC), medial geniculate body (MGB) and auditory cortex (AC)--are reviewed, with particular attention to the guinea pig. The selectivity of neuronal responses for individual calls in these auditory centers in the guinea pig is usually low--most neurons respond to calls as well as to artificial sounds; the coding of complex sounds in the central auditory nuclei is apparently based on the representation of temporal and spectral features of acoustical stimuli in neural networks. Neuronal response patterns in the IC reliably match the sound envelope for calls characterized by one or more short impulses, but do not exactly fit the envelope for long calls. Also, the main spectral peaks are represented by neuronal firing rates in the IC. In comparison to the IC, response patterns in the MGB and AC demonstrate a less precise representation of the sound envelope, especially in the case of longer calls. The spectral representation is worse in the case of low-frequency calls, but not in the case of broad-band calls. The emotional content of the call may influence neuronal responses in the auditory pathway, which can be demonstrated by stimulation with time-reversed calls or by measurements performed under different levels of anesthesia. The investigation of the principles of the neural coding of species-specific vocalizations offers some keys for understanding the neural mechanisms underlying human speech perception.

  4. Morphology and physiology of auditory and vibratory ascending interneurones in bushcrickets.

    Science.gov (United States)

    Nebeling, B

    2000-02-15

    Auditory/vibratory interneurones of the bushcricket species Decticus albifrons and Decticus verrucivorus were studied with intracellular dye injection and electrophysiology. The morphologies of five physiologically characterised auditory/vibratory interneurones are shown in the brain, subesophageal and prothoracic ganglia. Based on their physiology, these five interneurones fall into three groups: the purely auditory or sound neurones (S-neurones), the purely vibratory V-neurones, and the bimodal vibrosensitive VS-neurones. The S1-neurones respond phasically to airborne sound whereas the S4-neurones exhibit a tonic spike pattern. Their somata are located in the prothoracic ganglion and they show an ascending axon with dendrites located in the prothoracic and subesophageal ganglia and the brain. The VS3-neurone, responding to both auditory and vibratory stimuli in a tonic manner, has its axon traversing the brain, the subesophageal ganglion and the prothoracic ganglion, although with dendrites only in the brain. The V1- and V2-neurones respond to vibratory stimulation of the fore- and midlegs with a tonic discharge pattern, and our data show that they receive inhibitory input suppressing their spontaneous activity. Their axons traverse the prothoracic and subesophageal ganglia and terminate in the brain with dendritic branching. Thus the auditory S-neurones have dendritic arborizations in all three ganglia (prothoracic, subesophageal, and brain) compared to the vibratory (V) and vibrosensitive (VS) neurones, which have dendrites almost only in the brain. The dendrites of the S-neurones are also more extensive than those of the V- and VS-neurones. V- and VS-neurones terminate more laterally in the brain. Based on an interspecific comparison of the identified auditory interneurones, the S1-neurone is found to be homologous to the TN1 of crickets and other bushcrickets, and the S4-neurone can also be called AN2. J. Exp. Zool. 286:219-230, 2000.

  5. Visual hierarchical processing and lateralization of cognitive functions through domestic chicks' eyes.

    Directory of Open Access Journals (Sweden)

    Cinzia Chiandetti

    Full Text Available Hierarchical stimuli have proven effective for investigating principles of visual organization in humans. A large body of evidence suggests that the analysis of the global forms precedes the analysis of the local forms in our species. Studies on lateralization also indicate that analytic and holistic encoding strategies are separated between the two hemispheres of the brain. This raises the question of whether precedence effects may reflect the activation of lateralized functions within the brain. Non-human animals have perceptual organization and functional lateralization that are comparable to that of humans. Here we trained the domestic chick in a concurrent discrimination task involving hierarchical stimuli. Then, we evaluated the animals for analytic and holistic encoding strategies in a series of transformational tests by relying on a monocular occlusion technique. A local precedence emerged in both the left and the right hemisphere, adding further evidence in favour of analytic processing in non-human animals.

  6. Visual hierarchical processing and lateralization of cognitive functions through domestic chicks' eyes.

    Science.gov (United States)

    Chiandetti, Cinzia; Pecchia, Tommaso; Patt, Francesco; Vallortigara, Giorgio

    2014-01-01

    Hierarchical stimuli have proven effective for investigating principles of visual organization in humans. A large body of evidence suggests that the analysis of the global forms precedes the analysis of the local forms in our species. Studies on lateralization also indicate that analytic and holistic encoding strategies are separated between the two hemispheres of the brain. This raises the question of whether precedence effects may reflect the activation of lateralized functions within the brain. Non-human animals have perceptual organization and functional lateralization that are comparable to that of humans. Here we trained the domestic chick in a concurrent discrimination task involving hierarchical stimuli. Then, we evaluated the animals for analytic and holistic encoding strategies in a series of transformational tests by relying on a monocular occlusion technique. A local precedence emerged in both the left and the right hemisphere, adding further evidence in favour of analytic processing in non-human animals. PMID:24404163

  7. Influence of age, spatial memory, and ocular fixation on localization of auditory, visual, and bimodal targets by human subjects.

    Science.gov (United States)

    Dobreva, Marina S; O'Neill, William E; Paige, Gary D

    2012-12-01

    visual bias with bimodal stimuli. Results highlight age-, memory-, and modality-dependent deterioration in the processing of auditory and visual space, as well as an age-related increase in the dominance of vision when localizing bimodal sources. PMID:23076429

  8. Visual Hierarchical Processing and Lateralization of Cognitive Functions through Domestic Chicks' Eyes

    OpenAIRE

    Chiandetti, Cinzia; Pecchia, Tommaso; Patt, Francesco; Vallortigara, Giorgio

    2014-01-01

    Hierarchical stimuli have proven effective for investigating principles of visual organization in humans. A large body of evidence suggests that the analysis of the global forms precedes the analysis of the local forms in our species. Studies on lateralization also indicate that analytic and holistic encoding strategies are separated between the two hemispheres of the brain. This raises the question of whether precedence effects may reflect the activation of lateralized functions within the b...

  9. Prediction of hearing thresholds: Comparison of cortical evoked response audiometry and auditory steady state response audiometry techniques

    OpenAIRE

    Wong, LLN; Yeung, KNK

    2007-01-01

    The present study evaluated how well auditory steady state response (ASSR) and tone burst cortical evoked response audiometry (CERA) thresholds predict behavioral thresholds in the same participants. A total of 63 ears were evaluated. For ASSR testing, 100% amplitude modulated and 10% frequency modulated tone stimuli at a modulation frequency of 40Hz were used. Behavioral thresholds were closer to CERA thresholds than ASSR thresholds. ASSR and CERA thresholds were closer to behavioral thresho...

  10. Spectral features control temporal plasticity in auditory cortex.

    Science.gov (United States)

    Kilgard, M P; Pandya, P K; Vazquez, J L; Rathbun, D L; Engineer, N D; Moucha, R

    2001-01-01

    Cortical responses are adjusted and optimized throughout life to meet changing behavioral demands and to compensate for peripheral damage. The cholinergic nucleus basalis (NB) gates cortical plasticity and focuses learning on behaviorally meaningful stimuli. By systematically varying the acoustic parameters of the sound paired with NB activation, we have previously shown that tone frequency and amplitude modulation rate alter the topography and selectivity of frequency tuning in primary auditory cortex. This result suggests that network-level rules operate in the cortex to guide reorganization based on specific features of the sensory input associated with NB activity. This report summarizes recent evidence that temporal response properties of cortical neurons are influenced by the spectral characteristics of sounds associated with cholinergic modulation. For example, repeated pairing of a spectrally complex (ripple) stimulus decreased the minimum response latency for the ripple, but lengthened the minimum latency for tones. Pairing a rapid train of tones with NB activation only increased the maximum following rate of cortical neurons when the carrier frequency of each train was randomly varied. These results suggest that spectral and temporal parameters of acoustic experiences interact to shape spectrotemporal selectivity in the cortex. Additional experiments with more complex stimuli are needed to clarify how the cortex learns natural sounds such as speech.

  11. Stimuli responsive nanomaterials for controlled release applications

    KAUST Repository

    Li, Song

    2012-01-01

    The controlled release of therapeutics has been one of the major challenges for scientists and engineers during the past three decades. Coupled with excellent biocompatibility profiles, various nanomaterials have shown great promise for biomedical applications. Stimuli-responsive nanomaterials enable the controlled release of cargo at a given location, at a specific time, and in an accurate amount. In this review, we survey the major stimuli that are currently used to achieve the ultimate goal of controlled and targeted release by "smart" nanomaterials. The most heavily explored strategies include (1) pH-, (2) enzyme-, (3) redox-, (4) magnetic-, and (5) light-triggered release.

  12. ERPs reveal the temporal dynamics of auditory word recognition in specific language impairment.

    Science.gov (United States)

    Malins, Jeffrey G; Desroches, Amy S; Robertson, Erin K; Newman, Randy Lynn; Archibald, Lisa M D; Joanisse, Marc F

    2013-07-01

    We used event-related potentials (ERPs) to compare auditory word recognition in children with specific language impairment (SLI group; N=14) to a group of typically developing children (TD group; N=14). Subjects were presented with pictures of items and heard auditory words that either matched or mismatched the pictures. Mismatches overlapped expected words in word-onset (cohort mismatches; see: DOLL, hear: dog), rhyme (CONE -bone), or were unrelated (SHELL -mug). In match trials, the SLI group showed a different pattern of N100 responses to auditory stimuli compared to the TD group, indicative of early auditory processing differences in SLI. However, the phonological mapping negativity (PMN) response to mismatching items was comparable across groups, suggesting that just like TD children, children with SLI are capable of establishing phonological expectations and detecting violations of these expectations in an online fashion. Perhaps most importantly, we observed a lack of attenuation of the N400 for rhyming words in the SLI group, which suggests that either these children were not as sensitive to rhyme similarity as their typically developing peers, or did not suppress lexical alternatives to the same extent. These findings help shed light on the underlying deficits responsible for SLI.

  13. Abnormal auditory forward masking pattern in the brainstem response of individuals with Asperger syndrome

    Directory of Open Access Journals (Sweden)

    Johan Källstrand

    2010-05-01

    Johan Källstrand¹, Olle Olsson², Sara Fristedt Nehlstedt¹, Mia Ling Sköld¹, Sören Nielzén². ¹SensoDetect AB, Lund, Sweden; ²Department of Clinical Neuroscience, Section of Psychiatry, Lund University, Lund, Sweden. Abstract: Abnormal auditory information processing has been reported in individuals with autism spectrum disorders (ASD). In the present study auditory processing was investigated by recording auditory brainstem responses (ABRs) elicited by forward masking in adults diagnosed with Asperger syndrome (AS). Sixteen AS subjects were included in the forward masking experiment and compared to three control groups consisting of healthy individuals (n = 16), schizophrenic patients (n = 16) and attention deficit hyperactivity disorder patients (n = 16), respectively, of matching age and gender. The results showed that the AS subjects exhibited abnormally low activity in the early part of their ABRs that distinctly separated them from the three control groups. Specifically, wave III amplitudes were significantly lower in the AS group than for all the control groups in the forward masking condition (P < 0.005), which was not the case in the baseline condition. Thus, electrophysiological measurements of ABRs to complex sound stimuli (e.g., forward masking) may lead to a better understanding of the underlying neurophysiology of AS. Future studies may further point to specific ABR characteristics in AS individuals that separate them from individuals diagnosed with other neurodevelopmental diseases. Keywords: Asperger syndrome, auditory brainstem response, forward masking, psychoacoustics

  14. Altered auditory BOLD response to conspecific birdsong in zebra finches with stuttered syllables.

    Directory of Open Access Journals (Sweden)

    Henning U Voss

    How well a songbird learns a song appears to depend on the formation of a robust auditory template of its tutor's song. Using functional magnetic resonance neuroimaging we examine auditory responses in two groups of zebra finches that differ in the type of song they sing after being tutored by birds producing stuttering-like syllable repetitions in their songs. We find that birds that learn to produce the stuttered syntax show attenuated blood oxygenation level-dependent (BOLD) responses to tutor's song, and more pronounced responses to conspecific song, primarily in the auditory area field L of the avian forebrain, when compared to birds that produce normal song. These findings are consistent with the presence of a sensory song template critical for song learning in auditory areas of the zebra finch forebrain. In addition, they suggest a relationship between an altered response related to familiarity and/or saliency of song stimuli and the production of variant songs with stuttered syllables.

  15. Formation and disruption of tonotopy in a large-scale model of the auditory cortex.

    Science.gov (United States)

    Tomková, Markéta; Tomek, Jakub; Novák, Ondřej; Zelenka, Ondřej; Syka, Josef; Brom, Cyril

    2015-10-01

    There is ample experimental evidence describing changes of tonotopic organisation in the auditory cortex due to environmental factors. In order to uncover the underlying mechanisms, we designed a large-scale computational model of the auditory cortex. The model has up to 100 000 Izhikevich spiking neurons of 17 different types and almost 21 million synapses, which evolve according to Spike-Timing-Dependent Plasticity (STDP) in an architecture consistent with existing anatomical observations. Validation of the model revealed alternating synchronised/desynchronised states and different modes of oscillatory activity. We provide insight into these phenomena via analysing the activity of neuronal subtypes and testing different causal interventions into the simulation. Our model is able to produce experimental predictions on a cell type basis. To study the influence of environmental factors on the tonotopy, different types of auditory stimulations during the evolution of the network were modelled and compared. We found that strong white noise resulted in completely disrupted tonotopy, which is consistent with in vivo experimental observations. Stimulation with pure tones or spontaneous activity led to a similar degree of tonotopy as in the initial state of the network. Interestingly, weak white noise led to a substantial increase in tonotopy. As the STDP was the only mechanism of plasticity in our model, our results suggest that STDP is a sufficient condition for the emergence and disruption of tonotopy under various types of stimuli. The presented large-scale model of the auditory cortex and the core simulator, SUSNOIMAC, have been made publicly available. PMID:26344164
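
    The model's elementary unit, the Izhikevich spiking neuron, reduces to two coupled equations that can be integrated with forward Euler. The sketch below uses the textbook regular-spiking parameters and a constant input current; it is not code from the study.

```python
def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T=1000.0, dt=0.5):
    """Single Izhikevich neuron, forward-Euler integration.

    a, b, c, d: regular-spiking parameters from Izhikevich's 2003 model;
    I: constant input current; T: duration (ms); dt: time step (ms).
    Returns spike times in ms.
    """
    v, u = c, b * c              # membrane potential (mV), recovery variable
    spikes = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike: record the time, then reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

spike_times = izhikevich()
```

    Different settings of the four parameters yield the different firing patterns used to implement multiple cell types; STDP then operates on the synapses connecting such units.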

  16. Test of a motor theory of long-term auditory memory.

    Science.gov (United States)

    Schulze, Katrin; Vargha-Khadem, Faraneh; Mishkin, Mortimer

    2012-05-01

    Monkeys can easily form lasting central representations of visual and tactile stimuli, yet they seem unable to do the same with sounds. Humans, by contrast, are highly proficient in auditory long-term memory (LTM). These mnemonic differences within and between species raise the question of whether the human ability is supported in some way by speech and language, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environmental stimuli. If so, the explanation could be that storing rapidly fluctuating acoustic signals requires assistance from the motor system, which is uniquely organized to chain-link rapid sequences. To test this hypothesis, we compared the ability of normal participants to recognize lists of stimuli that can be easily reproduced, labeled, or both (pseudowords, nonverbal sounds, and words, respectively) versus their ability to recognize a list of stimuli that can be reproduced or labeled only with great difficulty (reversed words, i.e., words played backward). Recognition scores after 5-min delays filled with articulatory-suppression tasks were relatively high (75-80% correct) for all sound types except reversed words; the latter yielded scores that were not far above chance (58% correct), even though these stimuli were discriminated nearly perfectly when presented as reversed-word pairs at short intrapair intervals. The combined results provide preliminary support for the hypothesis that participation of the oromotor system may be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memory may be so critically dependent on each other that they had to coevolve. PMID:22511719

  17. Auditory hallucinations suppressed by etizolam in a patient with schizophrenia.

    Science.gov (United States)

    Benazzi, F; Mazzoli, M; Rossi, E

    1993-10-01

    A patient presented with a 15 year history of schizophrenia with auditory hallucinations. Though unresponsive to prolonged trials of neuroleptics, the auditory hallucinations disappeared with etizolam. PMID:7902201

  18. Changes in Electroencephalogram Approximate Entropy Reflect Auditory Processing and Functional Complexity in Frogs

    Institute of Scientific and Technical Information of China (English)

    Yansu LIU; Yanzhu FAN; Fei XUE; Xizi YUE; Steven E BRAUTH; Yezhong TANG; Guangzhan FANG

    2016-01-01

    Brain systems engage in what are generally considered to be among the most complex forms of information processing. In the present study, we investigated the functional complexity of anuran auditory processing using the approximate entropy (ApEn) protocol for electroencephalogram (EEG) recordings from the forebrain and midbrain while male and female music frogs (Babina daunchina) listened to acoustic stimuli whose biological significance varied. The stimuli used were synthesized white noise (reflecting a novel signal), conspecific male advertisement calls with either high or low sexual attractiveness (reflecting sexual selection), and silence (reflecting a baseline). The results showed that 1) ApEn evoked by conspecific calls exceeded ApEn evoked by synthesized white noise in the left mesencephalon, indicating this structure plays a critical role in processing acoustic signals with biological significance; 2) ApEn in the mesencephalon was significantly higher than for the telencephalon, consistent with the fact that the anuran midbrain contains a large well-organized auditory nucleus (torus semicircularis) while the forebrain does not; 3) for females, ApEn in the mesencephalon was significantly different from that of males, suggesting that males and females process biological stimuli related to mate choice differently.
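
    For readers unfamiliar with the approximate entropy (ApEn) protocol, a minimal NumPy version of the standard Pincus formulation is sketched below. The embedding dimension m = 2 and tolerance r = 0.2 × SD are common conventions in the EEG literature, not necessarily the settings used in this study.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D signal.

    m: embedding dimension; tolerance r = r_factor * std(x).
    Higher values indicate a less regular, more complex signal.
    """
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def phi(m):
        n = len(x) - m + 1
        # Overlapping length-m templates.
        templates = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Fraction of templates within tolerance (self-matches included).
        counts = np.mean(dist <= r, axis=1)
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)
```

    A highly regular signal such as a sine wave yields ApEn near zero, while white noise yields a high value, which is why ApEn can index the functional complexity of the evoked EEG.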

  19. Auditory enhancement of visual perception at threshold depends on visual abilities.

    Science.gov (United States)

    Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène

    2011-06-17

    Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events is a long-standing debate. Here we revisit this question, by testing the influence of auditory stimuli on visual detection threshold, in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient to improve perceptual performance when an isolated modality is deficient.

  20. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  1. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    Directory of Open Access Journals (Sweden)

    Georg Berding

    Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.

  2. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    Science.gov (United States)

    Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M; Lenarz, Thomas; Lim, Hubert H

    2015-01-01

    Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus. PMID:26046763

  3. Increasing Working Memory Load Reduces Processing of Cross-Modal Task-Irrelevant Stimuli Even after Controlling for Task Difficulty and Executive Capacity

    Science.gov (United States)

    Simon, Sharon S.; Tusch, Erich S.; Holcomb, Phillip J.; Daffner, Kirk R.

    2016-01-01

    The classic account of the load theory (LT) of attention suggests that increasing cognitive load leads to greater processing of task-irrelevant stimuli due to competition for limited executive resource that reduces the ability to actively maintain current processing priorities. Studies testing this hypothesis have yielded widely divergent outcomes. The inconsistent results may, in part, be related to variability in executive capacity (EC) and task difficulty across subjects in different studies. Here, we used a cross-modal paradigm to investigate whether augmented working memory (WM) load leads to increased early distracter processing, and controlled for the potential confounders of EC and task difficulty. Twenty-three young subjects were engaged in a primary visual WM task, under high and low load conditions, while instructed to ignore irrelevant auditory stimuli. Demands of the high load condition were individually titrated to make task difficulty comparable across subjects with differing EC. Event-related potentials (ERPs) were used to measure neural activity in response to stimuli presented in both the task relevant modality (visual) and task-irrelevant modality (auditory). Behavioral results indicate that the load manipulation and titration procedure of the primary visual task were successful. ERPs demonstrated that in response to visual target stimuli, there was a load-related increase in the posterior slow wave, an index of sustained attention and effort. Importantly, under high load, there was a decrease of the auditory N1 in response to distracters, a marker of early auditory processing. These results suggest that increased WM load is associated with enhanced attentional engagement and protection from distraction in a cross-modal setting, even after controlling for task difficulty and EC. Our findings challenge the classic LT and offer support for alternative models.

  4. Increasing Working Memory Load Reduces Processing of Cross-Modal Task-Irrelevant Stimuli Even after Controlling for Task Difficulty and Executive Capacity.

    Science.gov (United States)

    Simon, Sharon S; Tusch, Erich S; Holcomb, Phillip J; Daffner, Kirk R

    2016-01-01

    The classic account of the load theory (LT) of attention suggests that increasing cognitive load leads to greater processing of task-irrelevant stimuli due to competition for limited executive resource that reduces the ability to actively maintain current processing priorities. Studies testing this hypothesis have yielded widely divergent outcomes. The inconsistent results may, in part, be related to variability in executive capacity (EC) and task difficulty across subjects in different studies. Here, we used a cross-modal paradigm to investigate whether augmented working memory (WM) load leads to increased early distracter processing, and controlled for the potential confounders of EC and task difficulty. Twenty-three young subjects were engaged in a primary visual WM task, under high and low load conditions, while instructed to ignore irrelevant auditory stimuli. Demands of the high load condition were individually titrated to make task difficulty comparable across subjects with differing EC. Event-related potentials (ERPs) were used to measure neural activity in response to stimuli presented in both the task relevant modality (visual) and task-irrelevant modality (auditory). Behavioral results indicate that the load manipulation and titration procedure of the primary visual task were successful. ERPs demonstrated that in response to visual target stimuli, there was a load-related increase in the posterior slow wave, an index of sustained attention and effort. Importantly, under high load, there was a decrease of the auditory N1 in response to distracters, a marker of early auditory processing. These results suggest that increased WM load is associated with enhanced attentional engagement and protection from distraction in a cross-modal setting, even after controlling for task difficulty and EC. Our findings challenge the classic LT and offer support for alternative models. PMID:27536226

  5. Narrow, duplicated internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, T. [Servico de Neurorradiologia, Hospital Garcia de Orta, Avenida Torrado da Silva, 2801-951, Almada (Portugal); Shayestehfar, B. [Department of Radiology, UCLA Oliveview School of Medicine, Los Angeles, California (United States); Lufkin, R. [Department of Radiology, UCLA School of Medicine, Los Angeles, California (United States)

    2003-05-01

    A narrow internal auditory canal (IAC) constitutes a relative contraindication to cochlear implantation because it is associated with aplasia or hypoplasia of the vestibulocochlear nerve or its cochlear branch. We report an unusual case of a narrow, duplicated IAC, divided by a bony septum into a superior relatively large portion and an inferior stenotic portion, in which we could identify only the facial nerve. This case adds support to the association between a narrow IAC and aplasia or hypoplasia of the vestibulocochlear nerve. The normal facial nerve argues against the hypothesis that the narrow IAC is the result of a primary bony defect which inhibits the growth of the vestibulocochlear nerve. (orig.)

  6. Auditory hallucinations in nonverbal quadriplegics.

    Science.gov (United States)

    Hamilton, J

    1985-11-01

    When a system for communicating with nonverbal, quadriplegic, institutionalized residents was developed, it was discovered that many were experiencing auditory hallucinations. Nine cases are presented in this study. The "voices" described have many similar characteristics, the primary one being that they give authoritarian commands that tell the residents how to behave and to which the residents feel compelled to respond. Both the relationship of this phenomenon to the theoretical work of Julian Jaynes and its effect on the lives of the residents are discussed.

  7. Cyclodextrin-Mediated Hierarchical Self-Assembly and Its Potential in Drug Delivery Applications.

    Science.gov (United States)

    Antoniuk, Iurii; Amiel, Catherine

    2016-09-01

    Hierarchical self-assembly exploits various non-covalent interactions to manufacture sophisticated organized systems at multiple length scales, with properties of interest to the pharmaceutical industry such as the possibility of spatially controlled drug loading and multiresponsiveness to external stimuli. Cyclodextrin (CD)-mediated host-guest interactions have proved to be an efficient tool for constructing hierarchical architectures, primarily due to the high specificity and reversibility of the inclusion complexation of CDs with a number of hydrophobic guest molecules, their excellent bioavailability, and ease of chemical modification. In this review, we outline the recent progress in the development of CD-based hierarchical architectures such as nanoscale drug and gene delivery carriers and physically cross-linked supramolecular hydrogels designed for the sustained release of actives. PMID:27342436

  8. The WIN-Speller: A new Intuitive Auditory Brain-Computer Interface Spelling Application

    Directory of Open Access Journals (Sweden)

    Sonja C Kleih

    2015-10-01

    The objective of this study was to test the usability of a new auditory Brain-Computer Interface (BCI) application for communication. We introduce a word-based, intuitive auditory spelling paradigm, the WIN-speller. In the WIN-speller, letters are grouped by words, such as the word KLANG representing the letters A, G, K, L and N. Thereby, the decoding step between perceiving a code and translating it to the stimuli it represents becomes superfluous. We tested 11 healthy volunteers and 4 end-users with motor impairment in the copy spelling mode. Spelling was successful with an average accuracy of 84% in the healthy sample. Three of the end-users communicated with average accuracies of 80% or higher, while one user was not able to communicate reliably. Even though further evaluation is required, the WIN-speller represents a potential alternative for BCI-based communication in end-users.

  9. Differential maturation of brain signal complexity in the human auditory and visual system

    Directory of Open Access Journals (Sweden)

    Sarah Lippe

    2009-11-01

    Brain development carries with it a large number of structural changes at the local level which impact on the functional interactions of distributed neuronal networks for perceptual processing. Such changes enhance information processing capacity, which can be indexed by estimation of neural signal complexity. Here, we show that during development, EEG signal complexity increases from one month to 5 years of age in response to auditory and visual stimulation. However, the rates of change in complexity were not equivalent for the two responses. Infants' signal complexity for the visual condition was greater than auditory signal complexity, whereas adults showed the same level of complexity to both types of stimuli. The differential rates of complexity change may reflect a combination of innate and experiential factors on the structure and function of the two sensory systems.

  10. Further Evidence of Auditory Extinction in Aphasia

    Science.gov (United States)

    Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim

    2013-01-01

    Purpose: Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (M_age = 53.19 years)…

  11. Mapping tonotopy in human auditory cortex

    NARCIS (Netherlands)

    van Dijk, Pim; Langers, Dave R M; Moore, BCJ; Patterson, RD; Winter, IM; Carlyon, RP; Gockel, HE

    2013-01-01

    Tonotopy is arguably the most prominent organizational principle in the auditory pathway. Nevertheless, the layout of tonotopic maps in humans is still debated. We present neuroimaging data that robustly identify multiple tonotopic maps in the bilateral auditory cortex. In contrast with some earlier

  12. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  13. Intersubject information mapping: revealing canonical representations of complex natural stimuli

    Directory of Open Access Journals (Sweden)

    Nikolaus Kriegeskorte

    2015-03-01

    Real-world time-continuous stimuli such as video promise greater naturalism for studies of brain function. However, modeling the stimulus variation is challenging and introduces a bias in favor of particular descriptive dimensions. Alternatively, we can look for brain regions whose signal is correlated between subjects, essentially using one subject to model another. Intersubject correlation mapping (ICM) allows us to find brain regions driven in a canonical manner across subjects by a complex natural stimulus. However, it requires a direct voxel-to-voxel match between the spatiotemporal activity patterns and is thus only sensitive to common activations sufficiently extended to match up in Talairach space (or in an alternative, e.g. cortical-surface-based, common brain space). Here we introduce the more general approach of intersubject information mapping (IIM). For each brain region, IIM determines how much information is shared between the subjects' local spatiotemporal activity patterns. We estimate the intersubject mutual information using canonical correlation analysis applied to voxels within a spherical searchlight centered on each voxel in turn. The intersubject information estimate is invariant to linear transforms including spatial rearrangement of the voxels within the searchlight. This invariance to local encoding will be crucial in exploring fine-grained brain representations, which cannot be matched up in a common space and, more fundamentally, might be unique to each individual – like fingerprints. IIM yields a continuous brain map, which reflects intersubject information in fine-grained patterns. Performed on data from functional magnetic resonance imaging (fMRI) of subjects viewing the same television show, IIM and ICM both highlighted sensory representations, including primary visual and auditory cortices. However, IIM revealed additional regions in higher association cortices, namely temporal pole and orbitofrontal cortex.
These
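
    The core computation of IIM described above — the first canonical correlation between two subjects' searchlight patterns — can be sketched with plain NumPy. The small ridge term on the within-subject covariances is an assumption added here for numerical stability, not part of the published method.

```python
import numpy as np

def first_canonical_correlation(X, Y, reg=1e-6):
    """First canonical correlation between two pattern matrices.

    X, Y: (n_timepoints x n_voxels) activity of the same searchlight
    in two subjects. Canonical correlations are the singular values
    of the cross-covariance after whitening each variable set.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whitening transforms from the Cholesky factors of the covariances.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    s = np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)
    return s[0]
```

    Under Gaussian assumptions, intersubject mutual information can then be estimated from the full set of canonical correlations r_k as -1/2 * sum(log(1 - r_k^2)); repeating this in a searchlight around every voxel yields the IIM map. Note the invariance: any invertible linear transform of X or Y leaves the canonical correlations unchanged.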

  14. Categorization of Multidimensional Stimuli by Pigeons

    Science.gov (United States)

    Berg, Mark E.; Grace, Randolph C.

    2011-01-01

    Six pigeons responded in a visual category learning task in which the stimuli were dimensionally separable Gabor patches that varied in frequency and orientation. We compared performance in two conditions which varied in terms of whether accurate performance required that responding be controlled jointly by frequency and orientation, or…

  15. Musicians' Perception of Beat in Monotonic Stimuli.

    Science.gov (United States)

    Duke, Robert A.

    1989-01-01

    Assesses musicians' perceptions of beat in monotonic stimuli and attempts to define empirically the range of perceived beat tempo in music. Subjects performed a metric pulse in response to periodic stimulus tones. Results indicate a relatively narrow range within which beats are perceived by trained musicians. (LS)

  16. Computer programming for generating visual stimuli.

    Science.gov (United States)

    Bukhari, Farhan; Kurylo, Daniel D

    2008-02-01

    Critical to vision research is the generation of visual displays with precise control over stimulus metrics. Generating stimuli often requires adapting commercial software or developing specialized software for specific research applications. In order to facilitate this process, we give here an overview that allows nonexpert users to generate and customize stimuli for vision research. We first give a review of relevant hardware and software considerations, to allow the selection of display hardware, operating system, programming language, and graphics packages most appropriate for specific research applications. We then describe the framework of a generic computer program that can be adapted for use with a broad range of experimental applications. Stimuli are generated in the context of trial events, allowing the display of text messages, the monitoring of subject responses and reaction times, and the inclusion of contingency algorithms. This approach allows direct control and management of computer-generated visual stimuli while utilizing the full capabilities of modern hardware and software systems. The flowchart and source code for the stimulus-generating program may be downloaded from www.psychonomic.org/archive.
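    The trial-event framework described above (stimulus display, response monitoring, reaction-time measurement) can be illustrated with a minimal, text-based sketch. The names and timeout value here are hypothetical and not taken from the downloadable program:

    ```python
    import time

    def run_trial(stimulus, present, get_response, timeout=2.0):
        """Run one trial: present a stimulus, poll for a subject
        response, and return (response, reaction_time_seconds).
        Returns (None, timeout) if no response arrives in time."""
        present(stimulus)                    # e.g. draw text or a stimulus
        t0 = time.monotonic()
        while time.monotonic() - t0 < timeout:
            r = get_response()               # poll input device; None if no response yet
            if r is not None:
                return r, time.monotonic() - t0
        return None, timeout
    ```

    A contingency algorithm, as mentioned in the abstract, would simply inspect the returned response and reaction time to choose the next trial's parameters.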

  17. Chemical evolution in hierarchical scenarios

    Directory of Open Access Journals (Sweden)

    Tissera P.B.

    2012-02-01

    Full Text Available We studied the chemical properties of Milky-Way-mass galaxies. We found common global chemical patterns, with particularities that reflect the galaxies' different assembly histories in a hierarchical scenario. We carried out a comprehensive analysis of the dynamical components (central spheroid, disc, inner and outer haloes) and their chemical properties.

  18. Hierarchical classification of social groups

    OpenAIRE

    Витковская, Мария

    2001-01-01

    Classification problems are important for every science, including sociology. Social phenomena can be examined more deeply through the classification of social groups. At present, no single accepted classification of groups exists. This article offers a hierarchical classification of social groups.

  19. Processing of sounds by population spikes in a model of primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Alex Loebel

    2007-10-01

    Full Text Available We propose a model of the primary auditory cortex (A1), in which each iso-frequency column is represented by a recurrent neural network with short-term synaptic depression. Such networks can emit Population Spikes, in which most of the neurons fire synchronously for a short time period. Different columns are interconnected in a way that reflects the tonotopic map in A1, and population spikes can propagate along the map from one column to the next, in a temporally precise manner that depends on the specific input presented to the network. The network, therefore, processes incoming sounds by precise sequences of population spikes that are embedded in a continuous asynchronous activity, with both of these response components carrying information about the inputs and interacting with each other. With these basic characteristics, the model can account for a wide range of experimental findings. We reproduce neuronal frequency tuning curves, whose width depends on the strength of the intracortical inhibitory and excitatory connections. Non-simultaneous two-tone stimuli show forward masking depending on their temporal separation, as well as on the duration of the first stimulus. The model also exhibits non-linear suppressive interactions between sub-threshold tones and broad-band noise inputs, similar to the hypersensitive locking suppression recently demonstrated in auditory cortex. We derive several predictions from the model. In particular, we predict that spontaneous activity in primary auditory cortex gates the temporally locked responses of A1 neurons to auditory stimuli. Spontaneous activity could, therefore, be a mechanism for rapid and reversible modulation of cortical processing.

  20. Tactile feedback improves auditory spatial localization.

    Science.gov (United States)

    Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality. PMID:25368587

  1. Tactile feedback improves auditory spatial localization

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2014-10-01

    Full Text Available Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile-feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject’s forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal-feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially coherent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.

  2. Temporal auditory processing in elders

    Directory of Open Access Journals (Sweden)

    Azzolini, Vanuza Conceição

    2010-03-01

    Full Text Available Introduction: During the aging process, all structures of the organism undergo change, with consequences for the quality of hearing and comprehension. The hearing loss that results from this process reduces communicative function and also leads to withdrawal from social relationships. Objective: To compare temporal auditory processing performance between elderly individuals with and without hearing loss. Method: This was a prospective, cross-sectional, diagnostic field study. Twenty-one elders (16 women and 5 men), aged 60 to 81 years, were divided into two groups: a group "without hearing loss" (n = 13), with normal auditory thresholds or hearing loss restricted to isolated frequencies, and a group "with hearing loss" (n = 8), with sensorineural hearing loss ranging in degree from mild to moderately severe. Both groups performed the frequency (PPS) and duration (DPS) pattern tests, to evaluate temporal sequencing ability, and the Random Gap Detection Test (RGDT), to evaluate temporal resolution. Results: There was no statistically significant difference between the groups on the DPS and RGDT tests. Temporal sequencing ability was significantly better in the group without hearing loss when evaluated with the PPS test in the "humming" condition, and this difference grew with increasing age. Conclusion: There was no difference in temporal auditory processing between the groups.

  3. Analytical Evaluation of Hierarchical Planning Systems

    OpenAIRE

    Dempster, M.A.H.; Fisher, M.L.; Jansen, L; Lageweg, B.J.; J. K. Lenstra; Rinnooy Kan, A.H.G.

    1984-01-01

    Hierarchical planning systems have become popular for multilevel decision problems. After reviewing the concept of hierarchical planning and citing some examples, the authors describe a method for analytic evaluation of a hierarchical planning system. They show that multilevel decision problems can be nicely modeled as multistage stochastic programs. Then any hierarchical planning system can be measured against the yardstick of optimality in this stochastic program. They demonstrate this ap...

  4. Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback

    Directory of Open Access Journals (Sweden)

    Larson Charles R

    2011-06-01

    Full Text Available Abstract Background The motor-driven predictions about expected sensory feedback (efference copies) have been proposed to play an important role in recognition of sensory consequences of self-produced motor actions. In the auditory system, this effect was suggested to result in suppression of sensory neural responses to self-produced voices that are predicted by the efference copies during vocal production in comparison with passive listening to the playback of the identical self-vocalizations. In the present study, event-related potentials (ERPs) were recorded in response to upward pitch shift stimuli (PSS) with five different magnitudes (0, +50, +100, +200 and +400 cents) at voice onset during active vocal production and passive listening to the playback. Results Results indicated that the suppression of the N1 component during vocal production was largest for unaltered voice feedback (PSS: 0 cents), became smaller as the magnitude of PSS increased to 200 cents, and was almost completely eliminated in response to 400 cents stimuli. Conclusions Findings of the present study suggest that the brain utilizes the motor predictions (efference copies) to determine the source of incoming stimuli and maximally suppresses the auditory responses to unaltered feedback of self-vocalizations. The reduction of suppression for 50, 100 and 200 cents and its elimination for 400 cents pitch-shifted voice auditory feedback support the idea that motor-driven suppression of voice feedback leads to distinctly different sensory neural processing of self vs. non-self vocalizations. This characteristic may enable the audio-vocal system to more effectively detect and correct for unexpected errors in the feedback of self-produced voice pitch compared with externally-generated sounds.

  5. Effect of Auditory Constraints on Motor Learning Depends on Stage of Recovery Post Stroke

    Directory of Open Access Journals (Sweden)

    Viswanath eAluru

    2014-06-01

    Full Text Available In order to develop evidence-based rehabilitation protocols post stroke, one must first reconcile the vast heterogeneity in the post-stroke population and develop protocols to facilitate motor learning in the various subgroups. The main purpose of this study is to show that auditory constraints interact with the stage of recovery post stroke to influence motor learning. We characterized the stages of upper limb recovery using task-based kinematic measures in twenty subjects with chronic hemiparesis, and used a bimanual wrist extension task, performed with a custom-made wrist trainer, to facilitate learning of wrist extension in the paretic hand under four auditory conditions: (1) without auditory cueing; (2) to non-musical happy sounds; (3) to self-selected music; and (4) to a metronome beat set at a comfortable tempo. Two bimanual trials (15 s each) were followed by one unimanual trial with the paretic hand over six cycles under each condition. Clinical metrics, wrist and arm kinematics and electromyographic activity were recorded. Hierarchical cluster analysis with the Mahalanobis metric based on baseline speed and extent of wrist movement stratified subjects into three distinct groups which reflected their stage of recovery: spastic paresis, spastic co-contraction, and minimal paresis. In spastic paresis, the metronome beat increased wrist extension, but also increased muscle co-activation across the wrist. In contrast, in spastic co-contraction, no auditory stimulation increased wrist extension and reduced co-activation. In minimal paresis, wrist extension did not improve under any condition. The results suggest that auditory task constraints interact with stage of recovery during motor learning after stroke, perhaps due to recruitment of distinct neural substrates over the course of recovery. The findings advance our understanding of the mechanisms of progression of motor recovery and lay the foundation for personalized treatment algorithms post stroke.

  6. Effect of auditory constraints on motor performance depends on stage of recovery post-stroke.

    Science.gov (United States)

    Aluru, Viswanath; Lu, Ying; Leung, Alan; Verghese, Joe; Raghavan, Preeti

    2014-01-01

    In order to develop evidence-based rehabilitation protocols post-stroke, one must first reconcile the vast heterogeneity in the post-stroke population and develop protocols to facilitate motor learning in the various subgroups. The main purpose of this study is to show that auditory constraints interact with the stage of recovery post-stroke to influence motor learning. We characterized the stages of upper limb recovery using task-based kinematic measures in 20 subjects with chronic hemiparesis. We used a bimanual wrist extension task, performed with a custom-made wrist trainer, to facilitate learning of wrist extension in the paretic hand under four auditory conditions: (1) without auditory cueing; (2) to non-musical happy sounds; (3) to self-selected music; and (4) to a metronome beat set at a comfortable tempo. Two bimanual trials (15 s each) were followed by one unimanual trial with the paretic hand over six cycles under each condition. Clinical metrics, wrist and arm kinematics, and electromyographic activity were recorded. Hierarchical cluster analysis with the Mahalanobis metric based on baseline speed and extent of wrist movement stratified subjects into three distinct groups, which reflected their stage of recovery: spastic paresis, spastic co-contraction, and minimal paresis. In spastic paresis, the metronome beat increased wrist extension, but also increased muscle co-activation across the wrist. In contrast, in spastic co-contraction, no auditory stimulation increased wrist extension and reduced co-activation. In minimal paresis, wrist extension did not improve under any condition. The results suggest that auditory task constraints interact with stage of recovery during motor learning after stroke, perhaps due to recruitment of distinct neural substrates over the course of recovery. The findings advance our understanding of the mechanisms of progression of motor recovery and lay the foundation for personalized treatment algorithms post-stroke. PMID
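    The stratification step described above (hierarchical clustering with the Mahalanobis metric on baseline speed and extent of wrist movement) can be sketched as follows. The function name, linkage method, and toy data are assumptions for illustration, not the authors' exact pipeline:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    def stratify_subjects(features, n_groups=3):
        """Hierarchical clustering of subjects with the Mahalanobis metric.
        features: (n_subjects, n_measures) array, e.g. columns for
        baseline speed and extent of wrist movement."""
        # Inverse covariance defines the Mahalanobis distance
        VI = np.linalg.inv(np.cov(features, rowvar=False))
        d = pdist(features, metric="mahalanobis", VI=VI)
        Z = linkage(d, method="average")
        # Cut the dendrogram into the requested number of groups
        return fcluster(Z, t=n_groups, criterion="maxclust")
    ```

    The Mahalanobis metric rescales the raw measures by their covariance, so measures on different units (speed vs. movement extent) contribute comparably to the clustering.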

  7. Modelling auditory attention: Insights from the Theory of Visual Attention (TVA)

    DEFF Research Database (Denmark)

    Roberts, K. L.; Andersen, Tobias; Kyllingsbæk, Søren;

    We report initial progress towards creating an auditory analogue of a mathematical model of visual attention: the ‘Theory of Visual Attention’ (TVA; Bundesen, 1990). TVA is one of the best established models of visual attention. It assumes that visual stimuli are initially processed in parallel...... to the data produces the following parameters: the minimum amount of information required for target identification (t0); the rate at which information is encoded, assuming an exponential function (v); the relative attentional weight to targets versus distractors (α); and the capacity of VSTM (K). TVA has...

  8. Visual, Auditory, and Cross Modal Sensory Processing in Adults with Autism:An EEG Power and BOLD fMRI Investigation

    Directory of Open Access Journals (Sweden)

    Elizabeth C Hames

    2016-04-01

    Full Text Available Electroencephalography (EEG) and Blood Oxygen Level Dependent Functional Magnetic Resonance Imaging (BOLD fMRI) assessed the neural correlates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20-28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate, with observable decreased EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions, reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT when they were presented with auditory stimuli, as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all-auditory block versus the second presentation of a visual stimulus in an all-visual block (AA2-VV2). We conclude that in ASD, combined audiovisual processing is more similar than unimodal processing to NTs.

  9. The temporal primacy of self-related stimuli and negative stimuli: an ERP-based comparative study.

    Science.gov (United States)

    Zhu, Min; Luo, Junlong; Zhao, Na; Hu, Yinying; Yan, Lingyue; Gao, Xiangping

    2016-10-01

    Numerous studies have shown there exist attention biases for self-related and negative stimuli. Few studies, however, have been carried out to compare the effects of such stimuli on the neural mechanisms of early attentional alertness and subsequent cognitive processing. The purpose of the present study was to examine the temporal primacy of both self-related stimuli and negative stimuli at the neurophysiological level. In a modified oddball task, event-related potentials of the deviant stimuli (i.e., self-face, negative face and neutral face) were recorded. Results revealed that larger P2 amplitudes were elicited by self-related and negative stimuli than by neutral stimuli. Negative stimuli, however, elicited shorter P2 latencies than self-related and neutral stimuli. As for the N2 component, self-related and negative stimuli elicited smaller amplitudes and shorter latencies than neutral stimuli, but otherwise did not differ. Self-related stimuli also elicited larger P3 and late positive component (LPC) amplitudes than negative and neutral stimuli. The pattern of results suggests that the primacy of negative stimuli occurred at an early attention stage of processing, while the primacy of self-related stimuli occurred at the subsequent cognitive evaluation and memory stage. PMID:26513485

  10. Coupling between Theta Oscillations and Cognitive Control Network during Cross-Modal Visual and Auditory Attention: Supramodal vs Modality-Specific Mechanisms

    Science.gov (United States)

    Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T.

    2016-01-01

    Cortical theta band oscillations (4–8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization source of these signals, as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlation was localized to cortical regions associated with the default mode network and positively with ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal area that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation of theta and BOLD activity was observed in visual cortex during auditory attention. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down, theta-related cognitive control in cross-modal visual attention. On the other hand, in sensory cortices there are opposing effects of theta activity during cross-modal auditory attention. PMID:27391013

  11. Testosterone alters genomic responses to song and monoaminergic innervation of auditory areas in a seasonally breeding songbird.

    Science.gov (United States)

    Matragrano, Lisa L; LeBlanc, Meredith M; Chitrapu, Anjani; Blanton, Zane E; Maney, Donna L

    2013-06-01

    Behavioral responses to social stimuli often vary according to endocrine state. Our previous work has suggested that such changes in behavior may be due in part to hormone-dependent sensory processing. In the auditory forebrain of female white-throated sparrows, expression of the immediate early gene ZENK (egr-1) is higher in response to conspecific song than to a control sound only when plasma estradiol reaches breeding-typical levels. Estradiol also increases the number of detectable noradrenergic neurons in the locus coeruleus and the density of noradrenergic and serotonergic fibers innervating auditory areas. We hypothesize, therefore, that reproductive hormones alter auditory responses by acting on monoaminergic systems. This possibility has not been examined in males. Here, we treated non-breeding male white-throated sparrows with testosterone to mimic breeding-typical levels and then exposed them to conspecific male song or frequency-matched tones. We observed selective ZENK responses in the caudomedial nidopallium only in the testosterone-treated males. Responses in another auditory area, the caudomedial mesopallium, were selective regardless of hormone treatment. Testosterone treatment reduced serotonergic fiber density in the auditory forebrain, thalamus, and midbrain, and although it increased the number of noradrenergic neurons detected in the locus coeruleus, it reduced noradrenergic fiber density in the auditory midbrain. Thus, whereas we previously reported that estradiol enhances monoaminergic innervation of the auditory pathway in females, we show here that testosterone decreases it in males. Mechanisms underlying testosterone-dependent selectivity of the ZENK response may differ from estradiol-dependent ones.

  12. Hierarchical Prisoner's Dilemma in Hierarchical Public-Goods Game

    CERN Document Server

    Fujimoto, Yuma; Kaneko, Kunihiko

    2016-01-01

    The dilemma in cooperation is one of the major concerns in game theory. In a public-goods game, each individual pays a cost for cooperation, or to prevent defection, and receives a reward from the collected cost in a group. Thus, defection is beneficial for each individual, while cooperation is beneficial for the group. Now, groups (say, countries) consisting of individual players also play games. To study such a multi-level game, we introduce a hierarchical public-goods (HPG) game in which two groups compete for finite resources by utilizing costs collected from individuals in each group. Analyzing this HPG game, we found a hierarchical prisoner's dilemma, in which groups choose the defection policy (say, armaments) as a Nash strategy to optimize each group's benefit, while cooperation optimizes the total benefit. On the other hand, for each individual within a group, refusing to pay the cost (say, tax) is a Nash strategy, which turns out to be a cooperation policy for the group, thus leading to a hierarchical d...
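    As a concrete illustration of the individual-level dilemma the abstract builds on, here is a minimal sketch of a standard one-shot public-goods game (not the authors' two-level HPG model; parameter values are arbitrary):

    ```python
    def public_goods_payoffs(strategies, cost=1.0, r=3.0):
        """Payoffs in a one-shot public-goods game: each cooperator (True)
        pays `cost` into a pot, the pot is multiplied by `r`, and the pot
        is shared equally. Defectors (False) share the pot without paying."""
        n = len(strategies)
        pot = r * cost * sum(strategies)
        share = pot / n
        return [share - (cost if s else 0.0) for s in strategies]
    ```

    With cost = 1 and r = 3 in a group of four, a lone defector earns more than each cooperator, while the group total is highest under universal cooperation: defection is individually rational even though it harms the group, which is the dilemma that the HPG game lifts to the level of competing groups.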

  13. From sounds to words: a neurocomputational model of adaptation, inhibition and memory processes in auditory change detection.

    Science.gov (United States)

    Garagnani, Max; Pulvermüller, Friedemann

    2011-01-01

    Most animals detect sudden changes in trains of repeated stimuli but only some can learn a wide range of sensory patterns and recognise them later, a skill crucial for the evolutionary success of higher mammals. Here we use a neural model mimicking the cortical anatomy of sensory and motor areas and their connections to explain brain activity indexing auditory change and memory access. Our simulations indicate that while neuronal adaptation and local inhibition of cortical activity can explain aspects of change detection as observed when a repeated unfamiliar sound changes in frequency, the brain dynamics elicited by auditory stimulation with well-known patterns (such as meaningful words) cannot be accounted for on the basis of adaptation and inhibition alone. Specifically, we show that the stronger brain responses observed to familiar stimuli in passive oddball tasks are best explained in terms of activation of memory circuits that emerged in the cortex during the learning of these stimuli. Such memory circuits, and the activation enhancement they entail, are absent for unfamiliar stimuli. The model illustrates how basic neurobiological mechanisms, including neuronal adaptation, lateral inhibition, and Hebbian learning, underlie neuronal assembly formation and dynamics, and differentially contribute to the brain's major change detection response, the mismatch negativity. PMID:20728545

  14. Spatial auditory processing in pinnipeds

    Science.gov (United States)

    Holt, Marla M.

    Given the biological importance of sound for a variety of activities, pinnipeds must be able to obtain spatial information about their surroundings through acoustic input in the absence of other sensory cues. The three chapters of this dissertation address spatial auditory processing capabilities of pinnipeds in air given that these amphibious animals use acoustic signals for reproduction and survival on land. Two chapters are comparative lab-based studies that utilized psychophysical approaches conducted in an acoustic chamber. Chapter 1 addressed the frequency-dependent sound localization abilities at azimuth of three pinniped species (the harbor seal, Phoca vitulina, the California sea lion, Zalophus californianus, and the northern elephant seal, Mirounga angustirostris). While performances of the sea lion and harbor seal were consistent with the duplex theory of sound localization, the elephant seal, a low-frequency hearing specialist, showed a decreased ability to localize the highest frequencies tested. In Chapter 2 spatial release from masking (SRM), which occurs when a signal and masker are spatially separated resulting in improvement in signal detectability relative to conditions in which they are co-located, was determined in a harbor seal and sea lion. Absolute and masked thresholds were measured at three frequencies and azimuths to determine the detection advantages afforded by this type of spatial auditory processing. Results showed that hearing sensitivity was enhanced by up to 19 and 12 dB in the harbor seal and sea lion, respectively, when the signal and masker were spatially separated. Chapter 3 was a field-based study that quantified both sender and receiver variables of the directional properties of male northern elephant seal calls produced within a communication system that serves to delineate dominance status. This included measuring call directivity patterns, observing male-male vocally-mediated interactions, and an acoustic playback study

  15. An Auditory Model of Improved Adaptive ZCPA

    Directory of Open Access Journals (Sweden)

    Jinping Zhang

    2013-07-01

    Full Text Available An improved ZCPA auditory model with adaptability is proposed in this paper; the adaptive method designed for the ZCPA model is also suitable for other auditory models with an inner-hair-cell sub-model. The first step in implementing the proposed ZCPA model is to compute the inner product between the signal and complex Gammatone filters to obtain the signal's important frequency components. Then, according to the result of the first step, the parameters of the basilar-membrane sub-model and the frequency boxes are automatically adjusted: the number of basilar-membrane filters, the center frequency and bandwidth of each basilar-membrane filter, the position of each frequency box, and so on. Finally the auditory model is built, and its final output is the auditory spectrum. Numerical simulations and experiments show that the proposed model achieves accurate frequency selection, and its auditory spectrum is more distinct than that of the conventional ZCPA model. Moreover, the proposed model completely avoids the influence of the number of filters on the shape of the auditory spectrum that exists in the conventional ZCPA model, so the shape of the auditory spectrum is stable and the data quantity is small.

  16. Auditory Efferent System Modulates Mosquito Hearing.

    Science.gov (United States)

    Andrés, Marta; Seifert, Marvin; Spalthoff, Christian; Warren, Ben; Weiss, Lukas; Giraldo, Diego; Winkler, Margret; Pauls, Stephanie; Göpfert, Martin C

    2016-08-01

    The performance of vertebrate ears is controlled by auditory efferents that originate in the brain and innervate the ear, synapsing onto hair cell somata and auditory afferent fibers [1-3]. Efferent activity can provide protection from noise and facilitate the detection and discrimination of sound by modulating mechanical amplification by hair cells and transmitter release as well as auditory afferent action potential firing [1-3]. Insect auditory organs are thought to lack efferent control [4-7], but when we inspected mosquito ears, we obtained evidence for its existence. Antibodies against synaptic proteins recognized rows of bouton-like puncta running along the dendrites and axons of mosquito auditory sensory neurons. Electron microscopy identified synaptic and non-synaptic sites of vesicle release, and some of the innervating fibers co-labeled with somata in the CNS. Octopamine, GABA, and serotonin were identified as efferent neurotransmitters or neuromodulators that affect auditory frequency tuning, mechanical amplification, and sound-evoked potentials. Mosquito brains thus modulate mosquito ears, extending the use of auditory efferent systems from vertebrates to invertebrates and adding new levels of complexity to mosquito sound detection and communication. PMID:27476597

  17. Photonic water dynamically responsive to external stimuli.

    Science.gov (United States)

    Sano, Koki; Kim, Youn Soo; Ishida, Yasuhiro; Ebina, Yasuo; Sasaki, Takayoshi; Hikima, Takaaki; Aida, Takuzo

    2016-01-01

    Fluids that contain ordered nanostructures with periodic distances in the visible-wavelength range, anomalously exhibit structural colours that can be rapidly modulated by external stimuli. Indeed, some fish can dynamically change colour by modulating the periodic distance of crystalline guanine sheets cofacially oriented in their fluid cytoplasm. Here we report that a dilute aqueous colloidal dispersion of negatively charged titanate nanosheets exhibits structural colours. In this 'photonic water', the nanosheets spontaneously adopt a cofacial geometry with an ultralong periodic distance of up to 675 nm due to a strong electrostatic repulsion. Consequently, the photonic water can even reflect near-infrared light up to 1,750 nm. The structural colour becomes more vivid in a magnetic flux that induces monodomain structural ordering of the colloidal dispersion. The reflective colour of the photonic water can be modulated over the entire visible region in response to appropriate physical or chemical stimuli. PMID:27572806

  18. Blind Braille readers mislocate tactile stimuli.

    Science.gov (United States)

    Sterr, Annette; Green, Lisa; Elbert, Thomas

    2003-05-01

    In a previous experiment, we observed that blind Braille readers produce errors when asked to identify on which finger of one hand a light tactile stimulus had occurred. With the present study, we aimed to specify the characteristics of this perceptual error in blind and sighted participants. The experiment confirmed that blind Braille readers mislocalised tactile stimuli more often than sighted controls, and that the localisation errors occurred significantly more often at the right reading hand than at the non-reading hand. Most importantly, we discovered that the reading fingers showed the smallest error frequency, but the highest rate of stimulus attribution. The dissociation of perceiving and locating tactile stimuli in the blind suggests altered tactile information processing. Neuroplasticity, changes in tactile attention mechanisms as well as the idea that blind persons may employ different strategies for tactile exploration and object localisation are discussed as possible explanations for the results obtained.

  19. Photonic water dynamically responsive to external stimuli

    Science.gov (United States)

    Sano, Koki; Kim, Youn Soo; Ishida, Yasuhiro; Ebina, Yasuo; Sasaki, Takayoshi; Hikima, Takaaki; Aida, Takuzo

    2016-08-01

    Fluids that contain ordered nanostructures with periodic distances in the visible-wavelength range, anomalously exhibit structural colours that can be rapidly modulated by external stimuli. Indeed, some fish can dynamically change colour by modulating the periodic distance of crystalline guanine sheets cofacially oriented in their fluid cytoplasm. Here we report that a dilute aqueous colloidal dispersion of negatively charged titanate nanosheets exhibits structural colours. In this `photonic water', the nanosheets spontaneously adopt a cofacial geometry with an ultralong periodic distance of up to 675 nm due to a strong electrostatic repulsion. Consequently, the photonic water can even reflect near-infrared light up to 1,750 nm. The structural colour becomes more vivid in a magnetic flux that induces monodomain structural ordering of the colloidal dispersion. The reflective colour of the photonic water can be modulated over the entire visible region in response to appropriate physical or chemical stimuli.

  20. Utilising reinforcement learning to develop strategies for driving auditory neural implants

    Science.gov (United States)

    Lee, Geoffrey W.; Zambetta, Fabio; Li, Xiaodong; Paolini, Antonio G.

    2016-08-01

    Objective. In this paper we propose a novel application of reinforcement learning to the area of auditory neural stimulation. We aim to develop a simulation environment based on real neurological responses to auditory and electrical stimulation in the cochlear nucleus (CN) and inferior colliculus (IC) of an animal model. Using this simulator we implement closed-loop reinforcement learning algorithms to determine which methods are most effective at learning acoustic neural stimulation strategies. Approach. By recording a comprehensive set of acoustic frequency presentations and neural responses from a set of animals, we created a large database of neural responses to acoustic stimulation. Extensive electrical stimulation in the CN and recording of neural responses in the IC provide a mapping of how the auditory system responds to electrical stimuli. The combined dataset forms the foundation of the simulator, which is used to implement and test learning algorithms. Main results. Reinforcement learning, using a modified n-armed bandit solution, is implemented to demonstrate the model's function. We show the ability to learn stimulation patterns that mimic the cochlea's ability to convert acoustic frequencies into neural activity. Learning an effective replication using neural stimulation takes less than 20 min under continuous testing. Significance. These results show the utility of reinforcement learning in the field of neural stimulation. They can be coupled with existing sound-processing technologies to develop new auditory prosthetics that adapt to the recipient's current auditory pathway. The same process can theoretically be abstracted to other sensory and motor systems to develop similar electrical replication of neural signals.
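    The paper's simulator and dataset are not reproduced here, but the n-armed bandit approach it builds on follows the standard sample-average scheme. A minimal epsilon-greedy sketch, in which the arm means, noise level, and step count are illustrative stand-ins for stimulation patterns and their response fidelity:

    ```python
    import random

    def epsilon_greedy_bandit(true_means, steps=5000, eps=0.1, seed=0):
        """Sample-average epsilon-greedy agent for an n-armed bandit.

        true_means: hidden expected reward of each arm (a stand-in for how well
        each candidate stimulation pattern reproduces the target neural response).
        """
        rng = random.Random(seed)
        n = len(true_means)
        counts = [0] * n
        estimates = [0.0] * n
        for _ in range(steps):
            if rng.random() < eps:
                arm = rng.randrange(n)                               # explore
            else:
                arm = max(range(n), key=lambda a: estimates[a])      # exploit
            reward = true_means[arm] + rng.gauss(0, 0.1)             # noisy feedback
            counts[arm] += 1
            estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        return max(range(n), key=lambda a: estimates[a])

    # Four hypothetical stimulation patterns; the third best matches the target.
    best = epsilon_greedy_bandit([0.2, 0.5, 0.9, 0.4])
    ```

    A closed-loop system would replace `true_means` with the recorded IC responses from the simulator and reward patterns whose evoked activity matches the acoustic baseline.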

  1. Segregation and integration of auditory streams when listening to multi-part music.

    Directory of Open Access Journals (Sweden)

    Marie Ragert

    Full Text Available In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to assess the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music, in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings, and the interaction of the temporal and structural relationship factors. Neurally, the temporal relationship between parts, one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration, used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus, resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of

  2. The Auditory-Visual Speech Benefit on Working Memory in Older Adults with Hearing Impairment

    Directory of Open Access Journals (Sweden)

    Jana B. Frtusova

    2016-04-01

    Full Text Available This study examined the effect of auditory-visual (AV) speech stimuli on working memory in hearing-impaired participants (HIP) in comparison to age- and education-matched normal elderly controls (NEC). Participants completed a working memory n-back task (0- to 2-back) in which sequences of digits were presented in visual-only (i.e., speech-reading), auditory-only (A-only), and AV conditions. Auditory event-related potentials (ERP) were collected to assess the relationship between perceptual and working memory processing. The behavioural results showed that both groups were faster in the AV condition in comparison to the unisensory conditions. The ERP data showed perceptual facilitation in the AV condition, in the form of reduced amplitudes and latencies of the auditory N1 and/or P1 components, in the HIP group. Furthermore, a working memory ERP component, the P3, peaked earlier for both groups in the AV condition compared to the A-only condition. In general, the HIP group showed a more robust AV benefit; however, the NECs showed a dose-response relationship between perceptual facilitation and working memory improvement, especially for facilitation of processing speed. Two measures, reaction time and P3 amplitude, suggested that the presence of visual speech cues may have helped the HIP to counteract the demanding auditory processing, to the level that no group differences were evident during the AV modality despite lower performance during the A-only condition. Overall, this study provides support for the theory of an integrated perceptual-cognitive system. The practical significance of these findings is also discussed.

  3. The Auditory-Visual Speech Benefit on Working Memory in Older Adults with Hearing Impairment.

    Science.gov (United States)

    Frtusova, Jana B; Phillips, Natalie A

    2016-01-01

    This study examined the effect of auditory-visual (AV) speech stimuli on working memory in older adults with poorer-hearing (PH) in comparison to age- and education-matched older adults with better hearing (BH). Participants completed a working memory n-back task (0- to 2-back) in which sequences of digits were presented in visual-only (i.e., speech-reading), auditory-only (A-only), and AV conditions. Auditory event-related potentials (ERP) were collected to assess the relationship between perceptual and working memory processing. The behavioral results showed that both groups were faster in the AV condition in comparison to the unisensory conditions. The ERP data showed perceptual facilitation in the AV condition, in the form of reduced amplitudes and latencies of the auditory N1 and/or P1 components, in the PH group. Furthermore, a working memory ERP component, the P3, peaked earlier for both groups in the AV condition compared to the A-only condition. In general, the PH group showed a more robust AV benefit; however, the BH group showed a dose-response relationship between perceptual facilitation and working memory improvement, especially for facilitation of processing speed. Two measures, reaction time and P3 amplitude, suggested that the presence of visual speech cues may have helped the PH group to counteract the demanding auditory processing, to the level that no group differences were evident during the AV modality despite lower performance during the A-only condition. Overall, this study provides support for the theory of an integrated perceptual-cognitive system. The practical significance of these findings is also discussed. PMID:27148106

  4. [Anesthesia with flunitrazepam/fentanyl and isoflurane/fentanyl. Unconscious perception and mid-latency auditory evoked potentials].

    Science.gov (United States)

    Schwender, D; Kaiser, A; Klasing, S; Faber-Züllig, E; Golling, W; Pöppel, E; Peter, K

    1994-05-01

    There is a high incidence of intraoperative awareness during cardiac surgery. Mid-latency auditory evoked potentials (MLAEP) reflect the primary cortical processing of auditory stimuli. In the present study, we investigated MLAEP and explicit and implicit memory for information presented during cardiac anaesthesia. PATIENTS AND METHODS. Institutional approval and informed consent were obtained for 30 patients scheduled for elective cardiac surgery. Anaesthesia was induced in group I (n = 10) with flunitrazepam/fentanyl (0.01 mg/kg) and maintained with flunitrazepam/fentanyl (1.2 mg/h). The patients in group II (n = 10) received etomidate (0.25 mg/kg) and fentanyl (0.005 mg/kg) for induction and isoflurane (0.6-1.2 vol%)/fentanyl (1.2 mg/h) for maintenance of general anaesthesia. Group III (n = 10) served as a control, and its patients were anaesthetized as in group I or II. After sternotomy, an audiotape that included an implicit memory task was presented to the patients in groups I and II. The story of Robinson Crusoe was told, and it was suggested to the patients that they remember Robinson Crusoe when asked 3-5 days postoperatively what they associated with the word Friday. Auditory evoked potentials were recorded awake and during general anaesthesia, before and after the audiotape presentation, at the vertex (positive) and both mastoids (negative). Auditory clicks were presented binaurally at 70 dBnHL at a rate of 9.3 Hz. Using the electrodiagnostic system Pathfinder I (Nicolet), 1000 successive stimulus responses were averaged over a 100-ms poststimulus interval and analyzed off-line. Latencies of the peaks V, Na, and Pa were measured. Peak V belongs to the brainstem-generated potentials and demonstrates that auditory stimuli were correctly transduced; Na and Pa are generated in the primary auditory cortex of the temporal lobe and are the electrophysiological correlates of the primary cortical processing of auditory stimuli. RESULTS. None of the patients had an explicit memory

  5. Cognitive Interpretations of Ambiguous Visual Stimuli

    OpenAIRE

    Naber, Marnix

    2012-01-01

    Brains can sense and distinguish signals from background noise in physical environments, and recognize and classify them as distinct entities. Ambiguity is an inherent part of this process. It is a cognitive property that is generated by the noisy character of the signals, and by the design of the sensory systems that process them. Stimuli can be ambiguous if they are noisy, incomplete, or only briefly sensed. Such conditions may ...

  6. Remindings influence the interpretation of ambiguous stimuli

    OpenAIRE

    Tullis, Jonathan G.; Braverman, Michael; Ross, Brian H; Benjamin, Aaron S.

    2014-01-01

    Remindings, stimulus-guided retrievals of prior events, may help us interpret ambiguous events by linking the current situation to relevant prior experiences. Evidence suggests that remindings play an important role in interpreting complex ambiguous stimuli (Ross & Bradshaw, 1994); here we evaluate whether remindings influence word interpretation and memory in a new paradigm. Learners studied words on distinct visual backgrounds and generated a sentence for each word. Homographs were either pre...

  7. Functional Neurochemistry of the Auditory System

    Directory of Open Access Journals (Sweden)

    Nourollah Agha Ebrahimi

    1993-03-01

    Full Text Available Functional neurochemistry is one of the fields of study in the auditory system that has developed remarkably in recent years. Many findings in this field have led not only basic auditory researchers but also clinicians to new points of view in audiology. Here, we discuss the latest investigations in the functional neurochemistry of the auditory system, focusing mainly on research that raises hope for future clinical studies.

  8. Auditory Neuropathy/Dyssynchrony in Biotinidase Deficiency

    Science.gov (United States)

    Yaghini, Omid

    2016-01-01

    Biotinidase deficiency is an autosomal recessively inherited disorder characterized by hearing loss and optic atrophy in addition to seizures, hypotonia, and ataxia. In the present study, a 2-year-old boy with biotinidase deficiency is presented whose clinical picture included auditory neuropathy/auditory dyssynchrony (AN/AD). In this case, transient-evoked otoacoustic emissions showed bilaterally normal responses, representing normal function of the outer hair cells. In contrast, the acoustic reflex test showed absent reflexes bilaterally, and visual reinforcement audiometry and auditory brainstem responses indicated severe to profound hearing loss in both ears. These results suggest AN/AD in patients with biotinidase deficiency. PMID:27144235

  9. Functional Neurochemistry of the Auditory System

    OpenAIRE

    Nourollah Agha Ebrahimi

    1993-01-01

    Functional neurochemistry is one of the fields of study in the auditory system that has developed remarkably in recent years. Many findings in this field have led not only basic auditory researchers but also clinicians to new points of view in audiology. Here, we discuss the latest investigations in the functional neurochemistry of the auditory system, focusing mainly on research that raises flashes of hope f...

  10. Auditory filters at low-frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Møller, Henrik

    2009-01-01

    Prediction and assessment of low-frequency noise problems requires information about the auditory filter characteristics at low-frequencies. Unfortunately, data at low-frequencies is scarce and practically no results have been published for frequencies below 100 Hz. Extrapolation of ERB results......-ear transfer function), the asymmetry of the auditory filter changed from steeper high-frequency slopes at 1000 Hz to steeper low-frequency slopes below 100 Hz. Increasing steepness at low-frequencies of the middle-ear high-pass filter is thought to cause this effect. The dynamic range of the auditory filter...

  11. Spatial Brightness Perception of Trichromatic Stimuli

    Energy Technology Data Exchange (ETDEWEB)

    Royer, Michael P.; Houser, Kevin W.

    2012-11-16

    An experiment was conducted to examine the effect of tuning optical radiation on brightness perception for younger (18-25 years of age) and older (50 years of age or older) observers. Participants made forced-choice evaluations of the brightness of a full factorial of stimulus pairs selected from two groups of four metameric stimuli. The large-field stimuli were created by systematically varying either the red or the blue primary of an RGB LED mixture. The results indicate that light stimuli of equal illuminance and chromaticity do not appear equally bright to either younger or older subjects. The rank-order of brightness is not predicted by any current model of human vision or theory of brightness perception including Scotopic to Photopic or Cirtopic to Photopic ratio theory, prime color theory, correlated color temperature, V(λ)-based photometry, color quality metrics, linear brightness models, or color appearance models. Age may affect brightness perception when short-wavelength primaries are used, especially those with a peak wavelength shorter than 450 nm. The results suggest further development of metrics to predict brightness perception is warranted, and that including age as a variable in predictive models may be valuable.

  12. Observing of chain-schedule stimuli.

    Science.gov (United States)

    Slezak, Jonathan M; Anderson, Karen G

    2014-06-01

    A classical-conditioning account of the processes maintaining behavior under chained schedules entails a backward transmission of conditioned-reinforcement effects. Assessing this process in traditional chain schedules is limited because the response maintained by the stimulus onset accompanying each link in a chain schedule may also be maintained by the primary reinforcer. In the present experiment, an observing response was used to measure the conditioned-reinforcing effects of stimuli associated with a three-link chain variable-time (VT) food schedule, and resistance-to-change tests (extinction and prefeeding) were implemented to examine whether a backward transmission of reinforcement effects occurs. Four pigeons served as subjects. Observing was maintained by the production of stimuli correlated with links of a three-link chain VT schedule, with the middle-link stimulus maintaining the highest rate of observing, followed by the initial-link stimulus, and the terminal-link stimulus maintaining the lowest observing rate. Results from the resistance-to-change tests of extinction and prefeeding did not support a backward transmission of reinforcement effects; in general, the pattern of resistance to change was forward. Based on past and current research, it appears that a backward pattern of relative rate decreases in responses maintained by stimuli correlated with a chain schedule under disruption (i.e., extinction and prefeeding) is not a ubiquitous process evident within different chain-schedule arrangements.

  13. Anagrus breviphragma Soyka Short Distance Search Stimuli

    Directory of Open Access Journals (Sweden)

    Elisabetta Chiappini

    2015-01-01

    Full Text Available Anagrus breviphragma Soyka (Hymenoptera: Mymaridae) successfully parasitises eggs of Cicadella viridis (L.) (Homoptera: Cicadellidae) embedded in vegetal tissues, suggesting that chemical and physical cues may reveal the eggs' presence. In this research, three treatments were considered in order to establish which types of cue are involved: eggs extracted from the leaf, used as a control; eggs extracted from the leaf and cleaned in water and ethanol, used to evaluate the presence of chemicals soluble in polar solvents; and eggs extracted from the leaf and covered with Parafilm M, used to exclude physical stimuli due to the bump on the leaf surface. The results show that eggs covered with Parafilm yield a higher number of parasitised eggs and a lower probing starting time than eggs washed with polar solvents or eggs extracted and untreated, both when the treatments were tested singly and when offered in sequence, independently of the treatment position. These results suggest that the exploited stimuli are not physical cues due to the bump but chemicals that can spread through the Parafilm, circulating the signal over the whole surface, and that the stimuli that elicit probing and oviposition are not subject to learning.

  14. Simulation of Stimuli-Responsive Polymer Networks

    Directory of Open Access Journals (Sweden)

    Thomas Gruhn

    2013-11-01

    Full Text Available The structure and material properties of polymer networks can depend sensitively on changes in the environment. There is a great deal of progress in the development of stimuli-responsive hydrogels for applications like sensors, self-repairing materials or actuators. Biocompatible, smart hydrogels can be used for applications, such as controlled drug delivery and release, or for artificial muscles. Numerical studies have been performed on different length scales and levels of details. Macroscopic theories that describe the network systems with the help of continuous fields are suited to study effects like the stimuli-induced deformation of hydrogels on large scales. In this article, we discuss various macroscopic approaches and describe, in more detail, our phase field model, which allows the calculation of the hydrogel dynamics with the help of a free energy that considers physical and chemical impacts. On a mesoscopic level, polymer systems can be modeled with the help of the self-consistent field theory, which includes the interactions, connectivity, and the entropy of the polymer chains, and does not depend on constitutive equations. We present our recent extension of the method that allows the study of the formation of nano domains in reversibly crosslinked block copolymer networks. Molecular simulations of polymer networks allow the investigation of the behavior of specific systems on a microscopic scale. As an example for microscopic modeling of stimuli sensitive polymer networks, we present our Monte Carlo simulations of a filament network system with crosslinkers.

  15. Hierarchical matrices algorithms and analysis

    CERN Document Server

    Hackbusch, Wolfgang

    2015-01-01

    This self-contained monograph presents matrix algorithms and their analysis. The new technique enables not only the solution of linear systems but also the approximation of matrix functions, e.g., the matrix exponential. Other applications include the solution of matrix equations, e.g., the Lyapunov or Riccati equation. The required mathematical background can be found in the appendix. The numerical treatment of fully populated large-scale matrices is usually rather costly. However, the technique of hierarchical matrices makes it possible to store matrices and to perform matrix operations approximately with almost linear cost and a controllable degree of approximation error. For important classes of matrices, the computational cost increases only logarithmically with the approximation error. The operations provided include the matrix inversion and LU decomposition. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists ...
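    The core observation behind hierarchical matrices, that admissible (well-separated) blocks are numerically low-rank and can be stored in factored form at near-linear cost, can be illustrated with a truncated SVD. This is a toy kernel block chosen for illustration, not an algorithm from the monograph:

    ```python
    import numpy as np

    # Kernel block K[i, j] = 1 / (1 + |x_i - y_j|) for two well-separated point
    # clusters; such admissible blocks are numerically low-rank, the property
    # that hierarchical matrices exploit.
    x = np.linspace(0.0, 1.0, 200)
    y = np.linspace(10.0, 11.0, 200)
    K = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))

    # Rank-5 factored approximation via truncated SVD.
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    k = 5
    K_approx = (U[:, :k] * s[:k]) @ Vt[:k]

    rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
    # Factored storage: k*(m+n) numbers instead of m*n.
    storage_ratio = k * (K.shape[0] + K.shape[1]) / K.size
    ```

    Applying such factorizations blockwise, guided by a cluster tree, is what reduces storage and matrix operations to almost linear cost.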

  16. Automatic Hierarchical Color Image Classification

    Directory of Open Access Journals (Sweden)

    Jing Huang

    2003-02-01

    Full Text Available Organizing images into semantic categories can be extremely useful for content-based image retrieval and image annotation. Grouping images into semantic classes is a difficult problem, however. Image classification attempts to solve this hard problem by using low-level image features. In this paper, we propose a method for hierarchical classification of images via supervised learning. This scheme relies on using a good low-level feature and subsequently performing feature-space reconfiguration using singular value decomposition to reduce noise and dimensionality. We use the training data to obtain a hierarchical classification tree that can be used to categorize new images. Our experimental results suggest that this scheme not only performs better than standard nearest-neighbor techniques, but also has both storage and computational advantages.

  17. Intuitionistic fuzzy hierarchical clustering algorithms

    Institute of Scientific and Technical Information of China (English)

    Xu Zeshui

    2009-01-01

    Intuitionistic fuzzy set (IFS) is a set of 2-tuple arguments, each of which is characterized by a membership degree and a nonmembership degree. The generalized form of IFS is the interval-valued intuitionistic fuzzy set (IVIFS), whose components are intervals rather than exact numbers. IFSs and IVIFSs have been found to be very useful to describe vagueness and uncertainty. However, it seems that little attention has been focused on the clustering analysis of IFSs and IVIFSs. An intuitionistic fuzzy hierarchical algorithm is introduced for clustering IFSs, which is based on the traditional hierarchical clustering procedure, the intuitionistic fuzzy aggregation operator, and the basic distance measures between IFSs: the Hamming distance, the normalized Hamming distance, the weighted Hamming distance, the Euclidean distance, the normalized Euclidean distance, and the weighted Euclidean distance. Subsequently, the algorithm is extended for clustering IVIFSs. Finally, the algorithm and its extended form are applied to the classification of building materials and enterprises, respectively.
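    As a concrete instance of the distance measures listed, one common form of the normalized Hamming distance between two IFSs can be sketched as follows (shown without the hesitation-degree term, which some variants include; the example sets are made up):

    ```python
    def ifs_norm_hamming(A, B):
        """Normalized Hamming distance between two intuitionistic fuzzy sets,
        each a list of (membership, nonmembership) pairs over the same universe:
            d(A, B) = 1/(2n) * sum(|mu_A - mu_B| + |nu_A - nu_B|)
        """
        assert len(A) == len(B)
        n = len(A)
        return sum(abs(ma - mb) + abs(na - nb)
                   for (ma, na), (mb, nb) in zip(A, B)) / (2 * n)

    # Two illustrative IFSs over a two-element universe.
    A = [(0.6, 0.3), (0.8, 0.1)]
    B = [(0.5, 0.4), (0.7, 0.2)]
    d = ifs_norm_hamming(A, B)   # (0.1 + 0.1 + 0.1 + 0.1) / 4 = 0.1
    ```

    In the hierarchical procedure, such a distance would be computed between every pair of clusters, and the closest pair merged at each step via the aggregation operator.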

  18. Hierarchically arranged helical fibre actuators driven by solvents and vapours.

    Science.gov (United States)

    Chen, Peining; Xu, Yifan; He, Sisi; Sun, Xuemei; Pan, Shaowu; Deng, Jue; Chen, Daoyong; Peng, Huisheng

    2015-12-01

    Mechanical responsiveness in many plants is produced by helical organizations of cellulose microfibrils. However, simple mimicry of these naturally occurring helical structures does not produce artificial materials with the desired tunable actuations. Here, we show that actuating fibres that respond to solvent and vapour stimuli can be created through the hierarchical and helical assembly of aligned carbon nanotubes. Primary fibres consisting of helical assemblies of multiwalled carbon nanotubes are twisted together to form the helical actuating fibres. The nanoscale gaps between the nanotubes and micrometre-scale gaps among the primary fibres contribute to the rapid response and large actuation stroke of the actuating fibres. The compact coils allow the actuating fibre to rotate reversibly. We show that these fibres, which are lightweight, flexible and strong, are suitable for a variety of applications such as energy-harvesting generators, deformable sensing springs and smart textiles.

  20. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2014-01-01

    Conclusion: Based on the obtained results, a significant reduction in auditory memory was seen in the aged group, and the Persian version of the dichotic auditory-verbal memory test, like many other auditory-verbal memory tests, showed aging effects on auditory-verbal memory performance.